How-To Tutorials - Web Development

1797 Articles

The First Step

Packt
04 Feb 2015
16 min read
This article by Tim Chaplin, author of the book AngularJS Test-driven Development, provides an introductory walk-through of how to use TDD to build an AngularJS application with a controller, model, and scope. You will be able to begin the TDD journey and see the fundamentals in action. Now, we will switch gears and dive into TDD with AngularJS. This article will be the first step of TDD. It will focus on the creation of social media comments, on the testing associated with controllers, and on the use of Angular mocks to mock AngularJS components in a test. (For more resources related to this topic, see here.)

Preparing the application's specification
Create an application to enter comments. The specification of the application is as follows:
Given I am posting a new comment, when I click on the submit button, the comment should be added to the comment list
Given a comment, when I click on the like button, the number of likes for the comment should be increased
Now that we have the specification of the application, we can create our development to-do list. It won't be easy to create an entire to-do list for the whole application, but based on the user specifications, we have an idea of what needs to be developed. Here is a rough sketch of the UI:
Hold yourself back from jumping into the implementation and thinking about how you will use a controller with a service, ng-repeat, and so on. Resist, resist, resist! Although you can think about how this will be developed in the future, it is never clear until you delve into the code, and that is where you start getting into trouble. TDD and its principles are here to help you get your mind and focus in the right place.

Setting up the project
The following sections list the initial actions needed to get the project set up.

Setting up the directory
The following instructions are specific to setting up the project directory:
Create a new project directory.
Get angular into the project using Bower: bower install angular
Get angular-mocks for testing using Bower: bower install angular-mocks
Initialize the application's source directory: mkdir app
Initialize the test directory: mkdir spec
Initialize the unit test directory: mkdir spec/unit
Initialize the end-to-end test directory: mkdir spec/e2e
Once the initialization is complete, your folder structure should look as follows (a sketch of the layout appears after the Protractor setup steps below).

Setting up Protractor
In this article, we will just discuss the steps at a higher level:
Install Protractor in the project: $ npm install protractor
Update Selenium WebDriver: $ ./node_modules/protractor/bin/webdriver-manager update
Make sure that Selenium has been installed.
Copy the example chromeOnly configuration into the root of the project: $ cp ./node_modules/protractor/example/chromeOnlyConf.js .
Configure Protractor using the following steps:
Open the Protractor configuration.
Edit the Selenium WebDriver location to reflect the relative directory to chromeDriver: chromeDriver: './node_modules/protractor/selenium/chromedriver',
Edit the files section to reflect the test directory: specs: ['spec/e2e/**/*.js'],
Set the default base URL: baseUrl: 'http://localhost:8080/',
Excellent! Protractor should now be installed and set up.
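As promised above, here is a sketch of the folder structure you should have at this point. This layout is inferred from the commands just listed (Bower creates bower_components and npm creates node_modules), so treat the exact contents of those two generated folders as indicative rather than exact:

project/
├── app/
├── spec/
│   ├── unit/
│   └── e2e/
├── bower_components/
│   ├── angular/
│   └── angular-mocks/
├── node_modules/
│   └── protractor/
└── chromeOnlyConf.js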
Here is the complete configuration:
exports.config = {
  chromeOnly: true,
  chromeDriver: './node_modules/protractor/selenium/chromedriver',
  capabilities: {
    'browserName': 'chrome'
  },
  baseUrl: 'http://localhost:8080/',
  specs: ['spec/e2e/**/*.js'],
};

Setting up Karma
Here is a brief summary of the steps required to install and get your new project set up:
Install Karma using the following command: npm install karma -g
Initialize the Karma configuration: karma init
Update the Karma configuration:
files: [
  'bower_components/angular/angular.js',
  'bower_components/angular-mocks/angular-mocks.js',
  'spec/unit/**/*.js'
],
Now that we have set up the project directory and initialized Protractor and Karma, we can dive into the code. Here is the complete karma.conf.js file:
module.exports = function(config) {
  config.set({
    basePath: '',
    frameworks: ['jasmine'],
    files: [
      'bower_components/angular/angular.js',
      'bower_components/angular-mocks/angular-mocks.js',
      'spec/unit/**/*.js'
    ],
    reporters: ['progress'],
    port: 9876,
    autoWatch: true,
    browsers: ['Chrome'],
    singleRun: false
  });
};

Setting up http-server
A web server will be used to host the application. As this is just for local development, you can use http-server. The http-server module is a simple HTTP server that serves static content, and it is available as an npm module. To install http-server in your project, type the following command:
$ npm install http-server
Once http-server is installed, you can run the server by providing it with the root directory of the web page. Here is an example:
$ ./node_modules/http-server/bin/http-server
Now that you have http-server installed, you can move on to the next step.

Top-down or bottom-up approach
From a development perspective, we have to determine where to start. The approaches that we will discuss in this article are as follows:
The bottom-up approach: With this approach, we think about the different components we will need (controller, service, module, and so on), then pick the most logical one and start coding.
The top-down approach: With this approach, we work from the user scenario and UI, and then build the application's components around them.
There are merits to both approaches, and the choice can be based on your team, existing components, requirements, and so on. In most cases, it is best to make the choice based on the least resistance. In this article, the approach is top-down: everything is laid out for us by the user scenario, which allows you to organically build the application around the UI.

Testing a controller
Before getting into the specification and the mindset of the feature being delivered, it is important to see the fundamentals of testing a controller. An AngularJS controller is a key component used in most applications.

A simple controller test setup
When testing a controller, tests are centered on the controller's scope. The tests confirm either the objects or the methods in the scope. Angular mocks provide inject, which finds a particular reference and returns it for you to use. When inject is used for the controller, the controller's scope can be assigned to an outer reference for the entire test to use.
Here is an example of what this would look like:
describe('', function(){
  var scope = {};
  beforeEach(function(){
    module('anyModule');
    inject(function($controller){
      $controller('AnyController', {$scope: scope});
    });
  });
});
In the preceding case, the test's scope object is assigned to the actual scope of the controller within the inject function. The scope object can now be used throughout the test, and is also reinitialized before each test.

Initializing the scope
In the preceding example, scope is initialized to an empty object, {}. This is not always the best approach; just as on a page, a controller might be nested within another controller. This will cause inheritance of a parent scope, as follows:
<body ng-app='anyModule'>
  <div ng-controller='ParentController'>
    <div ng-controller='ChildController'>
    </div>
  </div>
</body>
As seen in the preceding code, we have a hierarchy of scopes that the ChildController function has access to. In order to test this, we have to initialize the scope object properly in the inject function. Here is how the preceding scope hierarchy can be recreated:
inject(function($controller, $rootScope){
  var parentScope = $rootScope.$new();
  $controller('ParentController', {$scope: parentScope});
  var childScope = parentScope.$new();
  $controller('AnyController', {$scope: childScope});
});
There are two main things that the preceding code does:
The $rootScope scope is injected into the test. The $rootScope scope is the highest level of scope that exists.
Each level of scope is created with the $new() method. This method creates a child scope.
In this article, we will use the simplified version and initialize the scope to an empty object; however, it is important to understand how to create the scope hierarchy when required.

Bring on the comments
Now that the setup and approach have been decided, we can start our first test. From a testing point of view, as we will be using a top-down approach, we will write our Protractor tests first and then build the application. We will follow the same TDD life cycle we have already reviewed, that is, test first, make it run, and make it better.

Test first
The scenario given is in a well-specified format already and fits our Protractor testing template:
describe('', function(){
  beforeEach(function(){
  });
  it('', function(){
  });
});
Placing the scenario in the template, we get the following code:
describe('Given I am posting a new comment', function(){
  describe('When I push the submit button', function(){
    beforeEach(function(){
    });
    it('Should then add the comment', function(){
    });
  });
});
Following the 3 A's (Assemble, Act, Assert), we will fit the user scenario into the template.

Assemble
The browser will need to point to the first page of the application. As the base URL has already been defined, we can add the following to the test:
beforeEach(function(){
  browser.get('/');
});
Now that the test is prepared, we can move on to the next step, Act.

Act
The next thing we need to do, based on the user specification, is add an actual comment. The easiest thing is to just put some text into an input box. Again, without knowing what the element will be called or what it will do, we write the test based on what it should be. Here is the code to add the comment section for the application:
beforeEach(function(){
  ...
  var commentInput = $('input');
  commentInput.sendKeys('a comment');
});
The last assemble component, as part of the test, is to push the Submit button. This can be easily achieved in Protractor using the click function.
Even though we don't have a page yet, or any attributes, we can still name the button that will be created:
beforeEach(function(){
  ...
  var submitButton = element.all(by.buttonText('Submit')).click();
});
Finally, we will hit the crux of the test and assert the user's expectations.

Assert
The user's expectation is that once the Submit button is clicked, the comment is added. This is a little ambiguous, but we can determine that somehow the user needs to be notified that the comment was added. The simplest approach is to display all comments on the page. In AngularJS, the easiest way to do this is to add an ng-repeat object that displays all comments. To test this, we will add the following:
it('Should then add the comment', function(){
  var comment = element(by.repeater('comment in comments')).first();
  expect(comment.getText()).toBe('a comment');
});
Now, the test has been constructed and meets the user specifications. It is small and concise. Here is the completed test:
describe('Given I am posting a new comment', function(){
  describe('When I push the submit button', function(){
    beforeEach(function(){
      //Assemble
      browser.get('/');
      var commentInput = $('input');
      commentInput.sendKeys('a comment');
      //Act
      var submitButton = element.all(by.buttonText('Submit')).click();
    });
    //Assert
    it('Should then add the comment', function(){
      var comment = element(by.repeater('comment in comments')).first();
      expect(comment.getText()).toBe('a comment');
    });
  });
});

Make it run
Based on the errors and output of the test, we will build our application as we go. The first step to make the code run is to identify the errors. Before starting the site, let's create a bare-bones index.html page:
<!DOCTYPE html>
<html>
<head>
  <title></title>
</head>
<body>
</body>
</html>
Already anticipating the first error, add AngularJS as a dependency to the page, just before the closing body tag:
<script type='text/javascript' src='bower_components/angular/angular.js'></script>
</body>
Now, start the web server using the following command:
$ ./node_modules/http-server/bin/http-server -p 8080
Run Protractor to see the first error:
$ ./node_modules/.bin/protractor chromeOnlyConf.js
Our first error states that AngularJS could not be found:
Error: Angular could not be found on the page http://localhost:8080/ : angular never provided resumeBootstrap
This is because we need to add ng-app to the page. Let's create a module and add it to the page. Before we do, the HTML page looks as follows:
<!DOCTYPE html>
<html>
<head>
  <title></title>
</head>
<body>
  <script src="bower_components/angular/angular.js"></script>
</body>
</html>

Adding the module
The first component that you need to define is an ng-app attribute in the index.html page. Use the following steps to add the module:
Add ng-app as an attribute to the body tag:
<body ng-app='comments'>
Now, we can go ahead and create a simple comments module and add it to a file named comments.js:
angular.module('comments',[]);
Add this new file to index.html:
<script src='app/comments.js'></script>
Rerun the Protractor test to get the next error:
$ Error: No element found using locator: By.cssSelector('input')
The test couldn't find our input locator. You need to add the input to the page.
Adding the input
Here are the steps you need to follow to add the input to the page:
All we have to do is add a simple input tag to the page:
<input type='text' />
Run the test and see what the new output is:
$ Error: No element found using locator: by.buttonText('Submit')
Just like the previous error, we need to add a button with the appropriate text:
<button type='button'>Submit</button>
Run the test again and the next error is as follows:
$ Error: No element found using locator: by.repeater('comment in comments')
This appears to be from our expectation that a submitted comment will be available on the page through ng-repeat. To add this to the page, we will use a controller to provide the data for the repeater.

Controller
As we mentioned in the preceding section, the error occurs because there is no comments object. In order to add the comments object, we will use a controller that has an array of comments in its scope. Use the following steps to add a comments object to the scope:
Create a new file in the app directory named commentController.js:
angular.module('comments')
  .controller('CommentController', ['$scope', function($scope){
    $scope.comments = [];
  }]);
Add it to the web page after the AngularJS script:
<script src='app/commentController.js'></script>
Now, we can add CommentController to the page:
<div ng-controller='CommentController'>
Then, add a repeater for the comments as follows:
<ul ng-repeat='comment in comments'>
  <li>{{comment}}</li>
</ul>
Run the Protractor test and let's see where we are:
$ Error: No element found using locator: by.repeater('comment in comments')
Hmmm! We get the same error. Let's look at the actual page that gets rendered and see what's going on. In Chrome, go to http://localhost:8080 and open the console to see the page source (Ctrl + Shift + J). You should see something like what's shown in the following screenshot:
Notice that the repeater and controller are both there; however, the repeater is commented out. Since Protractor only looks at visible elements, it won't find the repeater. Great! Now we know why the repeater isn't visible, but we have to fix it. In order for a comment to show up, it has to exist in the controller's comments scope. The smallest change is to add something to the array to initialize it, as shown in the following code snippet:
.controller('CommentController', ['$scope', function($scope){
  $scope.comments = ['anything'];
}]);
Now run the test and we get the following:
$ Expected 'anything' to be 'a comment'.
Wow! We finally tackled all the errors and reached the expectation. Here is what the HTML code looks like so far:
<!DOCTYPE html>
<html>
<head>
  <title></title>
</head>
<body ng-app='comments'>
  <div ng-controller='CommentController'>
    <input type='text' />
    <ul>
      <li ng-repeat='comment in comments'>
        {{comment}}
      </li>
    </ul>
  </div>
  <script src='bower_components/angular/angular.js'></script>
  <script src='app/comments.js'></script>
  <script src='app/commentController.js'></script>
</body>
</html>
The comments.js module looks as follows:
angular.module('comments',[]);
Here is commentController.js:
angular.module('comments')
  .controller('CommentController', ['$scope', function($scope){
    $scope.comments = ['anything'];
  }]);

Make it pass
With TDD, you want to add the smallest possible component to make the test pass. Since we have, for the moment, hardcoded the comments array to be initialized to anything, changing anything to a comment should make the test pass.
Here is the code to make the test pass:
angular.module('comments')
  .controller('CommentController', ['$scope', function($scope){
    $scope.comments = ['a comment'];
  }]);
…
Run the test, and bam! We get a passing test:
$ 1 test, 1 assertion, 0 failures
Wait a second! We still have some work to do. Although we got the test to pass, it is not done. We added some hacks just to get the test passing. The two things that stand out are:
Clicking on the Submit button, which really doesn't have any functionality
Hardcoded initialization of the expected value for a comment
The preceding changes are critical steps we need to perform before we move forward. They will be tackled in the next phase of the TDD life cycle, that is, make it better (refactor).

Summary
In this article, we walked through the TDD techniques of using Protractor and Karma together. As the application was developed, you were able to see where, why, and how to apply the TDD testing tools and techniques. With the bottom-up approach, the specifications are used to build unit tests and then build the UI layer on top of that. In this article, a top-down approach was shown to focus on the user's behavior: the top-down approach tests the UI and then filters the development through the other layers.

Resources for Article:
Further resources on this subject:
AngularJS Project [Article]
Role of AngularJS [Article]
Creating Our First Animation in AngularJS [Article]


ServiceStack applications

Packt
21 Jan 2015
9 min read
In this article by Kyle Hodgson and Darren Reid, authors of the book ServiceStack 4 Cookbook, we'll learn about unit testing ServiceStack applications. (For more resources related to this topic, see here.)

Unit testing ServiceStack applications
In this recipe, we'll focus on simple techniques to test individual units of code within a ServiceStack application. We will use the ServiceStack testing helper BasicAppHost as an application container, as it provides us with some useful helpers to inject a test double for our database. Our goal is small, fast tests that each test one unit of code within our application.

Getting ready
We are going to need some services to test, so we are going to use the PlacesToVisit application.

How to do it…
Create a new testing project. It's a common convention to name the testing project <ProjectName>.Tests—so in our case, we'll call it PlacesToVisit.Tests.
Create a class within this project to contain the tests we'll write—let's name it PlaceServiceTests, as the tests within it will focus on the PlaceService class. Annotate this class with the [TestFixture] attribute, as follows:
[TestFixture]
public class PlaceServiceTests
{
We'll want one method that runs whenever this set of tests begins, to set up the environment, and another one that runs afterwards to tear the environment down. These will be annotated with the NUnit attributes TestFixtureSetUp and TestFixtureTearDown, respectively. Let's name them FixtureInit and FixtureTearDown.
In the FixtureInit method, we will use BasicAppHost to initialize our appHost test container. We'll make it a field so that we can easily access it in each test, as follows:
ServiceStackHost appHost;

[TestFixtureSetUp]
public void FixtureInit()
{
  appHost = new BasicAppHost(typeof(PlaceService).Assembly)
  {
    ConfigureContainer = container =>
    {
      container.Register<IDbConnectionFactory>(c =>
        new OrmLiteConnectionFactory(
          ":memory:", SqliteDialect.Provider));
      container.RegisterAutoWiredAs<PlacesToVisitRepository,
        IPlacesToVisitRepository>();
    }
  }.Init();
}
The ConfigureContainer property on BasicAppHost allows us to pass in a function that we want AppHost to run inside of the Configure method. In this case, you can see that we're registering OrmLiteConnectionFactory with an in-memory SQLite instance. This allows us to test code that uses a database without that database actually running. This useful technique could be considered a classic unit testing approach—the mockist approach might have been to mock the database instead.
The FixtureTearDown method will dispose of appHost, as you might imagine. This is how the code will look:
[TestFixtureTearDown]
public void FixtureTearDown()
{
  appHost.Dispose();
}
We haven't created any data in our in-memory database yet. We'll want to ensure the data is the same prior to each test, so our TestInit method is a good place to do that—it will be run once before each and every test run, as we'll annotate it with the [SetUp] attribute, as follows:
[SetUp]
public void TestInit()
{
  using (var db = appHost.Container
    .Resolve<IDbConnectionFactory>().Open())
  {
    db.DropAndCreateTable<Place>();
    db.InsertAll(PlaceSeedData.GetSeedPlaces());
  }
}
As our tests all focus on PlaceService, we'll make sure to create Place data. Next, we'll begin writing tests. Let's start with one that asserts that we can create new places.
The first step is to create the new method, name it appropriately, and annotate it with the [Test] attribute, as follows:
[Test]
public void ShouldAddNewPlaces()
{
Next, we'll create an instance of PlaceService that we can test against. We'll use the Funq IoC TryResolve method for this:
var placeService = appHost.TryResolve<PlaceService>();
We'll want to create a new place, then query the database later to see whether the new one was added. So, it's useful to start by getting a count of how many places there are based on just the seed data. Here's how you can get the count based on the seed data:
var startingCount = placeService
  .Get(new AllPlacesToVisitRequest())
  .Places
  .Count;
Since we're testing the ability to handle a CreatePlaceToVisit request, we'll need a test object that we can send to the service. Let's create one and then go ahead and post it:
var melbourne = new CreatePlaceToVisit
{
  Name = "Melbourne",
  Description = "A nice city to holiday"
};

placeService.Post(melbourne);
Having done that, we can get the updated count and then assert that there is one more item in the database than there was before:
var newCount = placeService
  .Get(new AllPlacesToVisitRequest())
  .Places
  .Count;
Assert.That(newCount == startingCount + 1);
Next, let's fetch the new record that was created and make an assertion that it's the one we want:
var newPlace = placeService.Get(new PlaceToVisitRequest
{
  Id = startingCount + 1
});
Assert.That(newPlace.Place.Name == melbourne.Name);
}
With this in place, if we run the test, we'll expect it to pass both assertions. This proves that we can add new places via PlaceService registered with Funq, and that when we do so, we can retrieve them later as expected.
We can also build a similar test that asserts our ability to update an existing place. Adding the code is simple, following the pattern we set out previously. We'll start with the arrange section of the test, creating the variables and objects we'll need:
[Test]
public void ShouldUpdateExistingPlaces()
{
var placeService = appHost.TryResolve<PlaceService>();
var startingPlaces = placeService
  .Get(new AllPlacesToVisitRequest())
  .Places;
var startingCount = startingPlaces.Count;

var canberra = startingPlaces
  .First(c => c.Name.Equals("Canberra"));

const string canberrasNewName = "Canberra, ACT";
canberra.Name = canberrasNewName;
Once they're in place, we'll act. In this case, the Put method on placeService has the responsibility for update operations:
placeService.Put(canberra.ConvertTo<UpdatePlaceToVisit>());
Think of the ConvertTo helper method from ServiceStack as an auto-mapper that converts our Place object for us. Now that we've updated the record for Canberra, we'll proceed to the assert section of the test, as follows:
var updatedPlaces = placeService
  .Get(new AllPlacesToVisitRequest())
  .Places;
var updatedCanberra = updatedPlaces
  .First(p => p.Id.Equals(canberra.Id));
var updatedCount = updatedPlaces.Count;

Assert.That(updatedCanberra.Name == canberrasNewName);
Assert.That(updatedCount == startingCount);
}

How it works…
These unit tests use a few different patterns that help us write concise tests, including the development of our own test helpers and the use of helpers from the ServiceStack.Testing namespace. For instance, BasicAppHost allows us to set up an application host instance without actually hosting a web service.
It also lets us provide a custom ConfigureContainer action to mock any of our dependencies for our services and seed our testing data, as follows:
appHost = new BasicAppHost(typeof(PlaceService).Assembly)
{
  ConfigureContainer = container =>
  {
    container.Register<IDbConnectionFactory>(c =>
      new OrmLiteConnectionFactory(
        ":memory:", SqliteDialect.Provider));
    container.RegisterAutoWiredAs<PlacesToVisitRepository,
      IPlacesToVisitRepository>();
  }
}.Init();
To test any ServiceStack service, you can resolve it through the application host via TryResolve<ServiceType>(). This will have the IoC container instantiate an object of the requested type, which gives us the ability to test the Get method independently of other aspects of our web service, such as validation. This is shown in the following code:
var placeService = appHost.TryResolve<PlaceService>();
In this example, we are using an in-memory SQLite instance to mock our use of OrmLite for data access (which IPlacesToVisitRepository uses as well), seeding our test data in the ConfigureContainer hook of BasicAppHost. The use of both in-memory SQLite and BasicAppHost provides fast unit tests that let us very quickly iterate on our application services while ensuring we are not breaking any functionality associated with this component. In the example provided, we are running three tests in less than 100 milliseconds.
If you are using the full version of Visual Studio, extensions such as NCrunch can allow you to regularly run your unit tests while you make changes to your code. The performance of ServiceStack components combined with the use of these extensions results in a smooth developer experience, with high productivity and code quality.

There's more…
In the examples in this article, we wrote tests that would pass, ran them, and saw that they passed (no surprise). While this makes explaining things a bit simpler, it's not really a best practice. You generally want to make sure your tests fail when presented with wrong data at some point. The authors have seen many cases where subtle bugs in test code were causing a test to pass that should not have passed. One best practice is to write tests so that they fail first and then make them pass—this guarantees that the test can actually detect the defect you're guarding against. This is commonly referred to as the red/green/refactor pattern.

Summary
In this article, we covered some techniques to unit test ServiceStack applications.

Resources for Article:
Further resources on this subject:
Building a Web Application with PHP and MariaDB – Introduction to caching [article]
Web API and Client Integration [article]
WebSockets in Wildfly [article]


Creating a Photo-sharing Application

Packt
16 Jan 2015
34 min read
In this article by Rob Foster, the author of CodeIgniter Web Application Blueprints, we will create a photo-sharing application. There are quite a few image-sharing websites around at the moment. They all share roughly the same structure: the user uploads an image and that image can be shared, allowing others to view that image. Perhaps limits or constraints are placed on the viewing of an image, perhaps the image only remains viewable for a set period of time, or within set dates, but the general structure is the same. And I'm happy to announce that this project is exactly the same.

We'll create an application allowing users to share pictures; these pictures are accessible from a unique URL. To make this app, we will create two controllers: one to process image uploading and one to process the viewing and displaying of images stored. We'll create a language file to store the text, allowing you to have support for multiple languages should it be needed. We'll create all the necessary view files and a model to interface with the database.

In this article, we will cover:
Design and wireframes
Creating the database
Creating the models
Creating the views
Creating the controllers
Putting it all together
So without further ado, let's get on with it. (For more resources related to this topic, see here.)

Design and wireframes
As always, before we start building, we should take a look at what we plan to build. First, a brief description of our intent: we plan to build an app to allow the user to upload an image. That image will be stored in a folder with a unique name. A URL will also be generated containing a unique code, and the URL and code will be assigned to that image. The image can be accessed via that URL. The idea of using a unique URL to access that image is so that we can control access to that image, such as allowing an image to be viewed only a set number of times, or for a certain period of time only. Anyway, to get a better idea of what's happening, let's take a look at the following site map:

So that's the site map. The first thing to notice is how simple the site is. There are only three main areas to this project. Let's go over each item and get a brief idea of what they do:
create: Imagine this as the start point. The user will be shown a simple form allowing them to upload an image. Once the user presses the Upload button, they are directed to do_upload.
do_upload: The uploaded image is validated for size and file type. If it passes, then a unique eight-character string is generated. This string is then used as the name of a folder we will make. This folder is present in the main upload folder, and the uploaded image is saved in it. The image details (image name, folder name, and so on) are then passed to the database model, where another unique code is generated for the image URL. This unique code, image name, and folder name are then saved to the database. The user is then presented with a message informing them that their image has been uploaded and that a URL has been created. The user is also presented with the image they have uploaded.
go: This will take a URL provided by someone typing into a browser's address bar, or an img src tag, or some other method. The go item will look at the unique code in the URL, query the database to see if that code exists, and if so, fetch the folder name and image name and deliver the image back to the method that called it.
Now that we have a fairly good idea of the structure and form of the site, let's take a look at the wireframes of each page.

The create item
The following screenshot shows a wireframe for the create item discussed in the previous section. The user is shown a simple form allowing them to upload an image.

The do_upload item
The following screenshot shows a wireframe for the do_upload item discussed in the previous section. The user is shown the image they have uploaded and the URL that will direct other users to that image.

The go item
The following screenshot shows a wireframe for the go item described in the previous section. The go controller takes the unique code in a URL, attempts to find it in the database table images, and if found, supplies the image associated with it. Only the image is supplied, not the actual HTML markup.

File overview
This is a relatively small project, and all in all we're only going to create seven files, which are as follows:
/path/to/codeigniter/application/models/image_model.php: This provides read/write access to the images database table. This model also takes the upload information and unique folder name (which we store the uploaded image in) from the create controller and stores this in the database.
/path/to/codeigniter/application/views/create/create.php: This provides us with an interface to display a form allowing the user to upload a file. It also displays any error messages to the user, such as wrong file type, file size too big, and so on.
/path/to/codeigniter/application/views/create/result.php: This displays the image to the user after it has been successfully uploaded, as well as the URL required to view that image.
/path/to/codeigniter/application/views/nav/top_nav.php: This provides a navigation bar at the top of the page.
/path/to/codeigniter/application/controllers/create.php: This performs validation checks on the image uploaded by the user, creates a uniquely named folder to store the uploaded image, and passes this information to the model.
/path/to/codeigniter/application/controllers/go.php: This performs validation checks on the URL input by the user, looks for the unique code in the URL, and attempts to find this record in the database. If it is found, then it will display the image stored on disk.
/path/to/codeigniter/application/language/english/en_admin_lang.php: This provides language support for the application.
The file structure of the preceding seven files is as follows:
application/
├── controllers/
│   ├── create.php
│   ├── go.php
├── models/
│   ├── image_model.php
├── views/create/
│   ├── create.php
│   ├── result.php
├── views/nav/
│   ├── top_nav.php
├── language/english/
│   ├── en_admin_lang.php

Creating the database
First, we'll build the database. Copy the following MySQL code into your database:
CREATE DATABASE `imagesdb`;
USE `imagesdb`;

DROP TABLE IF EXISTS `images`;
CREATE TABLE `images` (
  `img_id` int(11) NOT NULL AUTO_INCREMENT,
  `img_url_code` varchar(10) NOT NULL,
  `img_url_created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `img_image_name` varchar(255) NOT NULL,
  `img_dir_name` varchar(8) NOT NULL,
  PRIMARY KEY (`img_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
Right, let's take a look at each column in the images table and see what it means:
img_id: This is the primary key.
img_url_code: This stores the unique code that we use to identify the image in the database.
img_url_created_at: This is the MySQL timestamp for the record.
img_image_name: This is the filename provided by the CodeIgniter upload functionality.
img_dir_name: This is the name of the directory we store the image in.
We'll also need to make amendments to the config/database.php file, namely setting the database access details, username, password, and so on. Open the config/database.php file and find the following lines:
$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'your username';
$db['default']['password'] = 'your password';
$db['default']['database'] = 'imagesdb';
Edit the values in the preceding code, ensuring you substitute those values for the ones more specific to your setup and situation, so enter your username, password, and so on.

Adjusting the config.php and autoload.php files
We don't actually need to adjust the config.php file in this project, as we're not really using sessions or anything like that, so we don't need an encryption key or database information. Just ensure that you are not autoloading the session in the config/autoload.php file or you will get an error, as we've not set any session variables in the config/config.php file.

Adjusting the routes.php file
We want to redirect the user to the create controller rather than the default CodeIgniter welcome controller. To do this, we will need to amend the default controller settings in the routes.php file to reflect this. The steps are as follows:
Open the config/routes.php file for editing and find the following lines (near the bottom of the file):
$route['default_controller'] = "welcome";
$route['404_override'] = '';
First, we need to change the default controller. Initially, in a CodeIgniter application, the default controller is set to welcome. However, we don't need that; instead, we want the default controller to be create, so find the following line:
$route['default_controller'] = "welcome";
Replace it with the following lines:
$route['default_controller'] = "create";
$route['404_override'] = '';
Then we need to add some rules to govern how we handle incoming URLs and form submissions. Leave a few blank lines underneath the preceding two lines of code (default controller and 404 override) and add the following three lines of code:
$route['create'] = "create/index";
$route['(:any)'] = "go/index";
$route['create/do_upload'] = "create/do_upload";

Creating the model
There is only one model in this project, image_model.php. It contains functions specific to saving image records to, and fetching them from, the database. Create the /path/to/codeigniter/application/models/image_model.php file and add the following code to it:
<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

class Image_model extends CI_Model {
  function __construct() {
    parent::__construct();
  }

  function save_image($data) {
    do {
      $img_url_code = random_string('alnum', 8);

      $this->db->where('img_url_code = ', $img_url_code);
      $this->db->from('images');
      $num = $this->db->count_all_results();
    } while ($num >= 1);

    $query = "INSERT INTO `images` (`img_url_code`, `img_image_name`, `img_dir_name`) VALUES (?,?,?) ";
    $result = $this->db->query($query, array($img_url_code, $data['image_name'], $data['img_dir_name']));

    if ($result) {
      return $img_url_code;
    } else {
      return false;
    }
  }

  function fetch_image($img_url_code) {
    $query = "SELECT * FROM `images` WHERE `img_url_code` = ? ";
";    $result = $this->db->query($query, array($img_url_code));      if ($result) {      return $result;    } else {      return false;    } } } There are two main functions in this model, which are as follows: save_image(): This generates a unique code that is associated with the uploaded image and saves it, with the image name and folder name, to the database. fetch_image(): This fetches an image's details from the database according to the unique code provided. Okay, let's take save_image() first. The save_image() function accepts an array from the create controller containing image_name (from the upload process) and img_dir_name (this is the folder that the image is stored in). A unique code is generated using a do…while loop as shown here: $img_url_code = random_string('alnum', 8); First a string is created, eight characters in length, containing alpha-numeric characters. The do…while loop checks to see if this code already exists in the database, generating a new code if it is already present. If it does not already exist, this code is used: do { $img_url_code = random_string('alnum', 8);   $this->db->where('img_url_code = ', $img_url_code); $this->db->from('images'); $num = $this->db->count_all_results(); } while ($num >= 1); This code and the contents of the $data array are then saved to the database using the following code: $query = "INSERT INTO `images` (`img_url_code`, `img_image_name`,   `img_dir_name`) VALUES (?,?,?) "; $result = $this->db->query($query, array($img_url_code,   $data['image_name'], $data['img_dir_name'])); The $img_url_code is returned if the INSERT operation was successful, and false if it failed. The code to achieve this is as follows: if ($result) { return $img_url_code; } else { return false; } Creating the views There are only three views in this project, which are as follows: /path/to/codeigniter/application/views/create/create.php: This displays a form to the user allowing them to upload an image. /path/to/codeigniter/application/views/create/result.php: This displays a link that the user can use to forward other people to the image, as well as the image itself. /path/to/codeigniter/application/views/nav/top_nav.php: This displays the top-level menu. In this project it's very simple, containing a project name and a link to go to the create controller. So those are our views, as I said, there are only three of them as it's a simple project. Now, let's create each view file. 
Create the /path/to/codeigniter/application/views/create/create.php file and add the following code to it:
<div class="page-header">
  <h1><?php echo $this->lang->line('system_system_name'); ?></h1>
</div>

<p><?php echo $this->lang->line('encode_instruction_1'); ?></p>

<?php echo validation_errors(); ?>

<?php if (isset($success) && $success == true) : ?>
  <div class="alert alert-success">
    <strong><?php echo $this->lang->line('common_form_elements_success_notifty'); ?></strong>
    <?php echo $this->lang->line('encode_encode_now_success'); ?>
  </div>
<?php endif ; ?>

<?php if (isset($fail) && $fail == true) : ?>
  <div class="alert alert-danger">
    <strong><?php echo $this->lang->line('common_form_elements_error_notifty'); ?></strong>
    <?php echo $this->lang->line('encode_encode_now_error'); ?>
    <?php echo $fail ; ?>
  </div>
<?php endif ; ?>

<?php echo form_open_multipart('create/do_upload');?>
<input type="file" name="userfile" size="20" />
<br />
<input type="submit" value="upload" />
<?php echo form_close() ; ?>
<br />
<?php if (isset($result) && $result == true) : ?>
  <div class="alert alert-info">
    <strong><?php echo $this->lang->line('encode_upload_url'); ?></strong>
    <?php echo anchor($result, $result) ; ?>
  </div>
<?php endif ; ?>
This view file can be thought of as the main view file; it is here that the user can upload their image. Error messages are displayed here too.
Create the /path/to/codeigniter/application/views/create/result.php file and add the following code to it:
<div class="page-header">
  <h1><?php echo $this->lang->line('system_system_name'); ?></h1>
</div>

<?php if (isset($result) && $result == true) : ?>
  <strong><?php echo $this->lang->line('encode_encoded_url'); ?></strong>
  <?php echo anchor($result, $result) ; ?>
  <br />
  <img src="<?php echo base_url() . 'upload/' . $img_dir_name . '/' . $file_name ;?>" />
<?php endif ; ?>
This view will display the encoded image resource URL to the user (so they can copy and share it) and the actual image itself.
Create the /path/to/codeigniter/application/views/nav/top_nav.php file and add the following code to it:
<!-- Fixed navbar -->
<div class="navbar navbar-inverse navbar-fixed-top" role="navigation">
  <div class="container">
    <div class="navbar-header">
      <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse">
        <span class="sr-only">Toggle navigation</span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
      </button>
      <a class="navbar-brand" href="#"><?php echo $this->lang->line('system_system_name'); ?></a>
    </div>
    <div class="navbar-collapse collapse">
      <ul class="nav navbar-nav">
        <li class="active"><?php echo anchor('create', 'Create') ; ?></li>
      </ul>
    </div><!--/.nav-collapse -->
  </div>
</div>

<div class="container theme-showcase" role="main">
This view is quite basic but still serves an important role. It displays an option to return to the index() function of the create controller.

Creating the controllers
We're going to create two controllers in this project, which are as follows:
/path/to/codeigniter/application/controllers/create.php: This handles the creation of unique folders to store images and performs the upload of a file.
/path/to/codeigniter/application/controllers/go.php: This fetches the unique code from the database, and returns any image associated with that code.
These are our two controllers for this project; let's now go ahead and create them.
Create the /path/to/codeigniter/application/controllers/create.php file and add the following code to it:
<?php if (!defined('BASEPATH')) exit('No direct script access allowed');

class Create extends MY_Controller {
  function __construct() {
    parent::__construct();

    $this->load->helper(array('string'));
    $this->load->library('form_validation');
    $this->load->library('image_lib');
    $this->load->model('Image_model');
    $this->form_validation->set_error_delimiters('<div class="alert alert-danger">', '</div>');
  }

  public function index() {
    $page_data = array('fail' => false,
                       'success' => false);
    $this->load->view('common/header');
    $this->load->view('nav/top_nav');
    $this->load->view('create/create', $page_data);
    $this->load->view('common/footer');
  }

  public function do_upload() {
    $upload_dir = '/filesystem/path/to/upload/folder/';
    do {
      // Make code
      $code = random_string('alnum', 8);

      // Scan upload dir for a subdir with the same
      // name as the code
      $dirs = scandir($upload_dir);

      // Look to see if there is already a
      // directory with the name which we
      // store in $code
      if (in_array($code, $dirs)) { // Yes there is
        $img_dir_name = false; // Set to false to begin again
      } else { // No there isn't
        $img_dir_name = $code; // This is a new name
      }
    } while ($img_dir_name == false);

    if (!mkdir($upload_dir.$img_dir_name)) {
      $page_data = array('fail' => $this->lang->line('encode_upload_mkdir_error'),
                         'success' => false);
      $this->load->view('common/header');
      $this->load->view('nav/top_nav');
      $this->load->view('create/create', $page_data);
      $this->load->view('common/footer');
    }

    $config['upload_path'] = $upload_dir.$img_dir_name;
    $config['allowed_types'] = 'gif|jpg|jpeg|png';
    $config['max_size'] = '10000';
    $config['max_width'] = '1024';
    $config['max_height'] = '768';

    $this->load->library('upload', $config);

    if ( ! $this->upload->do_upload()) {
      $page_data = array('fail' => $this->upload->display_errors(),
                         'success' => false);
      $this->load->view('common/header');
      $this->load->view('nav/top_nav');
      $this->load->view('create/create', $page_data);
      $this->load->view('common/footer');
    } else {
      $image_data = $this->upload->data();
      $page_data['result'] = $this->Image_model->save_image(array('image_name' => $image_data['file_name'], 'img_dir_name' => $img_dir_name));
      $page_data['file_name'] = $image_data['file_name'];
      $page_data['img_dir_name'] = $img_dir_name;

      if ($page_data['result'] == false) {
        // fail - display form and error
        $page_data = array('fail' => $this->lang->line('encode_upload_general_error'));
        $this->load->view('common/header');
        $this->load->view('nav/top_nav');
        $this->load->view('create/create', $page_data);
        $this->load->view('common/footer');
      } else {
        // success - display image and link
        $this->load->view('common/header');
        $this->load->view('nav/top_nav');
        $this->load->view('create/result', $page_data);
        $this->load->view('common/footer');
      }
    }
  }
}
Let's start with the index() function. The index() function sets the fail and success elements of the $page_data array to false.
This will suppress any initial messages from being displayed to the user. The views are loaded, specifically the create/create.php view, which contains the image upload form's HTML markup. Once the user submits the form in create/create.php, the form will be submitted to the do_upload() function of the create controller. It is this function that will perform the task of uploading the image to the server.
First off, do_upload() defines an initial location for the upload folder. This is stored in the $upload_dir variable. Next, we move into a do…while structure. It looks something like this:
do {
  // something
} while ('…a condition is not met');
So that means: do something while a condition is not being met. Now, with that in mind, think about our problem—we have to save the image being uploaded in a folder, and that folder must have a unique name. So what we will do is generate a random string of eight alphanumeric characters and then look to see if a folder exists with that name. Keeping that in mind, let's look at the code in detail:
do {
  // Make code
  $code = random_string('alnum', 8);

  // Scan upload dir for a subdir with the same
  // name as the code
  $dirs = scandir($upload_dir);

  // Look to see if there is already a
  // directory with the name which we
  // store in $code
  if (in_array($code, $dirs)) { // Yes there is
    $img_dir_name = false; // Set to false to begin again
  } else { // No there isn't
    $img_dir_name = $code; // This is a new name
  }
} while ($img_dir_name == false);
So we make a string of eight characters, containing only alphanumeric characters, using the following line of code:
$code = random_string('alnum', 8);
We then use the PHP function scandir() to look in $upload_dir. This will store all directory names in the $dirs variable, as follows:
$dirs = scandir($upload_dir);
We then use the PHP function in_array() to look for the value of $code in the list of directories from scandir():
If we don't find a match, then the value in $code must not be taken, so we'll go with that.
If the value is found, then we set $img_dir_name to false, which is picked up by the final line of the do…while loop:
...
} while ($img_dir_name == false);
Anyway, now that we have our unique folder name, we'll attempt to create it. We use the PHP function mkdir(), passing to it $upload_dir concatenated with $img_dir_name. If mkdir() returns false, the form is displayed again along with the encode_upload_mkdir_error message set in the language file, as shown here:
if (!mkdir($upload_dir.$img_dir_name)) {
  $page_data = array('fail' => $this->lang->line('encode_upload_mkdir_error'),
                     'success' => false);
  $this->load->view('common/header');
  $this->load->view('nav/top_nav');
  $this->load->view('create/create', $page_data);
  $this->load->view('common/footer');
}
Once the folder has been made, we then set the configuration variables for the upload process, as follows:
$config['upload_path'] = $upload_dir.$img_dir_name;
$config['allowed_types'] = 'gif|jpg|jpeg|png';
$config['max_size'] = '10000';
$config['max_width'] = '1024';
$config['max_height'] = '768';
Here we are specifying that we only want to upload .gif, .jpg, .jpeg, and .png files. We also specify that an image cannot be above 10,000 KB in size (although you can set this to any value you wish—remember to adjust the upload_max_filesize and post_max_size PHP settings in your php.ini file if you want to have a really big file). We also set the maximum dimensions that an image can have.
As with the file size, you can adjust this as you wish. We then load the upload library, passing to it the configuration settings, as shown here:
$this->load->library('upload', $config);
Next we will attempt to do the upload. If unsuccessful, the CodeIgniter function $this->upload->do_upload() will return false. We will look for this and reload the upload page if it does return false. We will also pass the specific error as the reason why it failed. This error is stored in the fail item of the $page_data array. This can be done as follows:
if ( ! $this->upload->do_upload()) {
  $page_data = array('fail' => $this->upload->display_errors(),
                     'success' => false);
  $this->load->view('common/header');
  $this->load->view('nav/top_nav');
  $this->load->view('create/create', $page_data);
  $this->load->view('common/footer');
} else {
...
If, however, it did not fail, we grab the information generated by CodeIgniter from the upload. We'll store this in the $image_data array, as follows:
$image_data = $this->upload->data();
Then we try to store a record of the upload in the database. We call the save_image function of Image_model, passing to it file_name from the $image_data array, as well as $img_dir_name, as shown here:
$page_data['result'] = $this->Image_model->save_image(array('image_name' => $image_data['file_name'], 'img_dir_name' => $img_dir_name));
We then test the return value of the save_image() function; if it is successful, then Image_model will return the unique URL code generated in the model. If it is unsuccessful, then Image_model will return the Boolean false. If false is returned, then the form is loaded with a general error. If successful, then the create/result.php view file is loaded. We pass to it the unique URL code (for the link the user needs), and the folder name and image name necessary to display the image correctly.
Create the /path/to/codeigniter/application/controllers/go.php file and add the following code to it:
<?php if (!defined('BASEPATH')) exit('No direct script access allowed');

class Go extends MY_Controller {
  function __construct() {
    parent::__construct();
    $this->load->helper('string');
  }

  public function index() {
    if (!$this->uri->segment(1)) {
      redirect (base_url());
    } else {
      $image_code = $this->uri->segment(1);
      $this->load->model('Image_model');
      $query = $this->Image_model->fetch_image($image_code);

      if ($query->num_rows() == 1) {
        foreach ($query->result() as $row) {
          $img_image_name = $row->img_image_name;
          $img_dir_name = $row->img_dir_name;
        }

        $url_address = base_url() . 'upload/' . $img_dir_name . '/' . $img_image_name;
        redirect (prep_url($url_address));
      } else {
        redirect('create');
      }
    }
  }
}
The go controller has only one main function, index(). It is called when a user clicks on a URL or a URL is called (perhaps as the src value of an HTML img tag). Here we grab the unique code generated and assigned to an image when it was uploaded in the create controller. This code is in the first segment of the URI. Usually it would occupy the third segment—with the first and second segments normally being used to specify the controller and controller function, respectively. However, we have changed this behavior using CodeIgniter routing. This is explained fully in the Adjusting the routes.php file section of this article.
Once we have the unique code, we pass it to the fetch_image() function of Image_model:
$image_code = $this->uri->segment(1);
$this->load->model('Image_model');
$query = $this->Image_model->fetch_image($image_code);
We test what is returned. We ask if the number of rows returned equals exactly 1. If not, we will then redirect to the create controller. Perhaps you may not want to do this. Perhaps you may want to do nothing if the number of rows returned does not equal 1. For example, if the image requested is in an HTML img tag, then if an image is not found, a redirect may send someone away from the site they're viewing to the upload page of this project—something you might not want to happen. If you want to remove this functionality, remove the else branch containing redirect('create') from the following code excerpt:
...
    $img_dir_name = $row->img_dir_name;
  }

  $url_address = base_url() . 'upload/' . $img_dir_name . '/' . $img_image_name;
  redirect (prep_url($url_address));
} else {
  redirect('create');
}
...
Anyway, if the returned value is exactly 1, then we'll loop over the returned database object and find img_image_name and img_dir_name, which we'll need to locate the image in the upload folder on the disk. This can be done as follows:
foreach ($query->result() as $row) {
  $img_image_name = $row->img_image_name;
  $img_dir_name = $row->img_dir_name;
}
We then build the address of the image file and redirect the browser to it, as follows:
$url_address = base_url() . 'upload/' . $img_dir_name . '/' . $img_image_name;
redirect (prep_url($url_address));

Creating the language file
We make use of the language file to serve text to users. In this way, you can enable multiple-region/multiple-language support. Create the /path/to/codeigniter/application/language/english/en_admin_lang.php file and add the following code to it:
<?php if (!defined('BASEPATH')) exit('No direct script access allowed');

// General
$lang['system_system_name'] = "Image Share";

// Upload
$lang['encode_instruction_1'] = "Upload your image to share it";
$lang['encode_upload_now'] = "Share Now";
$lang['encode_upload_now_success'] = "Your image was uploaded, you can share it with this URL";
$lang['encode_upload_url'] = "Hey look at this, here's your image:";
$lang['encode_upload_mkdir_error'] = "Cannot make temp folder";
$lang['encode_upload_general_error'] = "The Image cannot be saved at this time";

Putting it all together
Let's look at how the user uploads an image. The following is the sequence of events:
CodeIgniter looks in the routes.php config file and finds the following line:
$route['create'] = "create/index";
It directs the request to the create controller's index() function.
The index() function loads the create/create.php view file, which displays the upload form to the user.
The user clicks on the Choose file button, navigates to the image file they wish to upload, and selects it.
The user presses the Upload button and the form is submitted to the create controller's do_upload() function.
The do_upload() function creates a folder in the main upload directory to store the image in, then does the actual upload.
On a successful upload, do_upload() sends the details of the upload (the new folder name and image name) to the save_image() model function.
The save_image() function also creates a unique code and saves it in the images table along with the folder name and image name passed to it by the create controller.
The unique code generated during the database insert is then returned to the controller and passed to the result view, where it will form part of a success message to the user. Now, let's see how an image is viewed (or fetched). The following is the sequence of events:

1. A URL with the syntax www.domain.com/226KgfYH comes into the application, either when someone clicks on a link or some other call is made (<img src="">).
2. CodeIgniter looks in the routes.php config file and finds the following line:
   $route['(:any)'] = "go/index";
3. As the incoming request does not match the other two routes, the preceding route is the one CodeIgniter applies to this request.
4. The go controller is called and the code 226KgfYH is passed to it as the first segment of the URI.
5. The go controller passes this to the fetch_image() function of the Image_model.php file.
6. The fetch_image() function will attempt to find a matching record in the database. If found, it returns the folder name marking the saved location of the image, and its filename.
7. This is returned and the path to that image is built. CodeIgniter then redirects the user to that image, that is, supplies that image resource to the user that requested it.

Summary

So here we have a basic image sharing application. It is capable of accepting a variety of images and assigning them to records in a database and unique folders in the filesystem. This is interesting as it leaves things open to you to improve on. For example, you can do the following:

- You can add limits on views. As the image record is stored in the database, you could adapt the database. By adding two columns called img_count and img_count_limit, you could allow a user to set a limit for the number of views per image and stop providing that image when that limit is met.
- You can limit views by date. Similar to the preceding point, but you could limit image views to set dates.
- You can have different URLs for different dimensions. You could add functionality to make several dimensions of image based on the initial upload, offering several different URLs for different image dimensions.
- You can report abuse. You could add an option allowing viewers of images to report unsavory images that might be uploaded.
- You can have terms of service. If you are planning on offering this type of application as an actual web service that members of the public could use, then I strongly recommend you add a terms of service document, perhaps even require that people agree to terms before they upload an image. In those terms, you'll want to mention that in order for someone to use the service, they first have to agree that they will not upload and share any images that could be considered illegal. You should also mention that you'll cooperate with any court if information is requested of you. You really don't want to get into trouble for owning or running a web service that stores unpleasant images; as much as possible you want to make your limits of liability clear and emphasize that it is the uploader who has provided the images.

Resources for Article:

Further resources on this subject:
- CodeIgniter MVC – The Power of Simplicity! [article]
- Navigating Your Site using CodeIgniter 1.7: Part 1 [article]
- Navigating Your Site using CodeIgniter 1.7: Part 2 [article]
WebSockets in Wildfly

Packt
30 Dec 2014
22 min read
In this article by Michał Ćmil and Michał Matłoka, the authors of Java EE 7 Development with WildFly, we will cover WebSockets and how they are one of the biggest additions in Java EE 7. In this article, we will explore the new possibilities that they provide to a developer. In our ticket booking applications, we already used a wide variety of approaches to inform the clients about events occurring on the server side. These include the following:

- JSF polling
- Java Message Service (JMS) messages
- REST requests
- Remote EJB requests

All of them, besides JMS, were based on the assumption that the client will be responsible for asking the server about the state of the application. In some cases, such as checking if someone else has not booked a ticket during our interaction with the application, this is a wasteful strategy; the server is in the position to inform clients when it is needed. What's more, it feels like the developer must hack the HTTP protocol to get a notification from a server to the client. This is a requirement that has to be implemented in most nontrivial web applications, and therefore, deserves a standardized solution that can be applied by the developers in multiple projects without much effort. WebSockets are changing the game for developers. They replace the request-response paradigm, in which the client always initiates the communication, with a two-point bidirectional messaging system. After the initial connection, both sides can send independent messages to each other as long as the session is alive. This means that we can easily create web applications that will automatically refresh their state with up-to-date data from the server. You probably have already seen this kind of behavior in Google Docs or live broadcasts on news sites. Now we can achieve the same effect in a simpler and more efficient way than in earlier versions of Java Enterprise Edition. In this article, we will try to leverage these new, exciting features that come with WebSockets in Java EE 7 thanks to JSR 356 (https://jcp.org/en/jsr/detail?id=356) and HTML5. In this article, you will learn the following topics:

- How WebSockets work
- How to create a WebSocket endpoint in Java EE 7
- How to create an HTML5/AngularJS client that will accept push notifications from an application deployed on WildFly

(For more resources related to this topic, see here.)

An overview of WebSockets

A WebSocket session between the client and server is built upon a standard TCP connection. Although the WebSocket protocol has its own control frames (mainly to create and sustain the connection), defined by the Internet Engineering Task Force in RFC 6455 (http://tools.ietf.org/html/rfc6455), peers are not obliged to use any specific format to exchange application data. You may use plaintext, XML, JSON, or anything else to transmit your data. As you probably remember, this is quite different from SOAP-based WebServices, which had bloated specifications of the exchange protocol. The same goes for RESTful architectures; we no longer have the predefined verb methods from HTTP (GET, PUT, POST, and DELETE), status codes, and the whole semantics of an HTTP request. This liberty means that WebSockets are pretty low level compared to the technologies that we used up to this point, but thanks to this, the communication overhead is minimal. The protocol is less verbose than SOAP or RESTful HTTP, which allows us to achieve higher performance. This, however, comes with a price.
We usually like to use the features of higher-level protocols (such as horizontal scaling and rich URL semantics), and with WebSockets, we would need to write them by hand. For standard CRUD-like operations, it would be easier to use a REST endpoint than create everything from scratch. What do we get from WebSockets compared to the standard HTTP communication? First of all, a direct connection between two peers. Normally, when you connect to a web server (which can, for instance, handle a REST endpoint), every subsequent call is a new TCP connection, and your machine is treated like it is a different one every time you make a request. You can, of course, simulate a stateful behavior (so that the server would recognize your machine between different requests) using cookies and increase the performance by reusing the same connection in a short period of time for a specific client, but basically, it is a workaround to overcome the limitations of the HTTP protocol. Once you establish a WebSocket connection between a server and client, you can use the same session (and underlying TCP connection) during the whole communication. Both sides are aware of it, and can send data independently in a full-duplex manner (both sides can send and receive data simultaneously). Using plain HTTP, there is no way for the server to spontaneously start sending data to the client without any request from its side. What's more, the server is aware of all of its connected WebSocket clients, and can even send data between them! The current solutions that try to simulate real-time data delivery using the HTTP protocol can put a lot of stress on the web server. Polling (asking the server about updates), long polling (delaying the completion of a request to the moment when an update is ready), and streaming (a Comet-based solution with a constantly open HTTP response) are all ways to hack the protocol to do things that it wasn't designed for, and they have their own limitations. Thanks to the elimination of unnecessary checks, WebSockets can heavily reduce the number of HTTP requests that have to be handled by the web server. The updates are delivered to the user with a smaller latency because we only need one round-trip through the network to get the desired information (it is pushed by the server immediately). All of these features make WebSockets a great addition to the Java EE platform, which fills the gaps needed to easily finish specific tasks, such as sending updates, notifications, and orchestrating multiple client interactions. Despite these advantages, WebSockets are not intended to replace REST or SOAP WebServices. They do not scale so well horizontally (they are hard to distribute because of their stateful nature), and they lack most of the features that are utilized in web applications. URL semantics, complex security, compression, and many other features are still better realized using other technologies.

How do WebSockets work

To initiate a WebSocket session, the client must send an HTTP request with an Upgrade: WebSocket header field. This informs the server that the peer client has asked the server to switch to the WebSocket protocol. You may notice that the same happens in WildFly for Remote EJBs; the initial connection is made using an HTTP request, and is later switched to the remote protocol thanks to the Upgrade mechanism. The standard Upgrade header field can be used to switch to any protocol other than HTTP, as long as it is accepted by both sides (the client and the server).
In WildFly, this allows us to reuse the HTTP port (80/8080) for other protocols and, therefore, minimize the number of ports that have to be configured. If the server can understand the WebSocket protocol, the client and server then proceed with the handshaking phase. They negotiate the version of the protocol, exchange security keys, and if everything goes well, the peers can go to the data transfer phase. From now on, the communication is only done using the WebSocket protocol. It is not possible to exchange any HTTP frames using the current connection. The whole life cycle of a connection is, then, the initial HTTP-based handshake followed by the WebSocket data transfer phase. A sample HTTP request from a JavaScript application to a WildFly server would look similar to this:

GET /ticket-agency-websockets/tickets HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Host: localhost:8080
Origin: http://localhost:8080
Pragma: no-cache
Cache-Control: no-cache
Sec-WebSocket-Key: TrjgyVjzLK4Lt5s8GzlFhA==
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits, x-webkit-deflate-frame
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36
Cookie: [45 bytes were stripped]

We can see that the client requests an upgrade connection with WebSocket as the target protocol on the URL /ticket-agency-websockets/tickets. It additionally passes information about the requested version and key. If the server supports the requested protocol and all the required data is passed by the client, then it will respond with the following frame:

HTTP/1.1 101 Switching Protocols
X-Powered-By: Undertow 1
Server: Wildfly 8
Origin: http://localhost:8080
Upgrade: WebSocket
Sec-WebSocket-Accept: ZEAab1TcSQCmv8RsLHg4RL/TpHw=
Date: Sun, 13 Apr 2014 17:04:00 GMT
Connection: Upgrade
Sec-WebSocket-Location: ws://localhost:8080/ticket-agency-websockets/tickets
Content-Length: 0

The status code of the response is 101 (switching protocols) and we can see that the server is now going to start using the WebSocket protocol. The TCP connection initially used for the HTTP request is now the base of the WebSocket session and can be used for transmissions. If the client tries to access a URL which is only handled by another protocol, then the server can ask the client to do an upgrade request. The server uses the 426 (upgrade required) status code in such cases. The initial connection creation has some overhead (because of the HTTP frames that are exchanged between the peers), but after it is completed, new messages have only 2 bytes of additional headers. This means that when we have a large number of small messages, WebSocket will be an order of magnitude faster than REST protocols simply because there is less data to transmit! If you are wondering about the browser support of WebSockets, you can look it up at http://caniuse.com/websockets. All new versions of major browsers currently support WebSockets; the total coverage was estimated (at the time of writing) at 74 percent. After this theoretical introduction, we are ready to jump into action. We can now create our first WebSocket endpoint!
Creating our first endpoint

Let's start with a simple example:

package com.packtpub.wflydevelopment.chapter8.boundary;

import javax.websocket.EndpointConfig;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

@ServerEndpoint("/hello")
public class HelloEndpoint {

    @OnOpen
    public void open(Session session, EndpointConfig conf) throws IOException {
        session.getBasicRemote().sendText("Hi!");
    }
}

The Java EE 7 specification has taken developer friendliness into account, which can be clearly seen in the given example. In order to define your WebSocket endpoint, you just need a few annotations on a Plain Old Java Object (POJO). The first annotation, @ServerEndpoint("/hello"), defines the path to your endpoint. It's a good time to discuss the endpoint's full address. We placed this sample in the application named ticket-agency-websockets. During the deployment of the application, you can spot information in the WildFly log about endpoint creation, as shown in the following command line:

02:21:35,182 INFO [io.undertow.websockets.jsr] (MSC service thread 1-7)UT026003: Adding annotated server endpoint class com.packtpub.wflydevelopment.chapter8.boundary.FirstEndpoint for path /hello
02:21:35,401 INFO [org.jboss.resteasy.spi.ResteasyDeployment](MSC service thread 1-7) Deploying javax.ws.rs.core.Application: classcom.packtpub.wflydevelopment.chapter8.webservice.JaxRsActivator$Proxy$_$$_WeldClientProxy
02:21:35,437 INFO [org.wildfly.extension.undertow](MSC service thread 1-7) JBAS017534: Registered web context:/ticket-agency-websockets

The full URL of the endpoint is ws://localhost:8080/ticket-agency-websockets/hello, which is just a concatenation of the server and application address with an endpoint path on an appropriate protocol. The second annotation used, @OnOpen, defines the endpoint behavior when the connection from the client is opened. It's not the only behavior-related annotation of the WebSocket endpoint. Let's look at the following annotations:

- @OnOpen: The connection is open. With this annotation, we can use the Session and EndpointConfig parameters. The first parameter represents the connection to the user and allows further communication. The second one provides some client-related information.
- @OnMessage: This annotation is executed when a message from the client is being received. In such a method, you can just have Session and, for example, a String parameter, where the String parameter represents the received message.
- @OnError: There are bad times when some errors occur. With this annotation, you can retrieve a Throwable object apart from the standard Session.
- @OnClose: When the connection is closed, it is possible to get some data concerning this event in the form of a CloseReason type object.

There is one more interesting line in our HelloEndpoint. Using the Session object, it is possible to communicate with the client. This clearly shows that in WebSockets, two-directional communication is easily possible. In this example, we decided to respond to a connected user synchronously (getBasicRemote()) with just a text message Hi! (sendText(String)). Of course, it's also possible to communicate asynchronously and to send, for example, binary messages using your own bandwidth-saving binary protocol. We will present some of these processes in the next example.

Expanding our client application

It's time to show how you can leverage the WebSocket features in real life.
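As a quick aside before we do, JSR 356 also specifies a client-side API, so the /hello endpoint can be smoke-tested from plain Java. The following class is a minimal sketch of our own making (it is not part of the book's project); running it outside a container requires a JSR 356 client implementation, such as Tyrus, on the classpath:

package com.packtpub.wflydevelopment.chapter8.boundary;

import java.net.URI;

import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class HelloClient {

    @OnMessage
    public void onMessage(String message) {
        // HelloEndpoint sends "Hi!" as soon as the connection opens.
        System.out.println("Received: " + message);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // connectToServer() performs the HTTP upgrade handshake described earlier.
        try (Session session = container.connectToServer(HelloClient.class,
                URI.create("ws://localhost:8080/ticket-agency-websockets/hello"))) {
            Thread.sleep(1000); // give the server a moment to push its greeting
        }
    }
}

With that quick check available, let's get back to the ticket booking application.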
We created the ticket booking application based on the REST API and AngularJS framework. It was clearly missing one important feature; the application did not show information concerning ticket purchases of other users. This is a perfect use case for WebSockets! Since we're just adding a feature to our previous app, we will describe the changes we will introduce to it. In this example, we would like to be able to inform all current users about other purchases. This means that we have to store information about active sessions. Let's start with the registry type object, which will serve this purpose. We can use a Singleton session bean for this task, as shown in the following code: @Singleton public class SessionRegistry {    private final Set<Session> sessions = new HashSet<>();    @Lock(LockType.READ)    public Set<Session> getAll() {        return Collections.unmodifiableSet(sessions);    }    @Lock(LockType.WRITE)    public void add(Session session) {        sessions.add(session);    }    @Lock(LockType.WRITE)    public void remove(Session session) {        sessions.remove(session);    } } We could use Collections.synchronizedSet from standard Java libraries but it's a great chance to remember what we described earlier about container-based concurrency. In SessionRegistry, we defined some basic methods to add, get, and remove sessions. For the sake of collection thread safety during retrieval, we return an unmodifiable view. We defined the registry, so now we can move to the endpoint definition. We will need a POJO, which will use our newly defined registry as shown: @ServerEndpoint("/tickets") public class TicketEndpoint {    @Inject    private SessionRegistry sessionRegistry;    @OnOpen    public void open(Session session, EndpointConfig conf) {        sessionRegistry.add(session);    }    @OnClose    public void close(Session session, CloseReason reason) {        sessionRegistry.remove(session);    }    public void send(@Observes Seat seat) {        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendText(toJson(seat)));    }    private String toJson(Seat seat) {        final JsonObject jsonObject = Json.createObjectBuilder()                .add("id", seat.getId())                .add("booked", seat.isBooked())                .build();        return jsonObject.toString();    } } Our endpoint is defined in the /tickets address. We injected a SessionRepository to our endpoint. During @OnOpen, we add Sessions to the registry, and during @OnClose, we just remove them. Message sending is performed on the CDI event (the @Observers annotation), which is already fired in our code during TheatreBox.buyTicket(int). In our send method, we retrieve all sessions from SessionRepository, and for each of them, we asynchronously send information about booked seats. We don't really need information about all the Seat fields to realize this feature. That's the reason why we don't use the automatic JSON serialization here. Instead, we decided to use a minimalistic JSON object, which provides only the required data. To do this, we used the new Java API for JSON Processing (JSR-353). Using a fluent-like API, we're able to create a JSON object and add two fields to it. Then, we just convert JSON to the String, which is sent in a text message. Because in our example we send messages in response to a CDI event, we don't have (in the event handler) an out-of-the-box reference to any of the sessions. We have to use our sessionRegistry object to access the active ones. 
However, if we would like to do the same thing but, for example, in the @OnMessage method, then it is possible to get all active sessions just by executing the session.getOpenSessions() method. These are all the changes required to perform on the backend side. Now, we have to modify our AngularJS frontend to leverage the added feature. The good news is that JavaScript already includes classes that can be used to perform WebSocket communication! There are a few lines of code we have to add inside the module defined in the seat.js file, which are as follows: var ws = new WebSocket("ws://localhost:8080/ticket-agency-websockets/tickets"); ws.onmessage = function (message) {    var receivedData = message.data;    var bookedSeat = JSON.parse(receivedData);    $scope.$apply(function () {        for (var i = 0; i < $scope.seats.length; i++) {           if ($scope.seats[i].id === bookedSeat.id) {                $scope.seats[i].booked = bookedSeat.booked;                break;            }        }    }); }; The code is very simple. We just create the WebSocket object using the URL to our endpoint, and then we define the onmessage function in that object. During the function execution, the received message is automatically parsed from the JSON to JavaScript object. Then, in $scope.$apply, we just iterate through our seats, and if the ID matches, we update the booked state. We have to use $scope.$apply because we are touching an Angular object from outside the Angular world (the onmessage function). Modifications performed on $scope.seats are automatically visible on the website. With this, we can just open our ticket booking website in two browser sessions, and see that when one user buys a ticket, the second users sees almost instantly that the seat state is changed to booked. We can enhance our application a little to inform users if the WebSocket connection is really working. Let's just define onopen and onclose functions for this purpose: ws.onopen = function (event) {    $scope.$apply(function () {        $scope.alerts.push({            type: 'info',            msg: 'Push connection from server is working'        });    }); }; ws.onclose = function (event) {    $scope.$apply(function () {        $scope.alerts.push({            type: 'warning',            msg: 'Error on push connection from server '        });    }); }; To inform users about a connection's state, we push different types of alerts. Of course, again we're touching the Angular world from the outside, so we have to perform all operations on Angular from the $scope.$apply function. Running the described code results in the notification, which is visible in the following screenshot: However, if the server fails after opening the website, you might get an error as shown in the following screenshot: Transforming POJOs to JSON In our current example, we transformed our Seat object to JSON manually. Normally, we don't want to do it this way; there are many libraries that will do the transformation for us. One of them is GSON from Google. Additionally, we can register an encoder/decoder class for a WebSocket endpoint that will do the transformation automatically. Let's look at how we can refactor our current solution to use an encoder. First of all, we must add GSON to our classpath. 
The required Maven dependency is as follows:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.3</version>
</dependency>

Next, we need to provide an implementation of the javax.websocket.Encoder.Text interface. There are also versions of the javax.websocket.Encoder interface for binary and streamed data (for both binary and text formats). A corresponding hierarchy of interfaces is also available for decoders (javax.websocket.Decoder). Our implementation is rather simple. This is shown in the following code snippet:

public class JSONEncoder implements Encoder.Text<Object> {

    private Gson gson;

    @Override
    public void init(EndpointConfig config) {
        gson = new Gson();
    }

    @Override
    public void destroy() {
        // do nothing
    }

    @Override
    public String encode(Object object) throws EncodeException {
        return gson.toJson(object);
    }
}

First, we create an instance of GSON in the init method; this action will be executed when the endpoint is created. Next, in the encode method, which is called every time we send an object through an endpoint, we use GSON to generate the JSON representation of the object. This is quite concise when we think how reusable this little class is. If you want more control over the JSON generation process, you can use the GsonBuilder class to configure the Gson object before it is created. We have the encoder in place. Now it's time to alter our endpoint:

@ServerEndpoint(value = "/tickets", encoders = {JSONEncoder.class})
public class TicketEndpoint {

    @Inject
    private SessionRegistry sessionRegistry;

    @OnOpen
    public void open(Session session, EndpointConfig conf) {
        sessionRegistry.add(session);
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        sessionRegistry.remove(session);
    }

    public void send(@Observes Seat seat) {
        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendObject(seat));
    }
}

The first change is done on the @ServerEndpoint annotation. We have to define a list of supported encoders; we simply pass our JSONEncoder.class wrapped in an array. Additionally, we have to pass the endpoint path using the value attribute. Earlier, we used the sendText method to pass a string containing a manually created JSON. Now, we want to send an object and let the encoder handle the JSON generation; therefore, we'll use the getAsyncRemote().sendObject() method. That's all! Our endpoint is ready to be used. It will work the same as the earlier version, but now our objects will be fully serialized to JSON, so they will contain every field, not only id and booked. After deploying the server, you can connect to the WebSocket endpoint using one of the Chrome extensions, for instance, the Dark WebSocket terminal from the Chrome store (use the ws://localhost:8080/ticket-agency-websockets/tickets address). When you book tickets using the web application, the WebSocket terminal should show the full JSON objects that the server pushes for every booked seat. Of course, it is possible to use formats other than JSON. If you want to achieve better performance (when it comes to the serialization time and payload size), you may want to try out binary serializers such as Kryo (https://github.com/EsotericSoftware/kryo). They may not be supported by JavaScript, but may come in handy if you would like to use WebSockets for other clients also.
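Staying with JSON for a moment: decoding incoming messages has a mirror-image contract. The following is a minimal sketch of a matching decoder built with the same GSON library; decoding into the Seat class is our assumption here, and the permissive willDecode() check can be tightened as needed:

import javax.websocket.DecodeException;
import javax.websocket.Decoder;
import javax.websocket.EndpointConfig;

import com.google.gson.Gson;

public class JSONDecoder implements Decoder.Text<Seat> {

    private Gson gson;

    @Override
    public void init(EndpointConfig config) {
        gson = new Gson();
    }

    @Override
    public void destroy() {
        // do nothing
    }

    @Override
    public boolean willDecode(String s) {
        // Permissive check; a stricter version could attempt a trial parse here.
        return s != null && !s.isEmpty();
    }

    @Override
    public Seat decode(String s) throws DecodeException {
        return gson.fromJson(s, Seat.class);
    }
}

Such a class would be registered through the decoders attribute of @ServerEndpoint, after which an @OnMessage method can declare a Seat parameter directly.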
Tyrus (https://tyrus.java.net/) is a reference implementation of the WebSocket standard for Java; you can use it in your standalone desktop applications. In that case, besides the encoder (which is used to send messages), you would also need a decoder along the lines sketched previously, which can automatically transform incoming messages.

An alternative to WebSockets

The example we presented in this article could also be implemented using an older, lesser-known technology named Server-Sent Events (SSE). SSE allows for one-way communication from the server to the client over HTTP. It is much simpler than WebSockets but has built-in support for things such as automatic reconnection and event identifiers. WebSockets are definitely more powerful, but they are not the only way to pass events, so when you need to implement some notifications from the server side, remember SSE. Another option is to explore the mechanisms oriented around the Comet techniques. Multiple implementations are available and most of them use different methods of transportation to achieve their goals. A comprehensive comparison is available at http://cometdaily.com/maturity.html.

Summary

In this article, we managed to introduce the new low-level type of communication. We presented how it works underneath and how it compares to the SOAP and REST approaches introduced earlier. We also discussed how the new approach changes the development of web applications. Our ticket booking application was further enhanced to show users the changing state of the seats using push-like notifications. The new additions required very few code changes in our existing project when we take into account how much we are able to achieve with them. The fluent integration of WebSockets from Java EE 7 with the AngularJS application is another great showcase of the flexibility that comes with the new version of the Java EE platform.

Resources for Article:

Further resources on this subject:
- Various subsystem configurations [Article]
- Running our first web application [Article]
- Creating Java EE Applications [Article]
A ride through world's best ETL tool – Informatica PowerCenter

Packt
30 Dec 2014
25 min read
In this article, by Rahul Malewar, author of the book, Learning Informatica PowerCenter 9.x, we will go through the basics of Informatica PowerCenter. Informatica Corporation (Informatica), a multi-million dollar company incorporated in February 1993, is an independent provider of enterprise data integration and data quality software and services. The company enables a variety of complex enterprise data integration products, which include PowerCenter, Power Exchange, enterprise data integration, data quality, master data management, business to business (B2B) data exchange, application information lifecycle management, complex event processing, ultra messaging, and cloud data integration. Informatica PowerCenter is the most widely used tool of Informatica across the globe for various data integration processes. Informatica PowerCenter tool helps integration of data from almost any business system in almost any format. This flexibility of PowerCenter to handle almost any data makes it most widely used tool in the data integration world. (For more resources related to this topic, see here.) Informatica PowerCenter architecture PowerCenter has a service-oriented architecture that provides the ability to scale services and share resources across multiple machines. This lets you access the single licensed software installed on a remote machine via multiple machines. High availability functionality helps minimize service downtime due to unexpected failures or scheduled maintenance in the PowerCenter environment. Informatica architecture is divided into two sections: server and client. Server is the basic administrative unit of Informatica where we configure all services, create users, and assign authentication. Repository, nodes, Integration Service, and code page are some of the important services we configure while we work on the server side of Informatica PowerCenter. Client is the graphical interface provided to the users. Client includes PowerCenter Designer, PowerCenter Workflow Manager, PowerCenter Workflow Monitor, and PowerCenter Repository Manager. The best place to download the Informatica software for training purpose is from EDelivery (www.edelivery.com) website of Oracle. Once you download the files, start the extraction of the zipped files. After you finish extraction, install the server first and later client part of PowerCenter. For installation of Informatica PowerCenter, the minimum requirement is to have a database installed in your machine. Because Informatica uses the space from the Oracle database to store the system-related information and the metadata of the code, which you develop in client tool. Informatica PowerCenter client tools Informatica PowerCenter Designer client tool talks about working of the source files and source tables and similarly talks about working on targets. Designer tool allows import/create flat files and relational databases tables. Informatica PowerCenter allows you to work on both types of flat files, that is, delimited and fixed width files. In delimited files, the values are separated from each other by a delimiter. Any character or number can be used as delimiter but usually for better interpretation we use special characters as delimiter. In delimited files, the width of each field is not a mandatory option as each value gets separated by other using a delimiter. In fixed width files, the width of each field is fixed. The values are separated by each other by the fixed size of the column defined. 
There can be issues in extracting the data if the size of each column is not maintained properly. PowerCenter Designer tool allows you to create mappings using sources, targets, and transformations. Mappings contain source, target, and transformations linked to each other through links. The group of transformations which can be reused is called as mapplets. Mapplets are another important aspect of Informatica tool. The transformations are most important aspect of Informatica, which allows you to manipulate the data based on your requirements. There are various types of transformations available in Informatica PowerCenter. Every transformation performs specific functionality. Various transformations in Informatica PowerCenter The following are the various transformations in Informatica PowerCenter: Expression transformation is used for row-wise manipulation. For any type of manipulation you wish to do on an individual record, use Expression transformation. Expression transformation accepts the row-wise data, manipulates it, and passes to the target. The transformation receives the data from input port and it sends the data out from output ports. Use the Expression transformation for any row-wise calculation, like if you want to concatenate the names, get total salary, and convert in upper case. Aggregator transformation is used for calculations using aggregate functions on a column as against in the Expression transformation, which is used for row-wise manipulation. You can use aggregate functions, such as SUM, AVG, MAX, MIN, and so on in Aggregator transformation. When you use Aggregator transformation, Integration Services stores the data temporarily in cache memory. Cache memory is created because the data flows in row-wise manner in Informatica and the calculations required in Aggregator transformation is column wise. Unless we store the data temporarily in cache, we cannot perform the aggregate calculations to get the result. Using Group By option in Aggregator transformation, you can get the result of the Aggregate function based on group. Also it is always recommended that we pass sorted input to Aggregator transformation as this will enhance the performance. When you pass the sorted input to Aggregator transformation, Integration Services enhances the performance by storing less data into cache. When you pass unsorted data, Aggregator transformation stores all the data into cache which takes more time. When you pass the sorted data to Aggregator transformation, Aggregator transformation stores comparatively lesser data in the cache. Aggregator passes the result of each group as soon the data for particular group is received. Note that Aggregator transformation does not sort the data. If you have unsorted data, use Sorter transformation to sort the data and then pass the sorted data to Aggregator transformation. Sorter transformation is used to sort the data in ascending or descending order based on single or multiple key. Apart from ordering the data in ascending or descending order, you can also use Sorter transformation to remove duplicates from the data using the distinct option in the properties. Sorter can remove duplicates only if complete record is duplicate and not only particular column. Filter transformation is used to remove unwanted records from the mapping. You define the Filter condition in the Filter transformation. Based on filter condition, the records will be rejected or passed further in mapping. The default condition in Filter transformation is TRUE. 
Based on the condition defined, if the record returns True, the Filter transformation allows the record to pass. For each record which returns False, the Filter transformation drops those records. It is always recommended to use Filter transformation as early as possible in the mapping for better performance. Router transformation is single input group multiple output group transformation. Router can be used in place of multiple Filter transformations. Router transformation accepts the data once through input group and based on the output groups you define, it sends the data to multiple output ports. You need to define the filter condition in each output group. It is always recommended to use Router in place of multiple filters in the mapping to enhance the performance. Rank transformation is used to get top or bottom specific number of records based on the key. When you create a Rank transformation, a default output port RANKINDEX comes with the transformation. It is not mandatory to use the RANKINDEX port. Sequence Generator transformation is used to generate sequence of unique numbers. Based on the property defined in the Sequence Generator transformation, the unique values are generated. You need to define the start value, the increment by value, and the end value in the properties. Sequence Generator transformation has only two ports: NEXTVAL and CURRVAL. Both the ports are output port. Sequence Generator does not have any input port. You cannot add or delete any port in Sequence Generator. It is recommended that you should always use the NEXTVAL port first. If the NEXTVAL port is utilized, use the CURRVAL port. You can define the value of CURRVAL in the properties of Sequence Generator transformation. Joiner transformation is used to join two heterogeneous sources. You can join data from same source type also. The basic criteria for joining the data are a matching column in both the source. Joiner transformation has two pipelines, one is called mater and other is called as detail. We do not have left or right join as we have in SQL database. It is always recommended to make table with lesser number of record as master and other one as details. This is because Integration Service picks the data from master source and scans the corresponding record in details table. So if we have lesser number of records in master table, lesser number of times the scanning will happen. This enhances the performance. Joiner transformation has four types of joins: normal join, full outer join, master outer join, details outer join. Union transformation is used the merge the data from multiple sources. Union is multiple input single output transformation. This is opposite of Router transformation, which we discussed earlier. The basic criterion for using Union transformation is that you should have data with matching data type. If you do not have data with matching data type coming from multiple sources, Union transformation will not work. Union transformation merges the data coming from multiple sources and do not remove duplicates, that is, it acts as UNION ALL of SQL statements. As mentioned earlier, Union requires data coming from multiple sources. Union reads the data concurrently from multiple sources and processes the data. You can use heterogeneous sources to merge the data using Union transformation. Source Qualifier transformation acts as virtual source in Informatica. When you drag relational table or flat file in Mapping Designer, Source Qualifier transformation comes along. 
Source Qualifier is the point where actually Informatica processing starts. The extraction process starts from the Source Qualifier. Lookup transformation is used to lookup of source, Source Qualifier, or target to get the relevant data. You can look up on flat file or relational tables. Lookup transformation works on the similar lines as Joiner with few differences like Lookup does not require two source. Lookup transformations can be connected and unconnected. Lookup transformation extracts the data from the lookup table or file based on the lookup condition. When you create the Lookup transformation you can configure the Lookup transformation to cache the data. Caching the data makes the processing faster since the data is stored internally after cache is created. Once you select to cache the data, Lookup transformation caches the data from the file or table once and then based on the condition defined, lookup sends the output value. Since the data gets stored internally, the processing becomes faster as it does not require checking the lookup condition in file or database. Integration Services queries the cache memory as against checking the file or table for fetching the required data. The cache is created automatically and also it is deleted automatically once the processing is complete. Lookup transformation has four different types of ports. Input ports (I) receive the data from other transformation. This port will be used in Lookup condition. You need to have at least one input port. Output port (O) passes the data out of the Lookup transformation to other transformations. Lookup port (L) is the port for which you wish to bring the data in mapping. Each column is assigned as lookup and output port when you create the Lookup transformation. If you delete the lookup port from the flat file lookup source, the session will fail. If you delete the lookup port from relational lookup table, Integration Services extracts the data only with Lookup port. This helps in reducing the data extracted from the lookup source. Return port (R) is only used in case of unconnected Lookup transformation. This port indicates which data you wish to return in the Lookup transformation. You can define only one port as return port. Return port is not used in case on connected Lookup transformation. Cache is the temporary memory, which is created when you execute the process. Cache is created automatically when the process starts and is deleted automatically once the process is complete. The amount of cache memory is decided based on the property you define in the transformation level or session level. You usually set the property as default, so as required it can increase the size of the cache. If the size required for caching the data is more than the cache size defined, the process fails with the overflow error. There are different types of caches available for lookup transformation. You can define the session property to create the cache either sequentially or concurrently. When you select to create the cache sequentially, Integration Service caches the data in row-wise manner as the records enters the Lookup transformation. When the first record enters the Lookup transformation, lookup cache gets created and stores the matching record from the lookup table or file in the cache. This way the cache stores only matching data. It helps in saving the cache space by not storing the unnecessary data. 
When you select to create cache concurrently, Integration Service does not wait for the data to flow from the source, but it first caches complete data. Once the caching is complete, it allows the data to flow from the source. When you select concurrent cache, the performance enhances as compared to sequential cache, since the scanning happens internally using the data stored in cache. You can configure the cache to permanently save the data. By default, the cache is created as non-persistent, that is, the cache will be deleted once the session run is complete. If the lookup table or file does not change across the session runs, you can use the existing persistent cache. A cache is said to be static if it does not change with the changes happening in the lookup table. The static cache is not synchronized with the lookup table. By default Integration Service creates a static cache. Lookup cache is created as soon as the first record enters the Lookup transformation. Integration Service does not update the cache while it is processing the data. A cache is said to be dynamic if it changes with the changes happening in the lookup table. The static cache is synchronized with the lookup table. You can choose from the Lookup transformation properties to make the cache as dynamic. Lookup cache is created as soon as the first record enters the lookup transformation. Integration Service keeps on updating the cache while it is processing the data. The Integration Service marks the record as insert for new row inserted in dynamic cache. For the record which is updated, it marks the record as update in the cache. For every record which no change, the Integration Service marks it as unchanged. Update Strategy transformation is used to INSERT, UPDATE, DELETE, or REJECT record based on defined condition in the mapping. Update Strategy transformation is mostly used when you design mappings for SCD. When you implement SCD, you actually decide how you wish to maintain historical data with the current data. When you wish to maintain no history, complete history, or partial history, you can either use property defined in the session task or you use Update Strategy transformation. When you use Session task, you instruct the Integration Service to treat all records in the same way, that is, either insert, update or delete. When you use Update Strategy transformation in the mapping, the control is no more with the session task. Update Strategy transformation allows you to insert, update, delete or reject record based on the requirement. When you use Update Strategy transformation, the control is no more with session task. You need to define the following functions to perform the corresponding operation: DD_INSERT: This can be used when you wish to insert the records. It is also represented by numeric 0. DD_UPDATE: This can be used when you wish to update the records. It is also represented by numeric 1. DD_DELETE: This can be used when you wish to delete the records. It is also represented by numeric 2. DD_REJECT: This can be used when you wish to reject the records. It is also represented by numeric 3. Normalizer transformation is used in place of Source Qualifier transformation when you wish to read the data from Cobol Copybook source. Also, the Normalizer transformation is used to convert column-wise data to row-wise data. This is similar to transpose feature of MS Excel. You can use this feature if your source is Cobol Copybook file or relational database tables. 
Normalizer transformation converts column to row and also generate index for each converted row. Stored procedure is a database component. Informatica uses the stored procedure similar to database tables. Stored procedures are set of SQL instructions, which require certain set of input values and in return stored procedure returns output value. The way you either import or create database tables, you can import or create the stored procedure in mapping. To use the Stored Procedure in mapping the stored procedure should exist in the database. Similar to Lookup transformation, stored procedure can also be connected or unconnected transformation in Informatica. When you use connected stored procedure, you pass the value to stored procedure through links. When you use unconnected stored procedure, you pass the value using :SP function. Transaction Control transformation allows you to commit or rollback individual records, based on certain condition. By default, Integration Service commits the data based on the properties you define at the session task level. Using the commit interval property Integration Service commits or rollback the data into target. Suppose you define commit interval as 10,000, Integration Service will commit the data after every 10,000 records. When you use Transaction Control transformation, you get the control at each record to commit or rollback. When you use Transaction Control transformation, you need to define the condition in expression editor of the Transaction Control transformation. When you run the process, the data enters the Transaction Control transformation in row-wise manner. The Transaction Control transformation evaluates each row, based on which it commits or rollback the data. Classification of Transformations The transformations, which we discussed are classified into two categories—active/passive and connected/unconnected. Active/Passive classification of transformations is based on the number of records at the input and output port of the transformation. If the transformation does not change the number of records at its input and output port, it is said to be passive transformation. If the transformation changes the number of records at the input and output port of the transformation, it is said to be active transformation. Also if the transformation changes the sequence of records passing through it, it will be an active transformation as in case of Union transformation. A transformation is said to be connected if it is connected to any source or any target or any other transformation by at least a link. If the transformation is not connected by any link is it classed as unconnected. Only Lookup and stored procedure transformations can be connected and unconnected, rest all transformations are connected. Advanced Features of designer screen Talking about the advanced features of PowerCenter Designer tool, debugger helps you to debug the mappings to find the error in your code. Informatica PowerCenter provides a utility called as debugger to debug the mapping so that you can easily find the issue in the mapping which you created. Using the debugger, you can see the flow of every record across the transformations. Another feature is target load plan, a functionality which allows you to load data in multiple targets in a same mapping maintaining their constraints. The reusable transformations are transformations which allow you to reuse the transformations across multiple mapping. 
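To make the Transaction Control behavior concrete, the commit-or-rollback decision is coded as an expression in the transformation's expression editor, using the built-in TC_ variables. The following one-liner is a hedged sketch in Informatica's expression language; the ports dept_id and v_prev_dept_id are assumed names, and the intent is to commit whenever rows for a new department begin:

IIF(dept_id != v_prev_dept_id, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)

Every row for which the expression returns TC_CONTINUE_TRANSACTION simply stays in the open transaction; a row that returns TC_COMMIT_BEFORE causes Integration Service to commit everything received so far before writing that row.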
As source and target are reusable components, transformations can also be reused. When you work on any technology, it is always advised that your code should be dynamic. This means you should use the hard coded values as less as possible in your code. It is always recommended that you use the parameters or the variable in your code so you can easily pass these values and need not frequently change the code. This functionality is achieved by using parameter file in Informatica. The value of a variable can change between the session run. The value of parameter will remain constant across the session runs. The difference is very minute so you should define parameter or variable properly as per your requirements. Informatica PowerCenter allows you to compare objects present within repository. You can compare sources, targets, transformations, mapplets, and mappings in PowerCenter Designer under Source Analyzer, Target Designer, Transformation Developer, Mapplet Designer, Mapping Designer respectively. You can compare the objects in the same repository or in multiple repositories. Tracing level in Informatica defines the amount of data you wish to write in the session log when you execute the workflow. Tracing level is a very important aspect in Informatica as it helps in analyzing the error. Tracing level is very helpful in finding the bugs in the process. You can define tracing level in every transformation. Tracing level option is present in every transformation properties window. There are four types of tracing level available: Normal: When you set the tracing level as normal, Informatica stores status information, information about errors, and information about skipped rows. You get detailed information but not at individual row level. Terse: When you set the tracing level as terse, Informatica stores error information and information of rejected records. Terse tracing level occupies lesser space as compared to normal. Verbose initialization: When you set the tracing level as verbose initialize, it stores process details related to startup, details about index and data files created and more details of transformation process in addition to details stored in normal tracing. This tracing level takes more space as compared to normal and terse. Verbose data: This is the most detailed level of tracing level. It occupies more space and takes longer time as compared to other three. It stores row level data in the session log. It writes the truncation information whenever it truncates the data. It also writes the data to error log if you enable row error logging. Default tracing level is normal. You can change the tracing level to terse to enhance the performance. Tracing level can be defined at individual transformation level or you can override the tracing level by defining it at session level. Informatica PowerCenter Workflow Manager Workflow Manager screen is the second and last phase of our development work. In the Workflow Manager session task and workflows are created, which is used to execute mapping. Workflow Manager screen allows you to work on various connections like relations, FTP, and so on. Basically, Workflow Manager contains set of instructions which we define as workflow. The basic building block of workflow is tasks. As we have multiple transformations in designer screen, we have multiple tasks in Workflow Manager Screen. When you create a workflow, you add tasks into it as per your requirement and execute the workflow to see the status in the monitor. 
Workflow is a combination of multiple tasks connected with links that trigger in a proper sequence to execute a process. Every workflow contains a start task along with other tasks. When you execute the workflow, you actually trigger the start task, which in turn triggers the other tasks connected in the flow. Every task performs a specific functionality; you need to use the task based on the functionality you wish to achieve.

Various tasks in Workflow Manager

The following are the tasks in Workflow Manager:

- Session task is used to execute the mapping. Every session task can execute a single mapping. You need to define the path/connection of the source and target used in the mapping, so the session can extract the data from the defined path and send the data to the mapping for processing.
- Email task is used to send success or failure email notifications. You can configure your Outlook or mailbox with the email task to directly send the notification.
- Command task is used to execute Unix scripts/commands or Windows commands.
- Timer task is used to add some time gap or delay between two tasks. The timer task has properties related to absolute time and relative time.
- Assignment task is used to assign a value to a workflow variable.
- Control task is used to control the flow of the workflow by stopping or aborting the workflow in case of some error. You can control the flow of the complete workflow using the control task.
- Decision task is used to check the status of multiple tasks and hence control the execution of the workflow. A link, as against the decision task, can only check the status of the previous task.
- Event wait task is used to wait for a particular event to occur. Usually it is used as a file watcher task; using the event wait task, we can keep looking for a particular file and then trigger the next task.
- Event raise task is used to trigger a particular event defined in the workflow.

Advanced Workflow Manager

The Workflow Manager screen has some very important features called scheduling and incremental aggregation, which allow easier and more convenient processing of data. Scheduling allows you to schedule the workflow at a specified timing so the workflow runs at the desired time; you need not manually run the workflow every time, as the schedule can do the needful. Incremental aggregation and partitioning are advanced features which allow you to process the data faster. When you run the workflow, Integration Service extracts the data in a row-wise manner from the source path/connection you defined in the session task and makes it flow through the mapping. The data reaches the target through the transformations you defined in the mapping. The data always flows in a row-wise manner in Informatica, no matter what your calculation or manipulation is. So if you have 10 records in the source, there will be 10 source-to-target flows while the process is executed.

Informatica PowerCenter Workflow Monitor

The Workflow Monitor screen allows the monitoring of the workflows executed in Workflow Manager. The Workflow Monitor screen allows you to check the status and log files for the workflow. Using the logs generated, you can easily find and rectify the error. Workflow Monitor also shows the statistics for the number of records extracted from the source and the number of records loaded into the target. It also gives statistics of error records and bad records.

Informatica PowerCenter Repository Manager

Repository Manager screen is the fourth client screen, which is basically used for migration (deployment) purpose.
The Repository Manager screen is also used for some administration-related activities, such as configuring the server with the client and creating users.

Performance Tuning in Informatica PowerCenter
Performance tuning covers the optimization of the various components of the Informatica PowerCenter tool, such as sources, targets, mappings, sessions, and systems. At a high level, it involves two stages: finding the issues, called bottlenecks, and resolving them. Informatica PowerCenter has features such as pushdown optimization and partitioning for better performance. With well-defined steps and coding best practices, performance can be enhanced drastically.

Slowly Changing Dimensions
Using your understanding of the different client tools, you can implement the data warehousing concept called SCD: slowly changing dimensions. Informatica PowerCenter provides wizards that allow you to easily create the different types of SCDs, that is, SCD1, SCD2, and SCD3.

Type 1 Dimension mapping (SCD1): It keeps only current data and does not maintain historical data.
Type 2 Dimension/Version Number mapping (SCD2): It keeps current as well as historical data in the table. SCD2 allows you to insert new and changed records using a new column (PM_VERSION_NUMBER) that maintains a version number in the table to track the changes. We use a new column, PM_PRIMARYKEY, to maintain the history.
Type 2 Dimension/Flag mapping: It keeps current as well as historical data in the table, inserting new and changed records using a new column (PM_CURRENT_FLAG) that maintains a flag in the table to track the changes. We use a new column, PM_PRIMARYKEY, to maintain the history.
Type 2 Dimension/Effective Date Range mapping: It keeps current as well as historical data in the table, inserting new and changed records using two new columns (PM_BEGIN_DATE and PM_END_DATE) that maintain a date range in the table to track the changes. We use a new column, PM_PRIMARYKEY, to maintain the history.
Type 3 Dimension mapping: It keeps current as well as historical data in the table; we maintain only partial history by adding a new column.

Summary
With this, we have discussed the complete PowerCenter tool in brief. PowerCenter is a good fit for data of any size and type, and it provides compatibility with a wide range of files and databases for processing purposes. The available transformations allow you to manipulate any type of data in any form you wish, and the advanced features make your work simpler by providing convenient options. PowerCenter can make your life easy and can offer you a great career path if you learn it properly, as the Informatica PowerCenter tool is in huge demand in the job market and is one of the highly paid technologies in the IT market. Just grab a book and start walking the path; the end will be a great career. We are always available to help: for any installation queries or issues related to PowerCenter, you can reach me at [email protected].

Resources for Article:
Further resources on this subject:
Building Mobile Apps [article]
Adding a Geolocation Trigger to the Salesforce Account Object [article]
Introducing SproutCore [article]

Using PhpStorm in a Team

Packt
26 Dec 2014
11 min read
In this article by Mukund Chaudhary and Ankur Kumar, authors of the book PhpStorm Cookbook, we will cover the following recipes:

Getting a VCS server
Creating a VCS repository
Connecting PhpStorm to a VCS repository
Storing a PhpStorm project in a VCS repository

(For more resources related to this topic, see here.)

Getting a VCS server
The first action that you have to undertake is to decide which VCS you are going to use. There are a number of systems available, such as Git and Subversion (commonly known as SVN); both are free and open source software that you can download and install on your development server. There is also an older system named Concurrent Versions System (CVS). Both SVN and CVS are meant to provide a code versioning service to you; SVN is the newer and supposedly faster system, so in order to give you information on the latest matters, this text will concentrate on the features of Subversion only.

Getting ready
So, finally that moment has arrived when you will start off working in a team by getting a VCS system for you and your team. The installation of SVN on the development system can be done in two ways: easy and difficult. The difficult way can be skipped without consideration, because it is meant for developers who want to contribute to the Subversion project itself. Since you are dealing with PhpStorm, you need only remember the easier way, because you have a lot more to do.

How to do it...
The installation step is very easy. There is the aptitude utility available with Debian-based systems, and there is the Yum utility available with Red Hat-based systems. Perform the following steps:

You just need to issue the command apt-get install subversion. The operating system's package manager will do the remaining work for you. In a very short time, after flooding the command-line console with messages, you will have the Subversion system installed.
To check whether the installation was successful, issue the command whereis svn. If the output shows a path to the svn binary, it means that you installed Subversion successfully.

If you do not want to bear the load of installing Subversion on your development system, you can use commercial third-party servers. But that is more of a layman's approach to solving problems, and no PhpStorm cookbook author will recommend that you do that. You are a software engineer; you should not let go so easily.

How it works...
When you install the version control system, you actually install a server that provides the version control service to a version control client. The Subversion service listens for incoming connections from remote clients on port number 3690 by default.

There's more...
If you want to install the older companion, CVS, you can do that in a similar way, as shown in the following steps:

Download the archive for the CVS server software.
Unpack it from the archive using your favorite unpacking software.
Move it to another convenient location, since you will not need to disturb this folder in the future.
Move into the directory; this is where your compilation process will start.
Run ./configure to create the make targets.
Having created the targets, enter make install to complete the installation procedure.

Due to it being older software, you might have to compile it from the source code as the only alternative.
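Before moving on to create a repository, a quick sanity check that both the client and the administration tools are on your PATH can save time later; the exact version output will vary by distribution:

svn --version
svnadmin --version

Once svnserve is running (we will start it in the next recipe), a command such as netstat -tln | grep 3690 should show the service listening on its default port.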
Creating a VCS repository
More often than not, a PHP programmer is expected to know some system concepts, because it is often required to change settings for the PHP interpreter. The changes could be in the form of, say, changing the execution time or adding/removing modules, and so on. In order to start working in a team, you are going to get your hands dirty with system actions.

Getting ready
You will have to create a new repository on the development server so that PhpStorm can act as a client and get connected. Here, it is important to note the difference between an SVN client and an SVN server—an SVN client can be any of these: a standalone client or an embedded client such as an IDE. The SVN server, on the other hand, is a single item. It is a continuously running process on a server of your choice.

How to do it...
You need to be careful while performing this activity, as a single mistake can ruin your efforts. Perform the following steps:

There is a command, svnadmin, that you need to know. Using this command, you can create a new directory on the server that will contain the code base. You should be careful when selecting a directory on the server, as it will appear in your SVN URL from then on. The command should be executed as:
svnadmin create /path/to/your/repo/
Having created a new repository on the server, you need to make certain settings for the server. This is just a normal phenomenon, because every server requires a configuration. The SVN server configuration is located under /path/to/your/repo/conf/ with the name svnserve.conf. Inside the file, you need to make three changes by adding these lines at the bottom:
anon-access = none
auth-access = write
password-db = passwd
There has to be a password file to authorize the list of users who will be allowed to use the repository. The password file in this case will be named passwd (the default filename). The contents of the file are a number of lines, each containing a username and the corresponding password in the form username = password. Since these files are scanned by the server according to a particular algorithm, you don't have the freedom to leave deliberate spaces in the file—error messages will be displayed in those cases.
Having made the appropriate settings, you can now start the SVN service so that an SVN client can access it. Issue the command svnserve -d to do that.
It is always good practice to keep checking whether what you have done is correct. To validate a proper installation, issue the command svn ls svn://user@host/path/to/subversion/repo/. The output will be as shown in the following screenshot:

How it works...
The svnadmin command is used to perform admin tasks on the Subversion server. The create option creates a new folder on the server that acts as the repository for access from Subversion clients. The configuration file is created by default at the time of server installation; the contents added to it are the configuration directives that control the behavior of the Subversion server. Thus, the settings mentioned here prevent anonymous access and restrict write operations to certain users whose access details are mentioned in a file. The command svnserve again needs to be run on the server side, and it starts an instance of the server. The -d switch specifies that the server should run as a daemon (system process).
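To make this concrete, the following is a minimal sketch of the two files described in this recipe; the usernames and passwords are placeholders. Note that svnserve.conf groups its directives under a [general] section, and the passwd file starts with a [users] section header:

# /path/to/your/repo/conf/svnserve.conf
[general]
anon-access = none
auth-access = write
password-db = passwd

# /path/to/your/repo/conf/passwd
[users]
alice = alicesecret
bob = bobsecret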
Running svnserve as a daemon also means that your server will continue running until you manually stop it or the entire system goes down. Again, you can skip this section if you have opted for a third-party version control service provider.

Connecting PhpStorm to a VCS repository
The real utility of software is when you use it. So, having installed the version control system, you need to be prepared to use it.

Getting ready
SVN being client-server software, having installed the server, you now need a client. You might have difficulty searching for a good standalone SVN client; don't worry, the client has been factory-provided to you inside PhpStorm. The PhpStorm SVN client provides you with features that accelerate your development by giving you detailed information about the changes made to the code. So, go ahead and connect PhpStorm to the Subversion repository you created.

How to do it...
In order to connect PhpStorm to the Subversion repository, you need to activate the Subversion view. It is available at View | Tool Windows | Svn Repositories. Perform the following steps:

Having activated the Subversion view, you now need to add the repository location to PhpStorm. To do that, use the + symbol in the top-left corner of the view you have opened, as shown in the following screenshot:
Upon selecting the Add option, PhpStorm asks for the location of the repository. You need to provide the full location of the repository.
Once you provide the location, you will be able to see the repository in the same Subversion view in which you pressed the Add button.

Here, you should always keep in mind the correct protocol to use. This depends on the way you installed the Subversion system on the development machine. If you used the default installation by installing from the installer utility (apt-get or aptitude), you need to specify svn://. If you have configured SVN to be accessible via SSH, you need to specify svn+ssh://. If you have explicitly configured SVN to be used with the Apache web server, you need to specify http://. If you configured SVN with Apache over the secure protocol, you need to specify https://.

Storing a PhpStorm project in a VCS repository
Here comes the actual start of the teamwork. Even if you and your other team members have connected to the repository, what advantage does it serve? What purpose is served by merely connecting to the version control repository? Correct—the actual thing is the code that you work on. It is the code that earns you your bread.

Getting ready
You should now store a project in the Subversion repository so that the other team members can work on it and add more features to your code. It is time to add a project to version control. You don't need to start a new project from scratch to add to the repository: any project, any work that you have done and wish to have the team work on, can be added to the repository. Since the most relevant project in the current context is the cooking project, you can try adding that. There you go.

How to do it...
In order to add a project to the repository, perform the following steps:

Use the menu item provided at VCS | Import into version control | Share project (subversion). PhpStorm will ask you a question, as shown in the following screenshot:
Select the correct hierarchy to define the share target—the correct location where your project will be saved.
If you wish to create tags and branches in the code base, select the corresponding checkbox. It is good practice to provide comments for the commits that you make; the reason becomes apparent when you sit down to create a release document, and it also makes each change more understandable for the other team members.
PhpStorm then asks you which format you want the working copy to be in. This is related to the version of the version control software; you just need to smile, select the latest version number, and proceed, as shown in the following screenshot:
Having done that, PhpStorm will now ask you to enter your credentials. You need to enter the same credentials that you saved in the configuration file (see the Creating a VCS repository recipe) or the credentials that your service provider gave you. You can ask PhpStorm to save the credentials for you, as shown in the following screenshot:

How it works...
Here, it is worth understanding what is going on behind the curtains. When you do any Subversion-related task in PhpStorm, there is an inbuilt SVN client that executes the commands for you. Thus, when you add a project to version control, the code is given a version number. This makes the version system remember the state of the code base. In other words, when you add the code base to version control, you add a checkpoint that you can revisit at any point in the future, for as long as the code base is under the same version control system. Interesting phenomenon, isn't it?

There's more...
If you installed the version control software yourself and did not configure it to store passwords in encrypted form, PhpStorm will warn you about it, as shown in the following screenshot:

Summary
We got to know about version control systems, the step-by-step process of creating a VCS repository, and connecting PhpStorm to a VCS repository.

Resources for Article:
Further resources on this subject:
FuelPHP [article]
A look into the high-level programming operations for the PHP language [article]
PHP Web 2.0 Mashup Projects: Your Own Video Jukebox: Part 1 [article]
Using Frameworks

Packt
24 Dec 2014
24 min read
In this article by Alex Libby, author of the book Responsive Media in HTML5, we will cover the following topics:

Adding responsive media to a CMS
Implementing responsive media in frameworks such as Twitter Bootstrap
Using the Less CSS preprocessor to create CSS media queries

Ready? Let's make a start!

(For more resources related to this topic, see here.)

Introducing our three examples
Throughout this article, we've covered a number of simple, practical techniques to make media responsive within our sites—these are good, but nothing beats seeing these principles used in a real-world context, right? Absolutely! To prove this, we're going to look at three examples throughout this article, using technologies that you are likely to be familiar with: WordPress, Bootstrap, and Less CSS. Each demo will assume a certain level of prior knowledge, so it may be worth reading up a little first. In all three cases, we should see that, with little effort, we can easily add responsive media to each of these technologies. Let's kick off with a look at working with WordPress.

Adding responsive media to a CMS
We will begin the first of our three examples with a look at using the ever-popular WordPress system. Created back in 2003, WordPress has been used to host sites by small independent traders all the way up to Fortune 500 companies—this includes some of the biggest names in business, such as eBay, UPS, and Ford. WordPress comes in two flavors; the one we're interested in is the self-install version available at http://www.wordpress.org.

This example assumes you have a local installation of WordPress installed and working; if not, head over to http://codex.wordpress.org/Installing_WordPress and follow the tutorial to get started. We will also need a DOM inspector such as Firebug installed; it can be downloaded from http://www.getfirebug.com if you need it. If you only have access to WordPress.com (the other flavor of WordPress), some of the tips in this section may not work, due to limitations in that version of WordPress.

Okay, assuming we have WordPress set up and running, let's make a start on making uploaded media responsive.

Adding responsive media manually
It's at this point that you're probably thinking we have to do something complex when working in WordPress, right? Wrong! As long as you use the Twenty Fourteen core theme, the work has already been done for you. For this exercise and the following sections, I will assume you have installed and/or activated WordPress' Twenty Fourteen theme.

Don't believe me? It's easy to verify: try uploading an image to a post or page in WordPress. Resize the browser—you should see the image shrink or grow in size as the browser window changes size. If we take a look at the code elsewhere using Firebug, we can also see height: auto set against a number of the img tags; this is frequently done for responsive images to ensure that they maintain the correct proportions.

The responsive style seems to work well in the Twenty Fourteen theme; if you are using an older theme, we can easily apply the same style rule to images stored in WordPress when using that theme.

Fixing a responsive issue
So far, so good. Now we have the Twenty Fourteen theme in place, we've uploaded images of various sizes, and we try resizing the browser window... only to find that the images don't seem to grow in size above a certain point. At least not well—what gives?
Well, it's a classic trap: we've talked about using percentage values to dynamically resize images, only to find that we've shot ourselves in the foot (proverbially speaking, of course!). The reason? Let's dive in and find out using the following steps:

Browse to your WordPress installation and activate Firebug using F12.
Switch to the HTML tab and select your preferred image.
In Firebug, look for the <header class="entry-header"> line, then look for the following line in the rendered styles on the right-hand side of the window:
.site-content .entry-header, .site-content .entry-content, .site-content .entry-summary, .site-content .entry-meta, .page-content { margin: 0 auto; max-width: 474px; }
The keen-eyed amongst you should hopefully spot the issue straightaway—we're using percentages to make the sizes dynamic for each image, yet we're constraining its parent container! To fix this, change the highlighted line as indicated:
.site-content .entry-header, .site-content .entry-content, .site-content .entry-summary, .site-content .entry-meta, .page-content { margin: 0 auto; max-width: 100%; }
To balance the content, we need to make the same change to the comments area, so go ahead and change max-width to 100% as follows:
.comments-area { margin: 48px auto; max-width: 100%; padding: 0 10px; }

If we try resizing the browser window now, we should see the image size adjust automatically. At this stage, the change is not permanent. To fix this, log in to WordPress' admin area, go to Appearance | Editor, and add the adjusted styles at the foot of the Stylesheet (style.css) file.

Let's move on. Did anyone notice two rather critical issues with the approach used here? Hopefully, you spotted that if a large image is used and then resized to a smaller size, we're still working with a large file, and that the alteration has a big impact on the theme, even though it is only a small change. Even though it proves that we can make images truly responsive, it is the kind of change that we would not want to make without careful consideration and plenty of testing. Making changes directly to the CSS style sheet is also not ideal; they could be lost when upgrading to a newer version of the theme. We can improve on this by either using a custom CSS plugin to manage these changes or (better) using a plugin that tells WordPress to swap an existing image for a smaller one automatically if we resize the window to a smaller size.

Using plugins to add responsive images
A drawback, though, of using a theme such as Twenty Fourteen is the resizing of images. While we can grow or shrink an image when resizing the browser window, we are still technically altering the size of what could potentially be an unnecessarily large image! This is considered bad practice (and also bad manners!)—browsing on a desktop with a fast Internet connection might not have too much of an impact, but the same cannot be said for mobile devices, where we have less choice.

To overcome this, we need to take a different approach—get WordPress to automatically swap in smaller images when we reach a particular size or breakpoint. Instead of doing this manually using code, we can take advantage of one of the many plugins available that offer responsive capabilities in some format. I feel a demo coming on; now's a good time to take a look at one such plugin in action. Let's start by downloading our plugin.
For this exercise, we'll use the PictureFill.WP plugin by Kyle Ricks, which is available at https://wordpress.org/plugins/picturefillwp/. We're going to use the version that uses Picturefill.js version 2, which is available to download from https://github.com/kylereicks/picturefill.js.wp/tree/master. Click on Download ZIP to get the latest version.
Log in to the admin area of your WordPress installation and click on Settings and then Media. Make sure your image settings for the Thumbnail, Medium, and Large sizes are set to values that work with useful breakpoints in your design.
We then need to install the plugin. In the admin area, go to Plugins | Add New to install the plugin, and activate it in the normal manner.

At this point, we will have installed responsive capabilities in WordPress—everything is managed automatically by the plugin; there is no need to change any settings (except maybe the image sizes we talked about in step 2).

Switch back to your WordPress frontend and try resizing the screen to a smaller size. Press F12 to activate Firebug and switch to the HTML tab. Press Ctrl + Shift + C (or Cmd + Shift + C for Mac users) to toggle the element inspector, then move your mouse over your resized image. If we've set the right image sizes in WordPress' admin area and the window is resized correctly, we can expect to see something like the following screenshot:

To confirm we are indeed using a smaller image, right-click on the image and select View Image Info; it will display something akin to the following screenshot:

We should now have a fully functioning plugin within our WordPress installation. A good tip is to test this thoroughly, if only to ensure we've set the right sizes for our breakpoints in WordPress!

What happens if WordPress doesn't refresh my thumbnail images properly? This can happen. If you find this happening, get hold of and install the Regenerate Thumbnails plugin to resolve the issue; it's available at https://wordpress.org/plugins/regenerate-thumbnails/.

Adding responsive videos using plugins
Now that we can add responsive images to WordPress, let's turn our attention to videos. The process of adding them is a little more complex; we need to use code to achieve the best effect. Let's examine our options.

If you are hosting your own videos, the simplest way is to add some additional CSS style rules. Although this removes any reliance on JavaScript or jQuery, the result isn't perfect and will need additional styles to handle the repositioning of the play button overlay. Although we are working locally, we should remember the note from earlier in this article: changes to the CSS style sheet may be lost when upgrading, so a custom CSS plugin should be used, if possible, to retain any changes.

A CSS-only solution requires only a couple of steps:

Browse to your WordPress theme folder and open a copy of styles.css in your text editor of choice.
Add the following lines at the end of the file and save it:
video { width: 100%; height: 100%; max-width: 100%; }
.wp-video { width: 100% !important; }
.wp-video-shortcode { width: 100% !important; }
Close the file. You now have the basics in place for responsive videos.

At this stage, you're probably thinking, "great, my videos are now responsive. I can handle the repositioning of the play button overlay myself, no problem"; sounds about right? Thought so, and therein lies the main drawback of this method! Repositioning the overlay shouldn't be too difficult.
The real problem is the high cost of the hardware and bandwidth needed to host videos of any reasonable quality; even if we were to spend time repositioning the overlay, the high costs would outweigh any benefit of using a CSS-only solution. A far better option is to let a service such as YouTube do all the hard work for you and simply embed your chosen video directly from YouTube into your pages. The main benefit of this is that YouTube's servers do all the hard work: you can take advantage of an increased audience, and YouTube will automatically optimize the video for the best resolution possible for the Internet connections being used by your visitors.

Although aimed at beginners, wpbeginner.com has a useful article, located at http://www.wpbeginner.com/beginners-guide/why-you-should-never-upload-a-video-to-wordpress/, on the pros and cons of self-hosting videos and why using an external service is preferable.

Using plugins to embed videos
Embedding videos from an external service into WordPress is, ironically, far simpler than using the CSS method. There are dozens of plugins available to achieve this, but one of the simplest to use (and my personal favorite) is FluidVids, by Todd Motto, available at http://github.com/toddmotto/fluidvids/. To get it working in WordPress, we need to follow these steps, using a video from YouTube as the basis for our example:

Browse to your WordPress theme folder and open a copy of functions.php in your usual text editor.
At the bottom, add the following lines:
add_action ( 'wp_enqueue_scripts', 'add_fluidvid' );

function add_fluidvid() {
  wp_enqueue_script( 'fluidvids', get_stylesheet_directory_uri() . '/lib/js/fluidvids.js', array(), false, true );
}
Save the file, then log in to the admin area of your WordPress installation.
Navigate to Posts | Add New to add a post, switch to the Text tab of your post editor, then add http://www.youtube.com/watch?v=Vpg9yizPP_g&hd=1 to the editor on the page.
Click on Update to save your post, then click on View post to see the video in action.

There is no need to configure WordPress further—any video added from services such as YouTube or Vimeo will automatically be made responsive by the FluidVids plugin. At this point, try resizing the browser window. If all is well, we should see the video shrink or grow in size, depending on how the browser window has been resized:

To prove that the code is working, we can take a peek at the compiled results within Firebug. We will see something akin to the following screenshot:

For those of us who are not feeling quite so brave (!), there is fortunately a WordPress plugin available that will achieve the same results without configuration. It's available at https://wordpress.org/plugins/fluidvids/ and can be downloaded and installed using the normal process for WordPress plugins.

Let's change track and move on to our next demo. I feel a need to get stuck into some coding, so let's take a look at how we can implement responsive images in frameworks such as Bootstrap.

Implementing responsive media in Bootstrap
A question—as developers, hands up if you have not heard of Bootstrap? Good—not too many hands going down. Why have I asked this question, I hear you say? Easy—it's to illustrate that in popular frameworks (such as Bootstrap), it is easy to add basic responsive capabilities to media such as images or video. The exact process may differ from framework to framework, but the result is likely to be very similar.
To see what I mean, let's take a look at using Bootstrap for our second demo, where we'll see just how easy it is to add images and video to our Bootstrap-enabled site. If you would like to explore some of the free Bootstrap templates that are available, then http://www.startbootstrap.com/ is well worth a visit!

Using Bootstrap's CSS classes
Making images and videos responsive in Bootstrap uses a slightly different approach to what we've examined so far: we don't have to define each style property explicitly, but instead simply add the appropriate class to the media HTML for it to render responsively. For the purposes of this demo, we'll use an edited version of the Blog Page example, available at http://www.getbootstrap.com/getting-started/#examples; a copy of the edited version is available in the code download that accompanies this article.

Before we begin, go ahead and download a copy of the Bootstrap Example folder that is in the code download. Inside, you'll find the CSS, image, and JavaScript files needed, along with our HTML markup file. Now that we have our files, the following is a screenshot of what we're going to achieve over the course of our demo:

Let's make a start on our example using the following steps:

Open up bootstrap.html and look for the following lines (around lines 34 to 35):
<p class="blog-post-meta">January 1, 2014 by <a href="#">Mark</a></p>
<p>This blog post shows a few different types of content that's supported and styled with Bootstrap. Basic typography, images, and code are all supported.</p>
Immediately below, add the following code—this contains the markup for our embedded video, using Bootstrap's responsive CSS styling:
<div class="bs-example">
  <div class="embed-responsive embed-responsive-16by9">
    <iframe allowfullscreen="" src="http://www.youtube.com/embed/zpOULjyy-n8?rel=0" class="embed-responsive-item"></iframe>
  </div>
</div>
With the video now styled, let's go ahead and add an image—this will go in the About section on the right. Look for these lines, on or around lines 74 and 75:
<h4>About</h4>
<p>Etiam porta <em>sem malesuada magna</em> mollis euismod. Cras mattis consectetur purus sit amet fermentum. Aenean lacinia bibendum nulla sed consectetur.</p>
Immediately below, add the following markup for our image:
<a href="#" class="thumbnail">
  <img src="http://placehold.it/350x150" class="img-responsive">
</a>
Save the file and preview the results in a browser. If all is well, we will see our video and image appear, as shown at the start of our demo.

At this point, try resizing the browser—you should see the video and placeholder image shrink or grow as the window is resized. The great thing about Bootstrap is that the right styles have already been set for each class. All we need to do is apply the correct class to the appropriate media file—.embed-responsive embed-responsive-16by9 for videos or .img-responsive for images—for that image or video to behave responsively within our site. In this example, we used Bootstrap's .img-responsive class in the code; if we have a lot of images, we could consider using img { max-width: 100%; height: auto; } instead.

So far, we've worked with two popular frameworks in the form of WordPress and Bootstrap. This is great, but it can mean getting stuck into a lot of CSS styling, particularly if we're working with media queries, as we saw earlier in the article! Can we do anything about this? Absolutely!
It's time for a brief look at CSS preprocessing and how it can help with adding responsive media to our pages.

Using Less CSS to create responsive content
Working with frameworks often means getting stuck into a lot of CSS styling; this can become awkward to manage if we're not careful! To help with this, and for our third scenario, we're going back to basics to work on an alternative way of rendering CSS, using the Less CSS preprocessing language. Why? Well, as a superset (or extension) of CSS, Less allows us to write our styles more efficiently; it then compiles them into valid CSS. The aim of this example is to show that if you're already using Less, we can still apply the same principles covered throughout this article to make our content responsive. It should be noted that this exercise assumes a certain level of prior experience with Less; if this is your first time, you may like to peruse my article, Learning Less, by Packt Publishing.

There will be a few steps involved in making the changes, so the following screenshot gives a heads-up on what it will look like once we've finished:

If you expect no change in appearance, you would be right—working with Less is all about writing CSS more efficiently, not altering the design. Let's see what is involved:

We'll start by extracting a copy of the Less CSS example from the code download that accompanies this article—inside it, we'll find our HTML markup, reset style sheet, images, and the video needed for our demo. Save the folder locally to your PC.
Next, add the following styles to a new file, saving it as responsive.less in the css subfolder—we'll start with some of the styling for the base elements, such as the video and banner:
#wrapper { width: 96%; max-width: 45rem; margin: auto; padding: 2% }
#main { width: 60%; margin-right: 5%; float: left }
#video-wrapper video { max-width: 100%; }
#banner { background-image: url('../img/abstract-banner-large.jpg'); height: 15.31rem; width: 45.5rem; max-width: 100%; float: left; margin-bottom: 15px; }
#skipTo { display: none; li { background: #197a8a }; }
p { font-family: "Droid Sans", sans-serif; }
aside { width: 35%; float: right; }
footer { border-top: 1px solid #ccc; clear: both; height: 30px; padding-top: 5px; }
We need to add some basic formatting styles for images and links, so go ahead and add the following, immediately below the #skipTo rule:
a { text-decoration: none; text-transform: uppercase }
a, img { border: medium none; color: #000; font-weight: bold; outline: medium none; }
Next up comes the navigation for our page. These styles control the main navigation and the Skip To… link that appears when viewed on smaller devices. Go ahead and add these style rules immediately below the rules for a and img:
header {
  font-family: 'Droid Sans', sans-serif;
  h1 { height: 70px; float: left; display: block; font-weight: 700; font-size: 2rem; }
  nav {
    float: right; margin-top: 40px; height: 22px; border-radius: 4px;
    li { display: inline; margin-left: 15px; }
    ul { font-weight: 400; font-size: 1.1rem; }
    a {
      padding: 5px 5px 5px 5px;
      &:hover { background-color: #27a7bd; color: #fff; border-radius: 4px; }
    }
  }
}
We need to add the media query that controls the display for smaller devices, so go ahead and add the following to a new file and save it as media.less in the css subfolder.
We'll start by setting the screen size for our media query:
@smallscreen: ~"screen and (max-width: 30rem)";

@media @smallscreen {
  p { font-family: "Droid Sans", sans-serif; }
  #main, aside { margin: 0 0 10px; width: 100%; }
  #banner { margin-top: 150px; height: 4.85rem; max-width: 100%; background-image: url('../img/abstract-banner-medium.jpg'); width: 45.5rem; }
Next up comes the media query rule that will handle the Skip To… link at the top of our resized window:
  #skipTo {
    display: block; height: 18px;
    a {
      display: block; text-align: center; color: #fff; font-size: 0.8rem;
      &:hover { background-color: #27a7bd; border-radius: 0; height: 20px }
    }
  }
We can't forget the main navigation, so go ahead and add the following code immediately below the block for #skipTo:
  header {
    h1 { margin-top: 20px }
    nav {
      float: left; clear: left; margin: 0 0 10px; width: 100%;
      li { margin: 0; background: #efefef; display: block; margin-bottom: 3px; height: 40px; }
      a {
        display: block; padding: 10px; text-align: center; color: #000;
        &:hover { background-color: #27a7bd; border-radius: 0; padding: 10px; height: 20px; }
      }
    }
  }
}

At this point, we should compile the Less style sheet before previewing the results of our work. If we launch responsive.html in a browser, we'll see our mocked-up portfolio page appear as we saw at the beginning of the exercise. If we resize the screen to its minimum width, the responsive design kicks in to reorder and resize the elements on screen, as we would expect. Okay, so we now have a responsive page that uses Less CSS for styling; it still seems like a lot of code, right?

Working through the code in detail
Although this seems like a lot of code for a simple page, the principles we've used are in fact very simple and are the ones we used earlier in the article. Not convinced? Well, let's look at it in more detail—the focus of this article is on responsive images and video, so we'll start with video. Open the responsive.css style sheet and look for the #video-wrapper video class:

#video-wrapper video { max-width: 100%; }

Notice how it's set to a max-width value of 100%? Granted, we wouldn't want to resize a large video to a really small size—we would use a media query to replace it with a smaller version—but for most purposes, max-width should be sufficient.

Now, for the image, this is a little more complicated. Let's start with the code from responsive.less:

#banner { background-image: url('../img/abstract-banner-large.jpg'); height: 15.31rem; width: 45.5rem; max-width: 100%; float: left; margin-bottom: 15px; }

Here, we used the max-width value again. In both instances, we can style the element directly, unlike videos, where we have to add a container in order to style it. The theme continues in the media query set up in media.less:

@smallscreen: ~"screen and (max-width: 30rem)";
@media @smallscreen {
  ...
  #banner { margin-top: 150px; background-image: url('../img/abstract-banner-medium.jpg'); height: 4.85rem; width: 45.5rem; max-width: 100%; }
  ...
}

In this instance, we're styling the element to cover the width of the viewport. A small point of note: you might ask why we are using rem values instead of percentage values when styling our image. This is a good question—the key to it is that pixel values do not scale well in responsive designs.
Rem values, however, do scale beautifully; we could use percentage values if we were so inclined, although they are best suited to instances where we need to fill a container that only covers part of the screen (as we did with the video for this demo). An interesting article extolling the virtues of rem units is available at http://techtime.getharvest.com/blog/in-defense-of-rem-units—it's worth a read. Of particular note is a known bug with rem values in Mobile Safari, which should be considered when developing for mobile platforms; with all of the iPhones in use, Mobile Safari's usage could be said to be higher than Firefox's! For more details, head over to http://wtfhtmlcss.com/#rems-mobile-safari.

Transferring to production use
Throughout this exercise, we used Less to compile our styles on the fly each time. This is okay for development purposes, but is not recommended for production use. Once we've worked out the requisite styles for our site, we should always precompile them into valid CSS before uploading the results to our site. There are a number of options available for this purpose; two of my personal favorites are Crunch!, available at http://www.crunchapp.net, and the Less2CSS plugin for Sublime Text, available at https://github.com/timdouglas/sublime-less2css. You can learn more about precompiling Less code from my new article, Learning Less.js, by Packt Publishing.

Summary
Wow! We've certainly covered a lot; it shows that adding basic responsive capabilities to media need not be difficult. Let's take a moment to recap what you learned.

We kicked off this article with an introduction to the three real-world scenarios that we would cover. Our first scenario looked at using WordPress. We covered how, although we can add simple CSS styling to make images and videos responsive, the preferred method is to use one of the several plugins available to achieve the same result.

Our next scenario visited the all-too-familiar framework known as Twitter Bootstrap. In comparison, we saw that this is a much easier framework to work with, in that the styles have been predefined and all we needed to do was add the right class to the right selector.

Our third and final scenario went completely the opposite way, with a look at using the Less CSS preprocessor to handle the styles that we would otherwise have created manually. We saw how easy it was to rework the styles we originally created earlier in the article to produce a more concise and efficient version that compiled into valid CSS with no apparent change in design.

Well, we've now reached the end of the book; all good things must come to an end at some point! Nonetheless, I hope you've enjoyed reading it as much as I have enjoyed writing it. Hopefully, I've shown that adding responsive media to your sites need not be as complicated as it might first look, and that it gives you a good grounding to develop something more complex using responsive media.

Resources for Article:
Further resources on this subject:
Styling the Forms [article]
CSS3 Animation [article]
Responsive image sliders [article]

API with MongoDB and Node.js

Packt
22 Dec 2014
26 min read
In this article by Fernando Monteiro, author of the book Learning Single-page Web Application Development, we will see how to build a solid foundation for our API. Our main aim is to discuss the techniques to build rich web applications with the SPA approach. We will be covering the following topics in this article:

The working of an API
Boilerplates and generators
The speakers API concept
Creating the package.json file
The Node server with server.js
The model with the Mongoose schema
Defining the API routes
Using MongoDB in the cloud
Inserting data with the Postman Chrome extension

(For more resources related to this topic, see here.)

The working of an API
An API works through communication between different pieces of code, defining specific behavior for certain objects on an interface. That is, an API connects several functions on one website (such as search, images, news, and authentication) so that they can be used in other applications. Operating systems also have APIs, and they serve the same function; Windows, for example, has APIs such as the Win16 API, Win32 API, and Telephony API in all its versions. When you run a program that involves some process of the operating system, it is likely that it makes a connection with one or more Windows APIs.

To clarify the concept of an API, we will go through some examples of how it works. On Windows, an application that needs clock functionality does not have to implement its own: it can use the Time/Clock API from Windows to associate a behavior with a given clock time within the program. Another example is when you use the Android SDK to build mobile applications: when you use the device's GPS, you are interacting with an API (android.location) to display the user's location on a map through another API, in this case the Google Maps API. The following is the API example:

When it comes to web APIs, the possibilities are even greater. Many services provide their code so that it can be used on other websites. Perhaps the best example is the Facebook API; several other websites use this service within their pages, for instance for a like button, sharing, or even authentication.

An API is a set of programming patterns and instructions to access a software application based on the Web. So, when you access a page of a beer store in your town, you can log in with your Facebook account; this is accomplished through the API. Using it, software developers and web programmers can create beautiful programs and pages filled with content for their users.

Boilerplates and generators
In a MEAN stack environment, our ecosystem is infinitely diverse, and we can find excellent alternatives to start the construction of our API. At hand, we have everything from simple boilerplates to complex code generators that can be used with other tools in an integrated way, or even alone. Boilerplates are usually a group of tested code that provides the basic structure to achieve the main goal, which is to create the foundation of a web project. Besides saving us from common tasks such as assembling the basic structure of the code and organizing the files, boilerplates already include a number of scripts to make life easier on the frontend.

Let's describe some alternatives that we consider good starting points for the development of APIs with the Express framework, the MongoDB database, the Node server, and AngularJS for the frontend.
Some deeper knowledge of JavaScript might be necessary for a complete understanding of the concepts covered here, so we will try to present them as clearly as possible. It is important to note that everything is still very new when we talk about Node and its ecosystem, and factors such as scalability, performance, and maintenance are still major risk factors. Bear in mind also that languages such as Ruby on Rails, Scala, and the Play framework have a higher reputation for building large, maintainable web applications, but without a doubt, Node and JavaScript will conquer their space very soon. That being said, we present some alternatives for an initial kickoff with MEAN, but remember that our main focus is on SPA and not directly on the MEAN stack.

Hackathon starter
Hackathon is highly recommended for a quick start with Node. This is because the boilerplate has the main characteristics necessary to develop applications with the Express framework and build RESTful APIs; it ships with no MVC/MVVM frontend framework as standard, just the Bootstrap UI framework. Thus, you are free to choose the frontend framework of your choice, as you will not need to refactor the boilerplate to meet your needs.

Other important characteristics are the use of the latest version of the Express framework, heavy use of Jade templates, and some middleware such as Passport—a Node module to manage authentication with various social network sites such as Twitter, Facebook, LinkedIn, GitHub, Last.fm, Foursquare, and many more. Boilerplates provide the necessary code to start your projects very fast, and as we said before, this one is very simple to install; just clone the Git open source repository:

git clone --depth=1 https://github.com/sahat/hackathon-starter.git myproject

Run the NPM install command inside the project folder:

npm install

Then, start the Node server:

node app.js

Remember, it is very important to have your local database up and running, in this case MongoDB; otherwise, the command node app.js will return the error:

Error connecting to database: failed to connect to [localhost: 27017]

MEAN.io or MEAN.JS
This is perhaps the most popular boilerplate currently available. MEAN.JS is a fork of the original MEAN.io project; both are open source, with a very peculiar similarity—both have the same author. You can check for more details at http://meanjs.org/.

However, there are some differences; we consider MEAN.JS to be a more complete and robust environment. It has a better organized directory structure, subdivided modules, and better scalability, thanks to its vertical module development approach. To install it, follow the same steps as before:

Clone the repository to your machine:

git clone https://github.com/meanjs/mean.git

Go to the installation directory and type the following on your terminal:

npm install

Finally, execute the application, this time with the Grunt.js command:

grunt

If you are on Windows, type the following command:

grunt.cmd

Now, you have your app up and running on your localhost.

The most common problem when we need to scale a SPA is undoubtedly the structure of directories and how we manage all of the frontend JavaScript files and HTML templates when using MVC/MVVM. Later, we will see an alternative way to deal with this on a large-scale application; for now, let's look at the module structure adopted by MEAN.JS:

Note that MEAN.JS gives the AngularJS framework the flexibility to deal with the MVC approach for the frontend application, as we can see inside the public folder.
Also, note the modules approach: each module has its own structure, keeping conventions for controllers, services, views, config, and tests. This is very useful for team development, as it keeps the entire structure well organized. It is a complete solution that makes use of additional modules such as passport, swig, mongoose, and karma, among others.

The Passport module
A few things must be said about the Passport module: it can be defined as a simple, unobtrusive authentication module. It is a powerful middleware to use with Node; it is very flexible, modular, and adapts easily within applications that use Express. It has more than 140 alternative authentication strategies, supports session persistence, is very lightweight, and is extremely simple to implement. It provides us with all the necessary structure for authentication, redirects, and validations, so it is possible to use the username and password of social networks such as Facebook, Twitter, and others. The following is a simple example of how to use local authentication:

var passport = require('passport'),
    LocalStrategy = require('passport-local').Strategy,
    User = require('mongoose').model('User');

module.exports = function() {
  // Use local strategy
  passport.use(new LocalStrategy({
      usernameField: 'username',
      passwordField: 'password'
    },
    function(username, password, done) {
      User.findOne({
        username: username
      }, function(err, user) {
        if (err) {
          return done(err);
        }
        if (!user) {
          return done(null, false, {
            message: 'Unknown user'
          });
        }
        if (!user.authenticate(password)) {
          return done(null, false, {
            message: 'Invalid password'
          });
        }
        return done(null, user);
      });
    }
  ));
};

Here's a sample screenshot of the login page using the MEAN.JS boilerplate with the Passport module:

Back to the boilerplates topic: most boilerplates and generators already have the Passport module installed and ready to be configured. Moreover, there is a code generator that can be used with Yeoman, another essential frontend tool to add to your tool belt. Yeoman is the most popular code generator for scaffolding modern web applications; it's easy to use and has a lot of generators, such as Backbone, Angular, Karma, and Ember, to mention a few. More information can be found at http://yeoman.io/.

Generators
Generators are to the frontend what gems are to Ruby on Rails: we can create the foundation for any type of application using the available generators. Here's a console output from a Yeoman generator:

It is important to bear in mind that we can solve almost all our problems using the generators that already exist in our community. However, if you cannot find the generator you need, you can create your own and make it available to the entire community, as has been done with RubyGems by the Rails community. A RubyGem, or simply gem, is a library of reusable Ruby files, labeled with a name and a version (in a file called a gemspec).

Keep in mind the Don't Repeat Yourself (DRY) concept: always try to reuse an existing block of code, and don't reinvent the wheel.

One of the great advantages of using a code generator is that many of the generators currently available have plenty of options for the installation process. With them, you can choose whether or not to use the many alternatives/frameworks that usually accompany the generator.

The Express generator
Another good option is the Express generator, which can be found at https://github.com/expressjs/generator.
In all versions prior to Express version 4, the generator came pre-installed and served as a scaffold to begin development; in the current version, however, it has been removed and must be installed separately. It provides the express command directly in the terminal and is quite useful for making the basic settings to start using the framework, as we can see in the following output:

create : .
create : ./package.json
create : ./app.js
create : ./public
create : ./public/javascripts
create : ./public/images
create : ./public/stylesheets
create : ./public/stylesheets/style.css
create : ./routes
create : ./routes/index.js
create : ./routes/users.js
create : ./views
create : ./views/index.jade
create : ./views/layout.jade
create : ./views/error.jade
create : ./bin
create : ./bin/www

install dependencies:
   $ cd . && npm install

run the app:
   $ DEBUG=express-generator ./bin/www

Very similar to the Rails scaffold, we can observe the creation of directories and files, including the public, routes, and views folders that are the basis of any application using Express. Note the npm install command; it installs all the dependencies provided in the package.json file, created as follows:

{
  "name": "express-generator",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "express": "~4.2.0",
    "static-favicon": "~1.0.0",
    "morgan": "~1.0.0",
    "cookie-parser": "~1.0.1",
    "body-parser": "~1.0.0",
    "debug": "~0.7.4",
    "jade": "~1.3.0"
  }
}

This is a simple and effective package.json file for building web applications with the Express framework.

The speakers API concept
Let's go directly to building the example API. To be more realistic, let's write a user story similar to a backlog item in agile methodologies, so we understand what problem the API needs to solve.

The user story
We need a web application to manage speakers at a conference event. The main task is to store the following speaker information in an API:

Name
Company
Track title
Description
A speaker picture
Schedule presentation

For now, we need to add, edit, and delete speakers. It is a simple CRUD function, using the API exclusively with JSON-format files.

Creating the package.json file
Although not necessarily required at this time, we recommend that you install the WebStorm IDE, as we'll use it throughout the article. Note that we are using the WebStorm IDE as an integrated environment with a terminal, GitHub version control, and Grunt to ease our development; however, you are absolutely free to choose your own environment. From now on, when we mention the terminal, we are referring to the terminal integrated into WebStorm, but you can access it directly outside the editor: Terminal on Mac and Linux, and Command Prompt on Windows. WebStorm is particularly useful in a Windows environment, because the Windows Command Prompt lacks the copy-and-paste convenience of the Mac OS X terminal window.

Initiating the JSON file
Follow these steps to initiate the JSON file:

Create a blank folder and name it conference-api; open your terminal in it and run the command:

npm init

This command will walk you through creating a package.json file with the baseline configuration for our application. This file is the heart of our application: through it, we can control all dependency versions and other important things, such as the author, GitHub repositories, development dependencies, type of license, testing commands, and much more.
The command asks a series of questions that guide you through the process, so when we are done, we'll have a package.json file very similar to this:

{
  "name": "conference-api",
  "version": "0.0.1",
  "description": "Sample Conference Web Application",
  "main": "server.js",
  "scripts": {
    "test": "test"
  },
  "keywords": [
    "api"
  ],
  "author": "your name here",
  "license": "MIT"
}

Now, we need to add the necessary dependencies, that is, the Node modules we will use in our process. You can do this in two ways: either directly via the terminal, as we do here, or by editing the package.json file. Let's see how it works on the terminal first, starting with the Express framework. Open your terminal in the api folder and type the following command:

npm install express@4.0.0 --save

This command installs the Express module, in this case Express version 4, and updates the package.json file, creating the dependencies section automatically, as we can see:

{
  "name": "conference-api",
  "version": "0.0.1",
  "description": "Sample Conference Web Application",
  "main": "server.js",
  "scripts": {
    "test": "test"
  },
  "keywords": [
    "api"
  ],
  "author": "your name here",
  "license": "MIT",
  "dependencies": {
    "express": "^4.0.0"
  }
}

Now, let's add more dependencies directly in the package.json file. Open the file in your editor and add the following lines:

{
  "name": "conference-api",
  "version": "0.0.1",
  "description": "Sample Conference Web Application",
  "main": "server.js",
  "scripts": {
    "test": "test"
  },
  "keywords": [
    "api"
  ],
  "author": "your name here",
  "license": "MIT",
  "engines": {
    "node": "0.8.4",
    "npm": "1.1.49"
  },
  "dependencies": {
    "body-parser": "^1.0.1",
    "express": "^4.0.0",
    "method-override": "^1.0.0",
    "mongoose": "^3.6.13",
    "morgan": "^1.0.0",
    "nodemon": "^1.2.0"
  }
}

Pinning down the Node environment in the engines field is very important when you deploy your application using services such as Travis CI or a hosting company such as Heroku. Open the terminal again and type the command:

npm install

You can actually install the dependencies in two different ways: either locally, into the directory of your application, or globally, with the -g flag. With the global option, the modules are installed once and can be used in any application. When using this option, make sure that you are an administrator of the machine, as this command requires special permissions to write to the root directory of the user.

At the end of the process, we'll have all the Node modules that we need for this project; we just need one more action. Let's place our code under version control, in our case Git. More information about Git can be found at http://git-scm.com; however, you can use any version control system, such as Subversion or another one. We recommend using Git, as we will need it later to deploy our application in the cloud, more specifically, on the Heroku cloud hosting service.

At this time, our project folder must have the same structure as that of the example shown here:

We must point out an important module called Nodemon. Whenever a file changes, it restarts the server automatically; otherwise, you would have to restart the server manually every time you make a change to a file. In a development environment, where files are constantly updated, this is extremely useful.

Node server with server.js

With this structure formed, we will start the creation of the server itself, which means creating the main JavaScript file.
The most common name used is server.js, but it is also very common to use the name app.js, especially in older versions. Let's add this file to the root folder of the project, and we will start with the basic server settings. There are many ways to configure our server, and you will probably find the one that suits you best. As we are still in the initial process, we'll keep only the basics. Open your editor and type in the following code:

// Import the modules installed on our server
var express    = require('express');
var bodyParser = require('body-parser');

// Start the Express web framework
var app = express();

// Configure the app
app.use(bodyParser());

// The port where the application will run
var port = process.env.PORT || 8080;

// Import Mongoose
var mongoose = require('mongoose');

// Connect to our database
// you can use your own MongoDB installation at: mongodb://127.0.0.1/databasename
mongoose.connect('mongodb://username:password@kahana.mongohq.com:10073/node-api');

// Start the Node server
app.listen(port);
console.log('Magic happens on port ' + port);

Note that the line connecting to MongoDB on localhost is present only as a comment, because we are using an instance of MongoDB in the cloud. In our case, we use MongoHQ, a MongoDB hosting service; the username and password in the connection string are placeholders for your own credentials. Later on, we will see how to connect to MongoHQ.

Model with the Mongoose schema

Now, let's create our model, using a Mongoose schema to map our speakers to MongoDB.

// Import the Mongoose module.
var mongoose = require('mongoose');
var Schema   = mongoose.Schema;

// Set the data types, properties, and default values of our schema.
var SpeakerSchema = new Schema({
  name:        { type: String, default: '' },
  company:     { type: String, default: '' },
  title:       { type: String, default: '' },
  description: { type: String, default: '' },
  picture:     { type: String, default: '' },
  schedule:    { type: String, default: '' },
  createdOn:   { type: Date,   default: Date.now }
});

module.exports = mongoose.model('Speaker', SpeakerSchema);

Note that on the first line, we import the Mongoose module using the require() function. Our schema is pretty simple; on the left-hand side, we have the property name, and on the right-hand side, the data type. We also set the default value of each property to an empty string, but if you want, you can set a different value.

The next step is to save this file in our project folder. For this, let's create a new directory named server; then, inside it, create another folder called models and save the file as speaker.js. At this point, our folder looks like this:

The README.md file is used by GitHub; as we are using the Git version control system, we host our files on GitHub.

Defining the API routes

One of the most important aspects of our API is the set of routes that we use to create, read, update, and delete our speakers.
Our routes are based on the HTTP verb used to access our API, as shown in the following examples:

To create a record, use the POST verb
To read a record, use the GET verb
To update a record, use the PUT verb
To delete a record, use the DELETE verb

So, our routes will be as follows:

Routes                      Verb and action
/api/speakers               GET: retrieves all speaker records
/api/speakers               POST: inserts a speaker record
/api/speakers/:speaker_id   GET: retrieves a single record
/api/speakers/:speaker_id   PUT: updates a single record
/api/speakers/:speaker_id   DELETE: deletes a single record

Configuring the API routes

Let's start by defining the router and a common message for all requests:

var Speaker = require('./server/models/speaker');

// Defining the routes for our API

// Start the router
var router = express.Router();

// A simple middleware to use for all routes and requests
router.use(function(req, res, next) {
  // Log a message on the console
  console.log('An action was performed by the server.');
  // It is very important to call the next() function; without it, the route stops here
  next();
});

// Default message when accessing the API root through the browser
router.get('/', function(req, res) {
  // Send a friendly hello message
  res.json({ message: 'Hello SPA, the API is working!' });
});

Now, let's add the route to insert speakers when the HTTP verb is POST:

// When accessing the speakers route
router.route('/speakers')

  // create a speaker when the method passed is POST
  .post(function(req, res) {

    // create a new instance of the Speaker model
    var speaker = new Speaker();

    // set the speaker properties (these come from the request)
    speaker.name = req.body.name;
    speaker.company = req.body.company;
    speaker.title = req.body.title;
    speaker.description = req.body.description;
    speaker.picture = req.body.picture;
    speaker.schedule = req.body.schedule;

    // save the data received
    speaker.save(function(err) {
      if (err)
        res.send(err);

      // give a success message
      res.json({ message: 'speaker successfully created!' });
    });
  })

For the HTTP GET method, we need this:

  // get all the speakers when the method passed is GET
  .get(function(req, res) {
    Speaker.find(function(err, speakers) {
      if (err)
        res.send(err);

      res.json(speakers);
    });
  });

Note that in the res.json() function, we send the whole speakers collection as the answer.
Now, we will see the use of different routes in the following steps:

To retrieve a single record, we need to pass speaker_id, as shown in our previous table, so let's build this function:

// on accessing the speaker route by id
router.route('/speakers/:speaker_id')

  // get the speaker by id
  .get(function(req, res) {
    Speaker.findById(req.params.speaker_id, function(err, speaker) {
      if (err)
        res.send(err);

      res.json(speaker);
    });
  })

To update a specific record, we use the PUT HTTP verb and then insert the function:

  // update the speaker by id
  .put(function(req, res) {
    Speaker.findById(req.params.speaker_id, function(err, speaker) {
      if (err)
        res.send(err);

      // set the speaker properties (these come from the request)
      speaker.name = req.body.name;
      speaker.company = req.body.company;
      speaker.title = req.body.title;
      speaker.description = req.body.description;
      speaker.picture = req.body.picture;
      speaker.schedule = req.body.schedule;

      // save the data received
      speaker.save(function(err) {
        if (err)
          res.send(err);

        // give a success message
        res.json({ message: 'speaker successfully updated!' });
      });
    });
  })

To delete a specific record by its id:

  // delete the speaker by id
  .delete(function(req, res) {
    Speaker.remove({
      _id: req.params.speaker_id
    }, function(err, speaker) {
      if (err)
        res.send(err);

      // give a success message
      res.json({ message: 'speaker successfully deleted!' });
    });
  });

Finally, register the routes in our server.js file:

// register the route
app.use('/api', router);

All the work necessary to configure the basic CRUD routes has been done, and we are ready to run our server and begin creating and updating our database. Let's open a small parenthesis here for a quick, step-by-step introduction to another tool for creating a database using MongoDB in the cloud. There are many companies that provide this type of service, but we will not go into their individual merits here; you can choose your own preference. We chose Compose (formerly MongoHQ), which has a free sandbox for development; this is sufficient for our examples.

Using MongoDB in the cloud

Today, we have many options to work with MongoDB, from in-house services to hosting companies that provide Platform as a Service (PaaS) and Software as a Service (SaaS). We will present a solution called Database as a Service (DBaaS), which provides database services for highly scalable web applications.

Here's a simple step-by-step process to start using a MongoDB instance with a cloud service:

Go to https://www.compose.io/.
Create your free account.
On your dashboard panel, click on add Database.
On the right-hand side, choose Sandbox Database.
Name your database node-api.
Add a user to your database.
Go back to your database title and click on admin.
Copy the connection string.

The connection string looks like this: mongodb://<user>:<password>@kahana.mongohq.com:10073/node-api.

Let's edit the server.js file using the following steps:

Place your own connection string into the mongoose.connect() function.
Open your terminal and input the command:

nodemon server.js

Open your browser and go to http://localhost:8080/api. You will see a message like this in the browser:

{ "message": "Hello SPA, the API is working!" }

Remember that the /api path was defined in the server.js file when we registered the routes:

app.use('/api', router);

But if you try to access http://localhost:8080/api/speakers, you will see something like this:

[]

This is an empty array, because we haven't input any data into MongoDB yet.
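Note that a first record can also be inserted straight from the terminal. The following is just a quick sketch using curl (the field values are placeholders, and the fields match our Mongoose schema); in the next section, we'll do the same thing with a graphical client:

$ curl -X POST http://localhost:8080/api/speakers \
    --data "name=Jane Doe&company=Acme&title=Sample Talk&description=Lorem ipsum&picture=jane.jpg&schedule=10:20"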
We use an extension for the Chrome browser called JSONView, which lets us view formatted, readable JSON files. You can install it for free from the Chrome Web Store.

Inserting data with Postman

To fix our empty database, and before we create our frontend interface, let's add some data with the Chrome extension Postman. By the way, it's a very useful browser interface for working with RESTful APIs.

As we already know, our database is empty, so our first task is to insert a record. To do so, perform the following steps:

Open Postman and enter http://localhost:8080/api/speakers.
Select the x-www-form-urlencoded option and add the properties of our model:

var SpeakerSchema = new Schema({
  name:        { type: String, default: '' },
  company:     { type: String, default: '' },
  title:       { type: String, default: '' },
  description: { type: String, default: '' },
  picture:     { type: String, default: '' },
  schedule:    { type: String, default: '' },
  createdOn:   { type: Date,   default: Date.now }
});

Now, click on the blue button at the end to send the request.

With everything going as expected, you should see the message speaker successfully created! at the bottom of the screen, as shown in the following screenshot:

Now, let's try http://localhost:8080/api/speakers in the browser again. This time, we have a JSON document like this, instead of an empty array:

{
  "_id": "53a38ffd2cd34a7904000007",
  "__v": 0,
  "createdOn": "2014-06-20T02:20:31.384Z",
  "schedule": "10:20",
  "picture": "fernando.jpg",
  "description": "Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod...",
  "title": "MongoDB",
  "company": "Newaeonweb",
  "name": "Fernando Monteiro"
}

When performing the same action in Postman, we see the same result, as shown in the following screenshot:

Go back to Postman, copy the _id from the preceding JSON document, append it to the URL so that it reads http://localhost:8080/api/speakers/53a38ffd2cd34a7904000007, and click on Send. You will see the same object on the screen.

Now, let's test the method to update the object. In this case, change the method to PUT in Postman and click on Send. The output is shown in the following screenshot:

Note that on the left-hand side, we have three methods under History; now, let's perform the last operation and delete the record. This is very simple: just keep the same URL, change the method in Postman to DELETE, and click on Send. Finally, we have the last method executed successfully, as shown in the following screenshot:

Take a look at your terminal; you can see four identical messages:

An action was performed by the server.

We configured this message in the server.js file when dealing with all the routes of our API:

router.use(function(req, res, next) {
  // Log a message on the console
  console.log('An action was performed by the server.');
  // It is very important to call the next() function; without it, the route stops here
  next();
});

This way, we can monitor all the interactions that take place in our API.

Now that we have our API properly tested and working, we can start the development of the interface that will handle all this data.

Summary

In this article, we covered the main modules of the Node ecosystem needed to develop a RESTful API.

Resources for Article:

Further resources on this subject:

Web Application Testing [article]
A look into responsive design frameworks [article]
Top Features You Need to Know About – Responsive Web Design [article]

Recursive directives

Packt
22 Dec 2014
13 min read
In this article by Matt Frisbie, the author of AngularJS Web Application Development Cookbook, we will look at recursive directives. The power of directives can also be effectively applied when consuming data in a more unwieldy format. Consider the case in which you have a JavaScript object that exists in some sort of recursive tree structure. The view that you will generate for this object will also reflect its recursive nature and will have nested HTML elements that match the underlying data structure.

(For more resources related to this topic, see here.)

Getting ready

Suppose you had a recursive data object in your controller, as follows:

(app.js)

angular.module('myApp', [])
.controller('MainCtrl', function($scope) {
  $scope.data = {
    text: 'Primates',
    items: [
      {
        text: 'Anthropoidea',
        items: [
          {
            text: 'New World Anthropoids'
          },
          {
            text: 'Old World Anthropoids',
            items: [
              {
                text: 'Apes',
                items: [
                  {
                    text: 'Lesser Apes'
                  },
                  {
                    text: 'Greater Apes'
                  }
                ]
              },
              {
                text: 'Monkeys'
              }
            ]
          }
        ]
      },
      {
        text: 'Prosimii'
      }
    ]
  };
});

How to do it…

As you might imagine, iteratively constructing a view, or only partially using directives to accomplish this, will become extremely messy very quickly. Instead, it would be better if you were able to create a directive that would seamlessly break apart the data recursively, and define and render the sub-HTML fragments cleanly. By cleverly using directives and the $compile service, this exact directive functionality is possible.

The ideal directive in this scenario will be able to handle the recursive object without any additional parameters or outside assistance in parsing and rendering the object. So, in the main view, your directive will look something like this:

<recursive value="nestedObject"></recursive>

The directive accepts an isolate scope '=' binding to the parent scope object, which will remain structurally identical as the directive descends through the recursive object.

The $compile service

You will need to inject the $compile service in order to make the recursive directive work. The reason for this is that each level of the directive can instantiate directives inside it and convert them from an uncompiled template to real DOM material.

The angular.element() method

The angular.element() method can be thought of as the jQuery $() equivalent. It accepts a string template or DOM fragment and returns a jqLite object that can be modified, inserted, or compiled for your purposes. If the jQuery library is present when the application is initialized, AngularJS will use that instead of jqLite.
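As a quick illustration of this API (a minimal sketch that is not part of the original recipe; the template string and class name are only examples), you can wrap an HTML string and manipulate it through jqLite methods:

var wrapped = angular.element('<div>{{ val.text }}</div>');
wrapped.addClass('tree-node'); // jqLite methods are available on the wrapper
var rawNode = wrapped[0];      // the underlying DOM node sits at index 0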
If you use the AngularJS template cache, retrieved templates will already exist as if you had called the angular.element() method on the template text.

The $templateCache

Inside a directive, it's possible to create a template using angular.element() and a string of HTML, similar to an underscore.js template. However, this is completely unnecessary and quite unwieldy compared to AngularJS templates. When you declare a template and register it with AngularJS, it can be accessed through the injected $templateCache, which acts as a key-value store for your templates. The recursive template is as follows:

<script type="text/ng-template" id="recursive.html">
  <span>{{ val.text }}</span>
  <button ng-click="delSubtree()">delete</button>
  <ul ng-if="isParent" style="margin-left:30px">
    <li ng-repeat="item in val.items">
      <tree val="item" parent-data="val.items"></tree>
    </li>
  </ul>
</script>

The <span> and <button> elements are present at each instance of a node, and they present the data at that node as well as an interface to the click event (which we will define in a moment) that will destroy it and all its children. Following these, the conditional <ul> element renders only if the isParent flag is set in the scope, and it repeats through the items array, recursing the child data and creating new instances of the directive. Here, you can see how the directive is used inside its own template:

<tree val="item" parent-data="val.items"></tree>

Not only does the directive take a val attribute for the local node data, but you can also see its parent-data attribute, which is the point of scope indirection that allows the tree structure. To make more sense of this, examine the following directive code:

(app.js)

.directive('tree', function($compile, $templateCache) {
  return {
    restrict: 'E',
    scope: {
      val: '=',
      parentData: '='
    },
    link: function(scope, el, attrs) {
      scope.isParent = angular.isArray(scope.val.items);
      scope.delSubtree = function() {
        if (scope.parentData) {
          scope.parentData.splice(
            scope.parentData.indexOf(scope.val),
            1
          );
        }
        scope.val = {};
      };

      el.replaceWith(
        $compile(
          $templateCache.get('recursive.html')
        )(scope)
      );
    }
  };
});

With all of this, if you provide the recursive directive with the data object shown at the beginning of this article, it will result in the following (presented here without the auto-added AngularJS comments and directives):

(index.html – uncompiled)

<div ng-app="myApp">
  <div ng-controller="MainCtrl">
    <tree val="data"></tree>
  </div>

  <script type="text/ng-template" id="recursive.html">
    <span>{{ val.text }}</span>
    <button ng-click="delSubtree()">delete</button>
    <ul ng-if="isParent" style="margin-left:30px">
      <li ng-repeat="item in val.items">
        <tree val="item" parent-data="val.items"></tree>
      </li>
    </ul>
  </script>
</div>

The recursive nature of the directive templates enables nesting, and when compiled using the recursive data object located in the wrapping controller, it will compile into the following HTML:

(index.html – compiled)

<div ng-controller="MainCtrl">
  <span>Primates</span>
  <button ng-click="delSubtree()">delete</button>
  <ul ng-if="isParent" style="margin-left:30px">
    <li ng-repeat="item in val.items">
      <span>Anthropoidea</span>
      <button ng-click="delSubtree()">delete</button>
      <ul ng-if="isParent" style="margin-left:30px">
        <li ng-repeat="item in val.items">
          <span>New World Anthropoids</span>
          <button ng-click="delSubtree()">delete</button>
        </li>
        <li ng-repeat="item in val.items">
          <span>Old World Anthropoids</span>
          <button ng-click="delSubtree()">delete</button>
          <ul ng-if="isParent" style="margin-left:30px">
            <li ng-repeat="item in val.items">
              <span>Apes</span>
              <button ng-click="delSubtree()">delete</button>
              <ul ng-if="isParent" style="margin-left:30px">
                <li ng-repeat="item in val.items">
                  <span>Lesser Apes</span>
                  <button ng-click="delSubtree()">delete</button>
                </li>
                <li ng-repeat="item in val.items">
                  <span>Greater Apes</span>
                  <button ng-click="delSubtree()">delete</button>
                </li>
              </ul>
            </li>
            <li ng-repeat="item in val.items">
              <span>Monkeys</span>
              <button ng-click="delSubtree()">delete</button>
            </li>
          </ul>
        </li>
      </ul>
    </li>
    <li ng-repeat="item in val.items">
      <span>Prosimii</span>
      <button ng-click="delSubtree()">delete</button>
    </li>
  </ul>
</div>

JSFiddle: http://jsfiddle.net/msfrisbie/ka46yx4u/

How it works…

The definition of the isolate scope through the nested directives described in the previous section allows all or part of the recursive object to be bound through parentData to the appropriate directive instance, all the while maintaining the nested connectedness afforded by the directive hierarchy. When a parent node is deleted, the lower directives are still bound to the data object, and the removal propagates through cleanly.

The meatiest and most important part of this directive is, of course, the link function. Here, the link function determines whether the node has any children (which simply checks for the existence of an array in the local data node) and declares the deleting method, which simply removes the relevant portion from the recursive object and cleans up the local node. Up until this point, there haven't been any recursive calls, and there shouldn't need to be. If your directive is constructed correctly, AngularJS data binding and inherent template management will take care of the template cleanup for you. This, of course, leads into the final line of the link function, which is broken up here for readability:

el.replaceWith(
  $compile(
    $templateCache.get('recursive.html')
  )(scope)
);

Recall that in a link function, the second parameter is the jqLite-wrapped DOM object that the directive is linking; here, that is the <tree> element. This exposes to you a subset of jQuery object methods, including replaceWith(), which you will use here. The top-level instance of the directive will be replaced by the recursively defined template, and this will carry down through the tree. At this point, you should have an idea of how the recursive structure comes together. The element parameter needs to be replaced with a recursively compiled template, and for this, you will employ the $compile service. This service accepts a template as a parameter and returns a function that you invoke with the current scope inside the directive's link function. The template is retrieved from $templateCache with the recursive.html key and then compiled. When the compiler reaches the nested <tree> directive, the recursive directive is realized all the way down through the data in the recursive object.
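To make the two-step nature of $compile more concrete, here is a minimal, self-contained sketch (the template string and scope property are only illustrative, not part of the recipe):

// Step 1: turn a template into a linking function
var linkFn = $compile('<p>{{ val.text }}</p>');
// Step 2: invoke the linking function with a scope to produce live DOM
var element = linkFn(scope);
// From here on, changes to scope.val.text are reflected in the element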
Summary

This article demonstrates the power of constructing a directive to convert a complex data object into a large DOM object. Relevant portions can be broken into individual templates, handled with distributed directive logic, and combined in an elegant fashion to maximize modularity and reusability.

Resources for Article:

Further resources on this subject:

Working with Live Data and AngularJS [article]
Angular Zen [article]
AngularJS Project [article]

Adding WebSockets

Packt
22 Dec 2014
22 min read
In this article by Michal Cmil, Michal Matloka, and Francesco Marchioni, the authors of the book Java EE 7 Development with WildFly, we will explore the new possibilities that WebSockets provide to a developer. In our ticket booking applications, we already used a wide variety of approaches to inform the clients about events occurring on the server side. These include the following:

JSF polling
Java Message Service (JMS) messages
REST requests
Remote EJB requests

All of them, besides JMS, were based on the assumption that the client would be responsible for asking the server about the state of the application. In some cases, such as checking whether someone else has booked a ticket during our interaction with the application, this is a wasteful strategy; the server is in a position to inform clients when it is needed. What's more, it feels like the developer must hack the HTTP protocol to get a notification from the server to the client. This is a requirement that has to be implemented in most web applications, and it therefore deserves a standardized solution that can be applied by developers in multiple projects without much effort.

WebSockets are changing the game for developers. They replace the request-response paradigm, in which the client always initiates the communication, with a two-point bidirectional messaging system. After the initial connection, both sides can send independent messages to each other as long as the session is alive. This means that we can easily create web applications that will automatically refresh their state with up-to-date data from the server. You have probably already seen this kind of behavior in Google Docs or live broadcasts on news sites. Now we can achieve the same effect in a simpler and more efficient way than in earlier versions of Java Enterprise Edition. In this article, we will try to leverage these new, exciting features that come with WebSockets in Java EE 7 thanks to JSR 356 (https://jcp.org/en/jsr/detail?id=356) and HTML5.

In this article, you will learn the following topics:

How WebSockets work
How to create a WebSocket endpoint in Java EE 7
How to create an HTML5/AngularJS client that will accept push notifications from an application deployed on WildFly

(For more resources related to this topic, see here.)

An overview of WebSockets

A WebSocket session between the client and server is built upon a standard TCP connection. Although the WebSocket protocol has its own control frames (mainly to create and sustain the connection), codified by the Internet Engineering Task Force in RFC 6455 (http://tools.ietf.org/html/rfc6455), the peers are not obliged to use any specific format to exchange application data. You may use plaintext, XML, JSON, or anything else to transmit your data. As you probably remember, this is quite different from SOAP-based WebServices, which have bloated specifications of the exchange protocol. The same goes for RESTful architectures; we no longer have the predefined verb methods from HTTP (GET, PUT, POST, and DELETE), status codes, and the whole semantics of an HTTP request. This liberty means that WebSockets are pretty low level compared to the technologies that we have used up to this point, but thanks to this, the communication overhead is minimal. The protocol is less verbose than SOAP or RESTful HTTP, which allows us to achieve higher performance. This, however, comes with a price.
We usually like to use the features of higher-level protocols (such as horizontal scaling and rich URL semantics), and with WebSockets, we would need to write them by hand. For standard CRUD-like operations, it is easier to use a REST endpoint than to create everything from scratch.

What do we get from WebSockets compared to standard HTTP communication? First of all, a direct connection between two peers. Normally, when you connect to a web server (which can, for instance, handle a REST endpoint), every subsequent call is a new TCP connection, and your machine is treated as a different one every time you make a request. You can, of course, simulate stateful behavior (so that the server recognizes your machine between different requests) using cookies, and increase performance by reusing the same connection in a short period of time for a specific client, but basically, these are workarounds to overcome the limitations of the HTTP protocol.

Once you establish a WebSocket connection between a server and a client, you can use the same session (and underlying TCP connection) for the whole communication. Both sides are aware of it and can send data independently, in a full-duplex manner (both sides can send and receive data simultaneously). Using plain HTTP, there is no way for the server to spontaneously start sending data to the client without any request from the client's side. What's more, the server is aware of all of its connected WebSocket clients and can even send data between them!

The current solutions that try to simulate real-time data delivery using the HTTP protocol can put a lot of stress on the web server. Polling (asking the server about updates), long polling (delaying the completion of a request until an update is ready), and streaming (a Comet-based solution with a constantly open HTTP response) are all ways to hack the protocol to do things it wasn't designed for, and they have their own limitations. Thanks to the elimination of unnecessary checks, WebSockets can heavily reduce the number of HTTP requests that have to be handled by the web server. Updates are delivered to the user with lower latency because we only need one round trip through the network to get the desired information (it is pushed by the server immediately).

All of these features make WebSockets a great addition to the Java EE platform, filling the gaps needed to easily accomplish specific tasks, such as sending updates and notifications and orchestrating multiple client interactions. Despite these advantages, WebSockets are not intended to replace REST or SOAP WebServices. They do not scale so well horizontally (they are hard to distribute because of their stateful nature), and they lack most of the features that are utilized in web applications. URL semantics, complex security, compression, and many other features are still better realized using other technologies.

How WebSockets work

To initiate a WebSocket session, the client must send an HTTP request with an Upgrade: websocket header field. This informs the server that the client has asked the server to switch to the WebSocket protocol. You may notice that the same happens in WildFly for remote EJBs; the initial connection is made using an HTTP request and is later switched to the remote protocol thanks to the Upgrade mechanism. The standard Upgrade header field can be used to handle any protocol other than HTTP that is accepted by both sides (the client and server).
In WildFly, this allows you to reuse the HTTP port (80/8080) for other protocols and therefore minimize the number of ports that need to be configured.

If the server can "understand" the WebSocket protocol, the client and server then proceed with the handshaking phase. They negotiate the version of the protocol, exchange security keys, and, if everything goes well, the peers can move to the data transfer phase. From now on, the communication is done only using the WebSocket protocol; it is not possible to exchange any HTTP frames over the current connection. The whole life cycle of a connection can be summarized in the following diagram:

A sample HTTP request from a JavaScript application to a WildFly server would look similar to this:

GET /ticket-agency-websockets/tickets HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Host: localhost:8080
Origin: http://localhost:8080
Pragma: no-cache
Cache-Control: no-cache
Sec-WebSocket-Key: TrjgyVjzLK4Lt5s8GzlFhA==
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits, x-webkit-deflate-frame
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36
Cookie: [45 bytes were stripped]

We can see that the client requests an upgrade of the connection, with WebSocket as the target protocol, on the URL /ticket-agency-websockets/tickets. It additionally passes information about the requested version and a key. If the server supports the requested protocol, and all the required data was passed by the client, then it responds with the following frame:

HTTP/1.1 101 Switching Protocols
X-Powered-By: Undertow 1
Server: Wildfly 8
Origin: http://localhost:8080
Upgrade: WebSocket
Sec-WebSocket-Accept: ZEAab1TcSQCmv8RsLHg4RL/TpHw=
Date: Sun, 13 Apr 2014 17:04:00 GMT
Connection: Upgrade
Sec-WebSocket-Location: ws://localhost:8080/ticket-agency-websockets/tickets
Content-Length: 0

The status code of the response is 101 (Switching Protocols), and we can see that the server is now going to start using the WebSocket protocol. The TCP connection initially used for the HTTP request is now the base of the WebSocket session and can be used for transmissions. If the client tries to access a URL that is only handled by another protocol, the server can ask the client to do an upgrade request. The server uses the 426 (Upgrade Required) status code in such cases.

The initial connection creation has some overhead (because of the HTTP frames that are exchanged between the peers), but after it is completed, new messages have only 2 bytes of additional headers. This means that when we have a large number of small messages, WebSockets will be an order of magnitude faster than REST protocols, simply because there is less data to transmit!

If you are wondering about browser support for WebSockets, you can look it up at http://caniuse.com/websockets. All new versions of major browsers currently support WebSockets; the total coverage is estimated (at the time of writing) at 74 percent. You can see this in the following screenshot:

After this theoretical introduction, we are ready to jump into action. We can now create our first WebSocket endpoint!
Creating our first endpoint

Let's start with a simple example:

package com.packtpub.wflydevelopment.chapter8.boundary;

import javax.websocket.EndpointConfig;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

@ServerEndpoint("/hello")
public class HelloEndpoint {

    @OnOpen
    public void open(Session session, EndpointConfig conf) throws IOException {
        session.getBasicRemote().sendText("Hi!");
    }
}

The Java EE 7 specification has taken developer friendliness into account, which can be clearly seen in the given example. In order to define your WebSocket endpoint, you just need a few annotations on a Plain Old Java Object (POJO). The first annotation, @ServerEndpoint("/hello"), defines the path to your endpoint.

It's a good time to discuss the endpoint's full address. We placed this sample in the application named ticket-agency-websockets. During deployment of the application, you can spot information about endpoint creation in the WildFly log, as shown in the following output:

02:21:35,182 INFO [io.undertow.websockets.jsr] (MSC service thread 1-7) UT026003: Adding annotated server endpoint class com.packtpub.wflydevelopment.chapter8.boundary.FirstEndpoint for path /hello
02:21:35,401 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-7) Deploying javax.ws.rs.core.Application: class com.packtpub.wflydevelopment.chapter8.webservice.JaxRsActivator$Proxy$_$$_WeldClientProxy
02:21:35,437 INFO [org.wildfly.extension.undertow] (MSC service thread 1-7) JBAS017534: Registered web context: /ticket-agency-websockets

The full URL of the endpoint is ws://localhost:8080/ticket-agency-websockets/hello, which is just a concatenation of the server and application address with the endpoint path, using the appropriate protocol.

The second annotation used, @OnOpen, defines the endpoint's behavior when a connection from a client is opened. It's not the only behavior-related annotation for a WebSocket endpoint. Let's look at the following list:

@OnOpen: The connection is open. With this annotation, we can use the Session and EndpointConfig parameters. The first parameter represents the connection to the user and allows further communication. The second one provides some client-related information.
@OnMessage: This annotation is executed when a message from the client is received. In such a method, you can have the Session parameter and, for example, a String parameter, where the String parameter represents the received message.
@OnError: There are bad times when an error occurs. With this annotation, you can retrieve a Throwable object, apart from the standard Session.
@OnClose: When the connection is closed, it is possible to get some data concerning this event in the form of a CloseReason object.

There is one more interesting line in our HelloEndpoint. Using the Session object, it is possible to communicate with the client. This clearly shows that with WebSockets, two-directional communication is easily possible. In this example, we decided to respond to a connected user synchronously (getBasicRemote()) with just a text message, Hi! (sendText(String)). Of course, it's also possible to communicate asynchronously and, for example, send binary messages using your own bandwidth-saving binary protocol. We will present some of these options in the next example.
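Before wiring the endpoint into the frontend, you can smoke-test it straight from a browser's JavaScript console. The following is just a minimal sketch (it assumes the application is deployed locally on port 8080):

var testSocket = new WebSocket("ws://localhost:8080/ticket-agency-websockets/hello");
// The endpoint sends "Hi!" as soon as the connection is open, so log whatever arrives
testSocket.onmessage = function (message) {
    console.log("Received: " + message.data);
};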
Expanding our client application

It's time to show how you can leverage the WebSocket features in real life. Since we're just adding a feature to our previous app, we will only describe the changes that we will introduce to it. In this example, we would like to be able to inform all current users about purchases made by other users. This means that we have to store information about active sessions. Let's start with a registry-type object, which will serve this purpose. We can use a Singleton session bean for this task, as shown in the following code:

@Singleton
public class SessionRegistry {

    private final Set<Session> sessions = new HashSet<>();

    @Lock(LockType.READ)
    public Set<Session> getAll() {
        return Collections.unmodifiableSet(sessions);
    }

    @Lock(LockType.WRITE)
    public void add(Session session) {
        sessions.add(session);
    }

    @Lock(LockType.WRITE)
    public void remove(Session session) {
        sessions.remove(session);
    }
}

We could have used Collections.synchronizedSet from the standard Java libraries, but here we rely on container-based concurrency instead. In SessionRegistry, we defined some basic methods to add, get, and remove sessions. For the sake of collection thread safety during retrieval, we return an unmodifiable view.

We have defined the registry, so now we can move to the endpoint definition. We will need a POJO, which will use our newly defined registry, as shown here:

@ServerEndpoint("/tickets")
public class TicketEndpoint {

    @Inject
    private SessionRegistry sessionRegistry;

    @OnOpen
    public void open(Session session, EndpointConfig conf) {
        sessionRegistry.add(session);
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        sessionRegistry.remove(session);
    }

    public void send(@Observes Seat seat) {
        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendText(toJson(seat)));
    }

    private String toJson(Seat seat) {
        final JsonObject jsonObject = Json.createObjectBuilder()
                .add("id", seat.getId())
                .add("booked", seat.isBooked())
                .build();
        return jsonObject.toString();
    }
}

Our endpoint is defined at the /tickets address. We injected the SessionRegistry into our endpoint. During @OnOpen, we add the session to the registry, and during @OnClose, we remove it. Message sending is performed on a CDI event (the @Observes annotation), which is already fired in our code during TheatreBox.buyTicket(int).

In our send method, we retrieve all sessions from the SessionRegistry, and for each of them, we asynchronously send information about booked seats. We don't really need information about all the Seat fields to realize this feature. Instead, we decided to use a minimalistic JSON object, which provides only the required data. To create it, we used the new Java API for JSON Processing (JSR-353). Using a fluent-like API, we're able to create a JSON object and add two fields to it. Then, we just convert the JSON object to a string, which is sent in a text message.

Because in our example we send messages in response to a CDI event, we don't have (in the event handler) an out-of-the-box reference to any of the sessions. We have to use our sessionRegistry object to access the active ones. However, if we would like to do the same thing in, for example, the @OnMessage method, then it is possible to get all active sessions just by executing the session.getOpenSessions() method.

These are all the changes required on the backend side. Now, we have to modify our AngularJS frontend to leverage the added feature.
The good news is that JavaScript already includes classes that can be used to perform WebSocket communication! There are just a few lines of code that we have to add inside the module defined in the seat.js file:

var ws = new WebSocket("ws://localhost:8080/ticket-agency-websockets/tickets");

ws.onmessage = function (message) {
    var receivedData = message.data;
    var bookedSeat = JSON.parse(receivedData);

    $scope.$apply(function () {
        for (var i = 0; i < $scope.seats.length; i++) {
            if ($scope.seats[i].id === bookedSeat.id) {
                $scope.seats[i].booked = bookedSeat.booked;
                break;
            }
        }
    });
};

The code is very simple. We just create the WebSocket object using the URL of our endpoint, and then we define the onmessage function on that object. When the function executes, the received message is parsed from JSON into a JavaScript object. Then, in $scope.$apply, we just iterate through our seats, and if an ID matches, we update the booked state. We have to use $scope.$apply because we are touching an Angular object from outside the Angular world (the onmessage function). Modifications performed on $scope.seats are automatically visible on the website. With this, we can just open our ticket booking website in two browser sessions and see that when one user buys a ticket, the second user sees, almost instantly, that the seat's state has changed to booked.

We can enhance our application a little to inform users whether the WebSocket connection is really working. Let's just define the onopen and onclose functions for this purpose:

ws.onopen = function (event) {
    $scope.$apply(function () {
        $scope.alerts.push({
            type: 'info',
            msg: 'Push connection from server is working'
        });
    });
};

ws.onclose = function (event) {
    $scope.$apply(function () {
        $scope.alerts.push({
            type: 'warning',
            msg: 'Error on push connection from server'
        });
    });
};

To inform users about the connection's state, we push different types of alerts. Of course, we're again touching the Angular world from the outside, so we have to perform all operations on Angular from the $scope.$apply function.

Running the described code results in the notification visible in the following screenshot:

However, if the server fails after the website has been opened, you might get an error as shown in the following screenshot:

Transforming POJOs to JSON

In our current example, we transformed our Seat object to JSON manually. Normally, we don't want to do it this way; there are many libraries that will do the transformation for us. One of them is GSON from Google. Additionally, we can register an encoder/decoder class for a WebSocket endpoint that will do the transformation automatically.

Let's look at how we can refactor our current solution to use an encoder. First of all, we must add GSON to our classpath. The required Maven dependency is as follows:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.3</version>
</dependency>

Next, we need to provide an implementation of the javax.websocket.Encoder.Text interface. There are also variants of the Encoder interface for binary and streamed data (in both binary and text formats). A corresponding hierarchy of interfaces is also available for decoders (javax.websocket.Decoder). Our implementation is rather simple.
This is shown in the following code snippet:

public class JSONEncoder implements Encoder.Text<Object> {

    private Gson gson;

    @Override
    public void init(EndpointConfig config) {
        gson = new Gson(); [1]
    }

    @Override
    public void destroy() {
        // do nothing
    }

    @Override
    public String encode(Object object) throws EncodeException {
        return gson.toJson(object); [2]
    }
}

First, we create an instance of Gson in the init method; this action will be executed when the endpoint is created [1]. Next, in the encode method, which is called every time we send an object through the endpoint, we use the toJson method to create JSON from an object [2]. This is quite concise when we think about how reusable this little class is. If you want more control over the JSON generation process, you can use the GsonBuilder class to configure the Gson object before creation. We have the encoder in place. Now it's time to alter our endpoint:

@ServerEndpoint(value = "/tickets", encoders = {JSONEncoder.class}) [1]
public class TicketEndpoint {

    @Inject
    private SessionRegistry sessionRegistry;

    @OnOpen
    public void open(Session session, EndpointConfig conf) {
        sessionRegistry.add(session);
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        sessionRegistry.remove(session);
    }

    public void send(@Observes Seat seat) {
        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendObject(seat)); [2]
    }
}

The first change is done on the @ServerEndpoint annotation [1]. We have to define a list of supported encoders; we simply pass our JSONEncoder.class wrapped in an array. Additionally, we have to pass the endpoint name using the value attribute. Earlier, we used the sendText method to pass a string containing manually created JSON. Now, we want to send an object and let the encoder handle the JSON generation; therefore, we use the getAsyncRemote().sendObject() method [2]. And that's all. Our endpoint is ready to be used. It will work the same as the earlier version, but now our objects will be fully serialized to JSON, so they will contain every field, not only id and booked.

After deploying the server, you can connect to the WebSocket endpoint using one of the Chrome extensions, for instance, the Dark WebSocket terminal from the Chrome store (use the ws://localhost:8080/ticket-agency-websockets/tickets address). When you book tickets using the web application, the WebSocket terminal should show something similar to the output in the following screenshot:

Of course, it is possible to use formats other than JSON. If you want to achieve better performance (when it comes to serialization time and payload size), you may want to try out binary serializers such as Kryo (https://github.com/EsotericSoftware/kryo). They may not be supported by JavaScript, but they may come in handy if you would like to use WebSockets for other clients too. Tyrus (https://tyrus.java.net/) is the reference implementation of the WebSocket standard for Java; you can use it in your standalone desktop applications. In that case, besides the encoder (which is used to send messages), you would also need to create a decoder, which can automatically transform incoming messages.

An alternative to WebSockets

The example we presented in this article could also be implemented using an older, lesser-known technology named Server-Sent Events (SSE). SSE allows one-way communication from the server to the client over HTTP.
It is much simpler than WebSockets but has built-in support for things such as automatic reconnection and event identifiers. WebSockets are definitely more powerful, but they are not the only way to push events, so when you need to implement notifications from the server side, remember SSE as well.

Another option is to explore the mechanisms oriented around the Comet techniques. Multiple implementations are available, and most of them use different transport methods to achieve their goals. A comprehensive comparison is available at http://cometdaily.com/maturity.html.

Summary

In this article, we managed to introduce a new, low-level type of communication. We presented how it works underneath and how it compares to SOAP and REST. We also discussed how the new approach changes the development of web applications.

Our ticket booking application was further enhanced to show users the changing state of the seats using push-like notifications. The new additions required very few code changes in our existing project when we take into account how much we were able to achieve with them. The fluent integration of WebSockets from Java EE 7 with the AngularJS application is another great showcase of the flexibility that comes with the new version of the Java EE platform.

Resources for Article:

Further resources on this subject:

Using the WebRTC Data API [Article]
Implementing Stacks using JavaScript [Article]
Applying WebRTC for Education and E-learning [Article]

Deep Customization of Bootstrap

Packt
19 Dec 2014
8 min read
This article is written by Aravind Shenoy and Ulrich Sossou, the authors of the book Learning Bootstrap. It will introduce you to the concept of deep customization of Bootstrap.

(For more resources related to this topic, see here.)

Adding your own style sheet works when you are trying to do something quick or when the modifications are minimal. Customizing Bootstrap beyond small changes involves using the uncompiled Bootstrap source code. The Bootstrap CSS source code is written in LESS, with variables and mixins that allow easy customization.

LESS is an open source CSS preprocessor with cool features used to speed up your development time. LESS allows you to adopt an efficient and modular style of working, making it easier to maintain the CSS styling in your projects.

The advantages of using variables in LESS are profound. You can reuse the same code many times, thereby following the write once, use anywhere paradigm. Variables can be declared globally, which allows you to specify certain values in a single place; these need to be updated only once if changes are required. LESS variables allow you to specify widely used values, such as colors, font families, and sizes, in a single file. By modifying a single variable, the changes will be reflected in all the Bootstrap components that use it; for example, to change the background color of the body element to green (#00FF00 is the hexadecimal code for green), all you need to do is change the value of the variable called @body-bg in Bootstrap, as shown in the following code:

@body-bg: #00FF00;

Mixins are similar to variables, but for whole classes. Mixins enable you to embed the properties of one class into another. They allow you to group multiple lines of code together so that they can be used numerous times across the style sheet. Mixins can also be used alongside variables and functions, resulting in multiple inheritance; for example, to add clearfix behavior to an article, you can use the .clearfix mixin, as shown on the left-hand side of the following table. It will result in all the clearfix declarations being included in the compiled CSS code, shown on the right-hand side:

Mixin:

article {
  .clearfix;
}

Compiled CSS code:

article:before,
article:after {
  content: " "; // 1
  display: table; // 2
}
article:after {
  clear: both;
}

A clearfix mixin is a way for an element to automatically clear after itself, so that you don't need to add additional markup. It's generally used in float layouts, where elements are floated in order to be stacked horizontally.

Let's look at a pragmatic example to understand how this kind of customization is used in a real-world scenario:

Download and unzip the Bootstrap files into a folder.
Create an HTML file called bootstrap_example and save it in the same folder where you saved the Bootstrap files.
Add the following code to it:

<!DOCTYPE html>
<html>
  <head>
    <title>BootStrap with Packt</title>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <!-- Downloaded Bootstrap CSS -->
    <link href="css/bootstrap.css" rel="stylesheet">
    <!-- JavaScript plugins (requires jQuery) -->
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
    <!-- Include all compiled plugins (below), or include individual files as needed -->
    <script src="js/bootstrap.min.js"></script>
  </head>
  <body>
    <h1>Welcome to Packt</h1>
    <button type="button" class="btn btn-default btn-lg" id="packt">PACKT LESSONS</button>
  </body>
</html>

The output of this code upon execution will be as follows:

The Bootstrap folder includes the following folders and file:

css
fonts
js
bootstrap_example.html

This Bootstrap folder is shown in the following screenshot:

Since we are going to use the Bootstrap source code now, let's download the ZIP file and keep it at any location. Unzip it, and we can see the contents of the folder, as shown in the following screenshot:

Let's now create a new folder called bootstrap in the css folder. The contents of our css folder will appear as displayed in the following screenshot:

Copy the contents of the less folder from the source code and paste them into the newly created bootstrap folder inside the css folder. Thus, the contents of this bootstrap folder within the css folder will appear as displayed in the following screenshot:

In the bootstrap folder, look for the variables.less file and open it using Notepad or Notepad++. In this example, we are using plain Notepad, and on opening the variables.less file with Notepad, we can see the contents of the file, as shown in the following screenshot:

Currently, we can see that @body-bg is assigned the default value #fff as the color code. Change the background color of the body element to green by assigning it the value #00ff00. Save the file, and then look for the bootstrap.less file in the bootstrap folder.

In the next step, we are going to use WinLess. Open WinLess and add the contents of the bootstrap folder to it. In the folder pane, you will see all the .less files loaded, as shown in the following screenshot:

Now, we need to uncheck all the files and select only the bootstrap.less file, as shown in the following screenshot:

Click on Compile. This will compile your bootstrap.less file to bootstrap.css. Copy the newly compiled bootstrap.css file from the bootstrap folder and paste it into the css folder, thereby replacing the original bootstrap.css file.

Now that we have the updated bootstrap.css file, go back to bootstrap_example.html and execute it. Upon execution, the output of the code will be as follows:

Thus, we can see that the background color of the <body> element turns green, as we altered it globally in the variables.less file, which is linked to the bootstrap.less file that was later compiled to bootstrap.css by WinLess.

We can also use LESS variables and mixins to customize Bootstrap. We can import the Bootstrap files and add our customizations. Let's now create our own .less file, called styles.less, in the css folder. We will include the Bootstrap files by adding the following line of code to the styles.less file:

@import "./bootstrap/bootstrap.less";

We have given the path ./bootstrap/bootstrap.less as per the location of the bootstrap.less file. Remember to give the appropriate path if you have placed it at any other location.
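Because styles.less now has access to everything Bootstrap defines, the same file can also hold your own reusable mixins alongside the variable overrides. The following is only an illustrative sketch (the mixin name, parameter, and class are not part of Bootstrap):

// A parameterized mixin, defined once...
.rounded(@radius: 6px) {
  border-radius: @radius;
}

// ...and reused wherever it is needed
.btn-rounded {
  .rounded(30px);
}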
Now, let's try a few customizations and add the following code to styles.less:

@body-bg: #FFA500;
@padding-large-horizontal: 40px;
@font-size-base: 7px;
@line-height-base: 9px;
@border-radius-large: 75px;

The next step is to compile the styles.less file to styles.css. We will again use WinLess for this purpose; uncheck all the options and select only styles.less to be compiled. On compilation, the styles.css file will contain all the CSS declarations from Bootstrap. The next step is to add the styles.css stylesheet to the bootstrap_example.html file, so your HTML code will look like this:

<!DOCTYPE html>
<html>
<head>
<title>BootStrap with Packt</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<!-- Downloaded Bootstrap CSS -->
<link href="css/bootstrap.css" rel="stylesheet">
<!-- JavaScript plugins (requires jQuery) -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<!-- Include all compiled plugins (below), or include individual files as needed -->
<script src="js/bootstrap.min.js"></script>
<link href="css/styles.css" rel="stylesheet">
</head>
<body>
<h1>Welcome to Packt</h1>
<button type="button" class="btn btn-default btn-lg" id="packt">PACKT LESSONS</button>
</body>
</html>

Since we changed the background color to orange (#FFA500), created a large border radius, and redefined font-size-base and line-height-base, the output on execution is as displayed in the preceding screenshot.

The LESS variables should be added to the styles.less file after the Bootstrap import so that they override the variables defined in the Bootstrap files. In short, all the custom code you write should be added after the Bootstrap import.

Summary

Therefore, we had a look at the procedure to implement deep customization in Bootstrap using the LESS source code, variables, and mixins. This is still just the start of the journey; there is much more to learn, and in a pragmatic sense, the journey is the destination.

Resources for Article:

Further resources on this subject:
Creating attention-grabbing pricing tables [article]
Getting Started with Bootstrap [article]
Bootstrap 3.0 is Mobile First [article]


Role of AngularJS

Packt
16 Dec 2014
7 min read
This article by Sandeep Kumar Patel, author of Responsive Web Design with AngularJS, explores the role of AngularJS in responsive web development. Before going into AngularJS, you will learn about responsive web development in general. Responsive web development can be performed in two ways:

Using the browser sniffing approach
Using the CSS3 media queries approach

(For more resources related to this topic, see here.)

Using the browser sniffing approach

When we view web pages through our browser, the browser sends a user agent string to the server. This string provides information such as browser and device details. By reading these details, the browser can be redirected to the appropriate view. This method of reading client details is known as browser sniffing. The user agent string carries a lot of information about the source from which the request was generated. The following diagram shows the information shared by the user agent string.

Details of the parameters present in the user agent string are as follows:

Browser name: This represents the actual name of the browser from which the request originated, for example, Mozilla or Opera
Browser version: This represents the browser release version from the vendor; for example, Firefox's latest version is 31
Browser platform: This represents the underlying engine on which the browser is running, for example, Trident or WebKit
Device OS: This represents the operating system running on the device from which the request originated, for example, Linux or Windows
Device processor: This represents the processor type on which the operating system is running, for example, 32 or 64 bit

A different user agent string is generated based on the combination of the device and the type of browser used while accessing a web page. The following examples show user agent strings for various browser and device combinations:

Firefox on a Windows desktop:
Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0

Chrome on a Windows desktop:
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.66 Safari/537.36

Opera on a Windows desktop:
Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14

Safari on an OS X 10 desktop:
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.13+ (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2

Internet Explorer on a Windows desktop:
Mozilla/5.0 (compatible; MSIE 10.6; Windows NT 6.1; Trident/5.0; InfoPath.2; SLCC1; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET CLR 2.0.50727) 3gpp-gba UNTRUSTED/1.0

AngularJS has features, such as providers and services, which are most useful for this browser user agent sniffing and redirection approach. An AngularJS provider can be created and used in the configuration of the routing module. This provider can have reusable properties and reusable methods that can be used to identify the device and route the specific request to the appropriate template view. To discover more about user agent strings on various browser and device combinations, visit http://www.useragentstring.com/pages/Browserlist/.
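As a minimal sketch of what such a provider might look like — the provider name, the mobile pattern, and the template names below are our own illustrative assumptions, not code from the book:

// A hypothetical provider exposing a simple user agent check
angular.module('myApp', [])
  .provider('deviceDetector', function () {
    // Reusable property: user agent keywords treated as mobile devices.
    // It can be tweaked at config time, before the service is created.
    this.mobilePattern = /Mobile|Android|iPhone|iPad/i;

    this.$get = function () {
      var pattern = this.mobilePattern;
      return {
        // Reusable method: true when the current user agent looks mobile
        isMobile: function () {
          return pattern.test(window.navigator.userAgent);
        }
      };
    };
  })
  .run(['deviceDetector', function (deviceDetector) {
    // The service can now drive redirection or template selection
    var templateName = deviceDetector.isMobile() ? 'mobile.html' : 'desktop.html';
    console.log('Selected template: ' + templateName);
  }]);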
CSS3 media queries approach

CSS3 brings a new horizon to web application development, and one of its key features is media queries for developing responsive web applications. Media queries use media types and features as the deciding parameters to apply styles to the current web page.

Media type

CSS3 media queries provide rules for media types to have different styles applied to a web page. The media queries specification lists the media types that an implementing browser should support. These media types are as follows:

all: This is used for all media type devices
aural: This is used for speech and sound synthesizers
braille: This is used for braille tactile feedback devices
embossed: This is used for paged braille printers
handheld: This is used for small or handheld devices, for example, mobile
print: This is used for printers, for example, an A4-size paper document
projection: This is used for projection-based devices, such as a projector screen with a slide
screen: This is used for computer screens, for example, desktop and laptop screens
tty: This is used for media using a fixed-pitch character grid, such as teletypes and terminals
tv: This is used for television-type devices, for example, webOS or Android-based televisions

A media rule can be declared using the @media keyword with the specific type for the targeted media. The following code shows an example of the media rule usage, where the body background color is black and the text is white for the screen media type, and the body background color is white and the text is black for the print media type:

@media screen {
  body {
    background: black;
    color: white;
  }
}

@media print {
  body {
    background: white;
    color: black;
  }
}

An external style sheet can be downloaded and applied to the current page based on the media type with the HTML link tag. The following code uses the link tag in conjunction with the media type:

<link rel='stylesheet' media='screen' href='<fileName.css>' />

To learn more about different media types, visit https://developer.mozilla.org/en-US/docs/Web/CSS/@media#Media_types.

Media feature

Conditional styles can be applied to a page based on different features of a device. The features supported by CSS3 media queries for applying styles are as follows:

color: Styles can be applied based on the number of bits used for a color component by the device
color-index: Styles can be applied based on the color lookup table
aspect-ratio: Styles can be applied based on the aspect ratio of the display area
device-aspect-ratio: Styles can be applied based on the device aspect ratio
device-height: Styles can be applied based on the device height; this includes the entire screen
device-width: Styles can be applied based on the device width; this includes the entire screen
grid: Styles can be applied based on the device type: bitmap or grid
height: Styles can be applied based on the height of the device rendering area
monochrome: Styles can be applied based on the monochrome type; this represents the number of bits used by the device in grayscale
orientation: Styles can be applied based on the viewport mode: landscape or portrait
resolution: Styles can be applied based on the pixel density
scan: Styles can be applied based on the scanning type used by the device for rendering
width: Styles can be applied based on the device screen width

The following code shows some examples of CSS3 media queries using different device features for conditional styling:

/* for screen devices with a minimum aspect ratio of 0.5 */
@media screen and (min-aspect-ratio: 1/2) {
  img {
    height: 70px;
    width: 70px;
  }
}

/* for all devices in portrait viewport */
@media all and (orientation: portrait) {
  img {
    height: 100px;
    width: 200px;
  }
}

/* for printer devices with a minimum resolution of 300dpi pixel density */
@media print and (min-resolution: 300dpi) {
  img {
    height: 600px;
    width: 400px;
  }
}

To learn more about different media features, visit https://developer.mozilla.org/en-US/docs/Web/CSS/@media#Media_features.
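The same conditions can also be evaluated from JavaScript with the standard window.matchMedia API, which is handy when an AngularJS application needs to react to a media query in code rather than in a style sheet. The following is a small sketch; the query chosen here is just an example:

// Evaluate a media query from JavaScript
var portraitQuery = window.matchMedia('(orientation: portrait)');

if (portraitQuery.matches) {
  console.log('The viewport is currently in portrait mode');
}

// React whenever the result of the media query changes
portraitQuery.addListener(function (mql) {
  console.log('Portrait mode: ' + mql.matches);
});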
The following" code shows some examples" of CSS3 media queries using different device features for conditional styles used: //for screen devices with a minimum aspect ratio 0.5 @media screen and (min-aspect-ratio: 1/2) { img {    height: 70px;    width: 70px; } } //for all device in portrait viewport @media all and (orientation: portrait) { img {    height: 100px;    width: 200px; } } //For printer devices with a minimum resolution of 300dpi pixel density @media print and (min-resolution: 300dpi) { img {    height: 600px;    width: 400px; } } To learn more" about different media features, visit https://developer.mozilla.org/en-US/docs/Web/CSS/@media#Media_features. Summary In this chapter, you learned about responsive design and the SPA architecture. "You now understand the role of the AngularJS library when developing a responsive application. We quickly went through all the important features of AngularJS with the coded syntax. In the next chapter, you will set up your AngularJS application and learn to create dynamic routing-based on the devices. Resources for Article:  Further resources on this subject: Best Practices for Modern Web Applications [article] Important Aspect of AngularJS UI Development [article] A look into responsive design frameworks [article]


Building a Remote-controlled TV with Node-Webkit

Roberto González
04 Dec 2014
14 min read
Node-webkit is one of the most promising technologies to have come out in the last few years. It lets you ship a native desktop app for Windows, Mac, and Linux using just HTML, CSS, and some JavaScript — the exact same languages you use to build any web app. You basically get your very own frameless WebKit to build your app, which is then supercharged with NodeJS, giving you access to some powerful libraries that are not available in a typical browser.

As a demo, we are going to build a remote-controlled Youtube app. This involves creating a native app that displays Youtube videos on your computer, as well as a mobile client that will let you search for and select the videos you want to watch straight from your couch. You can download the finished project from https://github.com/Aerolab/youtube-tv. You need to follow the first part of this guide (Getting started) to set up the environment and then run run.sh (on Mac) or run.bat (on Windows) to start the app.

Getting started

First of all, you need to install Node.JS (a JavaScript platform), which you can download from http://nodejs.org/download/. The installer comes bundled with NPM (Node.JS Package Manager), which lets you install everything you need for this project.

Since we are going to be building two apps (a desktop app and a mobile app), it's better if we get the boring HTML+CSS part out of the way first, so we can concentrate on the JavaScript part of the equation. Download the project files from https://github.com/Aerolab/youtube-tv/blob/master/assets/basics.zip and put them in a new folder. You can name the project's folder youtube-tv or whatever you want. The folder should look like this:

- index.html   // This is the starting point for our desktop app
- css          // Our desktop app styles
- js           // This is where the magic happens
- remote       // This is where the magic happens (Part 2)
- libraries    // FFMPEG libraries, which give you H.264 video support in Node-Webkit
- player       // Our Youtube player
- Gruntfile.js // Build scripts
- run.bat      // run.bat runs the app on Windows
- run.sh       // sh run.sh runs the app on Mac

Now open the Terminal (on Mac or Linux) or a new command prompt (on Windows) right in that folder. We'll install a couple of dependencies we need for this project, so type these commands to install node-gyp and grunt-cli. Each one will take a few seconds to download and install.

On Mac or Linux:

sudo npm install node-gyp -g
sudo npm install grunt-cli -g

On Windows:

npm install node-gyp -g
npm install grunt-cli -g

Leave the Terminal open. We'll be using it again in a bit.

All Node.JS apps start with a package.json file (our manifest), which holds most of the settings for your project, including which dependencies you are using. Go ahead and create your own package.json file (right inside the project folder) with the following contents. Feel free to change anything you like, such as the project name, the icon, or anything else. Check out the documentation at https://github.com/rogerwang/node-webkit/wiki/Manifest-format:

{
  "//": "The // keys in package.json are comments.",

  "//": "Your project's name. Go ahead and change it!",
  "name": "Remote",
  "//": "A simple description of what the app does.",
  "description": "An example of node-webkit",
  "//": "This is the first html the app will load. Just leave this this way",
  "main": "app://host/index.html",
  "//": "The version number. 0.0.1 is a good start :D",
  "version": "0.0.1",
  "//": "This is used by Node-Webkit to set up your app.",
  "window": {
    "//": "The window title for the app",
    "title": "Remote",
    "//": "The icon for the app",
    "icon": "css/images/icon.png",
    "//": "Do you want the File/Edit/Whatever toolbar?",
    "toolbar": false,
    "//": "Do you want a standard window around your app (a title bar and some borders)?",
    "frame": true,
    "//": "Can you resize the window?",
    "resizable": true
  },
  "webkit": {
    "plugin": false,
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36"
  },

  "//": "These are the libraries we'll be using:",
  "//": "Express is a web server, which will handle the files for the remote",
  "//": "Socket.io lets you handle events in real time, which we'll use with the remote as well.",
  "dependencies": {
    "express": "^4.9.5",
    "socket.io": "^1.1.0"
  },

  "//": "And these are just task handlers to make things easier",
  "devDependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-copy": "^0.6.0",
    "grunt-node-webkit-builder": "^0.1.21"
  }
}

You'll also find Gruntfile.js, which takes care of downloading all of the node-webkit assets and building the app once we are ready to ship. Feel free to take a look into it, but it's mostly boilerplate code.

Once you've set everything up, go back to the Terminal and install everything you need by typing:

npm install
grunt nodewebkitbuild

You may run into some issues when doing this on Mac or Linux. In that case, try using sudo npm install and sudo grunt nodewebkitbuild.

npm install installs all of the dependencies you mentioned in package.json, both the regular dependencies and the development ones, like grunt and grunt-node-webkit-builder, which downloads the Windows and Mac versions of node-webkit, sets them up so they can play videos, and builds the app. Wait a bit for everything to install properly and we're ready to get started. Note that if you are using Windows, you might get a scary error related to Visual C++ when running npm install. Just ignore it.

Building the desktop app

All web apps (or websites for that matter) start with an index.html file. We are going to be creating just that to get our app to run:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>Youtube TV</title>

<link href='http://fonts.googleapis.com/css?family=Roboto:500,400' rel='stylesheet' type='text/css' />
<link href="css/normalize.css" rel="stylesheet" type="text/css" />
<link href="css/styles.css" rel="stylesheet" type="text/css" />
</head>
<body>

<div id="serverInfo"><h1>Youtube TV</h1></div>

<div id="videoPlayer"></div>

<script src="js/jquery-1.11.1.min.js"></script>
<script src="js/youtube.js"></script>
<script src="js/app.js"></script>

</body>
</html>

As you may have noticed, we are using three scripts for our app: jQuery (pretty well known at this point), a Youtube video player, and finally app.js, which contains our app's logic. Let's dive into that!

First of all, we need to create the basic elements for our remote control. The easiest way of doing this is to create a basic web server and serve a small web app that can search Youtube, select a video, and provide some play/pause controls so we don't have any good reasons to get up from the couch. Open js/app.js and type the following:
// Show the Developer Tools. And yes, Node-Webkit has developer tools built in!
// Uncomment it to open it automatically
//require('nw.gui').Window.get().showDevTools();

// Express is a web server, which will allow us to create a small web app
// with which to control the player
var express = require('express');
var app = express();
var server = require('http').Server(app);
var io = require('socket.io')(server);

// We'll be opening up our web server on port 8080 (which doesn't require root privileges)
// You can access this server at http://127.0.0.1:8080
var serverPort = 8080;
server.listen(serverPort);

// All the static files (css, js, html) for the remote will be served using Express.
// These assets are in the /remote folder
app.use('/', express.static('remote'));

With those seven lines of code (not counting comments), we just got a neat web server working on port 8080. If you were paying attention to the code, you may have noticed that we required something called socket.io. This lets us use websockets with minimal effort, which means we can communicate with, from, and to our remote instantly. You can learn more about socket.io at http://socket.io/. Let's set that up next in app.js:

// Socket.io handles the communication between the remote and our app in real time,
// so we can instantly send commands from a computer to our remote and back
io.on('connection', function (socket) {

  // When a remote connects to the app, let it know immediately
  // the current status of the video (play/pause)
  socket.emit('statusChange', Youtube.status);

  // This is what happens when we receive the watchVideo command
  // (picking a video from the list)
  socket.on('watchVideo', function (video) {
    // video contains a bit of info about our video (id, title, thumbnail)
    // Order our Youtube Player to watch that video
    Youtube.watchVideo(video);
  });

  // These are playback controls. They receive the "play" and "pause" events from the remote
  socket.on('play', function () {
    Youtube.playVideo();
  });
  socket.on('pause', function () {
    Youtube.pauseVideo();
  });

});

// Notify all the remotes when the playback status changes (play/pause)
// This is done with io.emit, which sends the same message to all the remotes
Youtube.onStatusChange = function (status) {
  io.emit('statusChange', status);
};

That's the desktop part done! In a few dozen lines of code we got a web server running at http://127.0.0.1:8080 that can receive commands from a remote to watch a specific video, as well as handle some basic playback controls (play and pause). We are also notifying the remotes of the status of the player as soon as they connect, so they can update their UI with the correct buttons (if it's playing, show the pause button and vice versa). Now we just need to build the remote.
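The Youtube object used above is defined in js/youtube.js, which ships with the project's starter files. As a rough sketch of the interface the app relies on — this is our own illustrative reconstruction based on the YouTube IFrame Player API, not the actual contents of that file:

// A minimal sketch of the interface js/app.js expects from js/youtube.js.
// Assumes the YouTube IFrame Player API script has been loaded (the YT global).
var Youtube = {
  status: 'stop',
  onStatusChange: null,
  player: null,

  // Create (or reuse) a player and load the selected video
  watchVideo: function (video) {
    if (this.player) {
      this.player.loadVideoById(video.id);
      return;
    }
    var self = this;
    this.player = new YT.Player('videoPlayer', {
      videoId: video.id,
      events: {
        onStateChange: function (event) {
          // Map the IFrame API states to our simple play/pause status
          self.status = event.data === YT.PlayerState.PLAYING ? 'play' : 'pause';
          if (typeof self.onStatusChange === 'function') self.onStatusChange(self.status);
        }
      }
    });
  },

  playVideo: function () {
    if (this.player) this.player.playVideo();
  },

  pauseVideo: function () {
    if (this.player) this.player.pauseVideo();
  }
};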
Building the remote control

The server is just half of the equation. We also need to add the corresponding logic on the remote control, so it's able to communicate with our app. In remote/index.html, add the following HTML:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>TV Remote</title>

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />

<link rel="stylesheet" href="/css/normalize.css" />
<link rel="stylesheet" href="/css/styles.css" />
</head>
<body>

<div class="controls">
  <div class="search">
    <input id="searchQuery" type="search" value="" placeholder="Search on Youtube..." />
  </div>
  <div class="playback">
    <button class="play">&gt;</button>
    <button class="pause">||</button>
  </div>
</div>

<div id="results" class="video-list"></div>

<div class="__templates" style="display:none;">
  <article class="video">
    <figure><img src="" alt="" /></figure>

    <div class="info"><h2></h2></div>
  </article>
</div>

<script src="/socket.io/socket.io.js"></script>
<script src="/js/jquery-1.11.1.min.js"></script>

<script src="/js/search.js"></script>
<script src="/js/remote.js"></script>

</body>
</html>

Again, we have a few libraries: Socket.io is served automatically by our desktop app at /socket.io/socket.io.js, and it manages the communication with the server. jQuery is somehow always there, search.js manages the integration with the Youtube API (you can take a look if you want), and remote.js handles the logic for the remote.

The remote itself is pretty simple. It can look for videos on Youtube, and when we click on a video it connects with the app, telling it to play the video with socket.emit. Let's dive into remote/js/remote.js to make this thing work:

// First of all, connect to the server (our desktop app)
var socket = io.connect();

// Search Youtube when the user stops typing. This gives us an automatic search.
var searchTimeout = null;
$('#searchQuery').on('keyup', function (event) {
  clearTimeout(searchTimeout);
  searchTimeout = setTimeout(function () {
    searchYoutube($('#searchQuery').val());
  }, 500);
});

// When we click on a video, watch it on the app
$('#results').on('click', '.video', function (event) {
  // Send an event to notify the server we want to watch this video
  socket.emit('watchVideo', $(this).data());
});

// When the server tells us that the player changed status (play/pause),
// alter the playback controls
socket.on('statusChange', function (status) {
  if (status === 'play') {
    $('.playback .pause').show();
    $('.playback .play').hide();
  } else if (status === 'pause' || status === 'stop') {
    $('.playback .pause').hide();
    $('.playback .play').show();
  }
});

// Notify the app when we hit the play button
$('.playback .play').on('click', function (event) {
  socket.emit('play');
});

// Notify the app when we hit the pause button
$('.playback .pause').on('click', function (event) {
  socket.emit('pause');
});

This is very similar to our server, except that we are using socket.emit a lot more often to send commands back to our desktop app, telling it which videos to play and handling our basic play/pause controls. The only thing left to do is make the app run. Ready? Go to the terminal again and type:

If you are on a Mac:

sh run.sh

If you are on Windows:

run.bat

If everything worked properly, you should see the app, and if you open a web browser to http://127.0.0.1:8080, the remote client will open up. Search for a video, pick anything you like, and it'll play in the app. This also works if you point any other device on the same network to your computer's IP, which brings me to the next (and last) point.

Finishing touches

There is one small improvement we can make: print out the computer's IP to make it easier to connect to the app from any other device on the same Wi-Fi network (like a smartphone).
On js/app.js, add the following code to find out the IP and update our UI so it's the first thing we see when we open the app:

// Find the local IP
function getLocalIP(callback) {
  require('dns').lookup(require('os').hostname(), function (err, add, fam) {
    typeof callback == 'function' ? callback(add) : null;
  });
}

// To make things easier, find out the machine's IP and communicate it
getLocalIP(function (ip) {
  $('#serverInfo h1').html('Go to<br/><strong>http://' + ip + ':' + serverPort + '</strong><br/>to open the remote');
});

The next time you run the app, the first thing you'll see is the IP of your computer, so you just need to type that URL into your smartphone to open the remote and control the player from any computer, tablet, or smartphone (as long as they are on the same Wi-Fi network).

That's it! You can start expanding on this to improve the app. Why not open the app fullscreen by default? Why not get rid of the horrible default frame and create your own? You can actually designate any div as a window handle with CSS (using -webkit-app-region: drag), so you can drag the window by that div and create your own custom title bar.

Summary

While the app has a lot of interlocking parts, it's a good first project to find out what you can achieve with node-webkit in just a few minutes. I hope you enjoyed this post!

About the author

Roberto González is the co-founder of Aerolab, "an awesome place where we really push the barriers to create amazing, well-coded designs for the best digital products". He can be reached at @robertcode.

Part 2: Migrating a WordPress Blog to Middleman and Deploying to Amazon S3

Mike Ball
28 Nov 2014
9 min read
Part 2: Migrating WordPress blog content and deploying to production

In part 1 of this series, we created middleman-demo, a basic Middleman-based blog. Part 1 addressed the benefits of a static site, setting up a Middleman development environment, Middleman's templating system, and how to configure a Middleman project to support basic blogging functionality. Now that middleman-demo is configured for blogging, let's export the old content from an existing WordPress blog, compile the application for production, and deploy it to a web server. In this part, we'll cover the following:

Using the wp2middleman gem to migrate content from an existing WordPress blog
Creating a Rake task to establish an Amazon Web Services S3 bucket
Deploying a Middleman blog to Amazon S3
Setting up a custom domain for an S3-hosted site

If you didn't follow part 1, or you no longer have your original middleman-demo code, you can clone mine and check out the part2 branch:

$ git clone http://github.com/mdb/middleman-demo && cd middleman-demo && git checkout part2

Export your content from WordPress

Now that middleman-demo is configured for blogging, let's export the old content from an existing WordPress blog. WordPress provides a tool through which blog content can be exported as an XML file, also called a WordPress "eXtended RSS" or "WXR" file. A WXR file can be generated and downloaded via the WordPress admin's Tools > Export screen, as explained in WordPress's WXR documentation. In the absence of a real WordPress blog, download the middleman_demo.wordpress.xml file, a sample WXR file:

$ wget www.mikeball.info/downloads/middleman_demo.wordpress.xml

Migrating the WordPress posts to markdown

To migrate the posts contained in the WordPress WXR file, I created wp2middleman, a command line tool that generates Middleman-style markdown files from the posts in a WXR. Install wp2middleman via Rubygems:

$ gem install wp2middleman

wp2middleman provides a wp2mm command. Pass the middleman_demo.wordpress.xml file to the wp2mm command:

$ wp2mm middleman_demo.wordpress.xml

If all goes well, the following output is printed to the terminal:

Successfully migrated middleman_demo.wordpress.xml

wp2middleman also produced an export directory. The export directory houses the blog posts from the middleman_demo.wordpress.xml WXR file, now represented as Middleman-style markdown files:

$ ls export/
2007-02-14-Fusce-mauris-ligula-rutrum-at-tristique-at-pellentesque-quis-nisl.html.markdown
2007-07-21-Suspendisse-feugiat-enim-vel-lorem.html.markdown
2008-02-20-Suspendisse-rutrum-Suspendisse-nisi-turpis-congue-ac.html.markdown
2008-03-17-Duis-euismod-purus-ac-quam-Mauris-tortor.html.markdown
2008-04-02-Donec-cursus-tincidunt-libero-Nam-blandit.html.markdown
2008-04-28-Etiam-nulla-nisl-cursus-vel-auctor-at-mollis-a-quam.html.markdown
2008-06-08-Praesent-faucibus-ligula-luctus-dolor.html.markdown
2008-07-08-Proin-lobortis-sapien-non-venenatis-luctus.html.markdown
2008-08-08-Etiam-eu-urna-eget-dolor-imperdiet-vehicula-Phasellus-dictum-ipsum-vel-neque-mauris-interdum-iaculis-risus.html.markdown
2008-09-08-Lorem-ipsum-dolor-sit-amet-consectetuer-adipiscing-elit.html.markdown
2013-12-30-Hello-world.html.markdown

Note that wp2mm supports additional options, though these are beyond the scope of this tutorial. Read more on wp2middleman's GitHub page. Also note that the markdown posts in export are named *.html.markdown, and some contain HTML embedded in the original WordPress post.
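To give an idea of what wp2mm produces, a migrated post looks roughly like the following sketch. The frontmatter fields follow Middleman's blogging conventions; the exact field names and the body text here are illustrative assumptions, not a verbatim excerpt from the export:

---
title: Hello world
date: 2013-12-30
tags: example
---

The body of the original WordPress post, now stored as
Middleman-flavored markdown.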
Middleman supports the ability to embed multiple languages within a single post file. For example, Middleman will evaluate a file whose name ends in .html.erb.markdown first as markdown and then as ERb. The final result would be HTML.

Move the contents of export to source/blog and remove the export directory:

$ mv export/* source/blog && rm -rf export

Now, assuming the Middleman server is running, visiting http://localhost:4567 lists all the blog posts migrated from WordPress. Each post links to its permalink. In the case of posts with tags, each tag links to a tag page.

Compiling for production

Thus far, we've been viewing middleman-demo in local development, where the Middleman server dynamically generates the HTML, CSS, and JavaScript with each request. However, Middleman's value lies in its ability to generate a static website -- simple HTML, CSS, JavaScript, and image files -- served directly by a web server such as Nginx or Apache, and thus requiring no application server or internal backend. Compile middleman-demo to a static build directory:

$ middleman build

The resulting build directory houses every HTML file that can be served by middleman-demo, as well as all the necessary CSS, JavaScript, and images. Its directory layout maps to the URL patterns defined in config.rb. The build directory is typically ignored from source control.

Deploying the build to Amazon S3

Amazon Web Services is Amazon's cloud computing platform. Amazon S3, or Simple Storage Service, is a simple data storage service. Because S3 "buckets" can be accessed over HTTP, S3 offers a great cloud-based hosting solution for static websites such as middleman-demo. While S3 is not free, it is generally extremely affordable. Amazon charges on a per-usage basis according to how many requests your bucket serves, including PUT requests, that is, uploads. Read more about S3 pricing in AWS's pricing guide.

Let's deploy the middleman-demo build to Amazon S3. First, sign up for AWS. Through AWS's web-based admin, create an IAM user and locate the corresponding "access key id" and "secret access key":

1. Visit the AWS IAM console.
2. From the navigation menu, click Users.
3. Select your IAM user name.
4. Click User Actions; then click Manage Access Keys.
5. Click Create Access Key.
6. Click Download Credentials; store the keys in a secure location.
7. Store your access key id in an environment variable named AWS_ACCESS_KEY_ID:

$ export AWS_ACCESS_KEY_ID=your_access_key_id

8. Store your secret access key in an environment variable named AWS_SECRET_ACCESS_KEY:

$ export AWS_SECRET_ACCESS_KEY=your_secret_access_key

Note that, to persist these environment variables beyond the current shell session, you may want to set them automatically in each shell session. Setting them in a file such as your ~/.bashrc ensures this:

export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key

Creating an S3 bucket with Ruby

To deploy to S3, we'll need to create a "bucket," an S3 endpoint to which middleman-demo's build directory can be deployed. This can be done via AWS's management console, but we can also automate its creation with Ruby. We'll use the aws-sdk Ruby gem and a Rake task to create an S3 bucket for middleman-demo.
Add the aws-sdk gem to middleman-demo's Gemfile:

gem 'aws-sdk'

Install the new gem:

$ bundle install

Create a Rakefile:

$ touch Rakefile

Add the following Ruby to the Rakefile; this code establishes a Rake task -- a quick command line utility -- to automate the creation of an S3 bucket:

require 'aws-sdk'

desc "Create an AWS S3 bucket"
task :s3_bucket, :bucket_name do |task, args|
  s3 = AWS::S3.new(region: 'us-east-1')
  bucket = s3.buckets.create(args[:bucket_name])
  bucket.configure_website do |config|
    config.index_document_suffix = 'index.html'
    config.error_document_key = 'error/index.html'
  end
end

From the command line, use the newly established :s3_bucket Rake task to create a unique S3 bucket for your middleman-demo. Note that, if you have an existing domain you'd like to use, your bucket should be named www.yourdomain.com:

$ rake s3_bucket[some_unique_bucket_name]

For example, I named my S3 bucket www.middlemandemo.com by entering the following:

$ rake s3_bucket[www.middlemandemo.com]

After running rake s3_bucket[YOUR_BUCKET], you should see YOUR_BUCKET amongst the buckets listed in your AWS web console.

Creating an error template

Our Rake task specifies a config.error_document_key whose value is error/index.html. This configures your S3 bucket to serve an error page for erroring responses, such as 404s. Create a source/error.html.erb template:

$ touch source/error.html.erb

And add the following:

---
title: Oops - something went wrong
---

<h2><%= current_page.data.title %></h2>

Deploying to your S3 bucket

With an S3 bucket established, the middleman-sync Ruby gem can be used to automate uploading middleman-demo builds to S3. Add the middleman-sync gem to the Gemfile:

gem 'middleman-sync'

Install the middleman-sync gem:

$ bundle install

Add the necessary middleman-sync configuration to config.rb:

activate :sync do |sync|
  sync.fog_provider = 'AWS'
  sync.fog_region = 'us-east-1'
  sync.fog_directory = '<YOUR_BUCKET>'
  sync.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
  sync.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
end

Build and deploy middleman-demo:

$ middleman build && middleman sync

Note: if your deployment fails with a 'post_connection_check': hostname "YOUR_BUCKET" does not match the server certificate (OpenSSL::SSL::SSLError) (Excon::Errors::SocketError), it's likely due to an open issue with middleman-sync. To work around this issue, add the following to the top of config.rb:

require 'fog'
Fog.credentials = { path_style: true }

Now, middleman-demo is browsable online at http://YOUR_BUCKET.s3-website-us-east-1.amazonaws.com/

Using a custom domain

With middleman-demo deployed to an S3 bucket whose name matches a domain name, a custom domain can be configured easily. To use a custom domain, log into your domain management provider and add a CNAME mapping your domain to www.yourdomain.com.s3-website-us-east-1.amazonaws.com. While the exact process for managing a CNAME varies between domain name providers, it is generally fairly simple. Note that your S3 bucket name must perfectly match your domain name.

Recap

We've examined the benefits of static site generators and covered some basics regarding Middleman blogging. We've learned how to use the wp2middleman gem to migrate content from a WordPress blog, and we've learned how to deploy Middleman to Amazon's cloud-based Simple Storage Service (S3).

About this author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript.
He works for Comcast Interactive Media, where he helps build web-based TV and video consumption applications.


Using front controllers to create a new page

Packt
28 Nov 2014
22 min read
In this article by Fabien Serny, author of PrestaShop Module Development, you will learn about controllers and object models. Controllers handle the display on the front office and permit us to create new page types. Object models handle all the required database requests. We will also see that, sometimes, hooks are not enough and can't change the way PrestaShop works. In these cases, we will use overrides, which permit us to alter the default process of PrestaShop without making changes to the core code. If you need to create a complex module, you will need to use front controllers. First of all, using front controllers will permit you to split the code into several classes (and files) instead of coding all your module actions in the same class. Also, unlike hooks (which handle some of the display in the existing PrestaShop pages), they will allow you to create new pages. (For more resources related to this topic, see here.)

Creating the front controller

To make this section easier to understand, we will make an improvement to our current module. Instead of displaying all of the comments (there can be many), we will only display the last three comments and a link that redirects to a page containing all the comments on the product. First of all, we will add a limit to the Db request in the assignProductTabContent method of your module class that retrieves the comments on the product page:

$comments = Db::getInstance()->executeS('
  SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
  WHERE `id_product` = '.(int)$id_product.'
  ORDER BY `date_add` DESC
  LIMIT 3');

Now, if you go to a product, you should only see the last three comments. We will now create a controller that will display all comments concerning a specific product. Go to your module's root directory and create the following directory path:

/controllers/front/

Create the file that will contain the controller. You have to choose a simple and explicit name, since the filename will be used in the URL; let's name it comments.php. In this file, create a class, naming it so that you follow the [ModuleName][ControllerFilename]ModuleFrontController convention, which extends the ModuleFrontController class. So, in our case, the file will be as follows:

<?php
class MyModCommentsCommentsModuleFrontController extends ModuleFrontController
{
}

The naming convention has been defined by PrestaShop and must be respected. The class names are a bit long, but they enable us to avoid having two identical class names in different modules. Now you just have to set the template file you want to display with the following lines:

class MyModCommentsCommentsModuleFrontController extends ModuleFrontController
{
  public function initContent()
  {
    parent::initContent();
    $this->setTemplate('list.tpl');
  }
}

Next, create a template named list.tpl and place it in views/templates/front/ of your module's directory:

<h1>{l s='Comments' mod='mymodcomments'}</h1>

Now, you can check the result by loading this link on your shop:

/index.php?fc=module&module=mymodcomments&controller=comments

You should see the Comments title displayed. The fc parameter defines the front controller type, the module parameter defines in which module directory the front controller is, and, at last, the controller parameter defines which controller file to load.

Maintaining compatibility with the Friendly URL option

In order to let the visitor access the controller page we created in the preceding section, we will just add a link between the last three comments displayed and the comment form in the displayProductTabContent.tpl template.
To maintain compatibility with the Friendly URL option of PrestaShop, we will use the getModuleLink method. This will generate a URL according to the URL settings (defined in Preferences | SEO & URLs). If the Friendly URL option is enabled, then it will generate a friendly URL (for example, /en/5-tshirts-doctor-who); if not, it will generate a classic URL (for example, /index.php?id_category=5&controller=category&id_lang=1). This function takes three parameters: the name of the module, the controller filename you want to call, and an array of parameters. The array of parameters must contain all of the data that's needed, which will be used by the controller. In our case, we will need at least the product identifier, id_product, to display only the comments related to the product. We can also add a module_action parameter, just in case our controller contains several possible actions. Here is an example. As you will notice, I created the parameters array directly in the template using the assign Smarty method. From my point of view, it is easier to have the content of the parameters close to the link. However, if you want, you can create this array in your module class and assign it to your template in order to have cleaner code:

<div class="rte">
  {assign var=params value=[
    'module_action' => 'list',
    'id_product' => $smarty.get.id_product
  ]}
  <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
    {l s='See all comments' mod='mymodcomments'}
  </a>
</div>

Now, go to your product page and click on the link; the URL displayed should look something like this:

/index.php?module_action=list&id_product=1&fc=module&module=mymodcomments&controller=comments&id_lang=1

Creating a small action dispatcher

In our case, we won't need several possible actions in the comments controller. However, it would be great to create a small dispatcher in our front controller just in case we want to add other actions later. To do so, in controllers/front/comments.php, we will create new methods corresponding to each action. I propose to use the init[Action] naming convention (but this is not mandatory). So, in our case, it will be a method named initList:

protected function initList()
{
  $this->setTemplate('list.tpl');
}

Now, in the initContent method, we will create an $actions_list array containing all the possible actions and associated callbacks:

$actions_list = array('list' => 'initList');

Now, we will retrieve the id_product and module_action parameters in variables. Once complete, we will check whether the id_product parameter is valid and whether the action exists by checking the $actions_list array. If the method exists, we will call it dynamically:

if ($id_product > 0 && isset($actions_list[$module_action]))
  $this->$actions_list[$module_action]();

Here's what your code should look like:

public function initContent()
{
  parent::initContent();
  $id_product = (int)Tools::getValue('id_product');
  $module_action = Tools::getValue('module_action');
  $actions_list = array('list' => 'initList');
  if ($id_product > 0 && isset($actions_list[$module_action]))
    $this->$actions_list[$module_action]();
}

If you did this correctly, nothing should have changed when you refreshed the page in your browser, and the Comments title should still be displayed.

Displaying the product name and comments

We will now display the product name (to let the visitor know he or she is on the right page) and the associated comments.
First of all, create a public variable, $product, in your controller class, and fill it in the initContent method with an instance of the selected product. This way, the product object will be available in every action method:

$this->product = new Product((int)$id_product, false, $this->context->cookie->id_lang);

In the initList method, just before setTemplate, we will make a DB request to get all the comments associated with the product, and then assign the product object and the comments list to Smarty:

// Get comments
$comments = Db::getInstance()->executeS('
  SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
  WHERE `id_product` = '.(int)$this->product->id.'
  ORDER BY `date_add` DESC');

// Assign comments and product object
$this->context->smarty->assign('comments', $comments);
$this->context->smarty->assign('product', $this->product);

Once complete, we will display the product name by changing the h1 title:

<h1>{l s='Comments on product' mod='mymodcomments'} "{$product->name}"</h1>

If you refresh your page, you should now see the product name displayed. I won't explain this part, since it's exactly the same HTML code we used in the displayProductTabContent.tpl template. At this point, the comments should appear without the CSS style; do not panic, just go to the next section of this article.

Including CSS and JS media in the controller

As you can see, the comments are now displayed. However, you are probably asking yourself why the CSS style hasn't been applied properly. If you look back at your module class, you will see that it is the hookDisplayProductTab hook on the product page that includes the CSS and JS files. The problem is that we are not on a product page here, so we have to include them on this page. To do so, we will create a method named setMedia in our controller and add the CSS and JS files there (as we did in the hookDisplayProductTab hook). It will override the default setMedia method contained in the FrontController class. Since this method includes the general CSS and JS files used by PrestaShop, it is very important to call the setMedia parent method in our override:

public function setMedia()
{
  // We call the parent method
  parent::setMedia();

  // Save the module path in a variable
  $this->path = __PS_BASE_URI__.'modules/mymodcomments/';

  // Include the module CSS and JS files needed
  $this->context->controller->addCSS($this->path.'views/css/starrating.css', 'all');
  $this->context->controller->addJS($this->path.'views/js/starrating.js');
  $this->context->controller->addCSS($this->path.'views/css/mymodcomments.css', 'all');
  $this->context->controller->addJS($this->path.'views/js/mymodcomments.js');
}

If you refresh your browser, the comments should now appear well formatted. In an attempt to improve the display, we will just add the date of the comment beside the author's name. Just replace <p>{$comment.firstname} {$comment.lastname|substr:0:1}.</p> in your list.tpl template with this line:

<div>{$comment.firstname} {$comment.lastname|substr:0:1}. <small>{$comment.date_add|substr:0:10}</small></div>

You can also replace the same line in the displayProductTabContent.tpl template if you want. If you want more information on how the Smarty methods work, such as the substr I used for the date, you can check the official Smarty documentation.

Adding a pagination system

Your controller page is now fully working. However, if one of your products has thousands of comments, the display won't be quick. We will add a pagination system to handle this case.
First of all, in the initList method, we need to set a number of comments per page and know how many comments are associated with the product:

// Get number of comments
$nb_comments = Db::getInstance()->getValue('
  SELECT COUNT(`id_product`)
  FROM `'._DB_PREFIX_.'mymod_comment`
  WHERE `id_product` = '.(int)$this->product->id);

// Init
$nb_per_page = 10;

By default, I have set the number per page to 10, but you can set any number you want. The value is stored in a variable to easily change the number if needed. Now we just have to calculate how many pages there will be:

$nb_pages = ceil($nb_comments / $nb_per_page);

Also, set the page the visitor is on:

$page = 1;
if (Tools::getValue('page') != '')
  $page = (int)$_GET['page'];

Now that we have this data, we can generate the SQL limit and use it in the comments DB request so as to display the 10 comments corresponding to the page the visitor is on:

$limit_start = ($page - 1) * $nb_per_page;
$limit_end = $nb_per_page;
$comments = Db::getInstance()->executeS('
  SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
  WHERE `id_product` = '.(int)$this->product->id.'
  ORDER BY `date_add` DESC
  LIMIT '.(int)$limit_start.','.(int)$limit_end);

If you refresh your browser, you should only see the last 10 comments displayed. To conclude, we just need to add links to the different pages for navigation. First, assign the page the visitor is on and the total number of pages to Smarty:

$this->context->smarty->assign('page', $page);
$this->context->smarty->assign('nb_pages', $nb_pages);

Then, in the list.tpl template, we will display numbers in a list from 1 to the total number of pages. On each number, we will add a link with the getModuleLink method we saw earlier, with an additional parameter, page:

<ul class="pagination">
  {for $count=1 to $nb_pages}
    {assign var=params value=[
      'module_action' => 'list',
      'id_product' => $smarty.get.id_product,
      'page' => $count
    ]}
    <li>
      <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
        <span>{$count}</span>
      </a>
    </li>
  {/for}
</ul>

To make the pagination clearer for the visitor, we can use the native CSS class to indicate the page the visitor is on:

{if $page ne $count}
  <li>
    <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
      <span>{$count}</span>
    </a>
  </li>
{else}
  <li class="active current">
    <span><span>{$count}</span></span>
  </li>
{/if}

Your pagination should now be fully working.

Creating routes for a module's controller

At the beginning of this article, we chose to use the getModuleLink method to keep compatibility with the Friendly URL option of PrestaShop. Let's enable this option in the SEO & URLs section under Preferences. Now go to your product page and look at the target of the See all comments link; it should have changed from /index.php?module_action=list&id_product=1&fc=module&module=mymodcomments&controller=comments&id_lang=1 to /en/module/mymodcomments/comments?module_action=list&id_product=1.

The result is nice, but it is not really a friendly URL yet. The ISO code at the beginning of URLs appears only if you have enabled several languages; so, if you have only one language enabled, the ISO code will not appear in the URL in your case. Since PrestaShop 1.5.3, you can create specific routes for your module's controllers. To do so, you have to attach your module to the ModuleRoutes hook.
In your module's install method in mymodcomments.php, add the registerHook method for ModuleRoutes:

// Register hooks
if (!$this->registerHook('displayProductTabContent') ||
  !$this->registerHook('displayBackOfficeHeader') ||
  !$this->registerHook('ModuleRoutes'))
  return false;

Don't forget: you will have to uninstall/install your module if you want it to be attached to this hook. If you don't want to uninstall your module (because you don't want to lose all the comments you filled in), you can go to the Positions section under the Modules section of your back office and hook it manually.

Now we have to create the corresponding hook method in the module's class. This method will return an array with all the routes we want to add. The array is a bit complex to explain, so let me write an example first:

public function hookModuleRoutes()
{
  return array(
    'module-mymodcomments-comments' => array(
      'controller' => 'comments',
      'rule' => 'product-comments{/:module_action}{/:id_product}/page{/:page}',
      'keywords' => array(
        'id_product' => array('regexp' => '[\d]+', 'param' => 'id_product'),
        'page' => array('regexp' => '[\d]+', 'param' => 'page'),
        'module_action' => array('regexp' => '[\w]+', 'param' => 'module_action'),
      ),
      'params' => array(
        'fc' => 'module',
        'module' => 'mymodcomments',
        'controller' => 'comments'
      )
    )
  );
}

The array can contain several routes. The naming convention for the array key of a route is module-[ModuleName]-[ModuleControllerName]. So, in our case, the key will be module-mymodcomments-comments. In the array, you have to set the following:

The controller; in our case, it is comments.
The construction of the route (the rule parameter). You can use all the parameters you passed in the getModuleLink method by using the {/:YourParameter} syntax. PrestaShop will automatically add / before each dynamic parameter. In our case, I chose to construct the route this way (but you can change it if you want): product-comments{/:module_action}{/:id_product}/page{/:page}
The keywords array corresponding to the dynamic parameters. For each dynamic parameter, you have to set the regexp that will permit retrieving it from the URL (basically, [\d]+ for integer values and [\w]+ for string values) and the parameter name.
The parameters associated with the route. In the case of a module's front controller, it will always be the same three parameters: the fc parameter set with the fixed value module, the module parameter set with the module name, and the controller parameter set with the filename of the module's controller.

Very important: PrestaShop is now waiting for a page parameter to build the link. To avoid fatal errors, you will have to set the page parameter to 1 in your getModuleLink parameters in the displayProductTabContent.tpl template:

{assign var=params value=[
  'module_action' => 'list',
  'id_product' => $smarty.get.id_product,
  'page' => 1
]}

Once complete, if you go to a product page, the target of the See all comments link should now be:

/en/product-comments/list/1/page/1

It's really better, but we can improve it a little more by setting the name of the product in the URL.
In the assignProductTabContent method of your module, we will load the product object and assign it to Smarty:

$product = new Product((int)$id_product, false, $this->context->cookie->id_lang);
$this->context->smarty->assign('product', $product);

This way, in the displayProductTabContent.tpl template, we will be able to add the product's rewritten link to the parameters of the getModuleLink method (do not forget to add it in the list.tpl template too!):

{assign var=params value=[
  'module_action' => 'list',
  'product_rewrite' => $product->link_rewrite,
  'id_product' => $smarty.get.id_product,
  'page' => 1
]}

We can now update the rule of the route with the product's link_rewrite variable:

'rule' => 'product-comments{/:module_action}{/:product_rewrite}{/:id_product}/page{/:page}'

Do not forget to add the product_rewrite string to the keywords array of the route:

'product_rewrite' => array('regexp' => '[\w-_]+', 'param' => 'product_rewrite'),

If you refresh your browser, the link should look like this now:

/en/product-comments/list/tshirt-doctor-who/1/page/1

Nice, isn't it?

Installing overrides with modules

As we saw in the introduction of this article, sometimes hooks are not sufficient to meet the needs of developers; hooks can't alter the default processes of PrestaShop. We could add code to the core classes; however, this is not recommended, as all those core changes will be erased when PrestaShop is updated using the autoupgrade module (even a manual upgrade would be difficult). That's where overrides take the stage.

Creating the override class

Installing new object model and controller overrides in PrestaShop is very easy. To do so, you have to create an override directory in the root of your module's directory. Then, you just have to place your override files respecting the path of the original files you want to override. When you install the module, PrestaShop will automatically move the overrides to the override directory of PrestaShop.

In our case, we will override the find method of the /classes/Search.php class to display the grade and the number of comments on the product list. So we just have to create the Search.php file in /modules/mymodcomments/override/classes/Search.php, and fill it with:

<?php
class Search extends SearchCore
{
  public static function find($id_lang, $expr, $page_number = 1,
    $page_size = 1, $order_by = 'position', $order_way = 'desc',
    $ajax = false, $use_cookie = true, Context $context = null)
  {
  }
}

In this method, first of all, we will call the parent method to get the products list and return it:

// Call parent method
$find = parent::find($id_lang, $expr, $page_number, $page_size,
  $order_by, $order_way, $ajax, $use_cookie, $context);

// Return products
return $find;

We want to add the information (grade and number of comments) to the products list. So, between the find method call and the return statement, we will add some lines of code. First, we will check whether $find contains products. The find method can return an empty array when no products match the search. In this case, we don't have to change the way the method works.
We also have to check whether the mymodcomments module has been installed (if the override is being used, the module is most likely installed, but as I said, it's just for security):

if (isset($find['result']) && !empty($find['result']) &&
  Module::isInstalled('mymodcomments'))
{
}

If we enter these conditions, we will list the product identifiers returned by the find parent method:

// List id product
$products = $find['result'];
$id_product_list = array();
foreach ($products as $p)
  $id_product_list[] = (int)$p['id_product'];

Next, we will retrieve the grade average and the number of comments for the products in the list:

// Get grade average and nb comments for products in list
$grades_comments = Db::getInstance()->executeS('
  SELECT `id_product`, AVG(`grade`) as grade_avg,
    count(`id_mymod_comment`) as nb_comments
  FROM `'._DB_PREFIX_.'mymod_comment`
  WHERE `id_product` IN ('.implode(',', $id_product_list).')
  GROUP BY `id_product`');

Finally, fill the $products array with the data (grades and comments) corresponding to each product:

// Associate grade and nb comments with product
foreach ($products as $kp => $p)
  foreach ($grades_comments as $gc)
    if ($gc['id_product'] == $p['id_product'])
    {
      $products[$kp]['mymodcomments']['grade_avg'] = round($gc['grade_avg']);
      $products[$kp]['mymodcomments']['nb_comments'] = $gc['nb_comments'];
    }
$find['result'] = $products;

Now, as we saw at the beginning of this section, the overrides of the module are installed when you install the module, so you will have to uninstall/install your module. Once this is done, you can check the override contained in your module; the content of /modules/mymodcomments/override/classes/Search.php should be copied to /override/classes/Search.php. If an override of the class already exists, PrestaShop will try to merge it by adding the methods you want to override to the existing override class.

Once the override is added by your module, PrestaShop should have regenerated the cache/class_index.php file (which contains the path of every core class and controller), and the path of the Search class should have changed. Open the cache/class_index.php file and search for 'Search'; the content of this array should now be:

'Search' =>
array (
  'path' => 'override/classes/Search.php',
  'type' => 'class',
),

If it's not the case, it probably means the permissions of this file are wrong and PrestaShop could not regenerate it. To fix this, just delete the file manually and refresh any page of your PrestaShop. The file will be regenerated and the new path will appear.

Since you uninstalled/installed the module, all your comments should have been deleted. So take two minutes to fill in one or two comments on a product. Then search for this product. As you must have noticed, nothing has changed. Data is assigned to Smarty, but not used by the template yet. To avoid the deletion of comments each time you uninstall the module, you should comment out the loadSQLFile call in the uninstall method of mymodcomments.php. We will uncomment it once we have finished working with the module.
Now, we will have to edit the product-list.tpl template of the active theme (by default, it is /themes/default-bootstrap/), so the module won't be a turnkey module anymore. A merchant who installs this module will have to manually edit this template if he wants to have this feature.

In a perfect world, you should avoid using overrides. In this case, we could have used the displayProductListReviews hook, but I just wanted to show you a simple example with an override. Moreover, this hook has existed only since PrestaShop 1.6, so it would not work on PrestaShop 1.5.

In the product-list.tpl template, just after the short description, check whether the $product.mymodcomments variable exists (to test if there are comments on the product), and then display the grade average and the number of comments:

{if isset($product.mymodcomments)}
<p>
  <b>{l s='Grade:'}</b> {$product.mymodcomments.grade_avg}/5<br/>
  <b>{l s='Number of comments:'}</b> {$product.mymodcomments.nb_comments}
</p>
{/if}

Here is what the products list should look like now:

Creating a new method in a native class

In our case, we have overridden an existing method of a PrestaShop class. But we could also have added a method to an existing class. For example, we could have added a method named getComments to the Product class:

<?php
class Product extends ProductCore
{
  public function getComments($limit_start, $limit_end = false)
  {
    $limit = (int)$limit_start;
    if ($limit_end)
      $limit = (int)$limit_start.','.(int)$limit_end;
    $comments = Db::getInstance()->executeS('
      SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
      WHERE `id_product` = '.(int)$this->id.'
      ORDER BY `date_add` DESC
      LIMIT '.$limit);
    return $comments;
  }
}

This way, you could easily access the product comments everywhere in the code with just an instance of the Product class.

Summary

This article taught us about the main design patterns of PrestaShop and explained how to use them to construct a well-organized application.

Resources for Article:

Further resources on this subject:
Django 1.2 E-commerce: Generating PDF Reports from Python using ReportLab [Article]
Customizing PrestaShop Theme Part 2 [Article]
Django 1.2 E-commerce: Data Integration [Article]