
How-To Tutorials - Programming


Light Speed Unit Testing

Packt
23 Apr 2015
6 min read
In this article by Paulo Ragonha, author of the book Jasmine JavaScript Testing - Second Edition, we will learn about Jasmine stubs and the Jasmine Ajax plugin.

Jasmine stubs

We use stubs whenever we want to force a specific path in our specs or replace a real implementation with a simpler one. Let's take the example of the acceptance criterion, "Stock when fetched, should update its share price", and write it using Jasmine stubs. The stock's fetch function is implemented using the $.getJSON function, as follows:

    Stock.prototype.fetch = function(parameters) {
      $.getJSON(url, function (data) {
        that.sharePrice = data.sharePrice;
        success(that);
      });
    };

We could use the spyOn function to set up a spy on the getJSON function with the following code:

    describe("when fetched", function() {
      beforeEach(function() {
        spyOn($, 'getJSON').and.callFake(function(url, callback) {
          callback({ sharePrice: 20.18 });
        });
        stock.fetch();
      });

      it("should update its share price", function() {
        expect(stock.sharePrice).toEqual(20.18);
      });
    });

We use the and.callFake function to set a behavior on our spy (by default, a spy does nothing and returns undefined). We make the spy invoke its callback parameter with an object response ({ sharePrice: 20.18 }). Later, at the expectation, we use the toEqual assertion to verify that the stock's sharePrice has changed.

To run this spec, you no longer need a server to make requests to, which is a good thing, but there is one issue with this approach. If the fetch function gets refactored to use $.ajax instead of $.getJSON, the test will fail. A better solution, provided by a Jasmine plugin called jasmine-ajax, is to stub the browser's AJAX infrastructure instead, so the implementation of the AJAX request is free to be done in different manners.

Jasmine Ajax

Jasmine Ajax is an official plugin developed to help with the testing of AJAX requests. It changes the browser's AJAX request infrastructure to a fake implementation. This fake (or mocked) implementation, although simpler, still behaves like the real implementation to any code using its API.

Installing the plugin

Before we dig into the spec implementation, we first need to add the plugin to the project. Go to https://github.com/jasmine/jasmine-ajax/ and download the current release (which should be compatible with the Jasmine 2.x release). Place it inside the lib folder. It also needs to be added to the SpecRunner.html file, so go ahead and add another script:

    <script type="text/javascript" src="lib/mock-ajax.js"></script>

A fake XMLHttpRequest

Whenever you are using jQuery to make AJAX requests, under the hood it is actually using the XMLHttpRequest object to perform the request. XMLHttpRequest is the standard JavaScript HTTP API. Even though its name suggests that it uses XML, it supports other types of content such as JSON; the name has remained the same for compatibility reasons. So, instead of stubbing jQuery, we could change the XMLHttpRequest object with a fake implementation. That is exactly what this plugin does.
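To see why stubbing at the XMLHttpRequest level is more robust than spying on $.getJSON, consider a hypothetical refactor of fetch that uses $.ajax instead (the url variable and the success handling are assumptions carried over from the earlier snippet, not code from the book). The spy-based spec above would now fail, while the jasmine-ajax specs shown in the next section would keep passing, because the request still goes through XMLHttpRequest:

    // Hypothetical refactor: same behavior, but implemented with $.ajax instead of $.getJSON.
    // A spy on $.getJSON no longer intercepts anything here, but the fake
    // XMLHttpRequest installed by jasmine-ajax still catches the request.
    Stock.prototype.fetch = function(parameters) {
      var that = this;
      $.ajax({
        url: url,             // assumed to be defined as in the original example
        dataType: 'json',
        success: function (data) {
          that.sharePrice = data.sharePrice;
        }
      });
    };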
Let's rewrite the previous spec to use this fake implementation:

    describe("when fetched", function() {
      beforeEach(function() {
        jasmine.Ajax.install();
      });

      beforeEach(function() {
        stock.fetch();

        jasmine.Ajax.requests.mostRecent().respondWith({
          'status': 200,
          'contentType': 'application/json',
          'responseText': '{ "sharePrice": 20.18 }'
        });
      });

      afterEach(function() {
        jasmine.Ajax.uninstall();
      });

      it("should update its share price", function() {
        expect(stock.sharePrice).toEqual(20.18);
      });
    });

Drilling down into the implementation: first, we tell the plugin to replace the original implementation of the XMLHttpRequest object with a fake implementation, using the jasmine.Ajax.install function. We then invoke the stock.fetch function, which will invoke $.getJSON, creating a new XMLHttpRequest under the hood. Finally, we use the jasmine.Ajax.requests.mostRecent().respondWith function to get the most recently made request and respond to it with a fake response.

The respondWith function accepts an object with three properties:

- The status property, to define the HTTP status code.
- The contentType property (JSON in the example).
- The responseText property, a text string containing the response body for the request.

Then, it's all a matter of running the expectations:

    it("should update its share price", function() {
      expect(stock.sharePrice).toEqual(20.18);
    });

Since the plugin changes the global XMLHttpRequest object, you must remember to tell Jasmine to restore it to its original implementation after the test runs; otherwise, you could interfere with the code of other specs (such as the Jasmine jQuery fixtures module). Here's how you can accomplish this:

    afterEach(function() {
      jasmine.Ajax.uninstall();
    });

There is also a slightly different approach to writing this spec: the request is first stubbed (with the response details) and the code to be exercised is executed later. The previous example changes to the following:

    beforeEach(function() {
      jasmine.Ajax.stubRequest('http://localhost:8000/stocks/AOUE').andReturn({
        'status': 200,
        'contentType': 'application/json',
        'responseText': '{ "sharePrice": 20.18 }'
      });

      stock.fetch();
    });

The jasmine.Ajax.stubRequest function can stub any request to a specific URL. In the example, it is defined by the URL http://localhost:8000/stocks/AOUE, and the response definition is as follows:

    {
      'status': 200,
      'contentType': 'application/json',
      'responseText': '{ "sharePrice": 20.18 }'
    }

The response definition takes the same properties as the previously used respondWith function.

Summary

In this article, you learned how asynchronous tests can hurt the quick feedback loop you can get with unit testing. I showed how you can use either stubs or fakes to make your specs run quicker and with fewer dependencies. We have seen two different ways in which you can test AJAX requests: with a simple Jasmine stub, and with the more advanced fake implementation of XMLHttpRequest. You also got more familiar with spies and stubs and should be more comfortable using them in different scenarios.

Resources for Article: Further resources on this subject: Optimizing JavaScript for iOS Hybrid Apps, Working with Blender, Category Theory.


Designing Jasmine Tests with Spies

Packt
22 Apr 2015
17 min read
In this article by Munish Sethi, author of the book Jasmine Cookbook, we will see the implementation of Jasmine tests using spies.

Nowadays, JavaScript has become the de facto programming language for building and empowering frontend/web applications. We can use JavaScript to develop simple or complex applications. However, applications in production are often vulnerable to bugs caused by design inconsistencies, logical implementation errors, and similar issues. Due to this, it is usually difficult to predict how applications will behave in real-time environments, which leads to unexpected behavior, nonavailability of applications, or outages of shorter or longer duration. This generates a lack of confidence and dissatisfaction among application users. Also, a high cost is often associated with fixing production bugs. Therefore, there is a need to develop applications with high quality and high availability.

Jasmine is a Behavior-Driven Development (BDD) framework for testing JavaScript code both in the browser and on the server side. It plays a vital role in establishing an effective development process by applying efficient testing processes. Jasmine provides a rich set of libraries to design and develop Jasmine specs (unit tests) for JavaScript (or JavaScript-enabled) applications. In this article, we will see how to develop specs using Jasmine spies and matchers. We will also see how to write Jasmine specs with the Data-Driven approach, using JSON/HTML fixtures, from an end-to-end (E2E) perspective.

Let's understand the concept of mocks before we start developing Jasmine specs with spies. Generally, we write one unit test, corresponding to a Jasmine spec, to test a method, object, or component in isolation and see how it behaves in different circumstances. However, there are situations where a method or object has dependencies on other methods or objects. In this scenario, we need to design tests/specs across units/methods or components to validate behavior or simulate a real-time scenario. However, due to the nonavailability of dependent methods/objects or of a staging/production environment, it is quite challenging to write Jasmine tests for methods that have dependencies on other methods/objects. This is where mocks come into the picture.

A mock is a fake object that replaces the original object/method and imitates the behavior of the real object, without going into the nitty-gritty of creating the real object/method. Mocks work by implementing the proxy model. Whenever we create a mock object, it creates a proxy object, which replaces the real object/method. We can then define the methods that are called and their returned values in our test method. Mocks can then be utilized to retrieve some runtime statistics, as follows:

- How many times was the mocked function/object method called?
- What was the value that the function returned to the caller?
- With how many arguments was the function called?

Developing Jasmine specs using spies

In Jasmine, mocks are referred to as spies. Spies are used to mock a function/object method. A spy can stub any function and track calls to it and all its arguments. Jasmine provides a rich set of functions and properties to enable mocking. There are special matchers to interact with spies, namely toHaveBeenCalled and toHaveBeenCalledWith.

Now, to understand the preceding concepts, let's assume that you are developing an application for a company providing solutions for the healthcare industry.
Currently, there is a need to design a component that gets a person's details (such as name, age, blood group, details of diseases, and so on) and processes them further for other usage. Now, assume that you are developing a component that verifies a person's details for blood or organ donation. A few factors or biological rules exist for donating or receiving blood. For now, we can consider the following biological factors:

- The person's age should be greater than or equal to 18 years
- The person should not be infected with HIV+

Let's create the validate_person_eligibility.js file and consider the following code in the current context:

    var Person = function(name, DOB, bloodgroup, donor_receiver) {
      this.myName = name;
      this.myDOB = DOB;
      this.myBloodGroup = bloodgroup;
      this.donor_receiver = donor_receiver;

      this.ValidateAge = function(myDOB) {
        this.myDOB = myDOB || DOB;
        return this.getAge(this.myDOB);
      };

      this.ValidateHIV = function(personName, personDOB, personBloodGroup) {
        this.myName = personName || this.myName;
        this.myDOB = personDOB || this.myDOB;
        this.myBloodGroup = personBloodGroup || this.myBloodGroup;
        return this.checkHIV(this.myName, this.myDOB, this.myBloodGroup);
      };
    };

    Person.prototype.getAge = function(birth) {
      console.log("getAge() function is called");
      var calculatedAge = 0;
      // Logic to calculate person's age will be implemented later
      if (calculatedAge < 18) {
        throw new ValidationError("Person must be 18 years or older");
      }
      return calculatedAge;
    };

    Person.prototype.checkHIV = function(pName, pDOB, pBloodGroup) {
      console.log("checkHIV() function is called");
      bolHIVResult = true;
      // Logic to verify HIV+ will be implemented later
      if (bolHIVResult == true) {
        throw new ValidationError("A person is infected with HIV+");
      }
      return bolHIVResult;
    };

    // Define custom error for validation
    function ValidationError(message) {
      this.message = message;
    }
    ValidationError.prototype = Object.create(Error.prototype);

In the preceding code snapshot, we created a Person object, which accepts four parameters: the name of the person, the date of birth, the person's blood group, and whether the person is a donor or receiver. Further, we defined the following functions within the Person object to validate the biological factors:

- ValidateAge(): This function accepts the date of birth as an argument and returns the person's age by calling the getAge function. Notice that under the getAge function, the code to calculate the person's age is not developed yet.
- ValidateHIV(): This function accepts three arguments: name, date of birth, and the person's blood group. It verifies whether the person is infected with HIV or not by calling the checkHIV function. Under the checkHIV function, you can observe that the code to check whether the person is infected with HIV+ or not is not developed yet.
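Before moving on to the spec file, it helps to see what a spy actually records at runtime. The following is a minimal sketch (not from the book) that exercises the Person object above and reads back the call statistics mentioned earlier; the stubbed return value of 25 is an assumption chosen purely for illustration, and the snippet is meant to be read as if it were placed inside an it block, since spyOn is only available within a running spec:

    // Spy on getAge and stub it to return a fixed age of 25 (illustrative value).
    var person = new Person("John Player", "10/30/1980", "O+", "Donor");
    spyOn(person, "getAge").and.returnValue(25);

    person.ValidateAge("10/30/1980");

    // Runtime statistics recorded by the spy:
    console.log(person.getAge.calls.count());    // how many times it was called -> 1
    console.log(person.getAge.calls.argsFor(0)); // arguments of the first call -> ["10/30/1980"]
    console.log(person.getAge.calls.any());      // whether it was called at all -> true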
Next, let's create the spec file (validate_person_eligibility_spec.js) and code the following lines to develop the Jasmine spec, which validates all the test conditions (biological rules) described in the previous sections:

    describe("<ABC> Company: Health Care Solution, ", function() {
      describe("When to donate or receive blood, ", function() {
        it("Person's age should be greater than " +
            "or equal to 18 years", function() {
          var testPersonCriteria = new Person();
          spyOn(testPersonCriteria, "getAge");
          testPersonCriteria.ValidateAge("10/25/1990");
          expect(testPersonCriteria.getAge).toHaveBeenCalled();
          expect(testPersonCriteria.getAge).toHaveBeenCalledWith("10/25/1990");
        });

        it("A person should not be " +
            "infected with HIV+", function() {
          var testPersonCriteria = new Person();
          spyOn(testPersonCriteria, "checkHIV");
          testPersonCriteria.ValidateHIV();
          expect(testPersonCriteria.checkHIV).toHaveBeenCalled();
        });
      });
    });

In the preceding snapshot, we mocked the getAge and checkHIV functions using spyOn(). Also, we applied the toHaveBeenCalled matcher to verify whether the getAge function was called or not. Let's look at the following pointers before we jump to the next step:

- Jasmine provides the spyOn() function to mock any JavaScript function. A spy can stub any function and track calls to it and to all its arguments.
- A spy only exists in the describe or it block in which it is defined, and it is removed after each spec.
- Jasmine provides special matchers, toHaveBeenCalled and toHaveBeenCalledWith, to interact with spies. The toHaveBeenCalled matcher returns true if the spy was called. The toHaveBeenCalledWith matcher returns true if the argument list matches any of the recorded calls to the spy.

Let's add the reference to the validate_person_eligibility.js file to the Jasmine runner (that is, SpecRunner.html) and run the spec file to execute both specs. You will see that both specs pass.

While executing the Jasmine specs, you will notice that the log messages we defined under the getAge() and checkHIV() functions are not printed in the browser console window. Whenever we mock a function using Jasmine's spyOn() function, it replaces the original method of the object with a proxy method.

Next, let's consider a situation where function <B> is called by function <A>, which is mocked in your test. Due to the mock behavior, a proxy object replaces function <A>, so function <B> will never be called. However, in order to pass the test, it needs to be executed. In this situation, we chain the spyOn() function with and.callThrough. Let's consider the following test code:

    it("Person's age should be greater than " +
        "or equal to 18 years", function() {
      var testPersonCriteria = new Person();
      spyOn(testPersonCriteria, "getAge").and.callThrough();
      testPersonCriteria.ValidateAge("10/25/1990");
      expect(testPersonCriteria.getAge).toHaveBeenCalled();
      expect(testPersonCriteria.getAge).toHaveBeenCalledWith("10/25/1990");
    });

Whenever the spyOn() function is chained with and.callThrough, the spy will still track all calls to it; in addition, however, it will delegate control back to the actual implementation/function. To see the effect, let's run the spec file validate_person_eligibility_spec.js with the Jasmine runner.
You will see that the spec is now failing. This time, while executing the spec file, you will notice that the log message (that is, "getAge() function is called") is also printed in the browser console window.

On the other hand, you can also define your own logic or set values in your test code, as per specific requirements, by chaining the spyOn() function with and.callFake. For example, consider the following code:

    it("Person's age should be greater than " +
        "or equal to 18 years", function() {
      var testPersonCriteria = new Person();
      spyOn(testPersonCriteria, "getAge").and.callFake(function() {
        return 18;
      });
      testPersonCriteria.ValidateAge("10/25/1990");
      expect(testPersonCriteria.getAge).toHaveBeenCalled();
      expect(testPersonCriteria.getAge).toHaveBeenCalledWith("10/25/1990");
      expect(testPersonCriteria.getAge()).toEqual(18);
    });

Whenever the spyOn() function is chained with and.callFake, all calls to the spy will be delegated to the supplied function. You can also notice that we added one more expectation to validate the person's age. To see the execution results, run the spec file with the Jasmine runner. You will see that both specs pass.

Implementing Jasmine specs using a custom spy method

In the previous section, we looked at how we can spy on a function. Now, we will understand the need for a custom spy method and how Jasmine specs can be designed using it. There are several cases where one would need to replace the original method; for example, when the original function/method takes a long time to execute, or when it depends on another method/object (or third-party system) that is not available in the test environment. In such situations, it is beneficial to replace the original method with a fake/custom spy method for testing purposes. Jasmine provides a method called jasmine.createSpy to create your own custom spy method.

As we described in the previous section, a few factors or biological rules exist for donating or receiving blood. Let's consider a few more biological rules, as follows:

- A person with the O+ blood group can receive blood from a person with the O+ blood group
- A person with the O+ blood group can give blood to a person with the A+ blood group

First, let's update the JavaScript file validate_person_eligibility.js and add a new method, ValidateBloodGroup, to the Person object. Consider the following code:

    this.ValidateBloodGroup = function(callback) {
      var _this = this;
      var matchBloodGroup;
      this.MatchBloodGroupToGiveReceive(function (personBloodGroup) {
        _this.personBloodGroup = personBloodGroup;
        matchBloodGroup = personBloodGroup;
        callback.call(_this, _this.personBloodGroup);
      });
      return matchBloodGroup;
    };

    Person.prototype.MatchBloodGroupToGiveReceive = function(callback) {
      // Network actions are required to match the values corresponding
      // to the matching blood group. Network actions are asynchronous, hence the
      // need for a callback.
      // But, for now, let's use hard-coded values.
      var matchBloodGroup;
      if (this.donor_receiver == null || this.donor_receiver == undefined) {
        throw new ValidationError("Argument (donor_receiver) is missing ");
      }
      if (this.myBloodGroup == "O+" && this.donor_receiver.toUpperCase() == "RECEIVER") {
        matchBloodGroup = ["O+"];
      } else if (this.myBloodGroup == "O+" && this.donor_receiver.toUpperCase() == "DONOR") {
        matchBloodGroup = ["A+"];
      }
      callback.call(this, matchBloodGroup);
    };

In the preceding code snapshot, you can notice that the ValidateBloodGroup() function accepts a callback function as an argument.
The ValidateBloodGroup() function returns the matching/eligible blood group(s) for the receiver/donor by calling the MatchBloodGroupToGiveReceive function. Let's create the Jasmine tests with a custom spy method using the following code:

    describe("Person With O+ Blood Group: ", function() {
      it("can receive the blood of the " +
          "person with O+ blood group", function() {
        var testPersonCriteria = new Person("John Player", "10/30/1980", "O+", "Receiver");
        spyOn(testPersonCriteria, "MatchBloodGroupToGiveReceive").and.callThrough();
        var callback = jasmine.createSpy();
        testPersonCriteria.ValidateBloodGroup(callback);

        // Verify whether the callback method is called or not
        expect(callback).toHaveBeenCalled();

        // Verify that MatchBloodGroupToGiveReceive is called and check
        // whether control goes back to the function or not
        expect(testPersonCriteria.MatchBloodGroupToGiveReceive).toHaveBeenCalled();
        expect(testPersonCriteria.MatchBloodGroupToGiveReceive.calls.any()).toEqual(true);
        expect(testPersonCriteria.MatchBloodGroupToGiveReceive.calls.count()).toEqual(1);
        expect(testPersonCriteria.ValidateBloodGroup(callback)).toContain("O+");
      });

      it("can give the blood to the " +
          "person with A+ blood group", function() {
        var testPersonCriteria = new Person("John Player", "10/30/1980", "O+", "Donor");
        spyOn(testPersonCriteria, "MatchBloodGroupToGiveReceive").and.callThrough();
        var callback = jasmine.createSpy();
        testPersonCriteria.ValidateBloodGroup(callback);
        expect(callback).toHaveBeenCalled();
        expect(testPersonCriteria.MatchBloodGroupToGiveReceive).toHaveBeenCalled();
        expect(testPersonCriteria.ValidateBloodGroup(callback)).toContain("A+");
      });
    });

You can notice that in the preceding snapshot, we first mocked the MatchBloodGroupToGiveReceive function using spyOn() and chained it with and.callThrough() to hand control back to the function. Thereafter, we created callback as a custom spy method using jasmine.createSpy. Furthermore, we are tracking calls/arguments to the callback and MatchBloodGroupToGiveReceive functions using tracking properties (that is, .calls.any() and .calls.count()).

Whenever we create a custom spy method using jasmine.createSpy, it creates a bare spy. It is a good mechanism to test callbacks. You can also track calls and arguments corresponding to the custom spy method; however, there is no implementation behind it. To execute the tests, run the spec file with the Jasmine runner. You will see that all the specs pass.

Implementing Jasmine specs using the Data-Driven approach

In the Data-Driven approach, Jasmine specs get input or expected values from external data files (JSON, CSV, TXT files, and so on), which are required to run/execute the tests. In other words, we isolate the test data and the Jasmine specs so that one can prepare the test data (input/expected values) separately, as per the needs of the specs. For example, in the previous section, we provided all the input values (that is, name of the person, date of birth, blood group, donor or receiver) to the Person object in the test code itself. However, for better management, it's always good to maintain test data and code/specs separately. To implement Jasmine tests with the Data-Driven approach, let's create a data file, fixture_input_data.json.
For now, you can use the following data in JSON format:

    [
      {
        "Name": "John Player",
        "DOB": "10/30/1980",
        "Blood_Group": "O+",
        "Donor_Receiver": "Receiver"
      },
      {
        "Name": "John Player",
        "DOB": "10/30/1980",
        "Blood_Group": "O+",
        "Donor_Receiver": "Donor"
      }
    ]

Next, we will see how to provide all the required input values in our tests through a data file using the jasmine-jquery plugin. Before we move to the next step and implement the Jasmine tests with the Data-Driven approach, let's note the following points regarding the jasmine-jquery plugin:

- It provides two extensions for writing tests with HTML and JSON fixtures: an API for handling HTML and JSON fixtures in your specs, and a set of custom matchers for the jQuery framework.
- The loadJSONFixtures method loads fixture(s) from one or more JSON files and makes them available at runtime.

To know more about the jasmine-jquery plugin, you can visit the following website: https://github.com/velesin/jasmine-jquery

Let's implement both the specs created in the previous section using the Data-Driven approach. Consider the following code:

    describe("Person With O+ Blood Group: ", function() {
      var fixturefile, fixtures, myResult;

      beforeEach(function() {
        // Start - Load JSON files to provide input data for all the test scenarios
        fixturefile = "fixture_input_data.json";
        fixtures = loadJSONFixtures(fixturefile);
        myResult = fixtures[fixturefile];
        // End - Load JSON files to provide input data for all the test scenarios
      });

      it("can receive the blood of the " +
          "person with O+ blood group", function() {
        // Start - Provide input values from the data file
        var testPersonCriteria = new Person(
            myResult[0].Name,
            myResult[0].DOB,
            myResult[0].Blood_Group,
            myResult[0].Donor_Receiver
        );
        // End - Provide input values from the data file
        spyOn(testPersonCriteria, "MatchBloodGroupToGiveReceive").and.callThrough();
        var callback = jasmine.createSpy();
        testPersonCriteria.ValidateBloodGroup(callback);

        // Verify whether the callback method is called or not
        expect(callback).toHaveBeenCalled();

        // Verify that MatchBloodGroupToGiveReceive is called and check
        // whether control goes back to the function or not
        expect(testPersonCriteria.MatchBloodGroupToGiveReceive).toHaveBeenCalled();
        expect(testPersonCriteria.MatchBloodGroupToGiveReceive.calls.any()).toEqual(true);
        expect(testPersonCriteria.MatchBloodGroupToGiveReceive.calls.count()).toEqual(1);
        expect(testPersonCriteria.ValidateBloodGroup(callback)).toContain("O+");
      });

      it("can give the blood to the " +
          "person with A+ blood group", function() {
        // Start - Provide input values from the data file
        var testPersonCriteria = new Person(
            myResult[1].Name,
            myResult[1].DOB,
            myResult[1].Blood_Group,
            myResult[1].Donor_Receiver
        );
        // End - Provide input values from the data file
        spyOn(testPersonCriteria, "MatchBloodGroupToGiveReceive").and.callThrough();
        var callback = jasmine.createSpy();
        testPersonCriteria.ValidateBloodGroup(callback);
        expect(callback).toHaveBeenCalled();
        expect(testPersonCriteria.MatchBloodGroupToGiveReceive).toHaveBeenCalled();
        expect(testPersonCriteria.ValidateBloodGroup(callback)).toContain("A+");
      });
    });

In the preceding code snapshot, you can notice that we first provided the input data from an external JSON file (that is, fixture_input_data.json) using the loadJSONFixtures function and made it available at runtime.
Thereafter, we provided the input values/data to both the specs as required; we set the values of name, date of birth, blood group, and donor/receiver for specs 1 and 2, respectively. Further, following the same methodology, we can also create a separate data file for the expected values, which we need in our tests to compare with the actual values. If test data (input or expected values) is required during execution, it is advisable to provide it from an external file instead of using hardcoded values in your tests. Now, execute the test suite with the Jasmine runner and you will see that all the specs pass.

Summary

In this article, we looked at the implementation of Jasmine tests using spies. We also demonstrated how to test a callback function using a custom spy method. Further, we saw the implementation of the Data-Driven approach, where you learned how to isolate test data from the code.

Resources for Article: Further resources on this subject: Web Application Testing, Testing Backbone.js Applications, The architecture of JavaScriptMVC.


Inserting GIS Objects

Packt
21 Apr 2015
15 min read
In this article by Angel Marquez, author of the book PostGIS Essentials, we will see how to insert GIS objects. Now is the time to fill our tables with data. It's very important to understand some theoretical concepts about spatial data before we can properly work with it. We will cover these concepts through the real estate company example used previously.

Basically, we will insert two kinds of data. Firstly, all the data that belongs to our own scope of interest; by this, I mean the spatial data that was generated by us (the positions of properties, in the case of the real estate company example) for our specific problem, saved in a way that can be easily exploited. Secondly, we will import data of more general use, provided by a third party. Another important feature that we will cover in this article is spatial data files, which we can use to share, import, and export spatial data in a standardized and popular format called shp, or Shape files.

In this article, we will cover the following topics:

- Developing insertion queries that include GIS objects
- Obtaining useful spatial data from a public third party
- Filling our spatial tables with the help of spatial data files, using a command-line tool
- Filling our spatial tables with the help of spatial data files, using a GUI tool provided by PostGIS

Developing insertion queries with GIS objects

Developing an insertion query is a very common task for someone who works with databases. Basically, we follow the SQL syntax of the insertion, by first listing all the fields involved and then listing all the data that will be saved in each one:

    INSERT INTO tbl_properties( id, town, postal_code, street, "number")
    VALUES (1, 'London', 'N7 6PA', 'Holloway Road', 32);

If the field is of a numerical type, we simply write the number; if it's a string-like data type, we have to enclose the text in single quotes. Now, if we wish to include a spatial value in the insertion query, we must first find a way to represent this value. This is where the Well-Known Text (WKT) notation enters. WKT is a notation that represents a geometry object in a form that can be easily read by humans; the following is an example:

    POINT(-0.116190 51.556173)

Here, we defined a geographic point by using a list of two real values: the longitude (x-axis) and the latitude (y-axis). Additionally, if we need to specify the elevation of a point, we will have to specify a third value for the z-axis; this value is defined in meters by default, as shown in the following code snippet:

    POINT(-0.116190 51.556173 100)

Some of the other basic geometry types defined by the WKT notation are:

- MULTILINESTRING: This is used to define one or more lines
- POLYGON: This is used to define only one polygon
- MULTIPOLYGON: This is used to define several polygons in the same row

So, as an example, an SQL insertion query to add the first row to the tbl_properties table of our real estate database using the WKT notation would be as follows:

    INSERT INTO tbl_properties (id, town, postal_code, street, "number", the_geom)
    VALUES (1, 'London', 'N7 6PA', 'Holloway Road', 32,
            ST_GeomFromText('POINT(-0.116190 51.556173)'));

The special function provided by PostGIS, ST_GeomFromText, parses the text given as a parameter and converts it into a GIS object that can be inserted into the the_geom field. Now, we could think this is everything and, therefore, start to develop all the insertion queries that we need.
It could be true if we just want to work with the data generated by us and there isn't a need to share this information with other entities. However, if we want to have a better understanding of GIS (believe me, it can help you a lot and prevent a lot of unnecessary headaches when working with data from several sources), it would be better to specify another piece of information as part of our GIS object representation, to establish its Spatial Reference System (SRS). In the next section, we will explain this concept.

What is a spatial reference system?

We could think of Earth as a perfect sphere that will float forever in space and never change its shape, but it is not. Earth is alive and in a state of constant change, and it's certainly not a perfect sphere; it is more like an ellipsoid (though not a perfect ellipsoid) with a lot of small variations, which have taken place over the years. If we want to represent a specific position on this irregular shape called Earth, we must first make some abstractions:

- First, we have to choose a method to represent Earth's surface in a regular form (such as a sphere, an ellipsoid, and so on).
- After this, we must take this abstract three-dimensional form and represent it on a two-dimensional plane. This process is commonly called map projection, also known simply as projection.

There are a lot of ways to make a projection; some of them are more precise than others. This depends on the use that we want to give to the data and the kind of projection that we choose. The SRS defines which projection will be used and the transformation that will be used to translate a position from a given projection to another.

This leads us to another important point. Maybe you thought a geographic position was unique, but it is not. By this, I mean that there could be two different positions with the same latitude and longitude values that are in different physical places on Earth. For a position to be unique, it is necessary to specify the SRS that was used to obtain it.

To explain this concept, let's consider Earth as a perfect sphere; how can you represent it as a two-dimensional square? Well, to do this, you will have to make a projection. A projection implies making a spherical 3D image fit into a 2D figure, and there are several ways to achieve this. One option is an azimuthal projection, which is the result of projecting a spherical surface onto a plane; other common options are cylindrical and conical projections. Each one produces a different kind of 2D image of the terrain; each has its own peculiarities and is used for several distinct purposes. If we put all the resultant images of these projections one above the other, the terrain positions are not necessarily the same between two projections, so you must clearly specify which projection you are using in your project in order to avoid possible mistakes and errors when you establish a position.

There are a lot of SRSs defined around the world. They can be grouped by their reach, that is, they can be local (state or province), national (an entire country), regional (several countries from the same area), or global (worldwide).
The International Association of Oil and Gas Producers has defined a collection of Coordinate Reference Systems (CRS) known as the European Petroleum Survey Group (EPSG) dataset and has assigned a unique ID to each of these SRSs; this ID is called the SRID. For a position to be uniquely defined, you must establish the SRS that it belongs to, using its particular ID, that is, its SRID.

There are literally hundreds of SRSs defined; to avoid any possible error, we must standardize which SRS we will use. A very common SRS, widely used around the globe, is the WGS84 SRS, with the SRID 4326. It is very important that you store the spatial data in your database using EPSG:4326 as much as possible, or at least use a single common projection in your database; this way, you will avoid problems when you analyze your data.

The WKT notation doesn't support the SRID specification as part of the text; for this, the EWKT notation was developed, which allows us to include this information as part of our input string, as we will see in the following example:

    'SRID=4326;POINT(51.556173 -0.116190)'

When you create a spatial field, you must specify the SRID that will be used.

Including SRS information in our spatial tables

The matter discussed in the previous section is very important for developing our spatial tables. Taking into account the SRS that they will use from the beginning, we will follow a procedure to recreate our tables by adding this feature. This procedure must be applied to all the tables that we have created in both databases. Perform the following steps:

1. Open a command session on psql in your command-line tool, or use the graphical GUI, pgAdmin III. We will open the Real_Estate database.
2. Drop the spatial fields of your tables using the following instruction:

    SELECT DropGeometryColumn('tbl_properties', 'the_geom');

3. Add the spatial field using this command:

    SELECT AddGeometryColumn('tbl_properties', 'the_geom', 4326, 'POINT', 2);

4. Repeat these steps for the rest of the spatial tables.

Now that we can specify the SRS that was used to obtain a position, we will develop an insertion query using the Extended WKT (EWKT) notation:

    INSERT INTO tbl_properties (id, town, postal_code, street, "number", the_geom)
    VALUES (1, 'London', 'N7 6PA', 'Holloway Road', 32,
            ST_GeomFromEWKT('SRID=4326;POINT(51.556173 -0.116190)'));

The ST_GeomFromEWKT function works exactly like ST_GeomFromText, but it implements the extended functionality of the WKT notation. Now that you know how to represent a GIS object as text, it is up to you to choose the most convenient way to generate an SQL script that inserts existing data into the spatial data tables. As an example, you could develop a macro in Excel, a desktop application in C#, a PHP script on your server, and so on.

Getting data from external sources

In this section, we will learn how to obtain data from third-party sources. Most often, this data interchange is achieved through a spatial data file. There are many data formats for such files (such as KML, GeoJSON, and so on). We will choose to work with *.shp files, because they are widely used and supported in practically all the GIS tools available on the market. There are dozens of sites where you can get useful spatial data for practically any city, state, or country in the world. Much of this data is public and freely available. In this case, we will use data from a fabulous website called http://www.openstreetmap.org/.
The following is a series of steps that you can follow if you want to obtain spatial data from this particular provider. I'm pretty sure you can easily adapt this procedure to obtain data from another provider on the Internet. Using the example of the real estate company, we will get data for the English county of Buckinghamshire. The idea is that you, as a member of the IT department, import data from the cities where the company has activities:

1. Open your favorite Internet browser and go to http://www.openstreetmap.org/.
2. Click on the Export tab.
3. Click on the Geofabrik Downloads link; you will be taken to http://download.geofabrik.de/.
4. There, you will find a list of subregions of the world; select Europe.
5. Next is a list of all the countries in Europe; notice a new column called .shp.zip. This is the file format that we need to download. Select Great Britain.
6. In the next list, select England; you can see your selection on the map located at the right-hand side of the web page.
7. Now, you will see a list of all the counties. Select the .shp.zip column for the county of Buckinghamshire.
8. A download will start. When it finishes, you will get a file called buckinghamshire-latest.shp.zip. Unzip it.

At this point, we have just obtained the data (several shp files). The next procedure will show us how to convert these files into SQL insertion scripts.

Extracting spatial data from an shp file

In the unzipped folder are shp files; each of them stores a particular feature of the geography of this county. We will focus on the shp file named buildings.shp. Now, we will extract the data stored in this shp file and convert it into an SQL script so that we can insert its data into the tbl_buildings table. For this, we will use a PostGIS tool called shp2pgsql. Perform the following steps for extracting spatial data from an shp file:

1. Open a command window with the cmd command.
2. Go to the unzipped folder.
3. Type the following command:

    shp2pgsql -g the_geom buildings.shp tbl_buildings > buildings.sql

4. Open the script with Notepad.
5. Delete the following lines from the script:

    CREATE TABLE "tbl_buildings" (gid serial,
      "osm_id" varchar(20),
      "name" varchar(50),
      "type" varchar(20),
      "timestamp" varchar(30));
    ALTER TABLE "tbl_buildings" ADD PRIMARY KEY (gid);
    SELECT AddGeometryColumn('','tbl_buildings','geom','0','MULTIPOLYGON',2);

6. Save the script.
7. Open and run it with the pgAdmin query editor.
8. Open the table; you should have at least 13363 new registers. Keep in mind that this number can change when new updates come.

Importing shp files with a graphical tool

There is another way to import an shp file into our table; we can use a graphical tool called postgisgui for this. To use this tool, perform the following steps:

1. In the file explorer, open the folder C:\Program Files\PostgreSQL\9.3\bin\postgisgui.
2. Execute the shp2pgsql-gui application.
3. Configure the connection with the server: click on the View Connection Details... button and set the data to connect to the server.
4. Click the Add File button and select the points.shp file.
5. Once selected, type the following parameters in the Import List section: Mode: Append; SRID: 4326; Geo column: the_geom; Table: tbl_landmarks.
6. Click on the Import button.
The import process will fail and show you an error message. This is because the structure in the shp file is not the same as that of our table, and there is no way to tell the tool which fields we don't want to import. So, the only way to solve this problem is to let the tool create a new table and, after this, change its structure. This can be done by following these steps:

1. Go to pgAdmin and drop the tbl_landmarks table.
2. Change the mode to Create in the Import List.
3. Click on the Import button. Now, the import process is successful, but the table structure has changed.
4. Go to pgAdmin again, refresh the data, and edit the table structure to be the same as it was before: change the name of the geom field to the_geom; change the name of the osm_id field to id; drop the timestamp field; and drop the primary key constraint and add a new one attached to the id field. For that, right-click on Constraints in the left panel, navigate to New Object | New Primary Key, type pk_landmarks_id, and, in the Columns tab, add the id field.

Now, we have two spatial tables: one with data that contains positions represented by the PostGIS type POINT (tbl_landmarks), and the other with polygons, represented by PostGIS with the type MULTIPOLYGON (tbl_buildings). Now, I would like you to import the data contained in the roads.shp file using one of the two previously described methods. This table has data that represents the paths of different highways, streets, roads, and so on, which belong to this area, in the form of lines, represented by PostGIS with the MULTILINESTRING type. When it's imported, change its name to tbl_roads and adjust the columns to the structure used for the other tables in this article. When you look at the imported rows, you will see that the spatial data is shown in its binary form.

Summary

In this article, you learned some basic concepts of GIS (such as WKT, EWKT, and SRS), which are fundamental for working with GIS data. Now, you are able to craft your own spatial insertion queries or import data into your own data tables.

Resources for Article: Further resources on this subject: Improving proximity filtering with KNN, Installing PostgreSQL, Securing the WAL Stream.


Using networking for distributed computing with openFrameworks

Packt
16 Apr 2015
16 min read
In this article by Denis Perevalov and Igor (Sodazot) Tatarnikov, authors of the book openFrameworks Essentials, we will investigate how to create a distributed project consisting of several programs working together and communicating with each other via networking.

Distributed computing with networking

Networking is a way of sending and receiving data between programs, which can work on a single computer or on different computers and mobile devices. Using networking, it is possible to split a complex project into several programs working together. There are at least three reasons to create distributed projects:

- The first reason is splitting to obtain better performance. For example, when creating a big interactive wall with cameras and projectors, it is possible to use two computers. The first computer (tracker) will process data from cameras and send the result to the second computer (render), which will render the picture and output it to projectors.
- The second reason is creating a heterogeneous project using different development languages. For example, consider a project that generates a real-time visualization of data captured from the Web. It is easy to capture and analyze data from the Web using a programming language like Python, but it is hard to create a rich, real-time visualization with it. On the other hand, openFrameworks is good for real-time visualization but is not very elegant when dealing with data from the Web. So, it is a good idea to build a project consisting of two programs: the first, a Python program, will capture data from the Web, and the second, an openFrameworks program, will perform the rendering.
- The third reason is synchronization with, and external control of, one program by other programs/devices. For example, a video synthesizer can be controlled from other computers and mobiles via networking.

Networking in openFrameworks

openFrameworks' networking capabilities are implemented in two core addons: ofxNetwork and ofxOsc. To use an addon in your project, you need to include it in the new project when creating it with Project Generator, or include the addon's headers and libraries into an existing project manually. If you only need one particular addon, you can use one of the addon's existing examples as a sketch for your project.

The ofxNetwork addon

The ofxNetwork addon contains classes for sending and receiving data using the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). The difference between these protocols is that TCP guarantees that data is received without losses and errors but requires the establishment of a preliminary connection (known as a handshake) between a sender and a receiver. UDP doesn't require the establishment of any preliminary connection, but it also doesn't guarantee the delivery and correctness of the received data. Typically, TCP is used in tasks where data needs to be received without errors, such as downloading a JPEG file from a web server. UDP is used in tasks where data should be received in real time at a fast rate, such as receiving a game state 60 times per second in a networking game. The ofxNetwork addon's classes are quite generic and allow the implementation of a wide range of low-level networking tasks. In this article, we don't explore it in detail.

The ofxOsc addon

The ofxOsc addon is intended for sending and receiving messages using the Open Sound Control (OSC) protocol.
Messages of this protocol (OSC messages) are intended to store control commands and parameter values. The protocol is very popular today and is implemented in many VJ and multimedia programs and in software for live electronic sound performance. All popular programming tools support OSC too. The OSC protocol can use UDP or TCP for data transmission. Most often, as in the openFrameworks implementation, UDP is used. See the details of the OSC protocol at opensoundcontrol.org/spec-1_0.

The main classes of ofxOsc are the following:

- ofxOscSender: This sends OSC messages
- ofxOscReceiver: This receives OSC messages
- ofxOscMessage: This class is for storing a single OSC message
- ofxOscBundle: This class is for storing several OSC messages, which can be sent and received as a bundle

Let's add the OSC receiver to our VideoSynth project and then create a simple OSC sender, which will send messages to the VideoSynth project.

Implementing the OSC messages receiver

To implement the receiving of OSC messages in the VideoSynth project, perform the following steps:

1. Include the ofxOsc addon's header in the ofApp.h file by inserting the following line after the #include "ofxGui.h" line:

    #include "ofxOsc.h"

2. Add a declaration of the OSC receiver object to the ofApp class:

    ofxOscReceiver oscReceiver;

3. Set up the OSC receiver in setup():

    oscReceiver.setup( 12345 );

The argument of the setup() method is the networking port number. After executing this command, oscReceiver begins listening on this port for incoming OSC messages. Each received message is added to a special message queue for further processing. A networking port is a number from 0 to 65535. Ports from 10000 to 65535 are normally not used by existing operating systems, so you can use them as port numbers for OSC messages. Note that two programs receiving networking data and working on the same computer must have different port numbers.

4. Add the processing of incoming OSC messages to update():

    while ( oscReceiver.hasWaitingMessages() ) {
        ofxOscMessage m;
        oscReceiver.getNextMessage( &m );
        if ( m.getAddress() == "/pinchY" ) {
            pinchY = m.getArgAsFloat( 0 );
        }
    }

The first line is a while loop, which checks whether there are unprocessed messages in the message queue of oscReceiver. The second line declares an empty OSC message m. The third line pops the latest message from the message queue and copies it to m. Now, we can process this message.

Any OSC message consists of two parts: an address and (optionally) one or several arguments. An address is a string beginning with the / character; it denotes the name of a control command or the name of a parameter that should be adjusted. Arguments can be float, integer, or string values, which specify some parameters of the command. In our example, we want to adjust the pinchY slider with OSC commands, so we expect an OSC message with the address /pinchY and a first argument holding its float value. Hence, in the fourth line, we check whether the address of the m message is equal to /pinchY. If this is true, in the fifth line, we get the first message argument (the argument with index 0) and set the pinchY slider to this value. Of course, we could use any other address instead of /pinchY (for example, /val), but normally, it is convenient to have the address resemble the parameter's name. It is easy to control other sliders with OSC.
For example, to add control of the extrude slider, just add the following code:

    if ( m.getAddress() == "/extrude" ) {
        extrude = m.getArgAsFloat( 0 );
    }

After running the project, nothing new happens; it works as always. But now, the project is listening for incoming OSC messages on port 12345. To check this, let's create a tiny openFrameworks project that sends OSC messages.

Creating an OSC sender with openFrameworks

Let's create a new project, OscOF, that contains a GUI panel with one slider, and send the slider's value via OSC to the VideoSynth project. Here, we assume that the OSC sender and receiver run on the same computer. See the details of running the sender on a separate computer in the upcoming Sending OSC messages between two separate computers section. Now perform the following steps:

1. Create a new project using Project Generator. Namely, start Project Generator, set the project's name to OscOF (which means OSC with openFrameworks), and include the ofxGui and ofxOsc addons in the newly created project. The ofxGui addon is needed to create the GUI slider, and the ofxOsc addon is needed to send OSC messages.
2. Open this project in your IDE.
3. Include both addons' headers in the ofApp.h file by inserting the following lines (after the #include "ofMain.h" line):

    #include "ofxGui.h"
    #include "ofxOsc.h"

4. Add the declarations of the OSC sender object, the GUI panel, and the GUI slider to the ofApp class declaration:

    ofxOscSender oscSender;
    ofxPanel gui;
    ofxFloatSlider slider;
    void sliderChanged( float &value );

The last line declares a new function, which will be called by openFrameworks when the slider's value is changed. This function will send the corresponding OSC message. The symbol & before value means that the value argument is passed to the function as a reference. Using a reference here is not important for us, but it is required by ofxGui; please see the information on the notion of a reference in the C++ documentation.

5. Set up the OSC sender, the GUI panel with the slider, and the project's window title and size by adding the following code to setup():

    oscSender.setup( "localhost", 12345 );
    slider.addListener( this, &ofApp::sliderChanged );
    gui.setup( "Parameters" );
    gui.add( slider.setup("pinchY", 0, 0, 1) );
    ofSetWindowTitle( "OscOF" );
    ofSetWindowShape( 300, 150 );

The first line starts the OSC sender. Here, the first argument specifies the IP address to which the OSC sender will send its messages. In our case, it is "localhost", which means the sender will send data to the same computer on which it runs. The second argument specifies the networking port, 12345. The difference between setting up the OSC sender and the receiver is that we need to specify both the address and the port for the sender, not only the port. Also, after starting, the sender does nothing until we give it an explicit command to send an OSC message. The second line starts listening to the slider's value changes. The first and second arguments of the addListener() command specify the object (this) and its member function (sliderChanged) that should be called when the slider is changed. The remaining lines set up the GUI panel, the GUI slider, and the project's window title and shape.

6. Now, add the sliderChanged() function definition to ofApp.cpp:

    void ofApp::sliderChanged( float &value ) {
        ofxOscMessage m;
        m.setAddress( "/pinchY" );
        m.addFloatArg( value );
        oscSender.sendMessage( m );
    }

This function is called when the slider value is changed, and the value parameter is its new value.
The first three lines of the function create an OSC message m, set its address to /pinchY, and add a float argument equal to value. The last line sends this OSC message. As you may see, the m message's address (/pinchY) coincides with the address expected by the receiver, as implemented in the previous section. Also, the receiver expects this message to have a float argument, and it does. So, the receiver will properly interpret our messages and set its pinchY slider to the desired value.

Finally, add the command to draw the GUI to draw():

    gui.draw();

On running the project, you will see its window, consisting of a GUI panel with a slider. This is the OSC sender made with openFrameworks. Don't stop this project for a while. Run the VideoSynth project and change the pinchY slider's value in the OscOF window using the mouse. The pinchY slider in VideoSynth should change accordingly. This means that the OSC transmission between the two openFrameworks programs works. If you are not interested in sending data between two separate computers, feel free to skip the following section.

Sending OSC messages between two separate computers

We have checked passing OSC messages between two programs that run on the same computer. Now let's consider a situation where the OSC sender and the OSC receiver run on two separate computers connected to the same Local Area Network (LAN) using Ethernet or Wi-Fi. If you have two laptops, most probably they are already connected to the same networking router and hence are in the same LAN. To make the OSC connection work in this case, we need to replace the "localhost" value in the sender's setup command with the local IP address of the receiver's computer. Typically, this address has a form like "192.168.0.2", or it could be a name, for example, "LAPTOP3". You can get the receiver's computer IP address by opening the properties of your network adapter, or by executing the ifconfig command in the terminal window (for OS X or Linux) or ipconfig in the command prompt window (for Windows).

Connection troubleshooting

If you set the IP address in the sender's setup but OSC messages from the OSC sender don't arrive at the OSC receiver, this could be caused by a network firewall or antivirus software blocking data transmission over our port 12345, so please check the firewall and antivirus settings. To make sure that the connection between the two computers exists, use the ping command in the terminal (or command prompt) window.

Creating OSC senders with TouchOSC and Python

At this point, we have created an OSC sender using openFrameworks and sent its data to the VideoSynth project. But it's also easy to create an OSC sender using other programming tools, and this can be useful when creating complex projects. So, let's show how to create an OSC sender on a mobile device using the TouchOSC app, and also how to create simple senders using the Python and Max/MSP languages. If you are not interested in sending OSC from mobile devices or in Python or Max/MSP, feel free to skip the corresponding sections.

Creating an OSC sender for a mobile device using the TouchOSC app

It is very handy to control your openFrameworks project from a mobile device (or devices) using the OSC protocol. You can create a custom OSC sender by yourself, or you can use special apps made for this purpose. One such application is TouchOSC.
It's a paid application available for iOS (see hexler.net/software/touchosc) and Android (see hexler.net/software/touchosc-android). Working with TouchOSC consists of four steps: creating the GUI panel (called layout) on the laptop, uploading it to a mobile device, setting up the OSC receiver's address and port, and working with the layout. Let's consider them in detail: To create the layout, download, unzip, and run a special program, TouchOSC Editor, on a laptop (it's available for OS X, Windows, and Linux). Add the desired GUI elements on the layout by right-clicking on the layout. When the layout is ready, upload it to a mobile device by running the TouchOSC app on the mobile and pressing the Sync button in TouchOSC Editor. In the TouchOSC app, go to the settings and set up the OSC receiver's IP address and port number. Next, open the created layout by choosing it from the list of all the existing layouts. Now, you can use the layout's GUI elements to send the OSC messages to your openFrameworks project (and, of course, to any other OSC-supporting software). Creating an OSC sender with Python In this section, we will create a project that sends OSC messages using the Python language. Here, we assume that the OSC sender and receiver run on the same computer. See the details on running the sender on a separate computer in the previous Sending OSC messages between two separate computers section. Python is a free, interpreted language available for all operating systems. It is extremely popular nowadays in various fields, including teaching programming, developing web projects, and performing computations in natural sciences. Using Python, you can easily capture information from the Web and social networks (using their API) and send it to openFrameworks for further processing, such as visualization or sonification, that is, converting data to a picture or sound. Using Python, it is quite easy to create GUI applications, but here we consider creating a project without a GUI. Perform the following steps to install Python, create an OSC sender, and run it: Install Python from www.python.org/downloads (the current version is 3.4). Download the python-osc library from pypi.python.org/pypi/python-osc and unzip it. This library implements the OSC protocol support in Python. Install this library, open the terminal (or command prompt) window, go to the folder where you unzipped python-osc and type the following: python setup.py install If this doesn't work, type the following: python3 setup.py install Python is ready to send OSC messages. Now let's create the sender program. Using your preferred code or text editor, create the OscPython.py file and fill it with the following code: from pythonosc import udp_clientfrom pythonosc import osc_message_builderimport timeif __name__ == "__main__":oscSender = udp_client.UDPClient("localhost", 12345)for i in range(10):m = osc_message_builder.OscMessageBuilder(address ="/pinchY")m.add_arg(i*0.1)oscSender.send(m.build())print(i)time.sleep(1) The first three lines import the udp_client, osc_message_builder, and time modules for sending the UDP data (we will send OSC messages using UDP), creating OSC messages, and working with time respectively. The if __name__ == "__main__": line is generic for Python programs and denotes the part of the code that will be executed when the program runs from the command line. The first line of the executed code creates the oscSender object, which will send the UDP data to the localhost IP address and the 12345 port. 
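For reference, here is the complete OscPython.py listing in one place, with comments added; the module and function names are those of the python-osc library installed above.

from pythonosc import udp_client
from pythonosc import osc_message_builder
import time

if __name__ == "__main__":
    # Send UDP packets to the receiver on this machine, port 12345
    oscSender = udp_client.UDPClient("localhost", 12345)
    for i in range(10):
        # Build an OSC message addressed to /pinchY with one float argument
        m = osc_message_builder.OscMessageBuilder(address="/pinchY")
        m.add_arg(i * 0.1)
        oscSender.send(m.build())
        print(i)
        time.sleep(1)  # wait one second between messages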
The second line starts a for loop, where i runs through the values 0, 1, 2, …, 9. The body of the loop creates an OSC message m with the address /pinchY and the argument i*0.1, and sends it over OSC. The last two lines print the value of i to the console and delay execution for one second. Open the terminal (or command prompt) window, go to the folder with the OscPython.py file, and execute it with the python OscPython.py command. If this doesn't work, use the python3 OscPython.py command. The program will send 10 OSC messages with the /pinchY address and the argument values 0.0, 0.1, 0.2, …, 0.9, with a 1-second pause between messages. Additionally, the program prints the values from 0 to 9, as shown in the following screenshot: This is the output of an OSC sender made with Python. Run the VideoSynth project and start our Python sender again. You will see its pinchY slider gradually change from 0.0 to 0.9. This means that OSC transmission from a Python program to an openFrameworks program works. Summary In this article, we learned how to create distributed projects using the OSC networking protocol. First, we implemented receiving OSC in our openFrameworks project. Next, we created a simple OSC sender project with openFrameworks. Then, we considered how to create an OSC sender on a mobile device using TouchOSC and how to build senders using the Python language. Now, we can control the video synthesizer from other computers or mobile devices via networking. Resources for Article: Further resources on this subject: Kinect in Motion – An Overview [Article] Learn Cinder Basics – Now [Article] Getting Started with Kinect [Article]

Getting Started with Odoo Development

Packt
06 Apr 2015
14 min read
In this article by Daniel Reis, author of the book Odoo Development Essentials, we will see how to get started with Odoo. Odoo is a powerful open source platform for business applications. A suite of closely integrated applications was built on it, covering all business areas from CRM and Sales to Accounting and Stocks. Odoo has a dynamic and growing community around it, constantly adding features, connectors, and additional business apps. Many can be found at Odoo.com. In this article, we will guide you to install Odoo from the source code and create your first Odoo application. Inspired by the todomvc.com project, we will build a simple to-do application. It should allow us to add new tasks, mark them as completed, and finally, clear the task list from all already completed tasks. (For more resources related to this topic, see here.) Installing Odoo from source We will use a Debian/Ubuntu system for our Odoo server, so you will need to have it installed and available to work on. If you don't have one, you might want to set up a virtual machine with a recent version of Ubuntu Server before proceeding. For a development environment, we will install it directly from Odoo's Git repository. This will end up giving more control on versions and updates. We will need to make sure Git is installed. In the terminal, type the following command: $ sudo apt-get update && sudo apt-get upgrade # Update system $ sudo apt-get install git # Install Git To keep things tidy, we will keep all our work in the /odoo-dev directory inside our home directory: $ mkdir ~/odoo-dev # Create a directory to work in $ cd ~/odoo-dev # Go into our work directory Now, we can use this script to show how to install Odoo from source code in a Debian system: $ git clone https://github.com/odoo/odoo.git -b 8.0 --depth=1 $ ./odoo/odoo.py setup_deps # Installs Odoo system dependencies $ ./odoo/odoo.py setup_pg   # Installs PostgreSQL & db superuser Quick start an Odoo instance In Odoo 8.0, we can create a directory and quick start a server instance for it. We can start by creating the directory called todo-app for our instance as shown here: $ mkdir ~/odoo-dev/todo-app $ cd ~/odoo-dev/todo-app Now we can create our todo_minimal module in it and initialize the Odoo instance: $ ~/odoo-dev/odoo/odoo.py scaffold todo_minimal $ ~/odoo-dev/odoo/odoo.py start -i todo_minimal The scaffold command creates a module directory using a predefined template. The start command creates a database with the current directory name and automatically adds it to the addons path so that its modules are available to be installed. Additionally, we used the -i option to also install our todo_minimal module. It will take a moment to initialize the database, and eventually we will see an INFO log message Modules loaded. Then, the server will be ready to listen to client requests. By default, the database is initialized with demonstration data, which is useful for development databases. Open http://<server-name>:8069 in your browser to be presented with the login screen. The default administrator account is admin with the password admin. Whenever you want to stop the Odoo server instance and return to the command line, press CTRL + C. If you are hosting Odoo in a virtual machine, you might need to do some network configuration to be able to use it as a server. The simplest solution is to change the VM network type from NAT to Bridged. Hopefully, this can help you find the appropriate solution in your virtualization software documentation. 
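Before editing anything, it helps to know roughly what the scaffold command generated. The exact file list depends on the Odoo version and scaffold template, but a typical layout includes the files used in the following sections (manifest, models, views, and security), something like:

todo-app/
    todo_minimal/
        __init__.py              # Python package initializer
        __openerp__.py           # module manifest: name, dependencies, data files
        controllers.py           # web controllers, if generated (not used in this example)
        models.py                # model classes
        security/
            ir.model.access.csv  # access control list template
        templates.xml            # views, menus, and window actions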
Creating the application models Now that we have an Odoo instance and a new module to work with, let's start by creating the data model. Models describe business objects, such as an opportunity, a sales order, or a partner (customer, supplier, and so on). A model has data fields and can also define specific business logic. Odoo models are implemented using a Python class derived from a template class. They translate directly to database objects, and Odoo automatically takes care of that when installing or upgrading the module. Let's edit the models.py file in the todo_minimal module directory so that it contains this: # -*- coding: utf-8 -*- from openerp import models, fields, api   class TodoTask(models.Model):    _name = 'todo.task'    name = fields.Char()    is_done = fields.Boolean()    active = fields.Boolean(default=True) Our to-do tasks will have a name title text, a done flag, and an active flag. The active field has a special meaning for Odoo; by default, records with a False value in it won't be visible to the user. We will use it to clear the tasks out of sight without actually deleting them. Upgrading a module For our changes to take effect, the module has to be upgraded. The simplest and fastest way to make all our changes to a module effective is to go to the terminal window where you have Odoo running, stop it (CTRL + C), and then restart it requesting the module upgrade. To start upgrading the server, the todo_minimal module in the todo-app database, use the following command: $ cd ~/odoo-dev/todo-app # we should be in the right directory $ ./odoo.py start -u todo_minimal The -u option performs an upgrade on a given list of modules. In this case, we upgrade just the todo_minimal module. Developing a module is an iterative process. You should make your changes in gradual steps and frequently install them with a module upgrade. Doing so will make it easier to detect mistakes sooner, and narrowing down the culprit in case the error message is not clear enough. And this can be very frequent when starting with Odoo development. Adding menu options Now that we have a model to store our data, let's make it available on the user interface. All we need is to add a menu option to open the to-do task model, so it can be used. This is done using an XML data file. Let's reuse the templates.xml data file and edit it so that it look like this: <openerp> <data>    <act_window id="todo_task_action"                name="To-do Task Action"                    res_model="todo.task" view_mode="tree,form" />        <menuitem id="todo_task_menu"                 name="To-do Tasks"                  action="todo_task_action"                  parent="mail.mail_feeds"                  sequence="20" /> </data> </openerp> Here, we have two records: a menu option and a window action. The Communication top menu to the user interface was added by the mail module dependency. We can know the identifier of the specific menu option where we want to add our own menu option by inspecting that module, it is mail_feeds. Also, our menu option executes the todo_task_action action we created. And that window action opens a tree view for the todo.task model. If we upgrade the module now and try the menu option just added, it will open an automatically generated view for our model, allowing to add and edit records. 
Views should be defined for models to be exposed to the users, but Odoo is nice enough to do that automatically if we don't, so we can work with our model right away, without having any form or list views defined yet. So far so good. Let's improve our user interface now. Creating Views Odoo supports several types of views, but the more important ones are list (also known as "tree"), form, and search views. For our simple module, we will just add a list view. Edit the templates.xml file to add the following <record> element just after the <data> opening tag at the top:    <record id="todo_task_tree" model="ir.ui.view">        <field name="name">To-do Task Form</field>        <field name="model">todo.task</field>        <field name="arch" type="xml">            <tree editable="top" colors="gray:is_done==True">                <field name="name" />                <field name="is_done" />            </tree>          </field>    </record> This creates a tree view for the todo.task model with two columns: the title name and the is_done flag. Additionally, it has a color rule to display the tasks done in gray. Adding business logic We want to add business logic to be able to clear the already completed tasks. Our plan is to add an option on the More button, shown at the top of the list when we select lines. We will use a very simple wizard for this, opening a confirmation dialog, where we can execute a method to inactivate the done tasks. Wizards use a special type of model for temporary data: a Transient model. We will now add it to the models.py file as follows: class TodoTaskClear(models.TransientModel):    _name = 'todo.task.clear'      @api.multi    def do_clear_done(self):        Task = self.env['todo.task']        done_recs = Task.search([('is_done', '=', True)])        done_recs.write({'active': False)}        return True Transient models work just like regular models, but their data is temporary and will eventually be purged from the database. In this case, we don't need any fields, since no additional input is going to be asked to the user. It just has a method that will be called when the confirmation button is pressed. It lists all tasks that are done and then sets their active flag to False. Next, we need to add the corresponding user interface. In the templates.xml file, add the following code:    <record id="todo_task_clear_dialog" model="ir.ui.view">      <field name="name">To-do Clear Wizard</field>      <field name="model">todo.task.clear</field>      <field name="arch" type="xml">        <form>           All done tasks will be cleared, even if            unselected.<br/>Continue?            <footer>                <button type="object"                        name="do_clear_done"                        string="Clear                        " class="oe_highlight" />                or <button special="cancel"                            string="Cancel"/>            </footer>        </form>      </field>    </record>      <!-- More button Action -->    <act_window id="todo_task_clear_action"        name="Clear Done"        src_model="todo.task"        res_model="todo.task.clear"        view_mode="form"        target="new" multi="True" /> The first record defines the form for the dialog window. It has a confirmation text and two buttons on the footer: Clear and Cancel. The Clear button when pressed will call the do_clear_done() method defined earlier. The second record is an action that adds the corresponding option in the More button for the to-do tasks model. 
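Putting the wizard's server-side part together, this is a minimal sketch of how the Transient model sits in models.py; note that write() takes a plain dictionary of field values.

# -*- coding: utf-8 -*-
from openerp import models, api


class TodoTaskClear(models.TransientModel):
    _name = 'todo.task.clear'

    @api.multi
    def do_clear_done(self):
        # Collect every task flagged as done...
        Task = self.env['todo.task']
        done_recs = Task.search([('is_done', '=', True)])
        # ...and hide it from the user by clearing the active flag
        done_recs.write({'active': False})
        return True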
Configuring security Finally, we need to set the default security configurations for our module. These configurations are usually stored inside the security/ directory. We need to add them to the __openerp__.py manifest file. Change the data attribute to the following: 'data': [  'security/ir.model.access.csv',    'security/todo_access_rules.xml',    'templates.xml'], The access control lists are defined for models and user groups in the ir.model.access.csv file. There is a pre-generated template. Edit it to look like this: id,name,model_id:id,group_id:id,perm_read,perm_write,perm_create,perm_unlink access_todo_task_user,To-do Task User Access,model_todo_task,base.group_user,1,1,1,1 This gives full access to all users in the base group named Employees. However, we want each user to see only their own to-do tasks. For that, we need a record rule setting a filter on the records the base group can see. Inside the security/ directory, add a todo_access_rules.xml file to define the record rule: <openerp> <data>    <record id="todo_task_user_rule" model="ir.rule">        <field name="name">ToDo Tasks only for owner</field>        <field name="model_id" ref="model_todo_task"/>        <field name="domain_force">[('create_uid','=',user.id)]              </field>        <field name="groups"                eval="[(4, ref('base.group_user'))]"/>    </record> </data> </openerp> This is all we need to set up the module security. Summary We created a new module from start, covering the most frequently used elements in a module: models, user interface views, business logic in model methods, and access security. In the process, we got familiar with the module development process, involving module upgrades and application server restarts to make the gradual changes effective in Odoo. Resources for Article: Further resources on this subject: Making Goods with Manufacturing Resource Planning [article] Machine Learning in IPython with scikit-learn [article] Administrating Solr [article]

Getting Ready with CoffeeScript

Packt
02 Apr 2015
20 min read
In this article by Mike Hatfield, author of the book, CoffeeScript Application Development Cookbook, we will see that JavaScript, though very successful, can be a difficult language to work with. JavaScript was designed by Brendan Eich in a mere 10 days in 1995 while working at Netscape. As a result, some might claim that JavaScript is not as well rounded as some other languages, a point well illustrated by Douglas Crockford in his book titled JavaScript: The Good Parts, O'Reilly Media. These pitfalls found in the JavaScript language led Jeremy Ashkenas to create CoffeeScript, a language that attempts to expose the good parts of JavaScript in a simple way. CoffeeScript compiles into JavaScript and helps us avoid the bad parts of JavaScript. (For more resources related to this topic, see here.) There are many reasons to use CoffeeScript as your development language of choice. Some of these reasons include: CoffeeScript helps protect us from the bad parts of JavaScript by creating function closures that isolate our code from the global namespace by reducing the curly braces and semicolon clutter and by helping tame JavaScript's notorious this keyword CoffeeScript helps us be more productive by providing features such as list comprehensions, classes with inheritance, and many others Properly written CoffeeScript also helps us write code that is more readable and can be more easily maintained As Jeremy Ashkenas says: "CoffeeScript is just JavaScript." We can use CoffeeScript when working with the large ecosystem of JavaScript libraries and frameworks on all aspects of our applications, including those listed in the following table: Part Some options User interfaces UI frameworks including jQuery, Backbone.js, AngularJS, and Kendo UI Databases Node.js drivers to access SQLite, Redis, MongoDB, and CouchDB Internal/external services Node.js with Node Package Manager (NPM) packages to create internal services and interfacing with external services Testing Unit and end-to-end testing with Jasmine, Qunit, integration testing with Zombie, and mocking with Persona Hosting Easy API and application hosting with Heroku and Windows Azure Tooling Create scripts to automate routine tasks and using Grunt Configuring your environment and tools One significant aspect to being a productive CoffeeScript developer is having a proper development environment. This environment typically consists of the following: Node.js and the NPM CoffeeScript Code editor Debugger In this recipe, we will look at installing and configuring the base components and tools necessary to develop CoffeeScript applications. Getting ready In this section, we will install the software necessary to develop applications with CoffeeScript. One of the appealing aspects of developing applications using CoffeeScript is that it is well supported on Mac, Windows, and Linux machines. To get started, you need only a PC and an Internet connection. How to do it... CoffeeScript runs on top of Node.js—the event-driven, non-blocking I/O platform built on Chrome's JavaScript runtime. If you do not have Node.js installed, you can download an installation package for your Mac OS X, Linux, and Windows machines from the start page of the Node.js website (http://nodejs.org/). To begin, install Node.js using an official prebuilt installer; it will also install the NPM. Next, we will use NPM to install CoffeeScript. 
Open a terminal or command window and enter the following command: npm install -g coffee-script This will install the necessary files needed to work with CoffeeScript, including the coffee command that provides an interactive Read Evaluate Print Loop (REPL)—a command to execute CoffeeScript files and a compiler to generate JavaScript. It is important to use the -g option when installing CoffeeScript, as this installs the CoffeeScript package as a global NPM module. This will add the necessary commands to our path. On some Windows machines, you might need to add the NPM binary directory to your path. You can do this by editing the environment variables and appending ;%APPDATA%npm to the end of the system's PATH variable. Configuring Sublime Text What you use to edit code can be a very personal choice, as you, like countless others, might use the tools dictated by your team or manager. Fortunately, most popular editing tools either support CoffeeScript out of the box or can be easily extended by installing add-ons, packages, or extensions. In this recipe, we will look at adding CoffeeScript support to Sublime Text and Visual Studio. Getting ready This section assumes that you have Sublime Text or Visual Studio installed. Sublime Text is a very popular text editor that is geared to working with code and projects. You can download a fully functional evaluation version from http://www.sublimetext.com. If you find it useful and decide to continue to use it, you will be encouraged to purchase a license, but there is currently no enforced time limit. How to do it... Sublime Text does not support CoffeeScript out of the box. Thankfully, a package manager exists for Sublime Text; this package manager provides access to hundreds of extension packages, including ones that provide helpful and productive tools to work with CoffeeScript. Sublime Text does not come with this package manager, but it can be easily added by following the instructions on the Package Control website at https://sublime.wbond.net/installation. With Package Control installed, you can easily install the CoffeeScript packages that are available using the Package Control option under the Preferences menu. Select the Install Package option. You can also access this command by pressing Ctrl + Shift + P, and in the command list that appears, start typing install. This will help you find the Install Package command quickly. To install the CoffeeScript package, open the Install Package window and enter CoffeeScript. This will display the CoffeeScript-related packages. We will use the Better CoffeeScript package: As you can see, the CoffeeScript package includes syntax highlighting, commands, shortcuts, snippets, and compilation. How it works... In this section, we will explain the different keyboard shortcuts and code snippets available with the Better CoffeeScript package for Sublime. Commands You can run the desired command by entering the command into the Sublime command pallet or by pressing the related keyboard shortcut. Remember to press Ctrl + Shift + P to display the command pallet window. Some useful CoffeeScript commands include the following: Command Keyboard shortcut Description Coffee: Check Syntax Alt + Shift + S This checks the syntax of the file you are editing or the currently selected code. The result will display in the status bar at the bottom. Coffee: Compile File Alt + Shift + C This compiles the file being edited into JavaScript. 
Coffee: Run Script Alt + Shift + R This executes the selected code and displays a buffer of the output. The keyboard shortcuts are associated with the file type. If you are editing a new CoffeeScript file that has not been saved yet, you can specify the file type by choosing CoffeeScript in the list of file types in the bottom-left corner of the screen. Snippets Snippets allow you to use short tokens that are recognized by Sublime Text. When you enter the code and press the Tab key, Sublime Text will automatically expand the snippet into the full form. Some useful CoffeeScript code snippets include the following: Token Expands to log[Tab] console.log cla class Name constructor: (arguments) ->    # ... forin for i in array # ... if if condition # ... ifel if condition # ... else # ... swi switch object when value    # ... try try # ... catch e # ... The snippets are associated with the file type. If you are editing a new CoffeeScript file that has not been saved yet, you can specify the file type by selecting CoffeeScript in the list of file types in the bottom-left corner of the screen. Configuring Visual Studio In this recipe, we will demonstrate how to add CoffeeScript support to Visual Studio. Getting ready If you are on the Windows platform, you can use Microsoft's Visual Studio software. You can download Microsoft's free Express edition (Express 2013 for Web) from http://www.microsoft.com/express. How to do it... If you are a Visual Studio user, Version 2010 and above can work quite effectively with CoffeeScript through the use of Visual Studio extensions. If you are doing any form of web development with Visual Studio, the Web Essentials extension is a must-have. To install Web Essentials, perform the following steps: Launch Visual Studio. Click on the Tools menu and select the Extensions and Updates menu option. This will display the Extensions and Updates window (shown in the next screenshot). Select Online in the tree on the left-hand side to display the most popular downloads. Select Web Essentials 2012 from the list of available packages and then click on the Download button. This will download the package and install it automatically. Once the installation is finished, restart Visual Studio by clicking on the Restart Now button. You will likely find Web Essentials 2012 ranked highly in the list of Most Popular packages. If you do not see it, you can search for Web Essentials using the Search box in the top-right corner of the window. Once installed, the Web Essentials package provides many web development productivity features, including CSS helpers, tools to work with Less CSS, enhancements to work with JavaScript, and, of course, a set of CoffeeScript helpers. To add a new CoffeeScript file to your project, you can navigate to File | New Item or press Ctrl + Shift + A. This will display the Add New Item dialog, as seen in the following screenshot. Under the Web templates, you will see a new CoffeeScript File option. Select this option and give it a filename, as shown here: When we have our CoffeeScript file open, Web Essentials will display the file in a split-screen editor. We can edit our code in the left-hand pane, while Web Essentials displays a live preview of the JavaScript code that will be generated for us. The Web Essentials CoffeeScript compiler will create two JavaScript files each time we save our CoffeeScript file: a basic JavaScript file and a minified version. 
For example, if we save a CoffeeScript file named employee.coffee, the compiler will create employee.js and employee.min.js files. Though I have only described two editors to work with CoffeeScript files, there are CoffeeScript packages and plugins for most popular text editors, including Emacs, Vim, TextMate, and WebMatrix. A quick dive into CoffeeScript In this recipe, we will take a quick look at the CoffeeScript language and command line. How to do it... CoffeeScript is a highly expressive programming language that does away with much of the ceremony required by JavaScript. It uses whitespace to define blocks of code and provides shortcuts for many of the programming constructs found in JavaScript. For example, we can declare variables and functions without the var keyword: firstName = 'Mike' We can define functions using the following syntax: multiply = (a, b) -> a * b Here, we defined a function named multiply. It takes two arguments, a and b. Inside the function, we multiplied the two values. Note that there is no return statement. CoffeeScript will always return the value of the last expression that is evaluated inside a function. The preceding function is equivalent to the following JavaScript snippet: var multiply = function(a, b) { return a * b; }; It's worth noting that the CoffeeScript code is only 28 characters long, whereas the JavaScript code is 50 characters long; that's 44 percent less code. We can call our multiply function in the following way: result = multiply 4, 7 In CoffeeScript, using parenthesis is optional when calling a function with parameters, as you can see in our function call. However, note that parenthesis are required when executing a function without parameters, as shown in the following example: displayGreeting = -> console.log 'Hello, world!' displayGreeting() In this example, we must call the displayGreeting() function with parenthesis. You might also wish to use parenthesis to make your code more readable. Just because they are optional, it doesn't mean you should sacrifice the readability of your code to save a couple of keystrokes. For example, in the following code, we used parenthesis even though they are not required: $('div.menu-item').removeClass 'selected' Like functions, we can define JavaScript literal objects without the need for curly braces, as seen in the following employee object: employee = firstName: 'Mike' lastName: 'Hatfield' salesYtd: 13204.65 Notice that in our object definition, we also did not need to use a comma to separate our properties. CoffeeScript supports the common if conditional as well as an unless conditional inspired by the Ruby language. Like Ruby, CoffeeScript also provides English keywords for logical operations such as is, isnt, or, and and. The following example demonstrates the use of these keywords: isEven = (value) -> if value % 2 is 0    'is' else    'is not'   console.log '3 ' + isEven(3) + ' even' In the preceding code, we have an if statement to determine whether a value is even or not. If the value is even, the remainder of value % 2 will be 0. We used the is keyword to make this determination. JavaScript has a nasty behavior when determining equality between two values. In other languages, the double equal sign is used, such as value == 0. In JavaScript, the double equal operator will use type coercion when making this determination. This means that 0 == '0'; in fact, 0 == '' is also true. CoffeeScript avoids this using JavaScript's triple equals (===) operator. 
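A quick way to see this is to compare a one-line comparison with the JavaScript the compiler emits (the compiled output, shown here as a comment, is approximate):

isZero = (value) -> value is 0

# Compiles (roughly) to:
#   var isZero = function(value) {
#     return value === 0;
#   };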
This evaluation compares value and type such that 0 === '0' will be false. We can use if and unless as expression modifiers as well. They allow us to tack if and unless at the end of a statement to make simple one-liners. For example, we can so something like the following: console.log 'Value is even' if value % 2 is 0 Alternatively, we can have something like this: console.log 'Value is odd' unless value % 2 is 0 We can also use the if...then combination for a one-liner if statement, as shown in the following code: if value % 2 is 0 then console.log 'Value is even' CoffeeScript has a switch control statement that performs certain actions based on a list of possible values. The following lines of code show a simple switch statement with four branching conditions: switch task when 1    console.log 'Case 1' when 2    console.log 'Case 2' when 3, 4, 5    console.log 'Case 3, 4, 5' else    console.log 'Default case' In this sample, if the value of a task is 1, case 1 will be displayed. If the value of a task is 3, 4, or 5, then case 3, 4, or 5 is displayed, respectively. If there are no matching values, we can use an optional else condition to handle any exceptions. If your switch statements have short operations, you can turn them into one-liners, as shown in the following code: switch value when 1 then console.log 'Case 1' when 2 then console.log 'Case 2' when 3, 4, 5 then console.log 'Case 3, 4, 5' else console.log 'Default case' CoffeeScript provides a number of syntactic shortcuts to help us be more productive while writing more expressive code. Some people have claimed that this can sometimes make our applications more difficult to read, which will, in turn, make our code less maintainable. The key to highly readable and maintainable code is to use a consistent style when coding. I recommend that you follow the guidance provided by Polar in their CoffeeScript style guide at http://github.com/polarmobile/coffeescript-style-guide. There's more... With CoffeeScript installed, you can use the coffee command-line utility to execute CoffeeScript files, compile CoffeeScript files into JavaScript, or run an interactive CoffeeScript command shell. In this section, we will look at the various options available when using the CoffeeScript command-line utility. We can see a list of available commands by executing the following command in a command or terminal window: coffee --help This will produce the following output: As you can see, the coffee command-line utility provides a number of options. Of these, the most common ones include the following: Option Argument Example Description None None coffee This launches the REPL-interactive shell. None Filename coffee sample.coffee This command will execute the CoffeeScript file. -c, --compile Filename coffee -c sample.coffee This command will compile the CoffeeScript file into a JavaScript file with the same base name,; sample.js, as in our example. -i, --interactive   coffee -i This command will also launch the REPL-interactive shell. -m, --map Filename coffee--m sample.coffee This command generates a source map with the same base name, sample.js.map, as in our example. -p, --print Filename coffee -p sample.coffee This command will display the compiled output or compile errors to the terminal window. -v, --version None coffee -v This command will display the correct version of CoffeeScript. -w, --watch Filename coffee -w -c sample.coffee This command will watch for file changes, and with each change, the requested action will be performed. 
In our example, our sample.coffee file will be compiled each time we save it. The CoffeeScript REPL As we have been, CoffeeScript has an interactive shell that allows us to execute CoffeeScript commands. In this section, we will learn how to use the REPL shell. The REPL shell can be an excellent way to get familiar with CoffeeScript. To launch the CoffeeScript REPL, open a command window and execute the coffee command. This will start the interactive shell and display the following prompt: For example, if we enter the expression x = 4 and press return, we would see what is shownin the following screenshot In the coffee> prompt, we can assign values to variables, create functions, and evaluate results. When we enter an expression and press the return key, it is immediately evaluated and the value is displayed. For example, if we enter the expression x = 4 and press return, we would see what is shown in the following screenshot: This did two things. First, it created a new variable named x and assigned the value of 4 to it. Second, it displayed the result of the command. Next, enter timesSeven = (value) -> value * 7 and press return: You can see that the result of this line was the creation of a new function named timesSeven(). We can call our new function now: By default, the REPL shell will evaluate each expression when you press the return key. What if we want to create a function or expression that spans multiple lines? We can enter the REPL multiline mode by pressing Ctrl + V. This will change our coffee> prompt to a ------> prompt. This allows us to enter an expression that spans multiple lines, such as the following function: When we are finished with our multiline expression, press Ctrl + V again to have the expression evaluated. We can then call our new function: The CoffeeScript REPL offers some handy helpers such as expression history and tab completion. Pressing the up arrow key on your keyboard will circulate through the expressions we previously entered. Using the Tab key will autocomplete our function or variable name. For example, with the isEvenOrOdd() function, we can enter isEven and press Tab to have the REPL complete the function name for us. Debugging CoffeeScript using source maps If you have spent any time in the JavaScript community, you would have, no doubt, seen some discussions or rants regarding the weak debugging story for CoffeeScript. In fact, this is often a top argument some give for not using CoffeeScript at all. In this recipe, we will examine how to debug our CoffeeScript application using source maps. Getting ready The problem in debugging CoffeeScript stems from the fact that CoffeeScript compiles into JavaScript which is what the browser executes. If an error arises, the line that has caused the error sometimes cannot be traced back to the CoffeeScript source file very easily. Also, the error message is sometimes confusing, making troubleshooting that much more difficult. Recent developments in the web development community have helped improve the debugging experience for CoffeeScript by making use of a concept known as a source map. In this section, we will demonstrate how to generate and use source maps to help make our CoffeeScript debugging easier. To use source maps, you need only a base installation of CoffeeScript. How to do it... You can generate a source map for your CoffeeScript code using the -m option on the CoffeeScript command: coffee -m -c employee.coffee How it works... 
Source maps provide information used by browsers such as Google Chrome that tell the browser how to map a line from the compiled JavaScript code back to its origin in the CoffeeScript file. Source maps allow you to place breakpoints in your CoffeeScript file and analyze variables and execute functions in your CoffeeScript module. This creates a JavaScript file called employee.js and a source map called employee.js.map. If you look at the last line of the generated employee.js file, you will see the reference to the source map: //# sourceMappingURL=employee.js.map Google Chrome uses this JavaScript comment to load the source map. The following screenshot demonstrates an active breakpoint and console in Goggle Chrome: Debugging CoffeeScript using Node Inspector Source maps and Chrome's developer tools can help troubleshoot our CoffeeScript that is destined for the Web. In this recipe, we will demonstrate how to debug CoffeeScript that is designed to run on the server. Getting ready Begin by installing the Node Inspector NPM module with the following command: npm install -g node-inspector How to do it... To use Node Inspector, we will use the coffee command to compile the CoffeeScript code we wish to debug and generate the source map. In our example, we will use the following simple source code in a file named counting.coffee: for i in [1..10] if i % 2 is 0    console.log "#{i} is even!" else    console.log "#{i} is odd!" To use Node Inspector, we will compile our file and use the source map parameter with the following command: coffee -c -m counting.coffee Next, we will launch Node Inspector with the following command: node-debug counting.js How it works... When we run Node Inspector, it does two things. First, it launches the Node debugger. This is a debugging service that allows us to step through code, hit line breaks, and evaluate variables. This is a built-in service that comes with Node. Second, it launches an HTTP handler and opens a browser that allows us to use Chrome's built-in debugging tools to use break points, step over and into code, and evaluate variables. Node Inspector works well using source maps. This allows us to see our native CoffeeScript code and is an effective tool to debug server-side code. The following screenshot displays our Chrome window with an active break point. In the local variables tool window on the right-hand side, you can see that the current value of i is 2: The highlighted line in the preceding screenshot depicts the log message. Summary This article introduced CoffeeScript and lays the foundation to use CoffeeScript to develop all aspects of modern cloud-based applications. Resources for Article: Further resources on this subject: Writing Your First Lines of CoffeeScript [article] Why CoffeeScript? [article] ASP.Net Site Performance: Improving JavaScript Loading [article]

From a Static to an Interactive and Dynamic Dashboard

Packt
01 Apr 2015
10 min read
In this article, by David Lai and Xavier Hacking, authors of SAP BusinessObjects Dasboards 4.1 Cookbook, we will provide developers with recipes on interactivity and look and feel of the dashboards, which will improve the dashboard user experience. We will cover the following recipes: Using the Hierarchical Table Inputting data values Displaying alerts on a map Changing the look of a chart (For more resources related to this topic, see here.) An important strength that SAP BusinessObjects Dashboards has is the amount of control it allows a developer to provide the user with. This leads to totally customized dashboards, which give users the interactivity that guides them to make the right business decisions. It is important that developers know what type of interactive tools are available so that they can utilize the power of these tools. With the right interactivity, users can retrieve information more quickly and efficiently. Using the Hierarchical Table The Hierarchical Table is a powerful component that was introduced in SAP BusinessObjects Dashboards 4.0 FP3. It allows users to connect to either a BEx query connection or an OLAP universe and take advantage of its hierarchical display and multi-selection capability. Before the Hierarchical Table was introduced, there was no way to accomplish native hierarchical display and selection without significant workarounds. Although the Hierarchical Table component is extremely powerful, please note that it can only be used with either a BEx query or an OLAP universe. It will not work on a universe based on a relational database. Getting ready Before you can take advantage of the Hierarchical Table component, you must have an OLAP universe or a BEx query connection available. In our example, we create a simple cube from the Adventureworks data warehouse, which is easily accessible from MSDN. You can download the Adventureworks data warehouse available at http://msftdbprodsamples.codeplex.com/releases/view/105902. To set up a simple cube, please follow the instructions available at http://www.accelebrate.com/library/tutorials/ssas-2008. To set up an OLAP connection to the cube, please follow the instructions available at http://wiki.scn.sap.com/wiki/display/BOBJ/Setting+up+OLAP+Microsoft+Analysis+Service+through+an+XMLA+connection+with+SSO. Finally, you will have to set up an OLAP universe that connects to the OLAP connection. Instructions for this can be found at http://scn.sap.com/docs/DOC-22026. How to do it… Create an OLAP universe query / BEx query from the Query Browser. From the Components window, go to the Selectors category and drag a Hierarchical Table component onto the dashboard canvas. Click on the Bind to Query Data button and choose the query that you created in step 1. Next, choose the dimensions and measures that you want displayed on the Hierarchical Table. By default, you must select at least one hierarchy dimension. Click on the Configure Columns button below the data binding to adjust the column widths on the Hierarchical Table. We do this because by default, SAP BusinessObjects Dashboards does not set the column widths very well when we first bind the data. On the Appearance tab, edit the number formats for each measure appropriately. For example, you can set dollar amounts as the currency with two decimal places. Next, we want to capture rows that are selected during runtime. To do this, click on the Insertion tab. For the Insertion Type, you have the option of Value or Row. 
For the Value insertion option, you must choose an option for Source Data, which is one of the columns in the Hierarchical Table. In our example, we will choose the Insertion Type as Row, which grabs values from all the columns. We'll need to bind the output destination. We will assume that a user can select a maximum of 30 rows. So we'll bind the output to a 30 x 3 destination range. Bind a spreadsheet table object to the destination output from step 8 to prove that our selection works. Finally, test the Hierarchical Table by entering preview mode. In the following screenshot, you can see that we can expand/collapse our Hierarchical Table, as well as make multiple selections! How it works... As you can see, the Hierarchical Table selector is a very useful component because before this component was available, we were unable to perform any form of hierarchical analysis as well as simple multi-selection. The component achieves hierarchical capabilities by taking advantage of the OLAP cube engine. There's more… Unfortunately, the Hierarchical Table selector is only available from cube sources and not a traditional data warehouse table, because it uses the OLAP cube engine to do the processing. The hierarchical capability, in our opinion, is doable with data warehouse tables as other tools allow this. So hopefully, SAP will one day upgrade the Hierarchical Table selector so that it works with your traditional data warehouse universe based on tables. Inputting data values The ability to input values into the dashboard is a very useful feature. In the following example, we have a sales forecast that changes according to an inputted number value. If we were to use a slider component for the input value, it would be more difficult for the user to select their desired input value. Another good example could be a search box to find a value on a selector which has over 100 items. This way you don't need to hunt for your value. Instead, you can just type it in. In this recipe, we will create an input textbox to control a what-if scenario. Getting ready Create a chart with its values bound to cells that will be controlled by the input textbox value. The following is an example of a sales forecast chart and its cells that are controlled by the what-if scenario: You may refer to the source file Inputting data values.xlf from the code bundle to retrieve the pre-populated data from the preceding image if you don't want to manually type everything in yourself. How to do it... Drag an Input Text object from the Text section of the Components window onto the canvas. In the properties window of the Input Text component, bind the Link to Cell as well as Destination to cell D3 from the Getting ready section. Go to the Behavior icon of the input text properties and make sure Treat All Input As Text is unchecked. The blue cell D6 from the Getting ready section that's labeled as valid value will check to make sure the input text entered by the user is valid. To do this, we use the following formula: =IF(ISNUMBER(D3),IF(AND(D3>=-20,D3<=20),D3,"INVALID"),"INVALID") The formula checks to make sure that the cell contains a number that is between -20 and 20. Now every cell in the chart binding destination will depend on D6. The binding destination cells will not add the D6 value if D6 is "INVALID". In addition, a pop up will appear saying "Input is invalid" if D6 is "INVALID". Create the pop up by dragging a Label text component onto the canvas with Input is invalid as its text. 
Next, go to the behavior tab and for dynamic visibility, bind it to D6 and set the Key as INVALID. How it works... In this example, we use an input value textbox to control the forecast bars on the chart. If we type 20, it will add 20 to each value in the forecast. If we type -20, it will subtract 20 from each value in the forecast. We also add a check in step 4 that determines whether the value entered is valid or not; hence the use of Excel formulas. If a value is invalid, we want to output an error to the user so that they are aware that they entered an invalid value.   Displaying alerts on a map A map on a dashboard allows us to visually identify how different regions are doing using a picture instead of a table or chart. With alerts on the map, we can provide even more value. For example, look at the following screenshot. We can see that different regions of the map can be colored differently depending on their value. This allows users to identify at a glance whether a region is doing well or poorly.   Getting ready Insert a Canadian map object into the canvas and bind data to the map. You may also refer to the data prepared in the source file, Displaying alerts on a map.xlf. How to do it... In a separate area of the spreadsheet (highlighted in yellow), set up the threshold values. Assume that all provinces have the same threshold. Go to the Alerts section of the map properties and check Enable Alerts. Select the radio button By Value. In the Alert Thresholds section, check Use a Range. Then, bind the range to the Threshold dataset in step 1. In the Color Order section, select the radio button High values are good. How it works... In this recipe, we show how to set up alerting for a map component. The way we set it up is pretty standard from steps 2 through 5. Once the alerting mechanism is set up, each province in the map will have its value associated with the alert threshold that we set up in step 1. The province will be colored red if the sales value is less than the yellow threshold. The province will be colored yellow if the sales value is greater than or equal to the yellow threshold but less than the green threshold. The province will be colored green if the sales value is greater than or equal to the green threshold. Changing the look of a chart This recipe will explain how to change the look of a chart. Particularly, it will go through each tab in the appearance icon of the chart properties. We will then make modifications and see the resulting changes. Getting ready Insert a chart object into the canvas. Prepare some data and bind it to the chart. How to do it... Click on the chart object on the canvas/object properties window to go to chart properties. In the Layout tab, uncheck Show Chart Background. In the Series tab, click on the colored box under Fill to change the color of the bar to your desired color. Then change the width of each bar; click on the Marker Size area and change it to 35. Click on the colored boxes circled in red in the Axes tab and choose dark blue as the Line Color for the horizontal and vertical axes separately. Uncheck Show Minor Gridlines to remove all the horizontal lines in between each of the major gridlines. Next, go to the Text and Color tabs, where you can make changes to all the different text areas of the chart, as shown in the following screenshot: How it works... As you can see, the default chart looks plain and the bars are skinny so it's harder to visualize things. 
It is a good idea to remove the chart background if there is one so that the chart blends in better. In addition, the changes to the chart colors and text provide additional polish that improves the look of the chart. Summary In this article, we learned various recipes for making dashboards interactive, including the Hierarchical Table, input values, map alerts, and chart styling. Using such techniques greatly improves the look and feel of dashboards and helps create great presentations. Resources for Article: Further resources on this subject: Creating Our First Universe [article] Report Data Filtering [article] End User Transactions [article]

An introduction to testing AngularJS directives

Packt
01 Apr 2015
14 min read
In this article by Simon Bailey, the author of AngularJS Testing Cookbook, we will cover the following recipes: Starting with testing directives Setting up templateUrl Searching elements using selectors Accessing basic HTML content Accessing repeater content (For more resources related to this topic, see here.) Directives are the cornerstone of AngularJS and can range in complexity providing the foundation to many aspects of an application. Therefore, directives require comprehensive tests to ensure they are interacting with the DOM as intended. This article will guide you through some of the rudimentary steps required to embark on your journey to test directives. The focal point of many of the recipes revolves around targeting specific HTML elements and how they respond to interaction. You will learn how to test changes on scope based on a range of influences and finally begin addressing testing directives using Protractor. Starting with testing directives Testing a directive involves three key steps that we will address in this recipe to serve as a foundation for the duration of this article: Create an element. Compile the element and link to a scope object. Simulate the scope life cycle. Getting ready For this recipe, you simply need a directive that applies a scope value to the element in the DOM. For example: angular.module('chapter5', []) .directive('writers', function() {    return {      restrict: 'E',      link: function(scope, element) {        element.text('Graffiti artist: ' + scope.artist);      }    }; }); How to do it… First, create three variables accessible across all tests:     One for the element: var element;     One for scope: var scope;     One for some dummy data to assign to a scope value: var artist = 'Amara Por Dios'; Next, ensure that you load your module: beforeEach(module('chapter5')); Create a beforeEach function to inject the necessary dependencies and create a new scope instance and assign the artist to a scope: beforeEach(inject(function ($rootScope, $compile) { scope = $rootScope.$new(); scope.artist = artist; })); Next, within the beforeEach function, add the following code to create an Angular element providing the directive HTML string: element = angular.element('<writers></writers>'); Compile the element providing our scope object: $compile(element)(scope); Now, call $digest on scope to simulate the scope life cycle: scope.$digest(); Finally, to confirm whether these steps work as expected, write a simple test that uses the text() method available on the Angular element. The text() method will return the text contents of the element, which we then match against our artist value: it('should display correct text in the DOM', function() { expect(element.text()).toBe('Graffiti artist: ' + artist); }); Here is what your code should look like to run the final test: var scope; var element; var artist;   beforeEach(module('chapter5'));   beforeEach(function() { artist = 'Amara Por Dios'; });   beforeEach(inject(function($compile) { element = angular.element('<writers></writers>'); scope.artist = artist; $compile(element)(scope); scope.$digest(); }));   it('should display correct text in the DOM', function() {    expect(element.text()).toBe('Graffiti artist: ' + artist); }); How it works… In step 4, the directive HTML tag is provided as a string to the angular.element function. 
The angular element function wraps a raw DOM element or an HTML string as a jQuery element if jQuery is available; otherwise, it defaults to using Angular's jQuery lite which is a subset of jQuery. This wrapper exposes a range of useful jQuery methods to interact with the element and its content (for a full list of methods available, visit https://docs.angularjs.org/api/ng/function/angular.element). In step 6, the element is compiled into a template using the $compile service. The $compile service can compile HTML strings into a template and produces a template function. This function can then be used to link the scope and the template together. Step 6 demonstrates just this, linking the scope object created in step 3. The final step to getting our directive in a testable state is in step 7 where we call $digest to simulate the scope life cycle. This is usually part of the AngularJS life cycle within the browser and therefore needs to be explicitly called in a test-based environment such as this, as opposed to end-to-end tests using Protractor. There's more… One beforeEach() method containing the logic covered in this recipe can be used as a reference to work from for the rest of this article: beforeEach(inject(function($rootScope, $compile) { // Create scope scope = $rootScope.$new(); // Replace with the appropriate HTML string element = angular.element('<deejay></deejay>'); // Replace with test scope data scope.deejay = deejay; // Compile $compile(element)(scope); // Digest scope.$digest(); })); See also The Setting up templateUrl recipe The Searching elements using selectors recipe The Accessing basic HTML content recipe The Accessing repeater content recipe Setting up templateUrl It's fairly common to separate the template content into an HTML file that can then be requested on demand when the directive is invoked using the templateUrl property. However, when testing directives that make use of the templateUrl property, we need to load and preprocess the HTML files to AngularJS templates. Luckily, the AngularJS team preempted our dilemma and provided a solution using Karma and the karma-ng-html2js-preprocessor plugin. This recipe will show you how to use Karma to enable us to test a directive that uses the templateUrl property. Getting ready For this recipe, you will need to ensure the following: You have installed Karma You installed the karma-ng-html2js-preprocessor plugin by following the instructions at https://github.com/karma-runner/karma-ng-html2js-preprocessor/blob/master/README.md#installation. You configured the karma-ng-html2js-preprocessor plugin by following the instructions at https://github.com/karma-runner/karma-ng-html2js-preprocessor/blob/master/README.md#configuration. Finally, you'll need a directive that loads an HTML file using templateUrl and for this example, we apply a scope value to the element in the DOM. 
Consider the following example: angular.module('chapter5', []) .directive('emcees', function() {    return {      restrict: 'E',      templateUrl: 'template.html',      link: function(scope, element) {        scope.emcee = scope.emcees[0];      }    }; }) An example template could be as simple as what we will use for this example (template.html): <h1>{{emcee}}</h1> How to do it… First, create three variables accessible across all tests:     One for the element: var element;     One for the scope: var scope;     One for some dummy data to assign to a scope value: var emcees = ['Roxanne Shante', 'Mc Lyte']; Next, ensure that you load your module: beforeEach(module('chapter5')); We also need to load the actual template. We can do this by simply appending the filename to the beforeEach function we just created in step 2: beforeEach(module('chapter5', 'template.html')); Next, create a beforeEach function to inject the necessary dependencies, create a new scope instance, and assign the emcees to scope: beforeEach(inject(function ($rootScope, $compile) { scope = $rootScope.$new(); scope.emcees = emcees; })); Within the beforeEach function, add the following code to create an Angular element providing the directive HTML string:    element = angular.element('<emcees></emcees>'); Compile the element providing our scope object: $compile(element)(scope); Call $digest on scope to simulate the scope life cycle: scope.$digest(); Next, create a basic test to establish that the text contained within the h1 tag is what we expect: it('should display the first emcee in the h1 tag', function () {}); Now, retrieve a reference to the h1 tag using the find() method on the element providing the tag name as the selector: var h1 = element.find('h1'); Finally, add the expectation that the h1 tag text matches our first emcee from the array we provided in step 4: expect(h1.text()).toBe(emcees[0]); Run the spec and you should see it pass in your console window. How it works… The karma-ng-html2js-preprocessor plugin works by converting HTML files into JS strings and generating AngularJS modules that we load in step 3. Once loaded, AngularJS makes these modules available by putting the HTML files into the $templateCache. There are libraries available to help incorporate this into your project build process, for example using Grunt or Gulp. There is a popular example specifically for Gulp at https://github.com/miickel/gulp-angular-templatecache. Now that the template is available, we can access the HTML content using the compiled element we created in step 5. In this recipe, we access the text content of the element using the find() method. Be aware that if you are using the smaller jQuery lite subset of jQuery, there are certain limitations compared to the full-blown jQuery version. The find() method in particular is limited to looking up elements by tag name only. To read more about the find() method, visit the jQuery API documentation at http://api.jquery.com/find. See also The Starting with testing directives recipe Searching elements using selectors Directives, as you should know, attach special behavior to a DOM element. When AngularJS compiles and returns the element on which the directive is applied, it is wrapped by either jqLite or jQuery. This exposes an API on the element, offering many useful methods to query the element and its contents. In this recipe, you will learn how to use these methods to retrieve elements using selectors.
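The selector-based recipes that follow all test against a deejay object on scope, and later examples target a .deejay-style class and a deejay_name id, but the article never shows that directive itself. A minimal sketch that would support those tests might look like the following; the element name, template markup, class, and id are assumptions inferred from the selectors used later, not code from the book:

angular.module('chapter5', [])
  .directive('deejay', function() {
    return {
      restrict: 'E',
      // Inline template so no templateUrl preprocessing is required;
      // the class and id match the selectors used in the recipes below.
      template: '<div class="deejay-style">' +
                  '<h2 id="deejay_name">{{deejay.name}}</h2>' +
                '</div>',
      link: function(scope, element) {
        // scope.deejay is supplied by each test's beforeEach block
      }
    };
  });

With something like this in place, the reusable beforeEach() block from the first recipe's There's more… section can compile <deejay></deejay>, and the find(), querySelector(), and html() examples that follow will have markup to match against.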
Getting ready Follow the logic to define a beforeEach() function with the relevant logic to set up a directive as outlined in the Starting with testing directives recipe in this article. For this recipe, you can replicate the template that I suggested in the first recipe's There's more… section. For the purpose of this recipe, I tested against a property on scope named deejay: var deejay = { name: 'Shortee', style: 'turntablism' }; You can replace this with whatever code you have within the directive you're testing. How to do it… First, create a basic test to establish that the HTML code contained within an h2 tag is as we expected: it('should return an element using find()', function () {}); Next, retrieve a reference to the h2 tag using the find() method on the element providing the tag name as the selector: var h2 = element.find('h2'); Finally, we create an expectation that the element is actually defined: expect(h2[0]).toBeDefined(); How it works… In step 2, we use the find() method with the h2 selector, and the returned object is what we test against in step 3's expectation. Remember, the element returned is wrapped by jqLite or jQuery. Therefore, even if the element is not found, the object returned will have jQuery-specific properties; this means that we cannot run an expectation on the wrapped element alone being defined. A simple way to determine whether the element itself is indeed defined is to access it via jQuery's internal array of DOM objects, typically the first entry. This is why in our recipe we run an expectation against h2[0] rather than against h2 itself. There's more… Here is an example using the querySelector() method. The querySelector() method is available on the actual DOM, so we need to access it on an actual HTML element and not the jQuery-wrapped element. The following example uses a class name as the CSS selector: it('should return an element using querySelector and css selector', function() { var elementByClass = element[0].querySelector('.deejay-style'); expect(elementByClass).toBeDefined(); }); Here is another example using the querySelector() method, this time with an id selector: it('should return an element using querySelector and id selector', function() { var elementById = element[0].querySelector('#deejay_name'); expect(elementById).toBeDefined(); }); You can read more about the querySelector() method at https://developer.mozilla.org/en-US/docs/Web/API/document.querySelector. See also The Starting with testing directives recipe The Accessing basic HTML content recipe Accessing basic HTML content A substantial number of directive tests will involve interacting with the HTML content within the rendered HTML template. This recipe will teach you how to test whether a directive's HTML content is as expected. Getting ready Follow the logic to define a beforeEach() function with the relevant logic to set up a directive as outlined in the Starting with testing directives recipe in this article. For this recipe, you can replicate the template that I suggested in the first recipe's There's more… section. For the purpose of this recipe, I will test against a property on scope named deejay: var deejay = { name: 'Shortee', style: 'turntablism' }; You can replace this with whatever code you have within the directive you're testing.
How to do it… First, create a basic test to establish that the HTML code contained within a h2 tag is as we expected: it('should display correct deejay data in the DOM', function () {}); Next, retrieve a reference to the h2 tag using the find() method on the element providing the tag name as the selector: var h2 = element.find('h2'); Finally, using the html() method on the returned element from step 2, we can get the HTML contents within an expectation that the h2 tag HTML code matches our scope's deejay name: expect(h2.html()).toBe(deejay.name); How it works… We made heavy use of the jQuery (or jqLite) library methods available for our element. In step 2, we use the find() method with the h2 selector. This returns a match for us to further utilize in step 3, in our expectation where we access the HTML contents of the element using the html() method this time (http://api.jquery.com/html/). There's more… We could also run a similar expectation for text within our h2 element using the text() method (http://api.jquery.com/text/) on the element, for example: it('should retrieve text from <h2>', function() { var h2 = element.find('h2'); expect(h2.text()).toBe(deejay.name); }); See also The Starting with testing directives recipe The Searching elements using selectors recipe Accessing repeater content AngularJS facilitates generating repeated content with ease using the ngRepeat directive. In this recipe, we'll learn how to access and test repeated content. Getting ready Follow the logic to define a beforeEach() function with the relevant logic to set up a directive as outlined in the Starting with testing directives recipe in this article. For this recipe, you can replicate the template that I suggested in the first recipe's There's more… section. For the purpose of this recipe, I tested against a property on scope named breakers: var breakers = [{ name: 'China Doll' }, { name: 'Crazy Legs' }, { name: 'Frosty Freeze' }]; You can replace this with whatever code you have within the directive you're testing. How to do it… First, create a basic test to establish that the HTML code contained within the h2 tag is as we expected: it('should display the correct breaker name', function () {}); Next, retrieve a reference to the li tag using the find() method on the element providing the tag name as the selector: var list = element.find('li'); Finally, targeting the first element in the list, we retrieve the text content expecting it to match the first item in the breakers array: expect(list.eq(0).text()).toBe('China Doll'); How it works… In step 2, the find() method using li as the selector will return all the list items. In step 3, using the eq() method (http://api.jquery.com/eq/) on the returned element from step 2, we can get the HTML contents at a specific index, zero in this particular case. As the returned object from the eq() method is a jQuery object, we can call the text() method, which immediately after that will return the text content of the element. We can then run an expectation that the first li tag text matches the first breaker within the scope array. See also The Starting with testing directives recipe The Searching elements using selectors recipe The Accessing basic HTML content recipe Summary In this article you have learned to focus on testing changes within a directive based on interaction from either UI events or application updates to the model. Directives are one of the important jewels of AngularJS and can range in complexity. 
They can provide the foundation to many aspects of the application and therefore require comprehensive tests. Resources for Article: Further resources on this subject: The First Step [article] AngularJS Performance [article] Our App and Tool Stack [article]
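One last note on the first recipe: the consolidated listing shown there injects only $compile, so scope is never created. A self-contained version of that spec with the $rootScope injection restored might look like this, using the chapter5 module, the writers directive, and the artist value from the recipe:

describe('writers directive', function() {
  var scope;
  var element;
  var artist = 'Amara Por Dios';

  beforeEach(module('chapter5'));

  beforeEach(inject(function($rootScope, $compile) {
    scope = $rootScope.$new();
    scope.artist = artist;
    element = angular.element('<writers></writers>');
    $compile(element)(scope);
    scope.$digest();
  }));

  it('should display correct text in the DOM', function() {
    expect(element.text()).toBe('Graffiti artist: ' + artist);
  });
});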

Dealing with Legacy Code

Packt
31 Mar 2015
16 min read
In this article by Arun Ravindran, author of the book Django Best Practices and Design Patterns, we will discuss the following topics: Reading a Django code base Discovering relevant documentation Incremental changes versus full rewrites Writing tests before changing code Legacy database integration (For more resources related to this topic, see here.) It sounds exciting when you are asked to join a project. Powerful new tools and cutting-edge technologies might await you. However, quite often, you are asked to work with an existing, possibly ancient, codebase. To be fair, Django has not been around for that long. However, projects written for older versions of Django are sufficiently different to cause concern. Sometimes, having the entire source code and documentation might not be enough. If you are asked to recreate the environment, then you might need to fumble with the OS configuration, database settings, and running services locally or on the network. There are so many pieces to this puzzle that you might wonder how and where to start. Understanding the Django version used in the code is a key piece of information. As Django evolved, everything from the default project structure to the recommended best practices have changed. Therefore, identifying which version of Django was used is a vital piece in understanding it. Change of Guards Sitting patiently on the ridiculously short beanbags in the training room, the SuperBook team waited for Hart. He had convened an emergency go-live meeting. Nobody understood the "emergency" part since go live was at least 3 months away. Madam O rushed in holding a large designer coffee mug in one hand and a bunch of printouts of what looked like project timelines in the other. Without looking up she said, "We are late so I will get straight to the point. In the light of last week's attacks, the board has decided to summarily expedite the SuperBook project and has set the deadline to end of next month. Any questions?" "Yeah," said Brad, "Where is Hart?" Madam O hesitated and replied, "Well, he resigned. Being the head of IT security, he took moral responsibility of the perimeter breach." Steve, evidently shocked, was shaking his head. "I am sorry," she continued, "But I have been assigned to head SuperBook and ensure that we have no roadblocks to meet the new deadline." There was a collective groan. Undeterred, Madam O took one of the sheets and began, "It says here that the Remote Archive module is the most high-priority item in the incomplete status. I believe Evan is working on this." "That's correct," said Evan from the far end of the room. "Nearly there," he smiled at others, as they shifted focus to him. Madam O peered above the rim of her glasses and smiled almost too politely. "Considering that we already have an extremely well-tested and working Archiver in our Sentinel code base, I would recommend that you leverage that instead of creating another redundant system." "But," Steve interrupted, "it is hardly redundant. We can improve over a legacy archiver, can't we?" "If it isn't broken, then don't fix it", replied Madam O tersely. He said, "He is working on it," said Brad almost shouting, "What about all that work he has already finished?" "Evan, how much of the work have you completed so far?" asked O, rather impatiently. "About 12 percent," he replied looking defensive. Everyone looked at him incredulously. "What? That was the hardest 12 percent" he added. O continued the rest of the meeting in the same pattern. 
Everybody's work was reprioritized and shoe-horned to fit the new deadline. As she picked up her papers, readying to leave she paused and removed her glasses. "I know what all of you are thinking... literally. But you need to know that we had no choice about the deadline. All I can tell you now is that the world is counting on you to meet that date, somehow or other." Putting her glasses back on, she left the room. "I am definitely going to bring my tinfoil hat," said Evan loudly to himself. Finding the Django version Ideally, every project will have a requirements.txt or setup.py file at the root directory, and it will have the exact version of Django used for that project. Let's look for a line similar to this: Django==1.5.9 Note that the version number is exactly mentioned (rather than Django>=1.5.9), which is called pinning. Pinning every package is considered a good practice since it reduces surprises and makes your build more deterministic. Unfortunately, there are real-world codebases where the requirements.txt file was not updated or even completely missing. In such cases, you will need to probe for various tell-tale signs to find out the exact version. Activating the virtual environment In most cases, a Django project would be deployed within a virtual environment. Once you locate the virtual environment for the project, you can activate it by jumping to that directory and running the activated script for your OS. For Linux, the command is as follows: $ source venv_path/bin/activate Once the virtual environment is active, start a Python shell and query the Django version as follows: $ python >>> import django >>> print(django.get_version()) 1.5.9 The Django version used in this case is Version 1.5.9. Alternatively, you can run the manage.py script in the project to get a similar output: $ python manage.py --version 1.5.9 However, this option would not be available if the legacy project source snapshot was sent to you in an undeployed form. If the virtual environment (and packages) was also included, then you can easily locate the version number (in the form of a tuple) in the __init__.py file of the Django directory. For example: $ cd envs/foo_env/lib/python2.7/site-packages/django $ cat __init__.py VERSION = (1, 5, 9, 'final', 0) ... If all these methods fail, then you will need to go through the release notes of the past Django versions to determine the identifiable changes (for example, the AUTH_PROFILE_MODULE setting was deprecated since Version 1.5) and match them to your legacy code. Once you pinpoint the correct Django version, then you can move on to analyzing the code. Where are the files? This is not PHP One of the most difficult ideas to get used to, especially if you are from the PHP or ASP.NET world, is that the source files are not located in your web server's document root directory, which is usually named wwwroot or public_html. Additionally, there is no direct relationship between the code's directory structure and the website's URL structure. In fact, you will find that your Django website's source code is stored in an obscure path such as /opt/webapps/my-django-app. Why is this? Among many good reasons, it is often more secure to move your confidential data outside your public webroot. This way, a web crawler would not be able to accidentally stumble into your source code directory. Starting with urls.py Even if you have access to the entire source code of a Django site, figuring out how it works across various apps can be daunting. 
It is often best to start from the root urls.py URLconf file since it is literally a map that ties every request to the respective views. With normal Python programs, I often start reading from the start of its execution—say, from the top-level main module or wherever the __main__ check idiom starts. In the case of Django applications, I usually start with urls.py since it is easier to follow the flow of execution based on various URL patterns a site has. In Linux, you can use the following find command to locate the settings.py file and the corresponding line specifying the root urls.py: $ find . -iname settings.py -exec grep -H 'ROOT_URLCONF' {} ; ./projectname/settings.py:ROOT_URLCONF = 'projectname.urls'   $ ls projectname/urls.py projectname/urls.py Jumping around the code Reading code sometimes feels like browsing the web without the hyperlinks. When you encounter a function or variable defined elsewhere, then you will need to jump to the file that contains that definition. Some IDEs can do this automatically for you as long as you tell it which files to track as part of the project. If you use Emacs or Vim instead, then you can create a TAGS file to quickly navigate between files. Go to the project root and run a tool called Exuberant Ctags as follows: find . -iname "*.py" -print | etags - This creates a file called TAGS that contains the location information, where every syntactic unit such as classes and functions are defined. In Emacs, you can find the definition of the tag, where your cursor (or point as it called in Emacs) is at using the M-. command. While using a tag file is extremely fast for large code bases, it is quite basic and is not aware of a virtual environment (where most definitions might be located). An excellent alternative is to use the elpy package in Emacs. It can be configured to detect a virtual environment. Jumping to a definition of a syntactic element is using the same M-. command. However, the search is not restricted to the tag file. So, you can even jump to a class definition within the Django source code seamlessly. Understanding the code base It is quite rare to find legacy code with good documentation. Even if you do, the documentation might be out of sync with the code in subtle ways that can lead to further issues. Often, the best guide to understand the application's functionality is the executable test cases and the code itself. The official Django documentation has been organized by versions at https://docs.djangoproject.com. On any page, you can quickly switch to the corresponding page in the previous versions of Django with a selector on the bottom right-hand section of the page: In the same way, documentation for any Django package hosted on readthedocs.org can also be traced back to its previous versions. For example, you can select the documentation of django-braces all the way back to v1.0.0 by clicking on the selector on the bottom left-hand section of the page: Creating the big picture Most people find it easier to understand an application if you show them a high-level diagram. While this is ideally created by someone who understands the workings of the application, there are tools that can create very helpful high-level depiction of a Django application. A graphical overview of all models in your apps can be generated by the graph_models management command, which is provided by the django-command-extensions package. 
As shown in the following diagram, the model classes and their relationships can be understood at a glance: Model classes used in the SuperBook project connected by arrows indicating their relationships This visualization is actually created using PyGraphviz. This can get really large for projects of even medium complexity. Hence, it might be easier if the applications are logically grouped and visualized separately. PyGraphviz Installation and Usage If you find the installation of PyGraphviz challenging, then don't worry, you are not alone. Recently, I faced numerous issues while installing on Ubuntu, starting from Python 3 incompatibility to incomplete documentation. To save your time, I have listed the steps that worked for me to reach a working setup. On Ubuntu, you will need the following packages installed to install PyGraphviz: $ sudo apt-get install python3.4-dev graphviz libgraphviz-dev pkg-config Now activate your virtual environment and run pip to install the development version of PyGraphviz directly from GitHub, which supports Python 3: $ pip install git+http://github.com/pygraphviz/pygraphviz.git#egg=pygraphviz Next, install django-extensions and add it to your INSTALLED_APPS. Now, you are all set. Here is a sample usage to create a GraphViz dot file for just two apps and to convert it to a PNG image for viewing: $ python manage.py graph_models app1 app2 > models.dot $ dot -Tpng models.dot -o models.png Incremental change or a full rewrite? Often, you would be handed over legacy code by the application owners in the earnest hope that most of it can be used right away or after a couple of minor tweaks. However, reading and understanding a huge and often outdated code base is not an easy job. Unsurprisingly, most programmers prefer to work on greenfield development. In the best case, the legacy code ought to be easily testable, well documented, and flexible to work in modern environments so that you can start making incremental changes in no time. In the worst case, you might recommend discarding the existing code and go for a full rewrite. Or, as it is commonly decided, the short-term approach would be to keep making incremental changes, and a parallel long-term effort might be underway for a complete reimplementation. A general rule of thumb to follow while taking such decisions is—if the cost of rewriting the application and maintaining the application is lower than the cost of maintaining the old application over time, then it is recommended to go for a rewrite. Care must be taken to account for all the factors, such as time taken to get new programmers up to speed, the cost of maintaining outdated hardware, and so on. Sometimes, the complexity of the application domain becomes a huge barrier against a rewrite, since a lot of knowledge learnt in the process of building the older code gets lost. Often, this dependency on the legacy code is a sign of poor design in the application like failing to externalize the business rules from the application logic. The worst form of a rewrite you can probably undertake is a conversion, or a mechanical translation from one language to another without taking any advantage of the existing best practices. In other words, you lost the opportunity to modernize the code base by removing years of cruft. Code should be seen as a liability not an asset. As counter-intuitive as it might sound, if you can achieve your business goals with a lesser amount of code, you have dramatically increased your productivity. 
Having less code to test, debug, and maintain can not only reduce ongoing costs but also make your organization more agile and flexible to change. Code is a liability not an asset. Less code is more maintainable. Irrespective of whether you are adding features or trimming your code, you must not touch your working legacy code without tests in place. Write tests before making any changes In the book Working Effectively with Legacy Code, Michael Feathers defines legacy code as, simply, code without tests. He elaborates that with tests one can easily modify the behavior of the code quickly and verifiably. In the absence of tests, it is impossible to gauge if the change made the code better or worse. Often, we do not know enough about legacy code to confidently write a test. Michael recommends writing tests that preserve and document the existing behavior, which are called characterization tests. Unlike the usual approach of writing tests, while writing a characterization test, you will first write a failing test with a dummy output, say X, because you don't know what to expect. When the test harness fails with an error, such as "Expected output X but got Y", then you will change your test to expect Y. So, now the test will pass, and it becomes a record of the code's existing behavior. Note that we might record buggy behavior as well. After all, this is unfamiliar code. Nevertheless, writing such tests are necessary before we start changing the code. Later, when we know the specifications and code better, we can fix these bugs and update our tests (not necessarily in that order). Step-by-step process to writing tests Writing tests before changing the code is similar to erecting scaffoldings before the restoration of an old building. It provides a structural framework that helps you confidently undertake repairs. You might want to approach this process in a stepwise manner as follows: Identify the area you need to make changes to. Write characterization tests focusing on this area until you have satisfactorily captured its behavior. Look at the changes you need to make and write specific test cases for those. Prefer smaller unit tests to larger and slower integration tests. Introduce incremental changes and test in lockstep. If tests break, then try to analyze whether it was expected. Don't be afraid to break even the characterization tests if that behavior is something that was intended to change. If you have a good set of tests around your code, then you can quickly find the effect of changing your code. On the other hand, if you decide to rewrite by discarding your code but not your data, then Django can help you considerably. Legacy databases There is an entire section on legacy databases in Django documentation and rightly so, as you will run into them many times. Data is more important than code, and databases are the repositories of data in most enterprises. You can modernize a legacy application written in other languages or frameworks by importing their database structure into Django. As an immediate advantage, you can use the Django admin interface to view and change your legacy data. Django makes this easy with the inspectdb management command, which looks as follows: $ python manage.py inspectdb > models.py This command, if run while your settings are configured to use the legacy database, can automatically generate the Python code that would go into your models file. 
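To give a sense of what inspectdb produces, here is a sketch of the kind of model it might emit for a hypothetical legacy customers table; the table, column names, and types are purely illustrative, since the real output depends entirely on your legacy schema:

# Hypothetical output of `python manage.py inspectdb` for a legacy table.
from django.db import models


class Customers(models.Model):
    customer_id = models.IntegerField(primary_key=True)
    full_name = models.CharField(max_length=120, blank=True, null=True)
    signup_date = models.DateTimeField(blank=True, null=True)
    sales_rep_id = models.IntegerField(blank=True, null=True)  # probably a ForeignKey

    class Meta:
        managed = False          # Django will not create or alter this table
        db_table = 'customers'   # keep the legacy table name

After inspection, you would typically rename the class to Customer, replace sales_rep_id with a proper ForeignKey, and move the model into an appropriately named app, which leads to the best practices below.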
Here are some best practices if you are using this approach to integrate to a legacy database: Know the limitations of Django ORM beforehand. Currently, multicolumn (composite) primary keys and NoSQL databases are not supported. Don't forget to manually clean up the generated models, for example, remove the redundant 'ID' fields since Django creates them automatically. Foreign Key relationships may have to be manually defined. In some databases, the auto-generated models will have them as integer fields (suffixed with _id). Organize your models into separate apps. Later, it will be easier to add the views, forms, and tests in the appropriate folders. Remember that running the migrations will create Django's administrative tables (django_* and auth_*) in the legacy database. In an ideal world, your auto-generated models would immediately start working, but in practice, it takes a lot of trial and error. Sometimes, the data type that Django inferred might not match your expectations. In other cases, you might want to add additional meta information such as unique_together to your model. Eventually, you should be able to see all the data that was locked inside that aging PHP application in your familiar Django admin interface. I am sure this will bring a smile to your face. Summary In this article, we looked at various techniques to understand legacy code. Reading code is often an underrated skill. But rather than reinventing the wheel, we need to judiciously reuse good working code whenever possible. Resources for Article: Further resources on this subject: So, what is Django? [article] Adding a developer with Django forms [article] Introduction to Custom Template Filters and Tags [article]
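As a closing example for this article, here is what a first characterization test might look like in Django; the URL and the expected value are placeholders, because the whole point is that the recorded value comes from running the test against the legacy code, not from a specification:

from django.test import TestCase


class LegacyInvoiceTotalTest(TestCase):
    """Characterization test: documents what the legacy view does today."""

    def test_total_matches_current_behaviour(self):
        response = self.client.get('/invoices/42/total/')  # placeholder URL
        self.assertEqual(response.status_code, 200)
        # Start with a dummy value, run the test, then paste in whatever the
        # failure message reports, so the test records the existing behaviour
        # (bugs included) before any refactoring begins.
        self.assertContains(response, '1042.50')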

Geocoding Address-based Data

Packt
30 Mar 2015
7 min read
In this article by Kurt Menke, GISP, Dr. Richard Smith Jr., GISP, Dr. Luigi Pirelli, Dr. John Van Hoesen, GISP, authors of the book Mastering QGIS, we'll have a look at how to geocode address-based date using QGIS and MMQGIS. (For more resources related to this topic, see here.) Geocoding addresses has many applications, such as mapping the customer base for a store, members of an organization, public health records, or incidence of crime. Once mapped, the points can be used in many ways to generate information. For example, they can be used as inputs to generate density surfaces, linked to parcels of land, and characterized by socio-economic data. They may also be an important component of a cadastral information system. An address geocoding operation typically involves the tabular address data and a street network dataset. The street network needs to have attribute fields for address ranges on the left- and right-hand side of each road segment. You can geocode within QGIS using a plugin named MMQGIS (http://michaelminn.com/linux/mmqgis/). MMQGIS has many useful tools. For geocoding, we will use the tools found in MMQGIS | Geocode. There are two tools there: Geocode CSV with Google/ OpenStreetMap and Geocode from Street Layer as shown in the following screenshot. The first tool allows you to geocode a table of addresses using either the Google Maps API or the OpenStreetMap Nominatim web service. This tool requires an Internet connection but no local street network data as the web services provide the street network. The second tool requires a local street network dataset with address range attributes to geocode the address data: How address geocoding works The basic mechanics of address geocoding are straightforward. The street network GIS data layer has attribute columns containing the address ranges on both the even and odd side of every street segment. In the following example, you can see a piece of the attribute table for the Streets.shp sample data. The columns LEFTLOW, LEFTHIGH, RIGHTLOW, and RIGHTHIGH contain the address ranges for each street segment: In the following example we are looking at Easy Street. On the odd side of the street, the addresses range from 101 to 199. On the even side, they range from 102 to 200. If you wanted to map 150 Easy Street, QGIS would assume that the address is located halfway down the even side of that block. Similarly, 175 Easy Street would be on the odd side of the street three quarters the way down the block. Address geocoding assumes that the addresses are evenly spaced along the linear network. QGIS should place the address point very close to its actual position, but due to variability in lot sizes not every address point will be perfectly positioned. Now that you've learned the basics, let's work through an example. Here we will geocode addresses using web services. The output will be a point shapefile containing all the attribute fields found in the source Addresses.csv file. An example – geocoding using web services Here are the steps for geocoding the Addresses.csv sample data using web services. Load the Addresses.csv and the Streets.shp sample data into QGIS Desktop. Open Addresses.csv and examine the table. These are addresses of municipal facilities. Notice that the street address (for example, 150 Easy Street) is contained in a single field. There are also fields for the city, state, and country. 
Since both Google and OpenStreetMap are global services, it is wise to include such fields so that the services can narrow down the geography. Install and enable the MMQGIS plugin. Navigate to MMQGIS | Geocode | Geocode CSV with Google/OpenStreetMap. The Web Service Geocode dialog window will open. Select Input CSV File (UTF-8) by clicking on Browse… and locating the delimited text file on your system. Select the address fields by clicking on the drop-down menu and identifying the Address Field, City Field, State Field, and Country Field fields. MMQGIS may identify some or all of these fields by default if they are named with logical names such as Address or State. Choose the web service. Name the output shapefile by clicking on Browse…. Name Not Found Output List by clicking on Browse…. Any records that are not matched will be written to this file. This allows you to easily see and troubleshoot any unmapped records. Click on OK. The status of the geocoding operation can be seen in the lower-left corner of QGIS. The word Geocoding will be displayed, followed by the number of records that have been processed. The output will be a point shapefile and a CSV file listing that addresses were not matched. Two additional attribute columns will be added to the output address point shapefile: addrtype and addrlocat. These fields provide information on how the web geocoding service obtained the location. These may be useful for accuracy assessment. Addrtype is the Google <type> element or the OpenStreetMap class attribute. This will indicate what kind of address type this is (highway, locality, museum, neighborhood, park, place, premise, route, train_station, university etc.). Addrlocat is the Google <location_type> element or OpenStreetMap type attribute. This indicates the relationship of the coordinates to the addressed feature (approximate, geometric center, node, relation, rooftop, way interpolation, and so on). If the web service returns more than one location for an address, the first of the locations will be used as the output feature. Use of this plugin requires an active Internet connection. Google places both rate and volume restrictions on the number of addresses that can be geocoded within various time limits. You should visit the Google Geocoding API website: (http://code.google.com/apis/maps/documentation/geocoding/) for more details, and current information and Google's terms of service. Geocoding via these web services can be slow. If you don't get the desired results with one service, try the other. Geocoding operations rarely have 100% success. Street names in the street shapefile must match the street names in the CSV file exactly. Any discrepancies between the name of a street in the address table, and the street attribute table will lower the geocoding success rate. The following image shows the results of geocoding addresses via street address ranges. The addresses are shown with the street network used in the geocoding operation: Geocoding is often an iterative process. After the initial geocoding operation, you can review the Not Found CSV file. If it's empty then all the records were matched. If it has records in it, compare them with the attributes of the streets layer. This will help you determine why those records were not mapped. It may be due to inconsistencies in the spelling of street names. It may also be due to a street centerline layer that is not as current as the addresses. 
Once the errors have been identified they can be corrected by editing the data, or obtaining a different street centreline dataset. The geocoding operation can be re-run on those unmatched addresses. This process can be repeated until all records are matched. Use the Identify tool to inspect the mapped points, and the roads, to ensure that the operation was successful. Never take a GIS operation for granted. Check your results with a critical eye. Summary This article introduced you to the process of address geocoding using QGIS and the MMQGIS plugin. Resources for Article: Further resources on this subject: Editing attributes [article] How Vector Features are Displayed [article] QGIS Feature Selection Tools [article]
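To make the address-range interpolation described at the start of this article concrete, here is a small standalone sketch; the street segment is reduced to a straight line between two endpoints and the range values follow the Streets.shp sample, so treat it as an illustration rather than MMQGIS's actual algorithm:

def interpolate_address(number, low, high, start_xy, end_xy):
    """Place a house number along a street segment by linear interpolation."""
    span = float(high - low)
    fraction = 0.5 if span == 0 else (number - low) / span
    x = start_xy[0] + fraction * (end_xy[0] - start_xy[0])
    y = start_xy[1] + fraction * (end_xy[1] - start_xy[1])
    return x, y

# 150 Easy Street on the even side (RIGHTLOW=102, RIGHTHIGH=200) lands
# roughly halfway along the block:
print(interpolate_address(150, 102, 200, (0.0, 0.0), (100.0, 0.0)))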

GUI Components in Qt 5

Packt
30 Mar 2015
8 min read
In this article by Symeon Huang, author of the book Qt 5 Blueprints, explains typical and basic GUI components in Qt 5 (For more resources related to this topic, see here.) Design UI in Qt Creator Qt Creator is the official IDE for Qt application development and we're going to use it to design application's UI. At first, let's create a new project: Open Qt Creator. Navigate to File | New File or Project. Choose Qt Widgets Application. Enter the project's name and location. In this case, the project's name is layout_demo. You may wish to follow the wizard and keep the default values. After this creating process, Qt Creator will generate the skeleton of the project based on your choices. UI files are under Forms directory. And when you double-click on a UI file, Qt Creator will redirect you to integrated Designer, the mode selector should have Design highlighted and the main window should contains several sub-windows to let you design the user interface. Here we can design the UI by dragging and dropping. Qt Widgets Drag three push buttons from the widget box (widget palette) into the frame of MainWindow in the center. The default text displayed on these buttons is PushButtonbut you can change text if you want, by double-clicking on the button. In this case, I changed them to Hello, Hola, and Bonjouraccordingly. Note that this operation won't affect the objectName property and in order to keep it neat and easy-to-find, we need to change the objectName! The right-hand side of the UI contains two windows. The upper right section includes Object Inspector and the lower-right includes the Property Editor. Just select a push button, we can easily change objectName in the Property Editor. For the sake of convenience, I changed these buttons' objectName properties to helloButton, holaButton, and bonjourButton respectively. Save changes and click on Run on the left-hand side panel, it will build the project automatically then run it as shown in the following screenshot: In addition to the push button, Qt provides lots of commonly used widgets for us. Buttons such as tool button, radio button, and checkbox. Advanced views such as list, tree, and table. Of course there are input widgets, line edit, spin box, font combo box, date and time edit, and so on. Other useful widgets such as progress bar, scroll bar, and slider are also in the list. Besides, you can always subclass QWidget and write your own one. Layouts A quick way to delete a widget is to select it and press the Delete button. Meanwhile, some widgets, such as the menu bar, status bar, and toolbar can't be selected, so we have to right-click on them in Object Inspector and delete them. Since they are useless in this example, it's safe to remove them and we can do this for good. Okay, let's understand what needs to be done after the removal. You may want to keep all these push buttons on the same horizontal axis. To do this, perform the following steps: Select all the push buttons either by clicking on them one by one while keeping the Ctrl key pressed or just drawing an enclosing rectangle containing all the buttons. Right-click and select Layout | LayOut Horizontally. The keyboard shortcut for this is Ctrl + H. Resize the horizontal layout and adjust its layoutSpacing by selecting it and dragging any of the points around the selection box until it fits best. Hmm…! You may have noticed that the text of the Bonjour button is longer than the other two buttons, and it should be wider than the others. How do you do this? 
You can change the property of the horizontal layout object's layoutStretch property in Property Editor. This value indicates the stretch factors of the widgets inside the horizontal layout. They would be laid out in proportion. Change it to 3,3,4, and there you are. The stretched size definitely won't be smaller than the minimum size hint. This is how the zero factor works when there is a nonzero natural number, which means that you need to keep the minimum size instead of getting an error with a zero divisor. Now, drag Plain Text Edit just below, and not inside, the horizontal layout. Obviously, it would be neater if we could extend the plain text edit's width. However, we don't have to do this manually. In fact, we could change the layout of the parent, MainWindow. That's it! Right-click on MainWindow, and then navigate to Lay out | Lay Out Vertically. Wow! All the children widgets are automatically extended to the inner boundary of MainWindow; they are kept in a vertical order. You'll also find Layout settings in the centralWidget property, which is exactly the same thing as the previous horizontal layout. The last thing to make this application halfway decent is to change the title of the window. MainWindow is not the title you want, right? Click on MainWindow in the object tree. Then, scroll down its properties to find windowTitle. Name it whatever you want. In this example, I changed it to Greeting. Now, run the application again and you will see it looks like what is shown in the following screenshot: Qt Quick Components Since Qt 5, Qt Quick has evolved to version 2.0 which delivers a dynamic and rich experience. The language it used is so-called QML, which is basically an extended version of JavaScript using a JSON-like format. To create a simple Qt Quick application based on Qt Quick Controls 1.2, please follow following procedures: Create a new project named HelloQML. Select Qt Quick Application instead of Qt Widgets Application that we chose previously. Select Qt Quick Controls 1.2 when the wizard navigates you to Select Qt Quick Components Set. Edit the file main.qml under the root of Resources file, qml.qrc, that Qt Creator has generated for our new Qt Quick project. Let's see how the code should be. import QtQuick 2.3 import QtQuick.Controls 1.2   ApplicationWindow {    visible: true    width: 640    height: 480    title: qsTr("Hello QML")      menuBar: MenuBar {        Menu {            title: qsTr("File")            MenuItem {                text: qsTr("Exit")                shortcut: "Ctrl+Q"                onTriggered: Qt.quit()            }        }    }      Text {        id: hw        text: qsTr("Hello World")        font.capitalization: Font.AllUppercase        anchors.centerIn: parent    }      Label {        anchors { bottom: hw.top; bottomMargin: 5; horizontalCenter: hw.horizontalCenter }        text: qsTr("Hello Qt Quick")    } } If you ever touched Java or Python, then the first two lines won't be too unfamiliar for you. It simply imports the Qt Quick and Qt Quick Controls. And the number behind is the version of the library. The body of this QML source file is really in JSON style, which enables you understand the hierarchy of the user interface through the code. Here, the root item is ApplicationWindow, which is basically the same thing as QMainWindow in Qt/C++. When you run this application in Windows, you can barely find the difference between the Text item and Label item. 
But on some platforms, or when you change system font and/or its colour, you'll find that Label follows the font and colour scheme of the system while Text doesn't. Run this application, you'll see there is a menu bar, a text, and a label in the application window. Exactly what we wrote in the QML file: You may miss the Design mode for traditional Qt/C++ development. Well, you can still design Qt Quick application in Design mode! Click on Design in mode selector when you edit main.qml file. Qt Creator will redirect you into Design mode where you can use mouse drag-and-drop UI components: Almost all widgets you use in Qt Widget application can be found here in a Qt Quick application. Moreover, you can use other modern widgets such as busy indicator in Qt Quick while there's no counterpart in Qt Widget application. However, QML is a declarative language whose performance is obviously poor than C++. Therefore, more and more developers choose to write UI with Qt Quick in order to deliver a better visual style, while keep core functions in Qt/C++. Summary In this article, we had a brief contact with various GUI components of Qt 5 and focus on the Design mode in Qt Creator. Two small examples used as a Qt-like "Hello World" demonstrations. Resources for Article: Further resources on this subject: Code interlude – signals and slots [article] Program structure, execution flow, and runtime objects [article] Configuring Your Operating System [article]
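Returning to the widget-based example from the first half of this article: the form has three push buttons and a plain text edit, but no behaviour wired up yet. A minimal sketch of how they could be connected with Qt 5 signals and slots is shown below; it assumes the objectName values set in Designer (helloButton, holaButton, bonjourButton, plainTextEdit) and the MainWindow skeleton that Qt Creator generates:

// mainwindow.cpp -- assumes the ui form created above and the usual
// Qt Creator generated MainWindow class with a Ui::MainWindow member.
#include "mainwindow.h"
#include "ui_mainwindow.h"

MainWindow::MainWindow(QWidget *parent)
    : QMainWindow(parent), ui(new Ui::MainWindow)
{
    ui->setupUi(this);

    // Qt 5 pointer-to-member connect syntax; each lambda appends a greeting
    // to the plain text edit when its button is clicked.
    connect(ui->helloButton, &QPushButton::clicked, this, [this]() {
        ui->plainTextEdit->appendPlainText(QStringLiteral("Hello"));
    });
    connect(ui->holaButton, &QPushButton::clicked, this, [this]() {
        ui->plainTextEdit->appendPlainText(QStringLiteral("Hola"));
    });
    connect(ui->bonjourButton, &QPushButton::clicked, this, [this]() {
        ui->plainTextEdit->appendPlainText(QStringLiteral("Bonjour"));
    });
}

MainWindow::~MainWindow()
{
    delete ui;
}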

Subscribing to a report

Packt
26 Mar 2015
6 min read
 In this article by Johan Yu, the author of Salesforce Reporting and Dashboards, we get acquainted to the components used when working with reports on the Salesforce platform. Subscribing to a report is a new feature in Salesforce introduced in the Spring 2015 release. When you subscribe to a report, you will get a notification on weekdays, daily, or weekly, when the reports meet the criteria defined. You just need to subscribe to the report that you most care about. (For more resources related to this topic, see here.) Subscribing to a report is not the same as the report's Schedule Future Run option, where scheduling a report for a future run will keep e-mailing you the report content at a specified frequency defined, without specifying any conditions. But when you subscribe to a report, you will receive notifications when the report output meets the criteria you have defined. Subscribing to a report will not send you the e-mail content, but just an alert that the report you subscribed to meets the conditions specified. To subscribe to a report, you do not need additional permission as our administrator is able to control to enable or disable this feature for the entire organization. By default, this feature will be turned on for customers using the Salesforce Spring 2015 release. If you are an administrator for the organization, you can check out this feature by navigating to Setup | Customize | Reports & Dashboards | Report Notification | Enable report notification subscriptions for all users. Besides receiving notifications via e-mail, you also can opt for Salesforce1 notifications and posts to Chatter feeds, and execute a custom action. Report Subscription To subscribe to a report, you need to define a set of conditions to trigger the notifications. Here is what you need to understand before you subscribe to a report: When: Everytime conditions are met or only the first time conditions are met. Conditions: An aggregate can be a record count or a summarize field. Then define the operator and value you want the aggregate to be compared to. The summarize field means a field that you use in that report to summarize its data as average, smallest, largest, or sum. You can add multiple conditions, but at this moment, you only have the AND condition. Schedule frequency: Schedule weekday, daily, weekly, and the time the report will be run. Actions: E-mail notifications: You will get e-mail alerts when conditions are met. Posts to Chatter feeds: Alerts will be posted to your Chatter feed. Salesforce1 notifications: Alerts in your Salesforce1 app. Execute a custom action: This will trigger a call to the apex class. You will need a developer to write apex code for this. Active: This is a checkbox used to activate or disable subscription. You may just need to disable it when you need to unsubscribe temporarily; otherwise, deleting will remove all the settings defined. The following screenshot shows the conditions set in order to subscribe to a report: Monitoring a report subscription How can you know whether you have subscribed to a report? When you open the report and see the Subscribe button, it means you are not subscribed to that report:   Once you configure the report to subscribe, the button label will turn to Edit Subscription. But, do not get it wrong that not all reports with Edit Subscription, you will get alerts when the report meets the criteria, because the setting may just not be active, remember step above when you subscribe a report. 
To know all the reports you subscribe to at a glance, as long as you have View Setup and Configuration permissions, navigate to Setup | Jobs | Scheduled Jobs, and look for Type as Reporting Notification, as shown in this screenshot:   Hands-on – subscribing to a report Here is our next use case: you would like to get a notification in your Salesforce1 app—an e-mail notification—and also posts on your Chatter feed once the Closed Won opportunity for the month has reached $50,000. Salesforce should check the report daily, but instead of getting this notification daily, you want to get it only once a week or month; otherwise, it will be disturbing. Creating reports Make sure you set the report with the correct filter, set Close Date as This Month, and summarize the Amount field, as shown in the following screenshot:   Subscribing Click on the Subscribe button and fill in the following details: Type as Only the first time conditions are met Conditions: Aggregate as Sum of Amount Operator as Greater Than or Equal Value as 50000 Schedule: Frequency as Every Weekday Time as 7AM In Actions, select: Send Salesforce1 Notification Post to Chatter Feed Send Email Notification In Active, select the checkbox Testing and saving The good thing of this feature is the ability to test without waiting until the scheduled date or time. Click on the Save & Run Now button. Here is the result: Salesforce1 notifications Open your Salesforce1 mobile app, look for the notification icon, and notice a new alert from the report you subscribed to, as shown in this screenshot: If you click on the notification, it will take you to the report that is shown in the following screenshot:   Chatter feed Since you selected the Post to Chatter Feed action, the same alert will go to your Chatter feed as well. Clicking on the link in the Chatter feed will open the same report in your Salesforce1 mobile app or from the web browser, as shown in this screenshot: E-mail notification The last action we've selected for this exercise is to send an e-mail notification. The following screenshot shows how the e-mail notification would look:   Limitations The following limitations are observed while subscribing to a report: You can set up to five conditions per report, and no OR logic conditions are possible You can subscribe for up to five reports, so use it wisely Summary In this article, you became familiar with components when working with reports on the Salesforce platform. We saw different report formats and the uniqueness of each format. We continued discussions on adding various types of charts to the report with point-and-click effort and no code; all of this can be done within minutes. We saw how to add filters to reports to customize our reports further, including using Filter Logic, Cross Filter, and Row Limit for tabular reports. We walked through managing and customizing custom report types, including how to hide unused report types and report type adoption analysis. In the last part of this article, we saw how easy it is to subscribe to a report and define criteria. Resources for Article: Further resources on this subject: Salesforce CRM – The Definitive Admin Handbook - Third Edition [article] Salesforce.com Customization Handbook [article] Developing Applications with Salesforce Chatter [article]

Geolocating photos on the map

Packt
25 Mar 2015
7 min read
In this article by Joel Lawhead, author of the book QGIS Python Programming Cookbook, we'll use EXIF tags to create locations on a map for some photos and provide links to open them. (For more resources related to this topic, see here.) Getting ready You will need to download some sample geotagged photos from https://github.com/GeospatialPython/qgis/blob/gh-pages/photos.zip?raw=true and place them in a directory named photos in your qgis_data directory. How to do it... QGIS requires the Python Imaging Library (PIL), which should already be included with your installation. PIL can parse EXIF tags. We will gather the filenames of the photos, parse the location information, convert it to decimal degrees, create the point vector layer, add the photo locations, and add an action link to the attributes. To do this, we need to perform the following steps: In the QGIS Python Console, import the libraries that we'll need, including PIL's Image and ExifTags modules for parsing image data and the glob module for doing wildcard file searches:

import glob
import Image
from ExifTags import TAGS

Next, we'll create a function that can parse the header data:

def exif(img):
    exif_data = {}
    try:
        i = Image.open(img)
        tags = i._getexif()
        for tag, value in tags.items():
            decoded = TAGS.get(tag, tag)
            exif_data[decoded] = value
    except:
        pass
    return exif_data

Now, we'll create a function that can convert degrees-minutes-seconds to decimal degrees, which is how coordinates are stored in JPEG images:

def dms2dd(d, m, s, i):
    sec = float((m * 60) + s)
    dec = float(sec / 3600)
    deg = float(d + dec)
    if i.upper() == 'W':
        deg = deg * -1
    elif i.upper() == 'S':
        deg = deg * -1
    return float(deg)

Next, we'll define a function to parse the location data from the header data:

def gps(exif):
    lat = None
    lon = None
    if exif['GPSInfo']:
        # Lat
        coords = exif['GPSInfo']
        i = coords[1]
        d = coords[2][0][0]
        m = coords[2][1][0]
        s = coords[2][2][0]
        lat = dms2dd(d, m, s, i)
        # Lon
        i = coords[3]
        d = coords[4][0][0]
        m = coords[4][1][0]
        s = coords[4][2][0]
        lon = dms2dd(d, m, s, i)
    return lat, lon

Next, we'll loop through the photos directory, get the filenames, parse the location information, and build a simple dictionary to store the information, as follows:

photos = {}
photo_dir = "/Users/joellawhead/qgis_data/photos/"
files = glob.glob(photo_dir + "*.jpg")
for f in files:
    e = exif(f)
    lat, lon = gps(e)
    photos[f] = [lon, lat]

Now, we'll set up the vector layer for editing:

lyr_info = "Point?crs=epsg:4326&field=photo:string(75)"
vectorLyr = QgsVectorLayer(lyr_info, "Geotagged Photos", "memory")
vpr = vectorLyr.dataProvider()

We'll add the photo details to the vector layer:

features = []
for pth, p in photos.items():
    lon, lat = p
    pnt = QgsGeometry.fromPoint(QgsPoint(lon, lat))
    f = QgsFeature()
    f.setGeometry(pnt)
    f.setAttributes([pth])
    features.append(f)
vpr.addFeatures(features)
vectorLyr.updateExtents()

Now, we can add the layer to the map and make it the active layer:

QgsMapLayerRegistry.instance().addMapLayer(vectorLyr)
iface.setActiveLayer(vectorLyr)
activeLyr = iface.activeLayer()

Finally, we'll add an action that allows you to click on it and open the photo:

actions = activeLyr.actions()
actions.addAction(QgsAction.OpenUrl, "Photos", '[% "photo" %]')

How it works... Using the included PIL EXIF parser, getting location information and adding it to a vector layer is relatively straightforward.
The photo action added in the last step is a default option for opening a URL. However, you can also use Python expressions as actions to perform a variety of tasks. The following screenshot shows an example of the data visualization and photo popup:

There's more...

Another plugin called Photo2Shape is available, but it requires you to install an external EXIF tag parser.

Image change detection

Change detection allows you to automatically highlight the differences between two images in the same area if they are properly orthorectified. We'll do a simple difference change detection on two images, which are several years apart, to see the differences in urban development and the natural environment.

Getting ready

You can download the two images from https://github.com/GeospatialPython/qgis/blob/gh-pages/change-detection.zip?raw=true and put them in a directory named change-detection in the rasters directory of your qgis_data directory. Note that the file is 55 megabytes, so it may take several minutes to download.

How to do it...

We'll use the QGIS raster calculator to subtract the images in order to get the difference, which will highlight significant changes. We'll also add a color ramp shader to the output in order to visualize the changes. To do this, we need to perform the following steps:

1. First, we need to import the libraries that we need into the QGIS console:

   from PyQt4.QtGui import *
   from PyQt4.QtCore import *
   from qgis.analysis import *

2. Now, we'll set up the path names and raster names for our images:

   before = "/Users/joellawhead/qgis_data/rasters/change-detection/before.tif"
   after = "/Users/joellawhead/qgis_data/rasters/change-detection/after.tif"
   beforeName = "Before"
   afterName = "After"

3. Next, we'll establish our images as raster layers:

   beforeRaster = QgsRasterLayer(before, beforeName)
   afterRaster = QgsRasterLayer(after, afterName)

4. Then, we can build the calculator entries:

   beforeEntry = QgsRasterCalculatorEntry()
   afterEntry = QgsRasterCalculatorEntry()
   beforeEntry.raster = beforeRaster
   afterEntry.raster = afterRaster
   beforeEntry.bandNumber = 1
   afterEntry.bandNumber = 2
   beforeEntry.ref = beforeName + "@1"
   afterEntry.ref = afterName + "@2"
   entries = [afterEntry, beforeEntry]

5. Now, we'll set up the simple expression that does the math for remote sensing:

   exp = "%s - %s" % (afterEntry.ref, beforeEntry.ref)

6. Then, we can set up the output file path, the raster extent, and the pixel width and height:

   output = "/Users/joellawhead/qgis_data/rasters/change-detection/change.tif"
   e = beforeRaster.extent()
   w = beforeRaster.width()
   h = beforeRaster.height()

7. Now, we perform the calculation:

   change = QgsRasterCalculator(exp, output, "GTiff", e, w, h, entries)
   change.processCalculation()

8. Finally, we'll load the output as a layer, create the color ramp shader, apply it to the layer, and add it to the map, as shown here:

   lyr = QgsRasterLayer(output, "Change")
   algorithm = QgsContrastEnhancement.StretchToMinimumMaximum
   limits = QgsRaster.ContrastEnhancementMinMax
   lyr.setContrastEnhancement(algorithm, limits)
   s = QgsRasterShader()
   c = QgsColorRampShader()
   c.setColorRampType(QgsColorRampShader.INTERPOLATED)
   i = []
   qri = QgsColorRampShader.ColorRampItem
   i.append(qri(0, QColor(0,0,0,0), 'NODATA'))
   i.append(qri(-101, QColor(123,50,148,255), 'Significant Intensity Decrease'))
   i.append(qri(-42.2395, QColor(194,165,207,255), 'Minor Intensity Decrease'))
   i.append(qri(16.649, QColor(247,247,247,0), 'No Change'))
   i.append(qri(75.5375, QColor(166,219,160,255), 'Minor Intensity Increase'))
   i.append(qri(135, QColor(0,136,55,255), 'Significant Intensity Increase'))
   c.setColorRampItemList(i)
   s.setRasterShaderFunction(c)
   ps = QgsSingleBandPseudoColorRenderer(lyr.dataProvider(), 1, s)
   lyr.setRenderer(ps)
   QgsMapLayerRegistry.instance().addMapLayer(lyr)
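The color ramp breakpoints used above (-101, -42.2395, 16.649, 75.5375, and 135) are tuned to this particular image pair. If you run the recipe against your own imagery, it helps to inspect the statistics of the change raster before choosing breakpoints. The following quick check is our own addition, not part of the original recipe, and assumes the standard PyQGIS raster provider API:

   # Inspect the change raster's band statistics to help pick color ramp breakpoints
   # (our addition; not part of the original recipe).
   stats = lyr.dataProvider().bandStatistics(1)
   print stats.minimumValue, stats.maximumValue, stats.mean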
How it works...

If a building is added in the new image, it will be brighter than its surroundings. If a building is removed, the new image will be darker in that area. The same holds true for vegetation, to some extent.

Summary

The concept is simple. We subtract the older image data from the new image data. Concrete in urban areas tends to be highly reflective and results in higher image pixel values.

Resources for Article:

Further resources on this subject:
Prototyping Arduino Projects using Python [article]
Python functions – Avoid repeating code [article]
Pentesting Using Python [article]

Prerequisites

Packt
25 Mar 2015
6 min read
In this article by Deepak Vohra, author of the book Advanced Java® EE Development with WildFly®, you will see how to create a Java EE project and its prerequisites. (For more resources related to this topic, see here.)

The objective of the EJB 3.x specification is to simplify EJB development by improving the EJB architecture. This simplification is achieved by providing metadata annotations to replace XML configuration, by providing default configuration values, by making entity and session beans POJOs (Plain Old Java Objects), and by making component and home interfaces redundant. The EJB 2.x entity beans are replaced with EJB 3.x entities. EJB 3.0 also introduced the Java Persistence API (JPA) for object-relational mapping of Java objects. WildFly 8.x supports the EJB 3.2 and JPA 2.1 specifications from Java EE 7. The sample application is based on Java EE 6 and EJB 3.1. The configuration of EJB 3.x with Java EE 7 is also discussed, and the sample application can be used or modified to run on a Java EE 7 project. We have used a Hibernate 4.3 persistence provider. Unlike some of the other persistence providers, the Hibernate persistence provider supports automatic generation of relational database tables, including the joining of tables.

In this article, we will create an EJB 3.x project. This article has the following topics:

- Setting up the environment
- Creating a WildFly runtime
- Creating a Java EE project

Setting up the environment

We need to download and install the following software:

- WildFly 8.1.0.Final: Download wildfly-8.1.0.Final.zip from http://wildfly.org/downloads/.
- MySQL 5.6 Database-Community Edition: Download this edition from http://dev.mysql.com/downloads/mysql/. When installing MySQL, also install Connector/J.
- Eclipse IDE for Java EE Developers: Download Eclipse Luna from https://www.eclipse.org/downloads/packages/release/Luna/SR1.
- JBoss Tools (Luna) 4.2.0.Final: Install this as a plug-in to Eclipse from the Eclipse Marketplace (http://tools.jboss.org/downloads/installation.html). The latest version from the Eclipse Marketplace is likely to be different from 4.2.0.
- Apache Maven: Download version 3.0.5 or higher from http://maven.apache.org/download.cgi.
- Java 7: Download Java 7 from http://www.oracle.com/technetwork/java/javase/downloads/index.html?ssSourceSiteId=ocomcn.

Set the environment variables JAVA_HOME, JBOSS_HOME, MAVEN_HOME, and MYSQL_HOME. Add %JAVA_HOME%/bin, %MAVEN_HOME%/bin, %JBOSS_HOME%/bin, and %MYSQL_HOME%/bin to the PATH environment variable. The environment settings used are C:\wildfly-8.1.0.Final for JBOSS_HOME, C:\Program Files\MySQL\MySQL Server 5.6.21 for MYSQL_HOME, C:\maven\apache-maven-3.0.5 for MAVEN_HOME, and C:\Program Files\Java\jdk1.7.0_51 for JAVA_HOME.

Run the add-user.bat script from the %JBOSS_HOME%/bin directory to create a user for the WildFly administrator console. When prompted What type of user do you wish to add?, select a) Management User. The other option is b) Application User. A Management User is used to log in to the Administration Console, and an Application User is used to access applications. Subsequently, specify the Username and Password for the new user. When prompted with the question Is this user going to be used for one AS process to connect to another AS..?, enter the answer as no.

When installing and configuring the MySQL database, specify a password for the root user (the password mysql is used in the sample application).
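Before moving on to the IDE setup, the following minimal sketch illustrates the annotation-driven style described at the beginning of this article: an entity and a stateless session bean declared as plain POJOs, with no XML descriptors and no home or component interfaces. The class, field, and persistence unit names here are our own illustration and are not taken from the book's sample application:

// File: org/jboss/ejb3/model/Catalog.java (illustrative)
package org.jboss.ejb3.model;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// An EJB 3.x entity is a plain POJO mapped with annotations; no XML descriptor is required.
@Entity
public class Catalog {
    @Id
    @GeneratedValue
    private int id;

    private String journal;

    public String getJournal() { return journal; }
    public void setJournal(String journal) { this.journal = journal; }
}

// File: org/jboss/ejb3/model/CatalogSessionBean.java (illustrative)
package org.jboss.ejb3.model;

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// A stateless session bean is also a plain POJO; home and component interfaces are redundant.
@Stateless
public class CatalogSessionBean {
    @PersistenceContext(unitName = "em")   // the persistence unit name is illustrative
    private EntityManager entityManager;

    public void persist(Catalog catalog) {
        entityManager.persist(catalog);
    }
}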
Creating a WildFly runtime

As the application is run on WildFly 8.1, we need to create a runtime environment for WildFly 8.1 in Eclipse:

1. Select Window | Preferences in Eclipse.
2. In Preferences, select Server | Runtime Environment. Click on the Add button to add a new runtime environment, as shown in the following screenshot:
3. In New Server Runtime Environment, select JBoss Community | WildFly 8.x Runtime. Click on Next:
4. In WildFly Application Server 8.x, which appears below New Server Runtime Environment, specify a Name for the new runtime or choose the default name, which is WildFly 8.x Runtime.
5. Select the Home Directory for the WildFly 8.x server using the Browse button. The Home Directory is the directory where WildFly 8.1 is installed. The default path is C:\wildfly-8.1.0.Final.
6. Select the Runtime JRE as JavaSE-1.7. If the JDK location is not added to the runtime list, first add it from the JRE preferences screen in Eclipse.
7. In Configuration base directory, select standalone as the default setting. In Configuration file, select standalone.xml as the default setting. Click on Finish:
8. A new server runtime environment for WildFly 8.x Runtime gets created, as shown in the following screenshot. Click on OK:

Creating a Server Runtime Environment for WildFly 8.x is a prerequisite for creating a Java EE project in Eclipse. In the next topic, we will create a new Java EE project for an EJB 3.x application.

Creating a Java EE project

JBoss Tools provides project templates for different types of JBoss projects. In this topic, we will create a Java EE project for an EJB 3.x application:

1. Select File | New | Other in the Eclipse IDE.
2. In the New wizard, select the JBoss Central | Java EE EAR Project wizard. Click on the Next button:
3. The Java EE EAR Project wizard gets started. By default, a Java EE 6 project is created. A Java EE EAR Project is a Maven project. The New Project Example window lists the requirements and runs a test for the requirements. The JBoss AS runtime is required, and some plugins (including the JBoss Maven Tools plugin) are required for a Java EE project.
4. Select Target Runtime as WildFly 8.x Runtime, which was created in the preceding topic. Then, check the Create a blank project checkbox. Click on the Next button:
5. Specify Project name as jboss-ejb3, Package as org.jboss.ejb3, and tick the Use default Workspace location box. Click on the Next button:
6. Specify Group Id as org.jboss.ejb3, Artifact Id as jboss-ejb3, Version as 1.0.0, and Package as org.jboss.ejb3.model. Click on Finish:
7. A Java EE project gets created, as shown in the following Project Explorer window. The jboss-ejb3 project consists of three subprojects: jboss-ejb3-ear, jboss-ejb3-ejb, and jboss-ejb3-web. Each subproject consists of a pom.xml file for Maven. The jboss-ejb3-ejb subproject consists of a META-INF/persistence.xml file within the src/main/resources source folder for the JPA database persistence configuration.

Summary

In this article, we learned how to create a Java EE project and its prerequisites.

Resources for Article:

Further resources on this subject:
Common performance issues [article]
Running our first web application [article]
Various subsystem configurations [article]


Cross-browser Tests using Selenium WebDriver

Packt
25 Mar 2015
18 min read
This article by Prashanth Sams, author of the book Selenium Essentials, helps you to perform efficient compatibility tests. Here, we will also learn how to run tests on the cloud. You will cover the following topics in this article:

- Selenium WebDriver compatibility tests
- Selenium cross-browser tests on the cloud
- Selenium headless browser testing

(For more resources related to this topic, see here.)

Selenium WebDriver compatibility tests

Selenium WebDriver handles browser compatibility tests on almost every popular browser, including Chrome, Firefox, Internet Explorer, Safari, and Opera. In general, every browser's JavaScript engine differs from the others, and each browser interprets HTML tags differently. The WebDriver API drives the web browser as a real user would drive it. By default, FirefoxDriver comes with the selenium-server-standalone.jar library added; however, for Chrome, IE, Safari, and Opera, there are libraries that need to be added or instantiated externally. Let's see how we can instantiate each of the following browsers through its own driver:

Mozilla Firefox: The selenium-server-standalone library is bundled with FirefoxDriver to initialize and run tests in a Firefox browser. FirefoxDriver is added to the Firefox profile as a file extension on starting a new instance of FirefoxDriver. Please check the Firefox versions and their suitable drivers at http://selenium.googlecode.com/git/java/CHANGELOG. The following is the code snippet to kick start Mozilla Firefox:

WebDriver driver = new FirefoxDriver();

Google Chrome: Unlike FirefoxDriver, ChromeDriver is an external library file that makes use of WebDriver's wire protocol to run Selenium tests in a Google Chrome web browser. The following is the code snippet to kick start Google Chrome:

System.setProperty("webdriver.chrome.driver", "C:\\chromedriver.exe");
WebDriver driver = new ChromeDriver();

To download ChromeDriver, refer to http://chromedriver.storage.googleapis.com/index.html.

Internet Explorer: IEDriverServer is an executable file that uses the WebDriver wire protocol to control the IE browser in Windows. Currently, IEDriverServer supports the IE versions 6, 7, 8, 9, and 10. The following code snippet helps you to instantiate IEDriverServer:

System.setProperty("webdriver.ie.driver", "C:\\IEDriverServer.exe");
DesiredCapabilities dc = DesiredCapabilities.internetExplorer();
dc.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true);
WebDriver driver = new InternetExplorerDriver(dc);

To download IEDriverServer, refer to http://selenium-release.storage.googleapis.com/index.html.

Apple Safari: Similar to FirefoxDriver, SafariDriver is internally bound with the latest Selenium servers, which starts the Apple Safari browser without any external library. SafariDriver supports the Safari browser versions 5.1.x and runs only on Mac. For more details, refer to http://elementalselenium.com/tips/69-safari. The following code snippet helps you to instantiate SafariDriver:

WebDriver driver = new SafariDriver();

Opera: OperaPrestoDriver (formerly called OperaDriver) is available only for Presto-based Opera browsers. Currently, it does not support Opera versions 12.x and above. However, the recent releases (Opera 15.x and above) of Blink-based Opera browsers are handled using OperaChromiumDriver. For more details, refer to https://github.com/operasoftware/operachromiumdriver.
The following code snippet helps you to instantiate OperaChromiumDriver:

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("opera.binary", "C://Program Files (x86)//Opera//opera.exe");
capabilities.setCapability("opera.log.level", "CONFIG");
WebDriver driver = new OperaDriver(capabilities);

To download OperaChromiumDriver, refer to https://github.com/operasoftware/operachromiumdriver/releases.

TestNG

TestNG (Next Generation) is one of the most widely used unit-testing frameworks implemented for Java. It runs Selenium-based browser compatibility tests with the most popular browsers. Eclipse IDE users must ensure that the TestNG plugin is integrated with the IDE manually. However, the TestNG plugin is bundled with IntelliJ IDEA by default. The testng.xml file is a TestNG build file that controls test execution; the XML file can be run through Maven tests using POM.xml with the help of the following code snippet:

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-surefire-plugin</artifactId>
   <version>2.12.2</version>
   <configuration>
     <suiteXmlFiles>
       <suiteXmlFile>testng.xml</suiteXmlFile>
     </suiteXmlFiles>
   </configuration>
</plugin>

To create a testng.xml file, right-click on the project folder in the Eclipse IDE, navigate to TestNG | Convert to TestNG, and click on Convert to TestNG, as shown in the following screenshot:

The testng.xml file manages the entire test run; it acts as a mini data source by passing parameters directly into the test methods. The location of the testng.xml file is shown in the following screenshot:

As an example, create a Selenium project (for example, Selenium Essentials) along with the testng.xml file, as shown in the previous screenshot. Modify the testng.xml file with the following tags:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Suite" verbose="3" parallel="tests" thread-count="5">
  <test name="Test on Firefox">
    <parameter name="browser" value="Firefox" />
    <classes>
      <class name="package.classname" />
    </classes>
  </test>
  <test name="Test on Chrome">
    <parameter name="browser" value="Chrome" />
    <classes>
      <class name="package.classname" />
    </classes>
  </test>
  <test name="Test on InternetExplorer">
    <parameter name="browser" value="InternetExplorer" />
    <classes>
      <class name="package.classname" />
    </classes>
  </test>
  <test name="Test on Safari">
    <parameter name="browser" value="Safari" />
    <classes>
      <class name="package.classname" />
    </classes>
  </test>
  <test name="Test on Opera">
    <parameter name="browser" value="Opera" />
    <classes>
      <class name="package.classname" />
    </classes>
  </test>
</suite> <!-- Suite -->

Download all the external drivers except FirefoxDriver and SafariDriver, extract the zipped folders, and locate the external drivers in the test script as mentioned in the preceding snippets for each browser.
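The package.classname placeholder in the suite file refers to a TestNG test class on your classpath. As a rough sketch of what such a class looks like (the class and method names are our own illustration; the full browser-switching setUp method is shown in the next snippet), consider:

// Illustrative skeleton of the class referenced by <class name="..."> in testng.xml.
package packagename;

import org.openqa.selenium.WebDriver;
import org.testng.annotations.*;

public class CompatibilityTest {       // hypothetical class name

    private WebDriver driver;

    @BeforeTest
    @Parameters({"browser"})
    public void setUp(String browser) throws Exception {
        // Instantiate the driver that matches the "browser" parameter from testng.xml;
        // the complete switching logic is shown in the next snippet.
    }

    @Test
    public void compatibilityCheck() {
        driver.get("http://www.google.com");
        // assertions against the page go here
    }

    @AfterTest
    public void tearDown() {
        driver.quit();
    }
}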
The following Java snippet will explain how you can get parameters directly from the testng.xml file and how you can run cross-browser tests as a whole:

@BeforeTest
@Parameters({"browser"})
public void setUp(String browser) throws MalformedURLException {
  if (browser.equalsIgnoreCase("Firefox")) {
    System.out.println("Running Firefox");
    driver = new FirefoxDriver();
  } else if (browser.equalsIgnoreCase("chrome")) {
    System.out.println("Running Chrome");
    System.setProperty("webdriver.chrome.driver", "C:\\chromedriver.exe");
    driver = new ChromeDriver();
  } else if (browser.equalsIgnoreCase("InternetExplorer")) {
    System.out.println("Running Internet Explorer");
    System.setProperty("webdriver.ie.driver", "C:\\IEDriverServer.exe");
    DesiredCapabilities dc = DesiredCapabilities.internetExplorer();
    // If IE fails to work, remove the capability below and instead set Protected Mode
    // to the same value for all four zones in Internet Options.
    dc.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true);
    driver = new InternetExplorerDriver(dc);
  } else if (browser.equalsIgnoreCase("safari")) {
    System.out.println("Running Safari");
    driver = new SafariDriver();
  } else if (browser.equalsIgnoreCase("opera")) {
    System.out.println("Running Opera");
    // driver = new OperaDriver();   // use this if the Opera binary location is set properly
    DesiredCapabilities capabilities = new DesiredCapabilities();
    capabilities.setCapability("opera.binary", "C://Program Files (x86)//Opera//opera.exe");
    capabilities.setCapability("opera.log.level", "CONFIG");
    driver = new OperaDriver(capabilities);
  }
}

SafariDriver is not yet stable. A few of the major issues in SafariDriver are as follows:

- SafariDriver won't work properly in Windows
- SafariDriver does not support modal dialog box interaction
- You cannot navigate forward or backwards in browser history through SafariDriver

Selenium cross-browser tests on the cloud

The ability to automate Selenium tests on the cloud is quite interesting, with instant access to real devices. Sauce Labs, BrowserStack, and TestingBot are the leading web-based tools used for cross-browser compatibility checking. These tools contain unique test automation features, such as diagnosing failures through screenshots and video, executing parallel tests, running Appium mobile automation tests, executing tests on internal local servers, and so on.

SauceLabs

SauceLabs is the standard Selenium test automation web app for cross-browser compatibility tests on the cloud. It lets you automate tests in your favorite programming languages using test frameworks such as JUnit, TestNG, RSpec, and many more. SauceLabs cloud tests can also be executed from the Selenium Builder IDE interface. Check the available SauceLabs devices, operating systems, and platforms at https://saucelabs.com/platforms. Access the website from your web browser, log in, and obtain the Sauce username and Access Key. Make use of the obtained credentials to drive tests over the SauceLabs cloud. SauceLabs creates a new instance of a virtual machine while launching the tests. Parallel automation tests are also possible using SauceLabs.
The following is a Java program to run tests over the SauceLabs cloud (the imports missing from the original listing have been added so that the class compiles):

package packagename;

import java.lang.reflect.Method;
import java.net.URL;

import org.openqa.selenium.By;
import org.openqa.selenium.Platform;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.Assert;
import org.testng.annotations.*;

public class saucelabs {

  private WebDriver driver;

  @Parameters({"username", "key", "browser", "browserVersion"})
  @BeforeMethod
  public void setUp(@Optional("yourusername") String username,
                    @Optional("youraccesskey") String key,
                    @Optional("iphone") String browser,
                    @Optional("5.0") String browserVersion,
                    Method method) throws Exception {

    // Choose the browser, version, and platform to test
    DesiredCapabilities capabilities = new DesiredCapabilities();
    capabilities.setBrowserName(browser);
    capabilities.setCapability("version", browserVersion);
    capabilities.setCapability("platform", Platform.MAC);
    capabilities.setCapability("name", method.getName());

    // Create the connection to SauceLabs to run the tests
    this.driver = new RemoteWebDriver(
        new URL("http://" + username + ":" + key + "@ondemand.saucelabs.com:80/wd/hub"),
        capabilities);
  }

  @Test
  public void Selenium_Essentials() throws Exception {
    // Make the browser get the page and check its title
    driver.get("http://www.google.com");
    System.out.println("Page title is: " + driver.getTitle());
    Assert.assertEquals("Google", driver.getTitle());
    WebElement element = driver.findElement(By.name("q"));
    element.sendKeys("Selenium Essentials");
    element.submit();
  }

  @AfterMethod
  public void tearDown() throws Exception {
    driver.quit();
  }
}

SauceLabs has a setup similar to BrowserStack for test execution and generates detailed logs. The breakpoints feature allows the user to manually take control over the virtual machine and pause tests, which helps the user investigate and debug problems. By capturing JavaScript's console log, the JS errors and network requests are displayed for quick diagnosis while running tests against the Google Chrome browser.

BrowserStack

BrowserStack is a cloud-testing web app that gives you instant access to virtual machines. It allows users to perform multi-browser testing of their applications on different platforms. It provides a setup similar to SauceLabs for cloud-based automation using Selenium. Access the site https://www.browserstack.com from your web browser, log in, and obtain the BrowserStack username and Access Key. Make use of the obtained credentials to drive tests over the BrowserStack cloud. For example, the following generic Java program with TestNG provides a detailed overview of the process that runs on the BrowserStack cloud. Customize the browser name, version, platform, and so on, using capabilities.
Let's see the Java program we just talked about (again, the imports missing from the original listing have been added):

package packagename;

import java.net.URL;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.Assert;
import org.testng.annotations.*;

public class browserstack {

  public static final String USERNAME = "yourusername";
  public static final String ACCESS_KEY = "youraccesskey";
  public static final String URL = "http://" + USERNAME + ":" + ACCESS_KEY + "@hub.browserstack.com/wd/hub";

  private WebDriver driver;

  @BeforeClass
  public void setUp() throws Exception {
    DesiredCapabilities caps = new DesiredCapabilities();
    caps.setCapability("browser", "Firefox");
    caps.setCapability("browser_version", "23.0");
    caps.setCapability("os", "Windows");
    caps.setCapability("os_version", "XP");
    caps.setCapability("browserstack.debug", "true"); // This enables Visual Logs

    driver = new RemoteWebDriver(new URL(URL), caps);
  }

  @Test
  public void testOnCloud() throws Exception {
    driver.get("http://www.google.com");
    System.out.println("Page title is: " + driver.getTitle());
    Assert.assertEquals("Google", driver.getTitle());
    WebElement element = driver.findElement(By.name("q"));
    element.sendKeys("seleniumworks");
    element.submit();
  }

  @AfterClass
  public void tearDown() throws Exception {
    driver.quit();
  }
}

The app generates and stores test logs for the user to access anytime. The generated logs provide a detailed analysis with step-by-step explanations. To enhance the test speed, run parallel Selenium tests on the BrowserStack cloud; however, the automation plan has to be upgraded to increase the number of parallel test runs.

TestingBot

TestingBot also provides a setup similar to BrowserStack and SauceLabs for cloud-based cross-browser test automation using Selenium. It records a video of the running tests to analyze problems and debug. Additionally, it provides support to capture screenshots on test failure. To run local Selenium tests, it provides an SSH tunnel tool that lets you run tests against local servers or other web servers. TestingBot uses Amazon's cloud infrastructure to run Selenium scripts in various browsers. Access the site https://testingbot.com/, log in, and obtain the Client Key and Client Secret from your TestingBot account. Make use of the obtained credentials to drive tests over the TestingBot cloud.
Let's see an example Java test program with TestNG, using the Eclipse IDE, that runs on the TestingBot cloud (the imports missing from the original listing have been added, and ClientKey/ClientSecret are placeholders for your own credentials):

package packagename;

import java.net.URL;

import org.openqa.selenium.By;
import org.openqa.selenium.Platform;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.Assert;
import org.testng.annotations.*;

public class testingbot {

  private WebDriver driver;

  @BeforeClass
  public void setUp() throws Exception {
    DesiredCapabilities capabillities = DesiredCapabilities.firefox();
    capabillities.setCapability("version", "24");
    capabillities.setCapability("platform", Platform.WINDOWS);
    capabillities.setCapability("name", "testOnCloud");
    capabillities.setCapability("screenshot", true);
    capabillities.setCapability("screenrecorder", true);
    driver = new RemoteWebDriver(
        new URL("http://ClientKey:ClientSecret@hub.testingbot.com:4444/wd/hub"),
        capabillities);
  }

  @Test
  public void testOnCloud() throws Exception {
    driver.get("http://www.google.co.in/?gws_rd=cr&ei=zS_mUryqJoeMrQf-yICYCA");
    driver.findElement(By.id("gbqfq")).clear();
    WebElement element = driver.findElement(By.id("gbqfq"));
    element.sendKeys("selenium");
    Assert.assertEquals("selenium - Google Search", driver.getTitle());
  }

  @AfterClass
  public void tearDown() throws Exception {
    driver.quit();
  }
}

Click on the Tests tab to check the log results. The logs are well organized with test steps, screenshots, videos, and a summary. Screenshots are captured on each and every step to make the tests more precise, as follows:

capabillities.setCapability("screenshot", true); // screenshot
capabillities.setCapability("screenrecorder", true); // video capture

TestingBot provides a unique feature for scheduling and running tests directly from the site. The tests can be prescheduled to repeat any number of times on a daily or weekly basis. It is even more precise about scheduling the test start time. You will be apprised of test failures with an alert through e-mail, an API call, an SMS, or a Prowl notification. This feature enables error handling to rerun failed tests automatically as per the user settings.

Launch Selenium IDE, record tests, and save the test case or test suite in the default format (HTML). Access the https://testingbot.com/ URL from your web browser and click on the Test Lab tab. Now, try to upload the already-saved Selenium test case, and select the OS platform and browser name and version. Finally, save the settings and execute the tests. The test results are recorded and displayed under Tests.

Selenium headless browser testing

A headless browser is a web browser without a Graphical User Interface (GUI). It accesses and renders web pages but doesn't show them to any human being. A headless browser should be able to parse JavaScript. Currently, most systems encourage tests against headless browsers due to their efficiency and time-saving properties. PhantomJS and HTMLUnit are the most commonly used headless browsers. Capybara-webkit is another efficient headless WebKit for Rails-based applications.

PhantomJS

PhantomJS is a headless WebKit scriptable with a JavaScript API. It is generally used for headless testing of web applications, and it comes with GhostDriver built in. Tests on PhantomJS are obviously fast since it has fast, native support for various web standards, such as DOM handling, CSS selectors, JSON, canvas, and SVG. In general, WebKit is a layout engine that allows web browsers to render web pages. Some browsers, such as Safari and Chrome, use WebKit.
Apparently, PhantomJS is not a test framework; it is a headless browser that is used only to launch tests via a suitable test runner called GhostDriver. GhostDriver is a JS implementation of the WebDriver Wire Protocol for PhantomJS; the WebDriver Wire Protocol is a standard API that communicates with the browser. By default, GhostDriver is embedded in PhantomJS.

To download PhantomJS, refer to http://phantomjs.org/download.html. Download PhantomJS, extract the zipped file (for example, phantomjs-1.x.x-windows.zip for Windows), and locate the phantomjs.exe file.

Add the following imports to your test code:

import org.openqa.selenium.phantomjs.PhantomJSDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriverService;
import org.openqa.selenium.remote.DesiredCapabilities;

Introduce PhantomJSDriver using capabilities to enable or disable JavaScript or to locate the phantomjs executable file path:

DesiredCapabilities caps = new DesiredCapabilities();
caps.setCapability("takesScreenshot", true);
caps.setJavascriptEnabled(true); // not really needed; JS is enabled by default
caps.setCapability(PhantomJSDriverService.PHANTOMJS_EXECUTABLE_PATH_PROPERTY, "C:/phantomjs.exe");
WebDriver driver = new PhantomJSDriver(caps);

Alternatively, PhantomJSDriver can also be initialized as follows:

System.setProperty("phantomjs.binary.path", "/phantomjs.exe");
WebDriver driver = new PhantomJSDriver();

PhantomJS supports screen capture as well. Since PhantomJS is a WebKit and a real layout and rendering engine, it is feasible to capture a web page as a screenshot. It can be set as follows:

caps.setCapability("takesScreenshot", true);

The following is the test snippet to capture a screenshot on a test run:

File scrFile = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
FileUtils.copyFile(scrFile, new File("c:\\sample.jpeg"), true);

For example, check the following test program for more details (the TestNG annotation import missing from the original listing has been added):

package packagename;

import java.io.File;
import java.util.concurrent.TimeUnit;

import org.apache.commons.io.FileUtils;
import org.openqa.selenium.*;
import org.openqa.selenium.phantomjs.PhantomJSDriver;
import org.testng.annotations.*;

public class phantomjs {

  private WebDriver driver;
  private String baseUrl;

  @BeforeTest
  public void setUp() throws Exception {
    System.setProperty("phantomjs.binary.path", "/phantomjs.exe");
    driver = new PhantomJSDriver();
    baseUrl = "https://www.google.co.in";
    driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
  }

  @Test
  public void headlesstest() throws Exception {
    driver.get(baseUrl + "/");
    driver.findElement(By.name("q")).sendKeys("selenium essentials");
    File scrFile = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
    FileUtils.copyFile(scrFile, new File("c:\\screen_shot.jpeg"), true);
  }

  @AfterTest
  public void tearDown() throws Exception {
    driver.quit();
  }
}

HTMLUnitDriver

HTMLUnit is a headless (GUI-less) browser written in Java and is typically used for testing. HTMLUnitDriver, which is based on HTMLUnit, is the fastest and most lightweight implementation of WebDriver. It runs tests using plain HTTP requests, which is quicker than launching a browser, and it executes tests much faster than other drivers. HTMLUnitDriver is bundled with the latest Selenium servers (2.35 or above). The JavaScript engine used by HTMLUnit (Rhino) is different from those of the other popular browsers on the market. HTMLUnitDriver supports JavaScript and is platform independent. By default, JavaScript support in HTMLUnitDriver is disabled.
Enabling JavaScript in HTMLUnitDriver slows down test execution; however, it is advised to enable JavaScript support because most modern sites are Ajax-based web apps. Enabling JavaScript also produces a number of JavaScript warning messages in the console during test execution. The following snippet lets you enable JavaScript for HTMLUnitDriver:

HtmlUnitDriver driver = new HtmlUnitDriver();
driver.setJavascriptEnabled(true); // enable JavaScript

The following line of code is an alternate way to enable JavaScript:

HtmlUnitDriver driver = new HtmlUnitDriver(true);

The following piece of code lets you handle a transparent proxy using HTMLUnitDriver:

HtmlUnitDriver driver = new HtmlUnitDriver();
driver.setProxy("xxx.xxx.xxx.xxx", port); // set proxy for handling a transparent proxy
driver.setJavascriptEnabled(true); // enable JavaScript [this emulates IE's JS by default]

HTMLUnitDriver can emulate the popular browsers' JavaScript quite well. By default, HTMLUnitDriver emulates IE's JavaScript. For example, to handle the Firefox web browser with version 17, use the following snippet:

HtmlUnitDriver driver = new HtmlUnitDriver(BrowserVersion.FIREFOX_17);
driver.setJavascriptEnabled(true);

Here is the snippet to emulate a specific browser's JavaScript using capabilities:

DesiredCapabilities capabilities = DesiredCapabilities.htmlUnit();
driver = new HtmlUnitDriver(capabilities);

DesiredCapabilities capabilities = DesiredCapabilities.firefox();
capabilities.setBrowserName("Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Firefox/24.0");
capabilities.setVersion("24.0");
driver = new HtmlUnitDriver(capabilities);

Summary

In this article, you learned how to perform efficient compatibility tests and how to run tests on the cloud.

Resources for Article:

Further resources on this subject:
Selenium Testing Tools [article]
First Steps with Selenium RC [article]
Quick Start into Selenium Tests [article]