
How-To Tutorials - Web Development

Testing in Node and Hapi

Packt
09 Feb 2016
22 min read
In this article by John Brett, the author of the book Getting Started with Hapi.js, we are going to explore the topic of testing in node and hapi. We will look at what is involved in writing a simple test using hapi's test runner, lab; how to test hapi applications; techniques to make testing easier; and finally how to achieve the all-important 100% code coverage.

The benefits and importance of testing code

Technical debt is developmental work that must be done before a particular job is complete; left unpaid, it makes future changes much harder to implement. A codebase without tests is a clear indication of technical debt. Let's explore this statement in more detail.

Even very simple applications will generally comprise:

- Features, which the end user interacts with
- Shared services, such as authentication and authorization, that features interact with

These will all generally depend on some direct persistent storage or API. Finally, to implement most of these features and services, we will use libraries, frameworks, and modules, regardless of language. So, even for simpler applications, we have already arrived at a few dependencies to manage, where a change that causes a break in one place could possibly break everything up the chain.

So let's take a common use case, in which a new version of one of your dependencies is released. This could be a new hapi version, a smaller library, your persistent storage engine, MySQL, MongoDB, or even an operating system or language version. SemVer, as mentioned previously, attempts to mitigate this somewhat, but you are taking someone at their word when they say that they have adhered to it correctly, and SemVer is not used everywhere. So, in the case of a break-causing change: will the current application work with this new dependency version? What will fail? What percentage of tests fail? What's the risk if we don't upgrade? Will support eventually be dropped, including security patches?

Without a good automated test suite, these questions have to be answered by manual testing, which is a huge waste of developer time. Development progress stops every time these tasks have to be done, meaning that they are rarely done, building further technical debt. Apart from this, humans are proven to be poor at repetitive tasks and prone to error, and I know I personally don't enjoy testing manually, which makes me poor at it. I view repetitive manual testing like this as time wasted, as these questions could easily be answered by running a test suite against the new dependency, so that developer time could be spent on something more productive.

Now, let's look at a worse and even more common example: a security exploit has been identified in one of your dependencies. As mentioned previously, if it's not easy to update, you won't do it often, so you could be on an outdated version that won't receive this security update. Now you have to jump multiple versions at once and scramble to test them manually. This usually means many quick fixes, which often just cause more bugs. In my experience, code changes under pressure are what deteriorate the structure and readability of a codebase, lead to a much higher number of bugs, and are a clear sign of poor planning.

A good development team will, instead of looking at what is currently available, look ahead to what is in beta and will know ahead of time if they expect to run into issues.
The questions asked will be: Will our application break in the next version of Chrome? What about the next version of node? Hapi does this by running the full test suite against future versions of node in order to alert the node community of how planned changes will impact hapi and the node community as a whole. This is what we should all aim to do as developers.

A good test suite has even bigger advantages when working in a team or when adding new developers to a team. Most development teams start out small and grow, meaning all the knowledge of the initial development needs to be passed on to new developers joining the team. So, how do tests lead to a benefit here? For one, tests are great documentation on how parts of the application work for other members of a team. When trying to communicate a problem in an application, a failing test is a perfect illustration of what and where the problem is.

When working as a team, for every code change from yourself or another member of the team, you're faced with the preceding problem of changing a dependency. Do we just test the code that was changed? What about the code that depends on the changed code? Is it going to be manual testing again? If this is the case, how much time in a week would be spent on manual testing versus development? Often, with changes, existing functionality can be broken along with new functionality, which is called regression. Having a good test suite highlights this and makes it much easier to prevent. These are the questions and topics that need to be answered when discussing the importance of tests.

Writing tests can also improve code quality. For one, identifying dead code is much easier when you have a good testing suite. If you find that you can only get 90% code coverage, what does the extra 10% do? Is it used at all if it's unreachable? Does it break other parts of the application if removed? Writing tests will often improve your skills in writing easily testable code. Software applications usually grow to be complex pretty quickly; it happens, but we always need to be active in dealing with this, or software complexity will win. A good test suite is one of the best tools we have to tackle this.

The preceding is not an exhaustive list of the benefits of writing tests for your code, but hopefully it has convinced you of the importance of having a good testing suite. So, now that we know why we need to write good tests, let's look at hapi's test runner lab and assertion library code, and how, along with some tools from hapi, they make the process of writing tests much easier and a more enjoyable experience.

Introducing hapi's testing utilities

The test runner in the hapi ecosystem is called lab. If you're not familiar with test runners, they are command-line interface tools for running your testing suite. Lab was inspired by a similar test tool called mocha, if you are familiar with it, and in fact initially began as a fork of the mocha codebase. But, as hapi's needs diverged from the original focus of mocha, lab was born.

The assertion library commonly used in the hapi ecosystem is code. An assertion library forms the part of a test that performs the actual checks to judge whether a test case has passed or not, for example, checking that the value of a variable is true after an action has been taken.
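To make the idea of an assertion concrete before we look at a full test file, here is a standalone sketch (my own, not from the book) using the code library that we will install in the next section; a failing assertion simply throws an error:

```javascript
const Code = require('code');

// Passes silently: the values match
Code.expect('hello').to.equal('hello');

// Throws an error describing the mismatch
Code.expect(1 + 1).to.equal(3);
```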
Let's look at our first test script; then, we can take a deeper look at lab and code, how they function under the hood, and some of the differences they have from other commonly used libraries, such as mocha and chai.

Installing lab and code

You can install lab and code the same as any other module on npm:

```
npm install lab code --save-dev
```

Note the --save-dev flag added to the install command here. Remember your package.json file, which describes an npm module? This adds the modules to the devDependencies section of your npm module. These are dependencies that are required for the development and testing of a module but are not required for using the module. The reason why these are separated is that when we run npm install in an application codebase, it only installs the dependencies and devDependencies of the package.json in that directory. For all the modules installed, only their dependencies are installed, not their development dependencies. This is because we only want to download the dependencies required to run that application; we don't need to download all the development dependencies for every module. To install the development dependencies of a particular module, navigate to the root directory of the module and run npm install.

After you have installed lab, you can then run it with the following:

```
./node_modules/lab/bin/lab test.js
```

This is quite long to type every time, but fortunately, due to a handy feature of npm called npm scripts, we can shorten it. If you look at the package.json generated by npm init in the first chapter, depending on your version of npm, you may see the following (some code removed for brevity):

```
...
"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1"
},
...
```

Scripts are a list of commands related to the project; they can be for testing purposes, as we will see in this example; to start an application; for build steps; and to start extra servers, among many other options. They offer huge flexibility in how these are combined to manage scripts related to a module or application, and I could spend a chapter, or even a book, on just these, but they are outside the scope of this book, so let's just focus on what is important to us here.

To get a list of available scripts for a module or application, in the module directory, simply run:

```
$ npm run
```

To then run a listed script, such as test, you can just use:

```
$ npm run test
```

As you can see, this gives a very clean API for scripts and documents each of them in the project's package.json. From this point on in this book, all code snippets will use npm scripts to test or run any examples. We should strive to use these in our projects to simplify and document commands related to applications and modules for ourselves and others.

Let's now add the ability to run a test file to our package.json file. This just requires modifying the scripts section to be the following:

```
...
"scripts": {
  "test": "./node_modules/lab/bin/lab ./test/index.js"
},
...
```

It is common practice in node to place all tests in a project within the test directory.
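Putting the pieces together, a complete package.json at this stage might look like the following sketch; the module name and all version numbers are illustrative placeholders, not values from the book:

```json
{
  "name": "my-hapi-app",
  "version": "1.0.0",
  "dependencies": {
    "hapi": "11.x.x"
  },
  "devDependencies": {
    "code": "2.x.x",
    "lab": "7.x.x"
  },
  "scripts": {
    "test": "./node_modules/lab/bin/lab ./test/index.js"
  }
}
```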
A handy addition to note here is that, when calling a command with npm run, the bin directory of every module in your node_modules directory is added to the PATH when running these scripts, so we can actually shorten this script to:

```
...
"scripts": {
  "test": "lab ./test/index.js"
},
...
```

This type of module install is considered local, as the dependency is local to the application directory it is being run in. While I believe this is how we should all install our modules, it is worth pointing out that it is also possible to install a module globally. This means that when installing something like lab, it is immediately added to the PATH and can be run from anywhere. We do this by adding a -g flag to the install, as follows:

```
$ npm install lab code -g
```

This may appear handier than having to add npm scripts or run commands locally outside of an npm script, but it should be avoided where possible. Often, installing globally requires sudo to run, meaning you are taking a script from the Internet and allowing it to have complete access to your system. Hopefully, the security concerns here are obvious. Other than that, different projects may use different versions of test runners, assertion libraries, or build tools, which can have unknown side effects and cause debugging headaches. The only time I would use globally installed modules is for command-line tools that I may use outside a particular project, for example, a node-based terminal IDE such as slap (https://www.npmjs.com/package/slap) or a process manager such as PM2 (https://www.npmjs.com/package/pm2), but never with sudo!

Now that we are familiar with installing lab and code and the different ways of running them inside and outside of npm scripts, let's look at writing our first test script and take a more in-depth look at lab and code.

Our first test script

Let's take a look at what a simple test script in lab looks like using the code assertion library:

```javascript
const Code = require('code');                     // [1]
const Lab = require('lab');                       // [1]
const lab = exports.lab = Lab.script();           // [2]

lab.experiment('Testing example', () => {         // [3]

    lab.test('fails here', (done) => {            // [4]

        Code.expect(false).to.be.true();          // [4]
        return done();                            // [4]
    });                                           // [4]

    lab.test('passes here', (done) => {           // [4]

        Code.expect(true).to.be.true();           // [4]
        return done();                            // [4]
    });                                           // [4]
});
```

This script, even though small, includes a number of new concepts, so let's go through it with reference to the numbers in the preceding code:

[1]: Here, we just include the code and lab modules, as we would any other node module.

[2]: As mentioned before, it is common convention to place all test files within the test directory of a project. However, there may be JavaScript files in there that aren't tests and therefore should not be run. To avoid this, we inform lab of which files are test scripts by calling Lab.script() and assigning the value to lab and exports.lab.

[3]: The lab.experiment() function (aliased as lab.describe()) is just a way to group tests neatly. In test output, tests will have the experiment string prefixed to the message, for example, "Testing example fails here". This is optional, however.

[4]: These are the actual test cases. Here, we define the name of the test and pass a callback function with the parameter function done(). We see code in action here, managing our assertions. And finally, we call the done() function when finished with our test case.

Things to note here: lab tests are always asynchronous.
In every test, we have to call done() to finish the test; there is no counting of function parameters or checking whether synchronous functions have completed in order to ensure that a test is finished. Although this requires the boilerplate of calling the done() function at the end of every test, it means that all tests, synchronous or asynchronous, have a consistent structure.

In Chai, which was originally used for hapi, some of the assertions, such as .ok, .true, and .false, use properties instead of functions, while assertions like .equal() and .above() use functions. This type of inconsistency leads to us easily forgetting that an assertion should be a method call and hence omitting the (). This means that the assertion is never called and the test may pass as a false positive. Code's API is more consistent in that every assertion is a function call. Here is a comparison of the two:

Chai:

```javascript
expect('hello').to.equal('hello');
expect(foo).to.exist;
```

Code:

```javascript
expect('hello').to.equal('hello');
expect(foo).to.exist();
```

Notice the difference in the second exist() assertion. In Chai, you see the property form of the assertion, while in Code, you see the required function call. Through this, lab can make sure all assertions within a test case are actually called before done() completes, or it will fail the test.

So let's try running our first test script. As we already updated our package.json script, we can run our test with the following command:

```
$ npm run test
```

This will run the tests and print the results to the console. There are a couple of things to note from the output. Tests run are symbolized with a . or an X, depending on whether they pass or not. You can get a list of the full test titles by adding the -v or --verbose flag to our npm test script command. There are lots of flags to customize the running and output of lab, so I recommend using the full labels for each of these, for example, --verbose and --lint instead of -v and -l, in order to save you the time spent referring back to the documentation each time.

You may have noticed the No global variable leaks detected message at the bottom of the output. Lab assumes that the global object won't be polluted and checks that no extra properties have been added after running tests. Lab can be configured to not run this check or to whitelist certain globals. Details of this are in the lab documentation available at https://github.com/hapijs/lab.

Testing approaches

What we have used here is just one of many known approaches to structuring a test suite; BDD (Behavior Driven Development) is another, and like most test runners in node, lab is unopinionated about how you structure your tests. Details of how to structure your tests in a BDD style can again be found easily in the lab documentation.

Testing with hapi

As I mentioned before, testing is considered paramount in the hapi ecosystem, with every module in the ecosystem having to maintain 100% code coverage at all times, as do all module dependencies. Fortunately, hapi provides us with some tools to make the testing of hapi apps much easier through a module called Shot, which simulates network requests to a hapi server.
Let's take the example of a Hello World server and write a simple test for it:

```javascript
const Code = require('code');
const Lab = require('lab');
const Hapi = require('hapi');

const lab = exports.lab = Lab.script();

lab.test('It will return Hello World', (done) => {

    const server = new Hapi.Server();
    server.connection();

    server.route({
        method: 'GET',
        path: '/',
        handler: function (request, reply) {

            return reply('Hello World\n');
        }
    });

    server.inject('/', (res) => {

        Code.expect(res.statusCode).to.equal(200);
        Code.expect(res.result).to.equal('Hello World\n');
        done();
    });
});
```

Now that we are more familiar with what a test script looks like, most of this will look familiar. However, you may have noticed that we never started our hapi server, meaning no port was ever assigned; thanks to the shot module (https://github.com/hapijs/shot), we can still make requests against it using the server.inject API. Not having to start a server means less setup and teardown before and after tests, and it means that a test suite can run quicker, as fewer resources are required. server.inject uses the same API whether the server has been started or not.

Code coverage

As I mentioned earlier in the article, having 100% code coverage is paramount in the hapi ecosystem and, in my opinion, hugely important for any application. Without a code coverage target, writing tests can feel like an empty or unrewarding task where we don't know how many tests are enough or how much of our application or module has been covered. With any task, we should know what our goal is; testing is no different, and this is what code coverage gives us. Even with 100% coverage, things can still go wrong, but it means that, at the very least, every line of code has been considered and has at least one test covering it. I've found from working on modules for hapi that trying to achieve 100% code coverage actually gamifies the process of writing tests, making it a more enjoyable experience overall.

Fortunately, lab has code coverage integrated, so we don't need to rely on an extra module to achieve this. It's as simple as adding the --coverage or -c flag to our test script command. Under the hood, lab will then build an abstract syntax tree so it can evaluate which lines are executed, thus producing our coverage, which will be added to the console output when we run tests. The coverage report will also highlight which lines are not covered by tests, which is extremely useful in identifying where to focus your testing effort.

It is also possible to enforce a minimum threshold for the percentage of code coverage required to pass a suite of tests, through the --threshold or -t flag followed by an integer. This is used for all the modules in the hapi ecosystem, where all thresholds are set to 100. Having a threshold of 100% for code coverage makes it much easier to manage changes to a codebase. When any update or pull request is submitted, the test suite is run against the changes, so we can know that all tests have passed and all code is covered before we even look at what has been changed in the proposed submission. There are services that even automate this process for us, such as TravisCI (https://travis-ci.org/). It's also worth knowing that the coverage report can be displayed in a number of formats; for a full list of these reporters with explanations, I suggest reading the lab documentation available at https://github.com/hapijs/lab.
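As a rough sketch of how this looks on the command line: the -r (reporter) and -o (output) flags are documented lab options, but the exact reporter names available depend on your lab version, so treat the html reporter below as an assumption and check the lab documentation:

```
# Print coverage to the console along with the test results
$ lab -c ./test/index.js

# Assumed example: write an HTML coverage report to a file instead
$ lab -c -r html -o coverage.html ./test/index.js
```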
Let's now look at what's involved in getting 100% coverage for our previous example. First of all, we'll move our server code to a separate file, which we will place in the lib folder and call index.js. It's worth noting here that it's good testing practice, and also the typical module structure in the hapi ecosystem, to place all module code in a folder called lib and the associated tests for each file within lib in a folder called test, preferably with a one-to-one mapping like we have done here, where all the tests for lib/index.js are in test/index.js. When trying to find out how a feature within a module works, the one-to-one mapping makes it much easier to find the associated tests and see examples of it in use.

So, having separated our server from our tests, let's look at what our two files now look like; first, ./lib/index.js:

```javascript
const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection();

server.route({
    method: 'GET',
    path: '/',
    handler: function (request, reply) {

        return reply('Hello World\n');
    }
});

module.exports = server;
```

The main change here is that we export our server at the end for another file to acquire and start it if necessary. Our test file at ./test/index.js will now look like this:

```javascript
const Code = require('code');
const Lab = require('lab');
const server = require('../lib/index.js');

const lab = exports.lab = Lab.script();

lab.test('It will return Hello World', (done) => {

    server.inject('/', (res) => {

        Code.expect(res.statusCode).to.equal(200);
        Code.expect(res.result).to.equal('Hello World\n');
        done();
    });
});
```

Finally, for us to test our code coverage, we update our npm test script to include the coverage flag --coverage or -c. The final example of this is in the second example of the source code of Chapter 4, Adding Tests and the Importance of 100% Coverage, which is supplied with this book. If you run this, you'll find that we actually already have 100% of the code covered with this one test.

An interesting exercise here would be to find out which versions of hapi this code functions correctly with. At the time of writing, this code was written for hapi version 11.x.x on node.js version 4.0.0. Will it work if run with hapi version 9 or 10? You can test this now by installing an older version with the help of the following command:

```
$ npm install hapi@10
```

This will give you an idea of how easy it can be to check whether your codebase works with different versions of libraries. If you have some time, it would be interesting to see how this example runs on different versions of node (hint: it breaks on any version earlier than 4.0.0).

In this example, we got 100% code coverage with one test. Unfortunately, we are rarely this fortunate; as we increase the complexity of our codebase, so will the complexity of our tests increase, which is where knowledge of writing testable code comes in. This is something that comes with practice, by writing tests while writing application or module code.

Linting

Also built into lab is linting support. Linting enforces a consistent code style, which can be specified through an .eslintrc or .jshintrc file. By default, lab will enforce the hapi style guide rules. The idea of linting is that all code will have the same structure, making it much easier to spot bugs and keep code tidy. As JavaScript is a very flexible language, linters are used regularly to forbid bad practices such as global or unused variables. To enable the lab linter, simply add the linter flag to the test command, which is --lint or -L.
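As a sketch of how these flags combine (an assumption based on the flags introduced above, not a snippet from the book), an npm test script enforcing coverage, a 100% threshold, and linting in one command might look like this:

```
...
"scripts": {
  "test": "lab -c -t 100 -L ./test/index.js"
},
...
```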
I generally stick with the default hapi style guide rules, as they are chosen to promote easy-to-read code that is easily testable and forbid many bad practices. However, it's easy to customize the linting rules used; for this, I recommend referring to the lab documentation.

Summary

In this article, we covered testing in node and hapi and how testing and code coverage are paramount in the hapi ecosystem. We saw the justification for their place in application development and how they can make us more productive developers. We also introduced the ecosystem's test runner, lab, and assertion library, code, learned how to use them to write simple tests, and saw how to use the tools provided by lab and hapi to test hapi applications. Finally, we learned about some of the extra features baked into lab, such as code coverage and linting: we looked at how to measure the code coverage of an application and get it to 100%, and how the hapi ecosystem applies the hapi style guide to all modules using lab's linting integration.

Resources for Article:

Further resources on this subject:
- Welcome to JavaScript in the full stack [article]
- A Typical JavaScript Project [article]
- An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js [article]


Running Simpletest and PHPUnit

Packt
09 Feb 2016
4 min read
In this article by Matt Glaman, the author of the book Drupal 8 Development Cookbook, we will kick off with an introduction to getting a Drupal 8 site installed and running.

Running Simpletest and PHPUnit

Drupal 8 ships with two testing suites. Previously, Drupal only supported Simpletest; now there are PHPUnit tests as well. In the official change record, PHPUnit was added to provide testing without requiring a full Drupal bootstrap, which occurs with each Simpletest test. Read the change record here: https://www.drupal.org/node/2012184.

Getting ready

Currently, core comes with Composer dependencies prepackaged, and no extra steps need to be taken to run PHPUnit. This article will demonstrate how to run tests the same way that the QA testbot on Drupal.org does. The process of managing Composer dependencies may change, but such changes are currently postponed due to Drupal.org's testing and packaging infrastructure. Read more here: https://www.drupal.org/node/1475510.

How to do it…

1. First, enable the Simpletest module. Even though you might only want to run PHPUnit, this is a soft dependency for running the test runner script.

2. Open a command-line terminal, navigate to your Drupal installation directory, and run the following to execute all available PHPUnit tests:

```
php core/scripts/run-tests.sh PHPUnit
```

3. Running Simpletest tests requires executing the same script; however, instead of passing PHPUnit as the argument, you must define the URL option and the tests option:

```
php core/scripts/run-tests.sh --url http://localhost --all
```

4. Review the test output!

How it works…

The run-tests.sh script has shipped with Drupal since 2008, when it was named run-functional-tests.php. The command interacts with the test suites in Drupal to run all or specific tests and sets up other configuration items. We will highlight some of the useful options below:

- --help: This displays the items covered in the following bullets
- --list: This displays the available test groups that can be run
- --url: This is required unless the Drupal site is accessible through http://localhost:80
- --sqlite: This allows you to run Simpletest without having Drupal installed
- --concurrency: This allows you to define how many tests run in parallel

There's more…

Is run-tests a shell script?

run-tests.sh isn't actually a shell script. It is a PHP script, which is why you must execute it with PHP. In fact, within core/scripts, each file is a PHP script meant to be executed from the command line. These scripts are not intended to be run through a web server, which is one of the reasons for the .sh extension. There are issues with discovering the PHP binary across platforms that prevent providing a shebang line to allow executing the file as a normal bash or bat script. For more info, view this Drupal.org issue: https://www.drupal.org/node/655178.

Running Simpletest without Drupal installed

With Drupal 8, Simpletest can be run using SQLite and no longer requires an installed database. This can be accomplished by passing the sqlite and dburl options to the run-tests.sh script. It requires the PHP SQLite extension to be installed. Here is an example adapted from the DrupalCI test runner for Drupal core:

```
php core/scripts/run-tests.sh --sqlite /tmp/.ht.sqlite --die-on-fail --dburl sqlite://tmp/.ht.sqlite --all
```

Combined with the built-in PHP web server for debugging, you can run Simpletest without a full-fledged environment.
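Putting a few of these options together, a hypothetical invocation (the concurrency value of 4 here is an arbitrary choice for this sketch) that runs every available Simpletest in parallel against a local site would look like this:

```
php core/scripts/run-tests.sh --url http://localhost --concurrency 4 --all
```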
Running specific tests

Each example thus far has used the all option to run every Simpletest available. There are various ways to run specific tests:

- --module: This allows you to run all the tests of a specific module
- --class: This runs a specific test class, identified by its full namespace path
- --file: This runs tests from a specified file
- --directory: This runs tests within a specified directory

Previously in Drupal, tests were grouped inside module.test files, which is where the file option derives from. Drupal 8 utilizes the PSR-4 autoloading method and requires one class per file.

DrupalCI

With Drupal 8 came a new initiative to upgrade the testing infrastructure on Drupal.org. The outcome was DrupalCI, which is open source and can be downloaded and run locally. The project page for DrupalCI is https://www.drupal.org/project/drupalci. The testbot utilizes Docker and can be downloaded locally to run tests. The project ships with a Vagrant file to allow it to be run within a virtual machine or locally. Learn more on the testbot's project page: https://www.drupal.org/project/drupalci_testbot.

See also

- PHPUnit manual: https://phpunit.de/manual/4.8/en/writing-tests-for-phpunit.html
- Drupal PHPUnit handbook: https://drupal.org/phpunit
- Simpletest from the command line: https://www.drupal.org/node/645286

Resources for Article:

Further resources on this subject:
- Installing Drupal 8 [article]
- What is Drupal? [article]
- Drupal 7 Social Networking: Managing Users and Profiles [article]


CSS Properties – Part 1

Packt
09 Feb 2016
13 min read
In this article written by Joshua Johanan, Talha Khan and Ricardo Zea, authors of the book Web Developer's Reference Guide, the authors want to state that "CSS properties are characteristics of an element in a markup language (HTML, SVG, XML, and so on) that control their style and/or presentation. These characteristics are part of a constantly evolving standard from the W3C."

A basic example of a CSS property is border-radius:

```css
input {
    border-radius: 100px;
}
```

There is an incredible number of CSS properties, and learning them all is virtually impossible. Adding more into this mix, there are CSS properties that need to be vendor prefixed (-webkit-, -moz-, -ms-, and so on), making this equation even more complex.

Vendor prefixes are short pieces of CSS that are added to the beginning of the CSS property (and sometimes, CSS values too). These pieces of code are directly related to either the company that makes the browser (the "vendor") or to the CSS engine of the browser. There are four major CSS prefixes: -webkit-, -moz-, -ms- and -o-. They are explained here:

- -webkit-: This references Safari's engine, Webkit (Google Chrome and Opera used this engine in the past as well)
- -moz-: This stands for Mozilla, which creates Firefox
- -ms-: This stands for Microsoft, which creates Internet Explorer
- -o-: This stands for Opera, but only targets old versions of the browser

Google Chrome and Opera both support the -webkit- prefix. However, these two browsers do not use the Webkit engine anymore. Their engine is called Blink and is developed by Google.

A basic example of a prefixed CSS property is column-gap:

```css
.column {
    -webkit-column-gap: 5px;
    -moz-column-gap: 5px;
    column-gap: 5px;
}
```

Memorizing which CSS properties need to be prefixed is futile. That's why it's important to keep a constant eye on CanIUse.com. It's also important to automate the prefixing process with tools such as Autoprefixer or -prefix-free, or with mixins in preprocessors, and so on. However, vendor prefixing isn't in the scope of the book, so the properties we'll discuss are shown without any vendor prefixes. If you want to learn more about vendor prefixes, you can visit Mozilla Developer Network (MDN) at http://tiny.cc/mdn-vendor-prefixes.

Let's get the CSS properties reference rolling.

Animation

Unlike the old days of Flash, where creating animations required third-party applications and plugins, today we can accomplish practically the same things with a lot less overhead, better performance, and greater scalability, all through CSS only. Forget plugins and third-party software! All we need is a text editor, some imagination, and a bit of patience to wrap our heads around some of the animation concepts CSS brings to our plate.

Base markup and CSS

Before we dive into all the animation properties, we will use the following markup and animation structure as our base:

HTML:

```html
<div class="element"></div>
```

CSS:

```css
.element {
    width: 300px;
    height: 300px;
}

@keyframes fadingColors {
    0% {
        background: red;
    }
    100% {
        background: black;
    }
}
```

In the examples, we will only see the .element rule, since the HTML and @keyframes fadingColors will remain the same. The @keyframes declaration block is a custom animation that can be applied to any element. When applied, the element's background will go from red to black. Ok, let's do this.
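If you would like to follow along in a browser, here is a minimal self-contained sketch of the base page (my own scaffold, not from the book); the two animation declarations on .element preview the first properties discussed in the next sections:

```html
<!DOCTYPE html>
<html>
<head>
    <style>
        .element {
            width: 300px;
            height: 300px;
            /* Previewed here; both properties are explained below */
            animation-name: fadingColors;
            animation-duration: 2s;
        }

        @keyframes fadingColors {
            0%   { background: red; }
            100% { background: black; }
        }
    </style>
</head>
<body>
    <div class="element"></div>
</body>
</html>
```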
animation-name

The animation-name CSS property is the name of the @keyframes at-rule that we want to execute, and it looks like this:

```css
animation-name: fadingColors;
```

Description

In the HTML and CSS base example, our @keyframes at-rule had an animation where the background color went from red to black. The name of that animation is fadingColors. So, we can call the animation like this:

CSS:

```css
.element {
    width: 300px;
    height: 300px;
    animation-name: fadingColors;
}
```

This is a valid rule using the longhand. There are clearly no issues with it at all. The thing is that the animation won't run unless we add animation-duration to it.

animation-duration

The animation-duration CSS property defines the amount of time the animation will take to complete a cycle, and it looks like this:

```css
animation-duration: 2s;
```

Description

We can specify the units either in seconds using s or in milliseconds using ms. Specifying a unit is required. Specifying a value of 0s means that the animation should never run. However, since we do want our animation to run, we will use the following lines of code:

CSS:

```css
.element {
    width: 300px;
    height: 300px;
    animation-name: fadingColors;
    animation-duration: 2s;
}
```

As mentioned earlier, this will make a box go from its red background to black in 2 seconds, and then stop.

animation-iteration-count

The animation-iteration-count CSS property defines the number of times the animation should be played, and it looks like this:

```css
animation-iteration-count: infinite;
```

Description

There are two kinds of values: infinite and a number, such as 1, 3, or 0.5. Negative numbers are not allowed. Add the following code to the prior example:

CSS:

```css
.element {
    width: 300px;
    height: 300px;
    animation-name: fadingColors;
    animation-duration: 2s;
    animation-iteration-count: infinite;
}
```

This will make a box go from its red background to black, start over again with the red background, go to black, and so on, infinitely.

animation-direction

The animation-direction CSS property defines the direction in which the animation should play after the cycle, and it looks like this:

```css
animation-direction: alternate;
```

Description

There are four values: normal, reverse, alternate, and alternate-reverse.

- normal: This makes the animation play forward. This is the default value.
- reverse: This makes the animation play backward.
- alternate: This makes the animation play forward in the first cycle, then backward in the next cycle, then forward again, and so on. In addition, timing functions are affected, so if we have ease-out, it gets replaced by ease-in when played in reverse. We'll look at these timing functions in a minute.
- alternate-reverse: This is the same thing as alternate, but the animation starts backward, from the end.

In our current example, we have a continuous animation. However, the background color has a "hard stop" when going from black (end of the animation) to red (start of the animation). Let's create a more fluid animation by making the black background fade into red and then red into black without any hard stops. Basically, we are trying to create a "pulse-like" effect:

CSS:

```css
.element {
    width: 300px;
    height: 300px;
    animation-name: fadingColors;
    animation-duration: 2s;
    animation-iteration-count: infinite;
    animation-direction: alternate;
}
```

animation-delay

The animation-delay CSS property allows us to define when exactly an animation should start. This means that as soon as the animation has been applied to an element, it will obey the delay before it starts running.
It looks like this:

```css
animation-delay: 3s;
```

Description

We can specify the units either in seconds using s or in milliseconds using ms. Specifying a unit is required. Negative values are allowed. Take into consideration that using a negative value means that the animation should start right away, but it will start midway into the animation, offset by the amount of time given as the negative value. Use negative values with caution.

CSS:

```css
.element {
    width: 300px;
    height: 300px;
    animation-name: fadingColors;
    animation-duration: 2s;
    animation-iteration-count: infinite;
    animation-direction: alternate;
    animation-delay: 3s;
}
```

This will make the animation start after 3 seconds have passed.

animation-fill-mode

The animation-fill-mode CSS property defines which values are applied to an element before and after the animation; basically, the styles that apply outside the time the animation is being executed. It looks like this:

```css
animation-fill-mode: none;
```

Description

There are four values: none, forwards, backwards, and both.

- none: No styles are applied before or after the animation.
- forwards: The animated element will retain the styles of the last keyframe. This is the most used value.
- backwards: The animated element will retain the styles of the first keyframe, and these styles will remain during the animation-delay period. This is very likely the least used value.
- both: The animated element will retain the styles of the first keyframe before starting the animation and the styles of the last keyframe after the animation has finished. In many cases, this is almost the same as using forwards.

The prior values are better used in animations that have an end and stop. In our example, we're using a fading/pulsating animation, so the best value to use is none.

CSS:

```css
.element {
    width: 300px;
    height: 300px;
    animation-name: fadingColors;
    animation-duration: 2s;
    animation-iteration-count: infinite;
    animation-direction: alternate;
    animation-delay: 3s;
    animation-fill-mode: none;
}
```

animation-play-state

The animation-play-state CSS property defines whether an animation is running or paused, and it looks like this:

```css
animation-play-state: running;
```

Description

There are two values: running and paused. These values are self-explanatory.

CSS:

```css
.element {
    width: 300px;
    height: 300px;
    animation-name: fadingColors;
    animation-duration: 2s;
    animation-iteration-count: infinite;
    animation-direction: alternate;
    animation-delay: 3s;
    animation-fill-mode: none;
    animation-play-state: running;
}
```

In this case, defining animation-play-state as running is redundant, but I'm listing it for the purposes of the example.

animation-timing-function

The animation-timing-function CSS property defines how an animation's speed should progress throughout its cycles, and it looks like this:

```css
animation-timing-function: ease-out;
```

There are five predefined values, also known as easing functions, for the Bézier curve (we'll see what the Bézier curve is in a minute): ease, ease-in, ease-out, ease-in-out, and linear.

ease

The ease function sharply accelerates at the beginning and starts slowing down towards the middle of the cycle. Its syntax is as follows:

```css
animation-timing-function: ease;
```

ease-in

The ease-in function starts slowly, accelerating until the animation ends sharply. Its syntax is as follows:

```css
animation-timing-function: ease-in;
```

ease-out

The ease-out function starts quickly and gradually slows down towards the end:

```css
animation-timing-function: ease-out;
```

ease-in-out

The ease-in-out function starts slowly and gets fast in the middle of the cycle.
It then starts slowing down towards the end. Its syntax is as follows:

```css
animation-timing-function: ease-in-out;
```

linear

The linear function has constant speed; no accelerations of any kind happen. Its syntax is as follows:

```css
animation-timing-function: linear;
```

Now, the easing functions are built on a curve named the Bézier curve and can be called using the cubic-bezier() function or the steps() function.

cubic-bezier()

The cubic-bezier() function allows us to create custom acceleration curves. Most use cases can benefit from the already defined easing functions we just mentioned (ease, ease-in, ease-out, ease-in-out and linear), but if you're feeling adventurous, cubic-bezier() is your best bet.

[Image: a Bézier curve with its two control points]

Parameters

The cubic-bezier() function takes four parameters, as follows:

```css
animation-timing-function: cubic-bezier(x1, y1, x2, y2);
```

X and Y represent the x and y axes. The numbers 1 and 2 after each axis represent the control points: 1 represents the control point starting at the lower left, and 2 represents the control point at the upper right.

Description

Let's represent all five predefined easing functions with the cubic-bezier() function:

```css
ease:        animation-timing-function: cubic-bezier(.25, .1, .25, 1);
ease-in:     animation-timing-function: cubic-bezier(.42, 0, 1, 1);
ease-out:    animation-timing-function: cubic-bezier(0, 0, .58, 1);
ease-in-out: animation-timing-function: cubic-bezier(.42, 0, .58, 1);
linear:      animation-timing-function: cubic-bezier(0, 0, 1, 1);
```

Not sure about you, but I prefer to use the predefined values. Now, we can start tweaking and testing each value to the decimal, save it, and wait for the live refresh to do its thing. However, that's too much time wasted testing, if you ask me. The amazing Lea Verou created the best web app to work with Bézier curves. You can find it at cubic-bezier.com. This is by far the easiest way to work with Bézier curves. I highly recommend this tool. The Bézier curve image shown earlier was taken from the cubic-bezier.com website.

Let's add animation-timing-function to our example:

CSS:

```css
.element {
    width: 300px;
    height: 300px;
    animation-name: fadingColors;
    animation-duration: 2s;
    animation-iteration-count: infinite;
    animation-direction: alternate;
    animation-delay: 3s;
    animation-fill-mode: none;
    animation-play-state: running;
    animation-timing-function: ease-out;
}
```

steps()

The steps() timing function isn't very widely used, but knowing how it works is a must if you're into CSS animations. It looks like this:

```css
animation-timing-function: steps(6);
```

This function is very helpful when we want our animation to take a defined number of steps. After adding a steps() function to our current example, it looks like this:

CSS:

```css
.element {
    width: 300px;
    height: 300px;
    animation-name: fadingColors;
    animation-duration: 2s;
    animation-iteration-count: infinite;
    animation-direction: alternate;
    animation-delay: 3s;
    animation-fill-mode: none;
    animation-play-state: running;
    animation-timing-function: steps(6);
}
```

This makes the box take six steps to fade from red to black and vice versa.

Parameters

There are two optional parameters that we can use with the steps() function: start and end.

- start: This will make the animation run at the beginning of each step, which means the animation starts right away.
- end: This will make the animation run at the end of each step, giving the animation a short delay before it starts. This is the default value if nothing is declared.
Description

After adding the parameters to the CSS code, it looks like this:

CSS:

```css
.element {
    width: 300px;
    height: 300px;
    animation-name: fadingColors;
    animation-duration: 2s;
    animation-iteration-count: infinite;
    animation-direction: alternate;
    animation-delay: 3s;
    animation-fill-mode: none;
    animation-play-state: running;
    animation-timing-function: steps(6, start);
}
```

Granted, in our example, it is not very noticeable. However, you can see it more clearly in this pen from Louis Lazarus when hovering over the boxes, at http://tiny.cc/steps-timing-function.

[Image: illustration of the start and end parameters of the steps() function, from Stephen Greig's Smashing Magazine article, Understanding CSS Timing Functions]

Also, there are two predefined values for the steps() function: step-start and step-end.

- step-start: This is the same thing as steps(1, start). It means that every change happens at the beginning of each interval.
- step-end: This is the same thing as steps(1, end). It means that every change happens at the end of each interval.

CSS:

```css
.element {
    width: 300px;
    height: 300px;
    animation-name: fadingColors;
    animation-duration: 2s;
    animation-iteration-count: infinite;
    animation-direction: alternate;
    animation-delay: 3s;
    animation-fill-mode: none;
    animation-play-state: running;
    animation-timing-function: step-end;
}
```

animation

The animation CSS property is the shorthand for animation-name, animation-duration, animation-timing-function, animation-delay, animation-iteration-count, animation-direction, animation-fill-mode, and animation-play-state. It looks like this:

```css
animation: fadingColors 2s;
```

Description

For a simple animation to work, we need at least two properties: name and duration. If you feel overwhelmed by all these properties, relax. Let me break them down for you in simple bits. Using the animation longhand, the code would look like this:

CSS:

```css
.element {
    width: 300px;
    height: 300px;
    animation-name: fadingColors;
    animation-duration: 2s;
}
```

Using the animation shorthand, which is the recommended syntax, the code would look like this:

CSS:

```css
.element {
    width: 300px;
    height: 300px;
    animation: fadingColors 2s;
}
```

This will make a box go from its red background to black in 2 seconds, and then stop.

Final CSS code

Let's see how all the animation properties look in one final example, showing both the longhand and shorthand styles.

Longhand style:

```css
.element {
    width: 300px;
    height: 300px;
    animation-name: fadingColors;
    animation-duration: 2s;
    animation-iteration-count: infinite;
    animation-direction: alternate;
    animation-delay: 3s;
    animation-fill-mode: none;
    animation-play-state: running;
    animation-timing-function: ease-out;
}
```

Shorthand style:

```css
.element {
    width: 300px;
    height: 300px;
    animation: fadingColors 2s infinite alternate 3s none running ease-out;
}
```

Of the two time values in the shorthand, the first is always taken as animation-duration and the second as animation-delay. All other properties can appear in any order within the declaration. You can find a demo on CodePen at http://tiny.cc/animation.

Summary

In this article, we learned how to add animations to our web project, and we looked in detail at the different properties that can be used to animate it, along with a description of each.

Resources for Article:

Further resources on this subject:
- Using JavaScript with HTML [article]
- Welcome to JavaScript in the full stack [article]
- A Typical JavaScript Project [article]


Classes and Instances of Ember Object Model

Packt
05 Feb 2016
12 min read
In this article by Erik Hanchett, author of the book Ember.js Cookbook, we look at Ember.js, an open source JavaScript framework that will make you more productive. It uses common idioms and practices, making it simple to create amazing single-page applications. It also lets you write code in a modular way using the latest JavaScript features. Not only that, it has a great set of APIs for getting any task done. The Ember.js community welcomes newcomers and is ready to help you when required.

Working with classes and instances

Creating and extending classes is a major feature of the Ember object model. In this recipe, we'll take a look at how creating and extending objects works.

How to do it

Let's begin by creating a very simple Ember class using extend(), as follows:

```javascript
const Light = Ember.Object.extend({
    isOn: false
});
```

This defines a new Light class with an isOn property. Light inherits properties and behavior from the Ember object, such as initializers, mixins, and computed properties.

Ember Twiddle tip: At some point, you might need to test out small snippets of Ember code. An easy way to do this is to use a website called Ember Twiddle. From that website, you can create an Ember application and run it in the browser as if you were using Ember CLI. You can even save and share it. It has tools similar to JSFiddle, only for Ember. Check it out at http://ember-twiddle.com.

Once you have defined a class, you'll need to be able to create an instance of it. You can do this by using the create() method. We'll go ahead and create an instance of Light:

```javascript
const bulb = Light.create();
```

Accessing properties within the bulb instance

We can access the properties of the bulb object using the set and get accessor methods. Let's go ahead and get the isOn property of the Light class, as follows:

```javascript
console.log(bulb.get('isOn'));
```

The preceding code will get the isOn property from the bulb instance. To change the isOn property, we can use the set accessor method:

```javascript
bulb.set('isOn', true);
```

The isOn property will now be set to true instead of false.

Initializing the Ember object

The init method is invoked whenever a new instance is created. This is a great place to put any code that you may need for the new instance. In our example, we'll go ahead and add an alert message that displays the default setting for the isOn property:

```javascript
const Light = Ember.Object.extend({
    init() {
        alert('The isOn property is defaulted to ' + this.get('isOn'));
    },
    isOn: false
});
```

As soon as the Light.create line of code is executed, the instance will be created and this message will pop up on the screen: The isOn property is defaulted to false.

Subclass

Be aware that you can create subclasses of your objects in Ember. You can override methods and access the parent class by using the _super() method. This is done by creating a new object that uses the Ember extend method on the parent class. Another important thing to realize is that if you're subclassing a framework class, such as Ember.Component, and you override the init method, you'll need to make sure that you call this._super(). If not, the component may not work properly.

Reopening classes

At any time, you can reopen a class and define new properties or methods in it. For this, use the reopen method. In our previous example, we had an isOn property.
Let's reopen the same class and add a color property using the reopen() method, as follows:

```javascript
Light.reopen({
    color: 'yellow'
});
```

If required, you can add static methods or properties using reopenClass, as follows:

```javascript
Light.reopenClass({
    wattage: 40
});
```

You can now access the static property:

```javascript
Light.wattage
```

How it works

In the preceding examples, we created an Ember object using extend. This tells Ember to create a new Ember class. The extend method uses inheritance in the Ember.js framework: the Light object inherits all the methods and bindings of the Ember object. The create method also inherits from the Ember object class and returns a new instance of this class. The bulb object is the new instance of the Ember object that we created.

There's more

To use the previous examples, we can create our own module and have it imported into our project. To do this, create a new myObject.js file in the app folder, as follows:

```javascript
// app/myObject.js
import Ember from 'ember';

export default function() {
    const Light = Ember.Object.extend({
        init() {
            alert('The isOn property is defaulted to ' + this.get('isOn'));
        },
        isOn: false
    });

    Light.reopen({
        color: 'yellow'
    });

    Light.reopenClass({
        wattage: 80
    });

    const bulb = Light.create();

    console.log(bulb.get('color'));
    console.log(Light.wattage);
}
```

This is the module that we can now import into any file of our Ember application. In the app folder, edit the app.js file. You'll need to add the following line at the top of the file:

```javascript
// app/app.js
import myObject from './myObject';
```

At the bottom, before the export, add the following line:

```javascript
myObject();
```

This will execute the myObject function that we created in the myObject.js file. After running the Ember server, you'll see the isOn property defaulted to false pop-up message.

Working with computed properties

In this recipe, we'll take a look at computed properties and how they can be used to display data, even if that data changes as the application is running.

How to do it

Let's create a new Ember.Object and add a computed property to it. Begin by creating a new description computed property. This property will reflect the status of the isOn and color properties:

```javascript
const Light = Ember.Object.extend({
    isOn: false,
    color: 'yellow',

    description: Ember.computed('isOn', 'color', function() {
        return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn');
    })
});
```

We can now create a new Light object and get the computed property description:

```javascript
const bulb = Light.create();
bulb.get('description'); // The yellow light is set to false
```

The preceding example creates a computed property that depends on the isOn and color properties. When the description function is called, it returns a string describing the state of the light.

Computed properties will observe changes and dynamically update whenever they occur. To see this in action, we can change the preceding example and set the isOn property to true. Use the following code to accomplish this:

```javascript
bulb.set('isOn', true);
bulb.get('description'); // The yellow light is set to true
```

The description has been automatically updated and will now display that the yellow light is set to true.

Chaining the Light object

Ember provides a nice feature that allows computed properties to be present in other computed properties.
In the previous example, we created a description property that outputted some basic information about the Light object. Let's add another property that gives a full description:

```javascript
const Light = Ember.Object.extend({
    isOn: false,
    color: 'yellow',
    age: null,

    description: Ember.computed('isOn', 'color', function() {
        return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn');
    }),

    fullDescription: Ember.computed('description', 'age', function() {
        return this.get('description') + ' and the age is ' + this.get('age');
    })
});
```

The fullDescription function returns a string that concatenates the output from description with a new string that displays the age:

```javascript
const bulb = Light.create({ age: 22 });
bulb.get('fullDescription'); // The yellow light is set to false and the age is 22
```

In this example, during the instantiation of the Light object, we set the age to 22. We can overwrite any property if required.

Alias

The Ember.computed.alias method allows us to create a property that is an alias for another property or object. Any call to get or set will behave as if the changes were made to the original property, as shown in the following:

```javascript
const Light = Ember.Object.extend({
    isOn: false,
    color: 'yellow',
    age: null,

    description: Ember.computed('isOn', 'color', function() {
        return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn');
    }),

    fullDescription: Ember.computed('description', 'age', function() {
        return this.get('description') + ' and the age is ' + this.get('age');
    }),

    aliasDescription: Ember.computed.alias('fullDescription')
});

const bulb = Light.create({ age: 22 });
bulb.get('aliasDescription'); // The yellow light is set to false and the age is 22
```

The aliasDescription will display the same text as fullDescription, since it's just an alias of this property. If we later made any changes to any properties in the Light object, the alias would also observe these changes and be computed properly.

How it works

Computed properties are built on top of the observer pattern. Whenever an observation shows a state change, it recomputes the output. If no changes occur, then the result is cached. In other words, computed properties are functions that get updated whenever any of their dependent values change. You can use them in the same way that you would use a static property. They are common and useful throughout Ember and its codebase. Keep in mind that a computed property will only update if it is in a template or function that is being used. If the function or template is not being called, then nothing will occur. This helps with performance.

Working with Ember observers in Ember.js

Observers are fundamental to the Ember object model. In the next recipe, we'll take our light example, add in an observer, and see how it operates.

How to do it

To begin, we'll add a new isOnChanged observer. This will only trigger when the isOn property changes:

```javascript
const Light = Ember.Object.extend({
    isOn: false,
    color: 'yellow',
    age: null,

    description: Ember.computed('isOn', 'color', function() {
        return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn');
    }),

    fullDescription: Ember.computed('description', 'age', function() {
        return this.get('description') + ' and the age is ' + this.get('age');
    }),

    desc: Ember.computed.alias('description'),

    isOnChanged: Ember.observer('isOn', function() {
        console.log('isOn value changed');
    })
});

const bulb = Light.create({ age: 22 });

bulb.set('isOn', true); // console logs isOn value changed
```

The Ember.observer isOnChanged monitors the isOn property.
If any changes occur to this property, isOnChanged is invoked.

Computed Properties vs Observers

At first glance, it might seem that observers are the same as computed properties. In fact, they are very different. Computed properties can use the get and set methods and can be used in templates. Observers, on the other hand, just monitor property changes. They can neither be used in templates nor be accessed like properties. They also don't return any values. With that said, be careful not to overuse observers. For many instances, a computed property is the most appropriate solution. Also, if required, you can add multiple properties to the observer. Just use the following command:

Light.reopen({
  isAnythingChanged: Ember.observer('isOn','color',function() {
    console.log('isOn or color value changed');
  })
});

const bulb = Light.create({age: 22});
bulb.set('isOn',true); // console logs isOn or color value changed
bulb.set('color','blue'); // console logs isOn or color value changed

The isAnythingChanged observer is invoked whenever the isOn or color properties change. The observer will fire twice as each property has changed.

Synchronous issues with the Light object and observers

It's very easy to get observers out of sync. For example, if a property that it observes changes, it will be invoked as expected. After being invoked, it might manipulate a property that hasn't been updated yet. This can cause synchronization issues as everything happens at the same time. The following example shows this behavior:

Light.reopen({
  checkIsOn: Ember.observer('isOn', function() {
    console.log(this.get('fullDescription'));
  })
});

const bulb = Light.create({age: 22});
bulb.set('isOn', true);

When isOn is changed, it's not clear whether fullDescription, a computed property, has been updated yet or not. As observers work synchronously, it's difficult to tell what has been fired and changed. This can lead to unexpected behavior. To counter this, it's best to use the Ember.run.once method. This method is a part of the Ember run loop, which is Ember's way of managing how the code is executed. Reopen the Light object and you can see the following occurring:

Light.reopen({
  checkIsOn: Ember.observer('isOn','color', function() {
    Ember.run.once(this,'checkChanged');
  }),
  checkChanged: Ember.observer('description',function() {
    console.log(this.get('description'));
  })
});

const bulb = Light.create({age: 22});
bulb.set('isOn', true);
bulb.set('color', 'blue');

The checkIsOn observer calls the checkChanged observer using Ember.run.once. This method is only run once per run loop. Normally, checkChanged would be fired twice; however, since it's called using Ember.run.once, it only outputs once.

How it works

Ember observers are mixins from the Ember.Observable class. They work by monitoring property changes. When any change occurs, they are triggered. Keep in mind that these are not the same as computed properties and cannot be used in templates or with getters or setters.

Summary

In this article, you learned about classes and instances. You also learned about computed properties and how they can be used to display data.

Resources for Article:

Further resources on this subject:
Introducing the Ember.JS framework [article]
Building Reusable Components [article]
Using JavaScript with HTML [article]

Going Mobile First

Packt
03 Feb 2016
16 min read
In this article by Silvio Moreto Pererira, author of the book, Bootstrap By Example, we will focus on mobile design and how to change the page layout for different viewports, change the content, and more. In this article, you will learn the following:

Mobile first development
Debugging for any device
The Bootstrap grid system for different resolutions

(For more resources related to this topic, see here.)

Make it greater

Maybe you have asked yourself (or even searched for) the reason of the mobile first paradigm movement. It is simple and makes complete sense to speed up your development pace. The main argument for the mobile first paradigm is that it is easier to make it greater than to shrink it. In other words, if you first make a desktop version of the web page (known as responsive design or mobile last) and then go to adjust the website for a mobile, it has a probability of 99% of breaking the layout at some point, and you will have to fix a lot of things in both the mobile and desktop. On the other hand, if you first create the mobile version, naturally the website will use (or show) less content than the desktop version. So, it will be easier to just add the content, place the things in the right places, and create the full responsiveness stack. The following image tries to illustrate this concept. Going mobile last, you will get a degraded, warped, and crappy layout, and going mobile first, you will get a progressively enhanced, future-friendly awesome web page. See what happens to the poor elephant in this metaphor:

Bootstrap and the mobile first design

At the beginning of Bootstrap, there was no concept of mobile first, so it was made to work for designing responsive web pages. However, with version 3 of the framework, the concept of mobile first was very solid in the community. For doing this, the whole code of the scaffolding system was rewritten to become mobile first from the start. They decided to reformulate how to set up the grid instead of just adding mobile styles. This made a great impact on compatibility between versions older than 3, but was crucial for making the framework even more popular. To ensure the proper rendering of the page, set the correct viewport at the <head> tag:

<meta name="viewport" content="width=device-width, initial-scale=1">

How to debug different viewports in the browser

Here, you will learn how to debug different viewports using the Google Chrome web browser. If you already know this, you can skip this section, although it might be important to refresh the steps for this. In the Google Chrome browser, open the Developer tools option. There are many ways to open this menu:

Right-click anywhere on the page and click on the last option called Inspect.
Go to the settings (the sandwich button on the right-hand side of the address bar), click on More tools, and finally on Developer tools.
The shortcut to open it is Ctrl (cmd for OS X users) + Shift + I. F12 in Windows also works (Internet Explorer legacy…).

With Developer tools, click on the mobile phone to the left of a magnifier, as shown in the following image: It will change the display of the viewport to a certain device, and you can also set a specific network usage to limit the data bandwidth. Chrome will show a message telling you that for proper visualization, you may need to reload the page to get the correct rendering: For the next image, we have activated the Device mode for an iPhone 5 device.
When we set this viewport, the problems start to appear because we did not make the web page with the mobile first methodology.

Bootstrap scaffolding for different devices

Now that we know more about mobile first development and its important role in Bootstrap starting from version 3, we will cover Bootstrap usage for different devices and viewports. For doing this, we must apply the column class for the specific viewport; for example, for medium displays, we use the .col-md-* class. The following list presents the different classes and resolutions that will be applied for each viewport class:

Extra small devices (phones, < 544 px / 34 em): grid behavior is horizontal lines at all times; container width is auto; class prefix is .col-xs-*; column width is auto.
Small devices (tablets, ≥ 544 px / 34 em and < 768 px / 48 em): grid collapses at start and fits the column grid; container fixed width is 544 px or 34 rem; class prefix is .col-sm-*; column fixed width is ~44 px or 2.75 rem.
Medium devices (desktops, ≥ 768 px / 48 em and < 900 px / 62 em): grid collapses at start and fits the column grid; container fixed width is 750 px or 45 rem; class prefix is .col-md-*; column fixed width is ~62 px or 3.86 rem.
Large devices (desktops, ≥ 900 px / 62 em and < 1200 px / 75 em): grid collapses at start and fits the column grid; container fixed width is 970 px or 60 rem; class prefix is .col-lg-*; column fixed width is ~81 px or 5.06 rem.
Extra large devices (desktops, ≥ 1200 px / 75 em): grid collapses at start and fits the column grid; container fixed width is 1170 px or 72.25 rem; class prefix is .col-xl-*; column fixed width is ~97 px or 6.06 rem.

All the viewports use a grid of 12 columns.

Mobile and extra small devices

To exemplify the usage of Bootstrap scaffolding in mobile devices, we will have a predefined web page and want to adapt it to mobile devices. We will be using the Chrome mobile debug tool with the device, iPhone 5. You may have noticed that for small devices, Bootstrap just stacks each column without respecting the different rows. In the layout, some of the Bootstrap rows may seem fine in this visualization, although the one in the following image is a bit strange as the portion of code and image are not in the same line, as it is supposed to be: To fix this, we need to add the column class prefix for extra small devices, which is .col-xs-*, where * is the size of the column from 1 to 12. Add the .col-xs-5 class and the .col-xs-7 class for the columns of this respective row. Refresh the page and you will see now how the columns are put side by side:

<div class="row">
  <!-- row 3 -->
  <div class="col-md-3 col-xs-5">
    <pre>&lt;p&gt;I love programming!&lt;/p&gt;
      &lt;p&gt;This paragraph is on my landing page&lt;/p&gt;
      &lt;br/&gt;
      &lt;br/&gt;
      &lt;p&gt;Bootstrap by example&lt;/p&gt;
    </pre>
  </div>
  <div class="col-md-9 col-xs-7">
    <img src="imgs/center.png" class="img-responsive">
  </div>
</div>

Although the image of the web browser is too small on the right, it would be better if it was a vertical stretched image, such as a mobile phone. (What a coincidence!) To make this, we need to hide the browser image in extra small devices and display an image of the mobile phone. Add the new mobile image below the existing one as follows. You will see both images stacked up vertically in the right column:

<img src="imgs/center.png" class="img-responsive">
<img src="imgs/mobile.png" class="img-responsive">

Then, we need to use the new concept of availability classes present in Bootstrap. We need to hide the browser image and display the mobile image just for this kind of viewport, which is extra small.
For this, add the .hidden-xs class to the browser image and add the .visible-xs class to the mobile image: <div class="row">   <!-- row 3 -->   <div class="col-md-3 col-xs-5">     <pre>&lt;p&gt;I love programming!&lt;/p&gt;       &lt;p&gt;This paragraph is on my landing page&lt;/p&gt;       &lt;br/&gt;       &lt;br/&gt;       &lt;p&gt;Bootstrap by example&lt;/p&gt;     </pre>   </div>   <div class="col-md-9 col-xs-7">     <img src="imgs/center.png" class="img-responsive hidden-xs">     <img src="imgs/mobile.png" class="img-responsive visible-xs">   </div> </div> Now this row seems nice! With this, the browser image was hidden in extra small devices and the mobile image was shown for this viewport in question. The following image shows the final display of this row: Moving on, the next Bootstrap .row contains a testimonial surrounded by two images. It would be nicer if the testimonial appeared first and both images were displayed after it, splitting the same row, as shown in the following image. For this, we will repeat almost the same techniques presented in the last example: The first change is to hide the Bootstrap image using the .hidden-xs class. After this, create another image tag with the Bootstrap image in the same column of the PACKT image. The final code of the row should be as follows: <div class="row">   <div class="col-md-3 hidden-xs">     <img src="imgs/bs.png" class="img-responsive">   </div>   <div class="col-md-6 col-xs-offset-1 col-xs-11">     <blockquote>       <p>Lorem ipsum dolor sit amet, consectetur         adipiscing elit. Integer posuere erat a ante.</p>       <footer>Testimonial from someone at         <cite title="Source Title">Source Title</cite></footer>     </blockquote>   </div>   <div class="col-md-3 col-xs-7">     <img src="imgs/packt.png" class="img-responsive">   </div>   <div class="col-xs-5 visible-xs">     <img src="imgs/bs.png" class="img-responsive">   </div> </div> We did plenty of things now; all the changes are highlighted. The first is the .hidden-xs class in the first column of the Bootstrap image, which hid the column for this viewport. Afterward, in the testimonial, we changed the grid for the mobile, adding a column offset with size 1 and making the testimonial fill the rest of the row with the .col-xs-11 class. Lastly, like we said, we want to split both images from PACKT and Bootstrap in the same row. For this, make the first image column fill seven columns with the .col-xs-7 class. The other image column is a little more complicated. As it is visible just for mobile devices, we add the .col-xs-5 class, which will make the element span five columns in extra small devices. Moreover, we hide the column for other viewports with the .visible-xs class. As you can see, this row has more than 12 columns (one offset, 11 testimonials, seven PACKT images, and five Bootstrap images). This process is called column wrapping and happens when you put more than 12 columns in the same row, so the groups of extra columns will wrap to the next lines. Availability classes Just like .hidden-*, there are the .visible-*-*classes for each variation of the display and column from 1 to 12. There is also a way to change the display of the CSS property using the .visible-*-* class, where the last * means block, inline, or inline-block. Use this to set the proper visualization for different visualizations. The following image shows you the final result of the changes. 
Note that we made the testimonial appear first, with one column of offset, and both images appear below it:

Tablets and small devices

To complete the mobile device visualizations, we move on to tablets and small devices, which range from 544 px (34 em) to 768 px (48 em). Most of these devices are tablets or old desktop monitors. To work with this example, we are using the iPad mini in the portrait position. For this resolution, Bootstrap handles the rows just as in extra small devices by stacking up each one of the columns and making them fill the total width of the page. So, if we do not want this to happen, we have to set the column fill for each element with the .col-sm-* class manually. If you see now how our example is presented, there are two main problems. The first one is that the headings are on separate lines, whereas they could be on the same line. For this, we just need to apply the grid classes for small devices with the .col-sm-6 class for each column, splitting them into equal sizes:

<div class="row">
  <div class="col-md-offset-4 col-md-4 col-sm-6">
    <h3>
      Some text with <small>secondary text</small>
    </h3>
  </div>
  <div class="col-md-4 col-sm-6">
    <h3>
      Add to your favorites
      <small>
        <kbd class="nowrap"><kbd>ctrl</kbd> + <kbd>d</kbd></kbd>
      </small>
    </h3>
  </div>
</div>

The result should be as follows: The second problem in this viewport is again the testimonial row! Due to the classes that we have added for the mobile viewport, the testimonial now has an offset column and a different column span. We must add the classes for small devices and make this row with the Bootstrap image on the left, the testimonial in the middle, and the PACKT image on the right:

<div class="row">
  <div class="col-md-3 hidden-xs col-sm-3">
    <img src="imgs/bs.png" class="img-responsive">
  </div>
  <div class="col-md-6 col-xs-offset-1 col-xs-11 col-sm-6 col-sm-offset-0">
    <blockquote>
      <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante.</p>
      <footer>Testimonial from someone at <cite title="Source Title">Source Title</cite></footer>
    </blockquote>
  </div>
  <div class="col-md-3 col-xs-7 col-sm-3">
    <img src="imgs/packt.png" class="img-responsive">
  </div>
  <div class="col-xs-5 hidden-sm hidden-md hidden-lg">
    <img src="imgs/bs.png" class="img-responsive">
  </div>
</div>

As you can see, we had to reset the column offset in the testimonial column. It happened because it kept the offset that we had added for extra small devices. Moreover, we are just ensuring that the image columns fill just three columns with the .col-sm-3 classes in both the images. The result of the row is as follows: Everything else seems fine! These viewports were easier to set up. See how Bootstrap helps us a lot? Let's move on to the final viewport, which is a desktop or large devices.

Desktop and large devices

Last but not least, we enter the grid layout for a desktop and large devices. We skipped medium devices because we coded first for this viewport. Deactivate the Device mode in Chrome and put your page in a viewport with a width larger than or equal to 1200 pixels. The grid prefix that we will be using is .col-lg-*, and if you take a look at the page, you will see that everything is well placed and we don't need to make changes! (Although we would like to make some tweaks to make our layout fancier and learn some stuff about the Bootstrap grid.)
We want to talk about a thing called column ordering. It is possible to change the order of the columns in the same row by applying the.col-lg-push-* and .col-lg-pull-* classes. (Note that we are using the large devices prefix, but any other grid class prefix can be used.) The .col-lg-push-* class means that the column will be pushed to the right by the * columns, where * is the number of columns pushed. On the other hand, .col-lg-pull-* will pull the column in the left direction by *, where * is the number of columns as well. Let's test this trick in the second row by changing the order of both the columns: <div class="row">   <div class="col-md-offset-4 col-md-4 col-sm-6 col-lg-push-4">     <h3>       Some text with <small>secondary text</small>     </h3>   </div>   <div class="col-md-4 col-sm-6 col-lg-pull-4">     <h3>       Add to your favorites       <small>         <kbd class="nowrap"><kbd>ctrl</kbd> + <kbd>d</kbd></kbd>       </small>     </h3>   </div> </div> We just added the .col-lg-push-4 class to the first column and .col-lg-pull-4 to the other one to get this result. By doing this, we have changed the order of both the columns in the second row, as shown in the following image: Summary In this article, you learned a little about the mobile first development and how Bootstrap can help us in this task. We started from an existing Bootstrap template, which was not ready for mobile visualization, and fixed that. While fixing, we used a lot of Bootstrap scaffolding properties and Bootstrap helpers, making it much easier to fix anything. We did all of this without a single line of CSS or JavaScript; we used only Bootstrap with its inner powers! Resources for Article:   Further resources on this subject: Bootstrap in a Box [article] The Bootstrap grid system [article] Deep Customization of Bootstrap [article]

Customizing and Automating Google Applications

Packt
27 Jan 2016
7 min read
In this article by the author, Ramalingam Ganapathy, of the book, Learning Google Apps Script, we will see how to create new projects in sheets and send an email with inline image and attachments. You will also learn to create clickable buttons, a custom menu, and a sidebar. (For more resources related to this topic, see here.)

Creating new projects in sheets

Open any newly created Google spreadsheet (sheets). You will see a number of menu items at the top of the window. Point your mouse to it and click on Tools. Then, click on Script editor as shown in the following screenshot: A new browser tab or window with a new project selection dialog will open. Click on Blank Project or close the dialog. Now, you have created a new untitled project with one script file (Code.gs), which has one default empty function (myFunction). To rename the project, click on the project title (at the top left-hand side of the window), and then a rename dialog will open. Enter your favored project name, and then click on the OK button.

Creating clickable buttons

Open the script editor in a newly created or any existing Google sheet. Select the cell B3 or any other cell. Click on Insert and Drawing as shown in the following screenshot: A drawing editor window will open. Click on the Textbox icon and click anywhere on the canvas area. Type Click Me. Resize the object so as to only enclose the text as shown in the screenshot here: Click on Save & Close to exit from the drawing editor. Now, the Click Me image will be inserted at the top of the active cell (B3) as shown in the following screenshot: You can drag this image anywhere around the spreadsheet. In Google sheets, images are not anchored to a particular cell; they can be dragged or moved around. If you right-click on the image, then a drop-down arrow at the top right corner of the image will be visible. Click on the Assign script menu item. A script assignment window will open as shown here: Type "greeting" or any other name as you like but remember its name (so as to create a function with the same name for the next steps). Click on the OK button. Now, open the script editor in the same spreadsheet. When you open the script editor, the project selector dialog will open. Close it or select a blank project. A default function called myFunction will be there in the editor. Delete everything in the editor and insert the following code:

function greeting() {
  Browser.msgBox("Greeting", "Hello World!", Browser.Buttons.OK);
}

Click on the save icon and enter a project name if asked. You have completed coding your greeting function. Activate the spreadsheet tab/window, and click on your button called Click Me. Then, an authorization window will open; click on Continue. In the successive Request for Permission window, click on the Allow button. As soon as you click on Allow and the permission dialog gets disposed, your actual greeting message box will open as shown here: Click on OK to dispose the message box. Whenever you click on your button, this message box will open.

Creating a custom menu

Can you execute the greeting function without the help of the button? Yes, in the script editor, there is a Run menu. If you click on Run and greeting, then the greeting function will be executed and the message box will open. Creating a button for every function may not be feasible. Although you cannot alter or add items to the application's standard menu (except the Add-on menu), such as File, Edit, View, and others, you can add a custom menu and its items.
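Before walking through the document-based recipe below, here is a minimal sketch of the same idea in a spreadsheet-bound script (this snippet is illustrative and not from the original text; sayHello is a hypothetical handler name):

function onOpen() {
  // Runs automatically when the spreadsheet opens and builds the menu
  SpreadsheetApp.getUi()
      .createMenu('PACKT')
      .addItem('Say Hello', 'sayHello') // second argument is the name of the function to call
      .addToUi();
}

function sayHello() {
  // Hypothetical handler invoked by the menu item above
  SpreadsheetApp.getUi().alert('Hello from the custom menu!');
}

The next recipe builds the same kind of menu in a Google Docs document, using the DocumentApp class instead.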
For this task, create a new Google Docs document or open any existing document. Open the script editor and type these two functions:

function createMenu() {
  DocumentApp.getUi()
      .createMenu("PACKT")
      .addItem("Greeting", "greeting")
      .addToUi();
}

function greeting() {
  var ui = DocumentApp.getUi();
  ui.alert("Greeting", "Hello World!", ui.ButtonSet.OK);
}

In the first function, you use the DocumentApp class, invoke the getUi method, and consecutively invoke the createMenu, addItem, and addToUi methods by method chaining. The second function is familiar to you as you created it in the previous task, but this time it uses the DocumentApp class and associated methods. Now, run the function called createMenu and flip to the document window/tab. You can notice a new menu item called PACKT added next to the Help menu. You can see the custom menu PACKT with an item Greeting as shown next. The item label called Greeting is associated with the function called greeting: The menu item called Greeting works the same way as your button created in the previous task. The drawback with this method of inserting a custom menu is that you need to run createMenu from within the script editor every time you want the menu to show up. Imagine how your user could use this greeting function if he/she doesn't know about GAS and the script editor. Remember that your user might not be a programmer like you. To enable your users to execute the selected GAS functions, you should create a custom menu and make it visible as soon as the application is opened. To do so, rename the function called createMenu to onOpen; that's it.

Creating a sidebar

A sidebar is a static dialog box, and it will be included in the right-hand side of the document editor window. To create a sidebar, type the following code in your editor:

function onOpen() {
  var htmlOutput = HtmlService
      .createHtmlOutput('<button onclick="alert(\'Hello World!\');">Click Me</button>')
      .setTitle('My Sidebar');
  DocumentApp.getUi()
      .showSidebar(htmlOutput);
}

In the previous code, you use HtmlService, invoke its method called createHtmlOutput, and consecutively invoke the setTitle method. To test this code, run the onOpen function or reload the document. The sidebar will be opened on the right-hand side of the document window as shown in the following screenshot. The sidebar layout size is fixed, which means you cannot change, alter, or resize it: The button in the sidebar is an HTML element, not a GAS element, and if clicked, it opens the browser interface's alert box.

Sending an email with inline image and attachments

To embed images such as a logo in your email message, you may use HTML code instead of plain text. Upload your image to Google Drive and get and use the file ID in the code:

function sendEmail(){
  var file = SpreadsheetApp.getActiveSpreadsheet()
      .getAs(MimeType.PDF);
  var image = DriveApp.getFileById("[[image file's id in Drive ]]").getBlob();
  var to = "[[receiving email id]]";
  var message = '<p><img src="cid:logo" /> Embedding inline image example.</p>';
  MailApp.sendEmail(
    to,
    "Email with inline image and attachment",
    "",
    {
      htmlBody: message,
      inlineImages: { logo: image },
      attachments: [file]
    }
  );
}

Summary

In this article, you learned how to customize and automate Google applications with a few examples. Many more useful and interesting applications have been described in the actual book.
Resources for Article:

Further resources on this subject:
How to Expand Your Knowledge [article]
Google Apps: Surfing the Web [article]
Developing Apps with the Google Speech Apis [article]

Accessing Data with Spring

Packt
25 Jan 2016
8 min read
In this article written by Shameer Kunjumohamed and Hamidreza Sattari, authors of the book Spring Essentials, we will learn how to access data with Spring. (For more resources related to this topic, see here.) Data access or persistence is a major technical feature of data-driven applications. It is a critical area where careful design and expertise is required. Modern enterprise systems use a wide variety of data storage mechanisms, ranging from traditional relational databases such as Oracle, SQL Server, and Sybase to more flexible, schema-less NoSQL databases such as MongoDB, Cassandra, and Couchbase. Spring Framework provides comprehensive support for data persistence in multiple flavors of mechanisms, ranging from convenient template components to smart abstractions over popular Object Relational Mapping (ORM) tools and libraries, making them much easier to use. Spring's data access support is another great reason for choosing it to develop Java applications. Spring Framework offers developers the following primary approaches for data persistence mechanisms to choose from:

Spring JDBC
ORM data access
Spring Data

Furthermore, Spring standardizes the preceding approaches under a unified Data Access Object (DAO) notation called @Repository. Another compelling reason for using Spring is its first-class transaction support. Spring provides consistent transaction management, abstracting different transaction APIs such as JTA, JDBC, JPA, Hibernate, JDO, and other container-specific transaction implementations. In order to make development and prototyping easier, Spring provides embedded database support, smart data source abstractions, and excellent test integration. This article explores various data access mechanisms provided by Spring Framework and its comprehensive support for transaction management in both standalone and web environments, with relevant examples.

Why use Spring Data Access when we have JDBC?

JDBC (short for Java Database Connectivity), the Java Standard Edition API for data connectivity from Java to relational databases, is a very low-level framework. Data access via JDBC is often cumbersome; the boilerplate code that the developer needs to write makes it error-prone. Moreover, JDBC exception handling is not sufficient for most use cases; there exists a real need for simplified but extensive and configurable exception handling for data access. Spring JDBC encapsulates the often-repeating code, simplifying the developer's code tremendously and letting him focus entirely on his business logic. Spring Data Access components abstract the technical details, including lookup and management of persistence resources such as connection, statement, and result set, and accept the specific SQL statements and relevant parameters to perform the operation. They use the same JDBC API under the hood while exposing simplified, straightforward interfaces for the client's use. This approach helps make a much cleaner and hence maintainable data access layer for Spring applications.

DataSource

The first step of connecting to a database from any Java application is obtaining a connection object specified by JDBC. DataSource, part of Java SE, is a generalized factory of java.sql.Connection objects that represents the physical connection to the database, which is the preferred means of producing a connection. DataSource handles transaction management, connection lookup, and pooling functionalities, relieving the developer from these infrastructural issues.
DataSource objects are often implemented by database driver vendors and typically looked up via JNDI. Application servers and servlet engines provide their own implementations of DataSource, a connector to the one provided by the database vendor, or both. Typically configured inside XML-based server descriptor files, server-supplied DataSource objects generally provide built-in connection pooling and transaction support. As a developer, you just configure your DataSource objects inside the server (configuration files) declaratively in XML and look it up from your application via JNDI. In a Spring application, you configure your DataSource reference as a Spring bean and inject it as a dependency to your DAOs or other persistence resources. The Spring <jee:jndi-lookup/> tag (of  the http://www.springframework.org/schema/jee namespace) shown here allows you to easily look up and construct JNDI resources, including a DataSource object defined from inside an application server. For applications deployed on a J2EE application server, a JNDI DataSource object provided by the container is recommended. <jee:jndi-lookup id="taskifyDS" jndi-name="java:jboss/datasources/taskify"/> For standalone applications, you need to create your own DataSource implementation or use third-party implementations, such as Apache Commons DBCP, C3P0, and BoneCP. The following is a sample DataSource configuration using Apache Commons DBCP2: <bean id="taskifyDS" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close"> <property name="driverClassName" value="${driverClassName}" /> <property name="url" value="${url}" /> <property name="username" value="${username}" /> <property name="password" value="${password}" /> . . . </bean> Make sure you add the corresponding dependency (of your DataSource implementation) to your build file. The following is the one for DBCP2: <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-dbcp2</artifactId> <version>2.1.1</version> </dependency> Spring provides a simple implementation of DataSource called DriverManagerDataSource, which is only for testing and development purposes, not for production use. Note that it does not provide connection pooling. Here is how you configure it inside your application: <bean id="taskifyDS" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="${driverClassName}" /> <property name="url" value="${url}" /> <property name="username" value="${username}" /> <property name="password" value="${password}" /> </bean> It can also be configured in a pure JavaConfig model, as shown in the following code: @Bean DataSource getDatasource() { DriverManagerDataSource dataSource = new DriverManagerDataSource(pgDsProps.getProperty("url")); dataSource.setDriverClassName( pgDsProps.getProperty("driverClassName")); dataSource.setUsername(pgDsProps.getProperty("username")); dataSource.setPassword(pgDsProps.getProperty("password")); return dataSource; } Never use DriverManagerDataSource on production environments. Use third-party DataSources such as DBCP, C3P0, and BoneCP for standalone applications, and JNDI DataSource, provided by the container, for J2EE containers instead. Using embedded databases For prototyping and test environments, it would be a good idea to use Java-based embedded databases for quickly ramping up the project. Spring natively supports HSQL, H2, and Derby database engines for this purpose. 
Here is a sample DataSource configuration for an embedded HSQL database: @Bean DataSource getHsqlDatasource() { return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.HSQL) .addScript("db-scripts/hsql/db-schema.sql") .addScript("db-scripts/hsql/data.sql") .addScript("db-scripts/hsql/storedprocs.sql") .addScript("db-scripts/hsql/functions.sql") .setSeparator("/").build(); } The XML version of the same would look as shown in the following code: <jdbc:embedded-database id="dataSource" type="HSQL"> <jdbc:script location="classpath:db-scripts/hsql/db-schema.sql" /> . . . </jdbc:embedded-database> Handling exceptions in the Spring data layer With traditional JDBC based applications, exception handling is based on java.sql.SQLException, which is a checked exception. It forces the developer to write catch and finally blocks carefully for proper handling and to avoid resource leakages. Spring, with its smart exception hierarchy based on runtime exception, saves the developer from this nightmare. By having DataAccessException as the root, Spring bundles a big set of meaningful exceptions that translate the traditional JDBC exceptions. Besides JDBC, Spring covers the Hibernate, JPA, and JDO exceptions in a consistent manner. Spring uses SQLErrorCodeExceptionTranslator, which inherits SQLExceptionTranslator in order to translate SQLExceptions to DataAccessExceptions. We can extend this class for customizing the default translations. We can replace the default translator with our custom implementation by injecting it into the persistence resources (such as JdbcTemplate, which we will cover soon). DAO support and @Repository annotation The standard way of accessing data is via specialized DAOs that perform persistence functions under the data access layer. Spring follows the same pattern by providing DAO components and allowing developers to mark their data access components as DAOs using an annotation called @Repository. This approach ensures consistency over various data access technologies, such as JDBC, Hibernate, JPA, and JDO, as well as project-specific repositories. Spring applies SQLExceptionTranslator across all these methods consistently. Spring recommends your data access components to be annotated with the stereotype @Repository. The term "repository" was originally defined in Domain-Driven Design, Eric Evans, Addison-Wesley, as "a mechanism for encapsulating storage, retrieval, and search behavior which emulates a collection of objects." This annotation makes the class eligible for DataAccessException translation by Spring Framework. Spring Data, another standard data access mechanism provided by Spring, revolves around @Repository components. Summary We have so far explored Spring Framework's comprehensive coverage of all the technical aspects around data access and transaction. Spring provides multiple convenient data access methods, which takes away much of the hard work involved in building the data layer from the developer and also standardizes business components. Correct usage of Spring data access components will ensure that our data layer is clean and highly maintainable. Resources for Article: Further resources on this subject: So, what is Spring for Android?[article] Getting Started with Spring Security[article] Creating a Spring Application[article]

Advanced Fetching

Packt
21 Jan 2016
6 min read
In this article by Ramin Rad, author of the book Mastering Hibernate, we have discussed various ways of fetching the data from the permanent store. We will focus a little more on annotations related to data fetch. (For more resources related to this topic, see here.)

Fetching strategy

In the Java Persistence API, JPA, you can provide a hint to fetch the data lazily or eagerly using the FetchType. However, some implementations may ignore the lazy strategy and just fetch everything eagerly. Hibernate's default strategy is FetchType.LAZY to reduce the memory footprint of your application. Hibernate offers additional fetch modes in addition to the commonly used JPA fetch types. Here, we will discuss how they are related and provide an explanation, so you understand when to use which.

JOIN fetch mode

The JOIN fetch type forces Hibernate to create a SQL join statement to populate both the entities and the related entities using just one SQL statement. However, the JOIN fetch mode also implies that the fetch type is EAGER, so there is no need to specify the fetch type. To understand this better, consider the following classes:

@Entity
public class Course {
  @Id @GeneratedValue
  private long id;
  private String title;
  @OneToMany(cascade=CascadeType.ALL, mappedBy="course")
  @Fetch(FetchMode.JOIN)
  private Set<Student> students = new HashSet<Student>();
  // getters and setters
}

@Entity
public class Student {
  @Id @GeneratedValue
  private long id;
  private String name;
  private char gender;
  @ManyToOne
  private Course course;
  // getters and setters
}

In this case, we are instructing Hibernate to use JOIN to fetch course and student in one SQL statement, and this is the SQL that is composed by Hibernate:

select
  course0_.id as id1_0_0_, course0_.title as title2_0_0_,
  students1_.course_id as course_i4_0_1_, students1_.id as id1_1_1_,
  students1_.gender as gender2_1_2_, students1_.name as name3_1_2_
from Course course0_
left outer join Student students1_ on course0_.id=students1_.course_id
where course0_.id=?

As you can see, Hibernate is using a left join of all courses and any student that may have signed up for those courses. Another important thing to note is that if you use HQL, Hibernate will ignore the JOIN fetch mode and you'll have to specify the join in the HQL. (We will discuss HQL in the next section.) In other words, if you fetch a course entity using a statement such as this:

List<Course> courses = session
    .createQuery("from Course c where c.id = :courseId")
    .setLong("courseId", chemistryId)
    .list();

Then, Hibernate will use SELECT mode; but if you don't use HQL, as shown in the next example, Hibernate will pay attention to the fetch mode instructions provided by the annotation:

Course course = (Course) session.get(Course.class, chemistryId);

SELECT fetch mode

In SELECT mode, Hibernate uses an additional SELECT statement to fetch the related entities. This mode doesn't affect the behavior of the fetch type (LAZY, EAGER), so they will work as expected. To demonstrate this, consider the same example used in the last section and let's examine the output:

select id, title from Course where id=?
select course_id, id, gender, name from Student where course_id=?

Note that Hibernate first fetches and populates the Course entity and then uses the course ID to fetch the related students. Also, if your fetch type is set to LAZY and you never reference the related entities, the second SELECT is never executed.
SUBSELECT fetch mode The SUBSELECT fetch mode is used to minimize the number of SELECT statements executed to fetch the related entities. If you first fetch the owner entities and then try to access the associated owned entities, without SUBSELECT, Hibernate will issue an additional SELECT statement for every one of the owner entities. Using SUBSELECT, you instruct Hibernate to use a SQL sub-select to fetch all the owners for the list of owned entities already fetched. To understand this better, let's explore the following entity classes. @Entity public class Owner { @Id @GeneratedValue private long id; private String name; @OneToMany(cascade=CascadeType.ALL, mappedBy="owner") @Fetch(FetchMode.SUBSELECT) private Set<Car> cars = new HashSet<Car>(); // getters and setters } @Entity public class Car { @Id @GeneratedValue private long id; private String model; @ManyToOne private Owner owner; // getters and setters } If you try to fetch from the Owner table, Hibernate will only issue two select statements; one to fetch the owners and another to fetch the cars for those owners, by using a sub-select, as follows: select id, name from Owner select owner_id, id, model from Car where owner_id in (select id from Owner) Without the SUBSELECT fetch mode, instead of the second select statement as shown in the preceding section, Hibernate will execute a select statement for every entity returned by the first statement. This is known as the n+1 problem, where one SELECT statement is executed, then, for each returned entity another SELECT statement is executed to fetch the associated entities. Finally, SUBSELECT fetch mode is not supported in the ToOne associations, such as OneToOne or ManyToOne because it was designed for relationships where the ownership of the entities is clear. Batch fetching Another strategy offered by Hibernate is batch fetching. The idea is very similar to SUBSELECT, except that instead of using SUBSELECT, the entity IDs are explicitly listed in the SQL and the list size is determined by the @BatchSize annotation. This may perform slightly better for smaller batches. (Note that all the commercial database engines also perform query optimization.) To demonstrate this, let's consider the following entity classes: @Entity public class Owner { @Id @GeneratedValue private long id; private String name; @OneToMany(cascade=CascadeType.ALL, mappedBy="owner") @BatchSize(size=10) private Set<Car> cars = new HashSet<Car>(); // getters and setters } @Entity public class Car { @Id @GeneratedValue private long id; private String model; @ManyToOne private Owner owner; // getters and setters } Using @BatchSize, we are instructing Hibernate to fetch the related entities (cars) using a SQL statement that uses a where in clause; thus listing the relevant ID for the owner entity, as shown: select id, name from Owner select owner_id, id, model from Car where owner_id in (?, ?) In this case, the first select statement only returned two rows, but if it returns more than the batch size there would be multiple select statements to fetch the owned entities, each fetching 10 entities at a time. Summary In this article, we covered many ways of fetching datasets from the database. Resources for Article: Further resources on this subject: Hibernate Types[article] Java Hibernate Collections, Associations, and Advanced Concepts[article] Integrating Spring Framework with Hibernate ORM Framework: Part 1[article]

ECMAScript 6 Standard

Packt
15 Jan 2016
21 min read
In this article by Ved Antani, the author of the book Mastering JavaScript, we will learn about ECMAScript 6 standard. ECMAScript 6 (ES6) is the latest version of the ECMAScript standard. This standard is evolving and the last round of modifications was done in June, 2015. ES2015 is significant in its scope and the recommendations of ES2015 are being implemented in most JavaScript engines. This is great news. ES6 introduces a huge number of features that add syntactic forms and helpers that enrich the language significantly. The pace at which ECMAScript standards keep evolving makes it a bit difficult for browsers and JavaScript engines to support new features. It is also a practical reality that most programmers have to write code that can be supported by older browsers. The notorious Internet Explorer 6 was once the most widely used browser in the world. To make sure that your code is compatible with the most number of browsers is a daunting task. So, while you want to jump to the next set of awesome ES6 features, you will have to consider the fact that several of the ES6 features may not be supported by the most popular of browsers or JavaScript frameworks. This may look like a dire scenario, but things are not that dark. Node.js uses the latest version of the V8 engine that supports majority of ES6 features. Facebook's React also supports them. Mozilla Firefox and Google Chrome are two of the most used browsers today and they support a majority of ES6 features. To avoid such pitfalls and unpredictability, certain solutions have been proposed. The most useful among these are polyfills/shims and transpilers. (For more resources related to this topic, see here.) ES6 syntax changes ES6 brings in significant syntactic changes to JavaScript. These changes need careful study and some getting used to. In this section, we will study some of the most important syntax changes and see how you can use Babel to start using these newer constructs in your code right away. Block scoping We discussed earlier that the variables in JavaScript are function-scoped. Variables created in a nested scope are available to the entire function. Several programming languages provide you with a default block scope where any variable declared within a block of code (usually delimited by {}) is scoped (available) only within this block. To achieve a similar block scope in JavaScript, a prevalent method is to use immediately-invoked function expressions (IIFE). Consider the following example: var a = 1; (function blockscope(){     var a = 2;     console.log(a);   // 2 })(); console.log(a);       // 1 Using the IIFE, we are creating a block scope for the a variable. When a variable is declared in the IIFE, its scope is restricted within the function. This is the traditional way of simulating the block scope. ES6 supports block scoping without using IIFEs. In ES6, you can enclose any statement(s) in a block defined by {}. Instead of using var, you can declare a variable using let to define the block scope. The preceding example can be rewritten using ES6 block scopes as follows: "use strict"; var a = 1; {   let a = 2;   console.log( a ); // 2 } console.log( a ); // 1 Using standalone brackets {} may seem unusual in JavaScript, but this convention is fairly common to create a block scope in many languages. The block scope kicks in other constructs such as if { } or for (){ } as well. When you use a block scope in this way, it is generally preferred to put the variable declaration on top of the block. 
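One practical payoff of block scoping, sketched below (this snippet is not from the original text; it adds a closure angle to the for loop use of let that is revisited later in this section), is that each loop iteration can get its own binding of the loop variable:

// With var, all three callbacks close over the same function-scoped i
var fns = [];
for (var i = 0; i < 3; i++) {
  fns.push(function() { return i; });
}
console.log(fns.map(function(f) { return f(); })); // [3, 3, 3]

// With let, each iteration gets a fresh block-scoped binding of i
var fns2 = [];
for (let i = 0; i < 3; i++) {
  fns2.push(function() { return i; });
}
console.log(fns2.map(function(f) { return f(); })); // [0, 1, 2]

The var version prints [3, 3, 3] because every callback reads the single i left over after the loop finishes, while the let version captures a distinct i per iteration.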
One difference between variables declared using var and let is that variables declared with var are attached to the entire function scope, while variables declared using let are attached to the block scope and they are not initialized until they appear in the block. Hence, you cannot access a variable declared with let earlier than its declaration, whereas with variables declared using var, the ordering doesn't matter: function fooey() {   console.log(foo); // ReferenceError   let foo = 5000; } One specific use of let is in for loops. When we use a variable declared using var in a for loop, it is created in the global or parent scope. We can create a block-scoped variable in the for loop scope by declaring a variable using let. Consider the following example: for (let i = 0; i<5; i++) {   console.log(i); } console.log(i); // i is not defined As i is created using let, it is scoped in the for loop. You can see that the variable is not available outside the scope. One more use of block scopes in ES6 is the ability to create constants. Using the const keyword, you can create constants in the block scope. Once the value is set, you cannot change the value of such a constant: if(true){   const a=1;   console.log(a);   a=100;  ///"a" is read-only, you will get a TypeError } A constant has to be initialized while being declared. The same block scope rules apply to functions also. When a function is declared inside a block, it is available only within that scope. Default parameters Defaulting is very common. You always set some default value to parameters passed to a function or variables that you initialize. You may have seen code similar to the following: function sum(a,b){   a = a || 0;   b = b || 0;   return (a+b); } console.log(sum(9,9)); //18 console.log(sum(9));   //9 Here, we are using || (OR operator) to default variables a and b to 0 if no value was supplied when this function was invoked. With ES6, you have a standard way of defaulting function arguments. The preceding example can be rewritten as follows: function sum(a=0, b=0){   return (a+b); } console.log(sum(9,9)); //18 console.log(sum(9));   //9 You can pass any valid expression or function call as part of the default parameter list. Spread and rest ES6 has a new operator, …. Based on how it is used, it is called either spread or rest. Let's look at a trivial example: function print(a, b){   console.log(a,b); } print(...[1,2]);  //1,2 What's happening here is that when you add … before an array (or an iterable) it spreads the element of the array in individual variables in the function parameters. The a and b function parameters were assigned two values from the array when it was spread out. Extra parameters are ignored while spreading an array: print(...[1,2,3 ]);  //1,2 This would still print 1 and 2 because there are only two functional parameters available. Spreads can be used in other places also, such as array assignments: var a = [1,2]; var b = [ 0, ...a, 3 ]; console.log( b ); //[0,1,2,3] There is another use of the … operator that is the very opposite of the one that we just saw. Instead of spreading the values, the same operator can gather them into one: function print (a,...b){   console.log(a,b); } console.log(print(1,2,3,4,5,6,7));  //1 [2,3,4,5,6,7] In this case, the variable b takes the rest of the values. The a variable took the first value as 1 and b took the rest of the values as an array. Destructuring If you have worked on a functional language such as Erlang, you will relate to the concept of pattern matching. 
Destructuring in JavaScript is something very similar. Destructuring allows you to bind values to variables using pattern matching. Consider the following example: var [start, end] = [0,5]; for (let i=start; i<end; i++){   console.log(i); } //prints - 0,1,2,3,4 We are assigning two variables with the help of array destructuring: var [start, end] = [0,5]; As shown in the preceding example, we want the pattern to match when the first value is assigned to the first variable (start) and the second value is assigned to the second variable (end). Consider the following snippet to see how the destructuring of array elements works: function fn() {   return [1,2,3]; } var [a,b,c]=fn(); console.log(a,b,c); //1 2 3 //We can skip one of them var [d,,f]=fn(); console.log(d,f);   //1 3 //Rest of the values are not used var [e,] = fn(); console.log(e);     //1 Let's discuss how objects' destructuring works. Let's say that you have a function f that returns an object as follows: function f() {   return {     a: 'a',     b: 'b',     c: 'c'   }; } When we destructure the object being returned by this function, we can use the similar syntax as we saw earlier; the difference is that we use {} instead of []: var { a: a, b: b, c: c } = f(); console.log(a,b,c); //a b c Similar to arrays, we use pattern matching to assign variables to their corresponding values returned by the function. There is an even shorter way of writing this if you are using the same variable as the one being matched. The following example would do just fine: var { a,b,c } = f(); However, you would mostly be using a different variable name from the one being returned by the function. It is important to remember that the syntax is source: destination and not the usual destination: source. Carefully observe the following example: //this is target: source - which is incorrect var { x: a, x: b, x: c } = f(); console.log(x,y,z); //x is undefined, y is undefined z is undefined //this is source: target - correct var { a: x, b: y, c: z } = f(); console.log(x,y,z); // a b c This is the opposite of the target = source way of assigning values and hence will take some time in getting used to. Object literals Object literals are everywhere in JavaScript. You would think that there is no scope of improvement there. However, ES6 wants to improve this too. ES6 introduces several shortcuts to create a concise syntax around object literals: var firstname = "Albert", lastname = "Einstein",   person = {     firstname: firstname,     lastname: lastname   }; If you intend to use the same property name as the variable that you are assigning, you can use the concise property notation of ES6: var firstname = "Albert", lastname = "Einstein",   person = {     firstname,     lastname   }; Similarly, you are assigning functions to properties as follows: var person = {   getName: function(){     // ..   },   getAge: function(){     //..   } } Instead of the preceding lines, you can say the following: var person = {   getName(){     // ..   },   getAge(){     //..   } } Template literals I am sure you have done things like the following: function SuperLogger(level, clazz, msg){   console.log(level+": Exception happened in class:"+clazz+" -     Exception :"+ msg); } This is a very common way of replacing variable values to form a string literal. ES6 provides you with a new type of string literal using the backtick (`) delimiter. You can use string interpolation to put placeholders in a template string literal. The placeholders will be parsed and evaluated. 
The preceding example can be rewritten as follows:

function SuperLogger(level, clazz, msg){
  console.log(`${level} : Exception happened in class: ${clazz} - Exception : ${msg}`);
}

We are using backticks (``) around a string literal. Within this literal, any expression of the ${..} form is parsed immediately. This parsing is called interpolation. While parsing, the variable's value replaces the placeholder within ${}. The resulting string is just a normal string with the placeholders replaced with actual variable values. With string interpolation, you can split a string into multiple lines also, as shown in the following code (very similar to Python):

var quote = `Good night, good night!
Parting is such sweet sorrow,
that I shall say good night
till it be morrow.`;
console.log( quote );

You can use function calls or valid JavaScript expressions as part of the string interpolation:

function sum(a,b){
  console.log(`The sum seems to be ${a + b}`);
}
sum(1,2); //The sum seems to be 3

The final variation of the template strings is called a tagged template string. The idea is to modify the template string using a function. Consider the following example:

function emmy(key, ...values){
  console.log(key);
  console.log(values);
}
let category="Best Movie";
let movie="Adventures in ES6";
emmy`And the award for ${category} goes to ${movie}`;
//["And the award for "," goes to ",""]
//["Best Movie","Adventures in ES6"]

The strangest part is when we call the emmy function with the template literal. It's not a traditional function call syntax. We are not writing emmy(); we are just tagging the literal with the function. When this function is called, the first argument is an array of all the plain strings (the strings between interpolated expressions). The second argument is the array where all the interpolated expressions are evaluated and stored. Now what this means is that the tag function can actually change the resulting string:

function priceFilter(s, ...v){
  //Bump up discount
  return s[0]+ (v[0] + 5);
}
let default_discount = 20;
let greeting = priceFilter `Your purchase has a discount of ${default_discount} percent`;
console.log(greeting);  //Your purchase has a discount of 25

As you can see, we modified the value of the discount in the tag function and returned the modified values.

Maps and Sets

ES6 introduces four new data structures: Map, WeakMap, Set, and WeakSet. We discussed earlier that objects are the usual way of creating key-value pairs in JavaScript. The disadvantage of objects is that you cannot use non-string values as keys. The following snippets demonstrate how Maps are created in ES6:

let m = new Map();
let s = { 'seq' : 101 };

m.set('1','Albert');
m.set('MAX', 99);
m.set(s,'Einstein');

console.log(m.has('1')); //true
console.log(m.get(s));   //Einstein
console.log(m.size);     //3
m.delete(s);
m.clear();

You can initialize the map while declaring it:

let m = new Map([
  [ 1, 'Albert' ],
  [ 2, 'Douglas' ],
  [ 3, 'Clive' ],
]);

If you want to iterate over the entries in the Map, you can use the entries() function that will return you an iterator.
Maps and Sets

ES6 introduces four new data structures: Map, WeakMap, Set, and WeakSet. We discussed earlier that objects are the usual way of creating key-value pairs in JavaScript. The disadvantage of objects is that you cannot use non-string values as keys. The following snippets demonstrate how Maps are created in ES6:

let m = new Map();
let s = { 'seq' : 101 };

m.set('1','Albert');
m.set('MAX', 99);
m.set(s,'Einstein');

console.log(m.has('1')); //true
console.log(m.get(s));   //Einstein
console.log(m.size);     //3
m.delete(s);
m.clear();

You can initialize the map while declaring it:

let m = new Map([
  [ 1, 'Albert' ],
  [ 2, 'Douglas' ],
  [ 3, 'Clive' ],
]);

If you want to iterate over the entries in the Map, you can use the entries() function, which will return you an iterator. You can iterate through all the keys using the keys() function, and you can iterate through the values of the Map using the values() function:

let m2 = new Map([
    [ 1, 'Albert' ],
    [ 2, 'Douglas' ],
    [ 3, 'Clive' ],
]);
for (let a of m2.entries()){
  console.log(a);
}
//[1,"Albert"] [2,"Douglas"] [3,"Clive"]
for (let a of m2.keys()){
  console.log(a);
}
//1 2 3
for (let a of m2.values()){
  console.log(a);
}
//Albert Douglas Clive

A variation of JavaScript Maps is the WeakMap, which does not prevent its keys from being garbage-collected. Keys for a WeakMap must be objects, and the values can be arbitrary values. While a WeakMap behaves in the same way as a normal Map, you cannot iterate through it and you can't clear it. There are reasons behind these restrictions: as the state of the Map is not guaranteed to remain static (keys may get garbage-collected), correct iteration cannot be ensured. There are not many cases where you may want to use a WeakMap; most uses can be covered with normal Maps.

While Maps allow you to store arbitrary values, Sets are a collection of unique values. Sets have methods similar to those of Maps; however, set() is replaced with add(), and the get() method does not exist. The reason that the get() method is not there is that a Set holds unique values, so you are interested only in checking whether the Set contains a value or not. Consider the following example:

let x = {'first': 'Albert'};
let s = new Set([1,2,'Sunday',x]);
console.log(s.has(x));  //true
s.add(300);
console.log(s);  //[1,2,"Sunday",{"first":"Albert"},300]

for (let a of s.entries()){
  console.log(a);
}
//[1,1]
//[2,2]
//["Sunday","Sunday"]
//[{"first":"Albert"},{"first":"Albert"}]
//[300,300]
for (let a of s.keys()){
  console.log(a);
}
//1
//2
//Sunday
//{"first":"Albert"}
//300
for (let a of s.values()){
  console.log(a);
}
//1
//2
//Sunday
//{"first":"Albert"}
//300

The keys() and values() iterators both return a list of the unique values in the Set. The entries() iterator yields a list of entry arrays, where both items of the array are the unique Set values. The default iterator for a Set is its values() iterator.

Symbols

ES6 introduces a new data type called Symbol. A Symbol is guaranteed to be unique and immutable. Symbols are usually used as identifiers for object properties; they can be considered as uniquely generated IDs. You can create Symbols with the Symbol() factory method; remember that this is not a constructor, and hence you should not use the new operator:

let s = Symbol();
console.log(typeof s); //symbol

Unlike strings, Symbols are guaranteed to be unique and hence help in preventing name clashes. Because each Symbol is unique, Symbols give us an extensibility mechanism that works for everyone: you can add a Symbol-keyed property to an object without risking a collision with a property name chosen elsewhere. ES6 comes with a number of predefined built-in Symbols that expose various meta behaviors on JavaScript object values.

Iterators

Iterators have been around in other programming languages for quite some time. They give convenience methods to work with collections of data. ES6 introduces iterators for the same use case. ES6 iterators are objects with a specific interface. Iterators have a next() method that returns an object. The returned object has two properties: value (the next value) and done (indicates whether the last result has been reached). ES6 also defines an Iterable interface, which describes objects that must be able to produce iterators.
Let's look at an array, which is an iterable, and the iterator that it can produce to consume its values:

var a = [1,2];
var i = a[Symbol.iterator]();
console.log(i.next());      // { value: 1, done: false }
console.log(i.next());      // { value: 2, done: false }
console.log(i.next());      // { value: undefined, done: true }

As you can see, we are retrieving the array's iterator by calling the function stored under its Symbol.iterator key and calling the next() method on it to get each successive element. Both value and done are returned by the next() method call. When you call next() past the last element in the array, you get an undefined value and done: true, indicating that you have iterated over the entire array.

For..of loops

ES6 adds a new iteration mechanism in the form of the for..of loop, which loops over the set of values produced by an iterator. The value that we iterate over with for..of must be an iterable. Let's compare for..of to for..in:

var list = ['Sunday','Monday','Tuesday'];
for (let i in list){
  console.log(i);  //0 1 2
}
for (let i of list){
  console.log(i);  //Sunday Monday Tuesday
}

As you can see, using the for..in loop, you can iterate over the indexes of the list array, while the for..of loop lets you iterate over the values stored in the list array.

Arrow functions

One of the most interesting new parts of ECMAScript 6 is arrow functions. Arrow functions are, as the name suggests, functions defined with a new syntax that uses an arrow (=>) as part of the syntax. Let's first see how arrow functions look:

//Traditional Function
function multiply(a,b) {
  return a*b;
}
//Arrow
var multiply = (a,b) => a*b;
console.log(multiply(1,2)); //2

The arrow function definition consists of a parameter list (of zero or more parameters, surrounded by ( .. ) unless there is exactly one parameter), followed by the => marker, which is followed by a function body. The body of the function can be enclosed by { .. } if there's more than one expression in the body. If there's only one expression, and you omit the surrounding { .. }, there's an implied return in front of the expression. There are several variations of how you can write arrow functions. The following are the most commonly used:

// single argument, single statement
//arg => expression;
var f1 = x => console.log("Just X");
f1(); //Just X

// multiple arguments, single statement
//(arg1 [, arg2]) => expression;
var f2 = (x,y) => x*y;
console.log(f2(2,2)); //4

// single argument, multiple statements
// arg => {
//     statements;
// }
var f3 = x => {
  if(x>5){
    console.log(x);
  }
  else {
    console.log(x+5);
  }
}
f3(6); //6

// multiple arguments, multiple statements
// ([arg] [, arg]) => {
//   statements
// }
var f4 = (x,y) => {
  if(x!=0 && y!=0){
    return x*y;
  }
}
console.log(f4(2,2));//4

// with no arguments, single statement
//() => expression;
var f5 = () => 2*2;
console.log(f5()); //4

//IIFE
console.log(( x => x * 3 )( 3 )); // 9

It is important to remember that all the characteristics of normal function parameters are available to arrow functions, including default values, destructuring, and rest parameters. Arrow functions offer a convenient and short syntax, which gives your code a very functional programming flavor. Arrow functions are popular because they offer an attractive promise of writing concise functions by dropping function, return, and { .. } from the code. However, arrow functions are also designed to fundamentally solve a particular and common pain point with this-aware coding.
In normal ES5 functions, every new function defines its own value of this (a new object in the case of a constructor, undefined in strict-mode function calls, the context object if the function is called as an object method, and so on). JavaScript functions always have their own this, and this prevents you from accessing, for example, the this of a surrounding method from inside a callback. To understand this problem, consider the following example:

function CustomStr(str){
  this.str = str;
}
CustomStr.prototype.add = function(s){   // --> 1
  'use strict';
  return s.map(function (a){             // --> 2
    return this.str + a;                 // --> 3
  });
};

var customStr = new CustomStr("Hello");
console.log(customStr.add(["World"]));
//Cannot read property 'str' of undefined

On the line marked with 3, we are trying to get this.str, but the anonymous function also has its own this, which shadows the this of the method from line 1. To fix this in ES5, we can assign this to a variable and use the variable instead:

function CustomStr(str){
  this.str = str;
}
CustomStr.prototype.add = function(s){
  'use strict';
  var that = this;                       // --> 1
  return s.map(function (a){             // --> 2
    return that.str + a;                 // --> 3
  });
};

var customStr = new CustomStr("Hello");
console.log(customStr.add(["World"]));
//["HelloWorld"]

On the line marked with 1, we are assigning this to a variable, that, and in the anonymous function, we are using the that variable, which holds a reference to this from the correct context.

ES6 arrow functions have lexical this, meaning that the arrow functions capture the this value of the enclosing context. We can convert the preceding function to an equivalent arrow function as follows:

function CustomStr(str){
  this.str = str;
}
CustomStr.prototype.add = function(s){
  return s.map((a)=> {
    return this.str + a;
  });
};
var customStr = new CustomStr("Hello");
console.log(customStr.add(["World"]));
//["HelloWorld"]

Summary

In this article, we discussed a few important features being added to the language in ES6. It's an exciting collection of new language features and paradigms and, using polyfills and transpilers, you can start with them right away. JavaScript is an ever-growing language, and it is important to understand what the future holds. ES6 features make JavaScript an even more interesting and mature language.

Resources for Article:

Further resources on this subject:

Using JavaScript with HTML [article]
Getting Started with Tableau Public [article]
Façade Pattern – Being Adaptive with Façade [article]
Introducing Sails.js

Packt
15 Jan 2016
5 min read
In this article by Shahid Shaikh, author of the book Sails.js Essentials, you will learn a few basics of Sails.js. Sails.js is a modern, production-ready Node.js framework for developing web applications that follow the MVC pattern. If you are not looking to reinvent the wheel, as we do in the MongoDB, Express.js, AngularJS, and Node.js (MEAN) stack, and would rather focus on business logic, then Sails.js is the answer. Sails.js is a powerful, enterprise-ready framework: you can write your business logic and deploy the application with the assurance that it won't fail due to other factors. Sails.js uses Express.js as its web framework and Socket.io to handle WebSocket messages. Both are integrated and coupled into Sails.js; therefore, you don't need to install and configure them separately. Sails.js also supports all the popular databases, such as MySQL, MongoDB, PostgreSQL, and so on. It also comes with an autogenerated API feature that lets you create APIs on the go.

(For more resources related to this topic, see here.)

Brief about MVC

We know that Model-View-Controller (MVC) is a software architecture pattern coined by Smalltalk engineers. MVC separates the application into three internal components, and data is passed between them. Each component is responsible for its own task and passes its result to the next component. This separation provides a great opportunity for code reusability and loose coupling.

MVC components

The following are the components of the MVC architecture:

Model

The main component of MVC is the model. The model represents knowledge. It could be a single object or a nested structure of objects. The model directly manages the data (and stores it as well), logic, and rules of the application.

View

The view is the visual representation of the model. The view takes data from the model and presents it (in a browser or console, and so on). The view gets updated as soon as the model is changed. An ideal MVC application must have a system to notify other components about changes. In a web application, the view is the HTML, for example, the Embedded JavaScript (EJS) pages that we present in a web browser.

Controller

As the name implies, the task of a controller is to accept input and convert it into a proper command for the model or view. The controller can send commands to the model to make changes or update its state, and it can also send commands to the view to update the presented information. For example, consider Google Docs as an MVC application: the view is the screen where you type. At defined intervals, it automatically saves the information on the Google server; the controller notifies the model (the Google backend server) to persist the changes.

Installing Sails.js

Make sure that you have the latest version of Node.js and npm installed in your system before installing Sails.js. You can install it using the following command:

npm install -g sails

You can then use Sails.js as a command-line tool.

Creating a new project

You can create a new project using the Sails.js command-line tool. The command is as follows:

sails create appName

Once the project is created, you need to install the dependencies that it needs. You can do so by running the following command:

npm install

Adding database support

Sails.js provides object-relational mapping (ORM) adapters for widely-used databases such as MySQL, MongoDB, and PostgreSQL. In order to add these to your project, you need to install the respective packages, such as sails-mysql for MySQL, sails-mongo for MongoDB, and so on.
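For example, assuming we want MySQL support, the sails-mysql adapter can be pulled in from npm inside the project directory (the --save flag records it as a dependency in package.json):

npm install sails-mysql --save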
Once the adapter is installed, you need to change the connections.js and models.js files located in the /config folder. Add the connection parameters, such as host, user, password, and database name, in the connections.js file. For example, for MySQL, add the following:

module.exports.connections = {
  mysqlAdapter: {
    adapter: 'sails-mysql',
    host: 'localhost',
    user: 'root',
    password: '',
    database: 'sampleDB'
  }
};

In the models.js file, point the connection parameter at this adapter. Here is a snippet for that:

module.exports.models = {
  connection: 'mysqlAdapter'
};

Now, Sails.js will communicate with MySQL using this connection.

Adding a grunt task

Sails.js uses Grunt as its default task runner and provides an effective way to add or remove tasks. If you take a look at the tasks folder in the Sails.js project directory, you will see the config and register folders, which hold the tasks and register them with the Grunt runner. In order to add a new task, you can create a new file in the /config folder and add the Grunt task using the following snippet as a starting point:

module.exports = function(grunt) {
  // Your grunt task code
};

Once done, you can register it with the default task runner, or create a new file in the /register folder and add this task using the following code:

module.exports = function (grunt) {
  grunt.registerTask('task-name', [ 'your custom task']);
};

Run this task using grunt <task-name>.

Summary

In this article, you learned that you can develop a rich web application very effectively with Sails.js, as you don't have to do a lot of extra work for configuration and setup. Sails.js also provides an autogenerated REST API and built-in WebSocket integration on every route, which will help you develop real-time applications easily.

Resources for Article:

Further resources on this subject:

Using Socket.IO and Express together [article]
Parallel Programming Patterns [article]
Planning Your Site in Adobe Muse [article]
Exploring Themes

Packt
14 Jan 2016
10 min read
Drupal developers and interface engineers do not always create custom themes from scratch. Sometimes, we are asked to create starter themes that we begin any project from, or subthemes that extend the functionality of a base theme. Having the knowledge to handle each of these situations is important. In this article by Chaz Chumley, the author of Drupal 8 Theming with Twig, we will be discussing starter themes and how to work with the various libraries available to us.

(For more resources related to this topic, see here.)

Starter themes

Any time we begin developing in Drupal, it is preferable to have a collection of commonly used functions and libraries that we can reuse. Being able to have a consistent starting point when creating multiple themes means we don't have to rethink much from design to design. The concept of a starter theme makes this possible, and we will walk through the steps involved in creating one.

Before we begin, take a moment to use the drupal8.sql file that we already have with us to restore our current Drupal instance. This file will add the additional content and configuration required while creating a starter theme. Once the restore is complete, our home page should look like the following screenshot:

This is a pretty bland-looking home page with no real styling or layout. So, one thing to keep in mind when first creating a starter theme is how we want our content to look. Do we want our starter theme to include another CSS framework, or do we want to create our own from scratch? Since this is our first starter theme, we should not be worried about reinventing the wheel but should instead leverage an existing CSS framework, such as Twitter Bootstrap.

Creating a Bootstrap starter

Having an example or mockup that we can refer to while creating a starter theme is always helpful. So, to get the most out of our Twitter Bootstrap starter, let's go over to http://getbootstrap.com/examples/jumbotron/, where we will see an example of a home page layout:

If we take a look at the mockup, we can see that the layout consists of two rows of content, with the first row containing a large callout known as a Jumbotron. The second row contains three featured blocks of content. The remaining typography and components take advantage of the Twitter Bootstrap CSS framework to display content.

One advantage of integrating the Twitter Bootstrap framework into our starter theme is that our markup will be responsive in nature. This means that as the browser window is resized, the content will scale down accordingly. At smaller resolutions, the three columns will stack on top of one another, enabling the user to view the content more easily on smaller devices.

We will be recreating this home page for our starter theme, so let's take a moment to familiarize ourselves with some basic Bootstrap layout terminology before creating our theme.

Understanding grids and columns

Bootstrap uses a 12-column grid system to structure content using rows and columns. The page layout begins with a parent container that wraps all child elements and allows you to maintain a specific page width. Each row and column then has CSS classes identifying how the content should appear.
So, for example, if we want to have a row with two equal-width columns, we would build our page using the following markup:

<div class="container">
    <div class="row">
        <div class="col-md-6"></div>
        <div class="col-md-6"></div>
    </div>
</div>

The two columns within a row must combine to a value of 12, since Bootstrap uses a 12-column grid system. Using this simple math, we can have variously sized columns and multiple columns, as long as their total is 12. We should also take notice of the following column classes, as we have great flexibility in targeting different breakpoints:

Extra small (col-xs-x)
Small (col-sm-x)
Medium (col-md-x)
Large (col-lg-x)

Each breakpoint references the various devices, from smartphones all the way up to television-size monitors. We can use multiple classes, like class="col-sm-6 col-md-4", to manipulate our layout, which gives us a two-column row on small devices and a three-column row on medium devices when certain breakpoints are reached.

For a more detailed look at the rest of the Twitter Bootstrap documentation, you can go to http://getbootstrap.com/getting-started/ at any time. For now, it's time we began creating our starter theme.

Setting up a theme folder

The initial step in our process of creating a starter theme is fairly simple: we need to open up Finder or Windows Explorer, navigate to the themes folder, and create a folder for our theme. We will name our theme tweet, as shown in the following screenshot:

Adding a screenshot

Every theme deserves a screenshot, and in Drupal 8, all we need to do is have a file named screenshot.png, and the Appearance screen will use it to display an image above our theme.

Configuring our theme

Next, we will need to create our theme configuration file, which will allow our theme to be discoverable. We will only worry about general configuration information to start with and then add library and region information in the next couple of steps. Begin by creating a new file called tweet.info.yml in your themes/tweet folder, and add the following metadata to your file:

name: Tweet
type: theme
description: 'A Twitter Bootstrap starter theme'
core: 8.x
base theme: false

Notice that we are setting the base theme configuration to false. Setting this value to false lets Drupal know that our theme will not rely on any other theme files. This allows us to have full control of our theme's assets and Twig templates. We will save our changes at this point and clear the Drupal cache. Now we can take a look to see whether our theme is available to install.

Installing our theme

Navigate to /admin/appearance in your browser, and you should see your new theme located in the Uninstalled themes section. Go ahead and install the theme by clicking on the Install and set as default link. If we navigate to the home page, we should see an unstyled home page:

This clean slate is perfect while creating a starter theme, as it allows us to begin theming without worrying about overriding any existing markup that a base theme might include.

Working with libraries

While Drupal 8 ships with some improvements to its default CSS and JavaScript libraries, we will generally find ourselves wanting to add third-party libraries that can enhance the function and feel of our website. In our case, we have decided to add Twitter Bootstrap (http://getbootstrap.com), which provides us with a responsive CSS framework and JavaScript library that utilize a component-based approach to theming.
The process actually involves three steps. The first is downloading or installing the assets that make up the framework or library. The second is creating a *.libraries.yml file and adding library entries that point to our assets. Finally, we will need to add a libraries reference to our *.info.yml file.

Adding assets

We can easily add the Twitter Bootstrap framework assets by following these steps:

Navigate to http://getbootstrap.com/getting-started/#download
Click on the Download Bootstrap button
Extract the zip file
Copy the contents of the bootstrap folder to our themes/tweet folder

Once we are done, our themes/tweet folder content should look like the following screenshot:

Now that we have the Twitter Bootstrap assets added to our theme, we need to create a *.libraries.yml file that we can use to reference our assets.

Creating a library reference

Any time we want to add CSS or JS files to our theme, we will need to create or modify an existing *.libraries.yml file that allows us to organize our assets. Each library entry can include one or more pointers to the files and their locations within our theme structure. Remember that the filename of our *.libraries.yml file should follow the same naming convention as our theme. We can begin by following these steps:

Create a new file called tweet.libraries.yml.
Add a library entry called bootstrap.
Add a version that reflects the current version of Bootstrap that we are using.
Add the CSS entry for bootstrap.min.css and bootstrap-theme.min.css.
Add the JS entry for bootstrap.min.js.
Add a dependency on jQuery located in Drupal's core:

bootstrap:
  version: 3.3.6
  css:
    theme:
      css/bootstrap.min.css: {}
      css/bootstrap-theme.min.css: {}
  js:
    js/bootstrap.min.js: {}
  dependencies:
    - core/jquery

Save tweet.libraries.yml.

Note that the js and dependencies keys sit at the same level as css. In the preceding library entry, we have added both CSS and JS files as well as introduced a dependency. Dependencies allow a JS file that relies on a specific JS library to declare that library as a dependency, which ensures that the library is loaded before our JS file. In the case of Twitter Bootstrap, it relies on jQuery, and since Drupal 8 includes it as part of its core.libraries.yml file, we can reference it by pointing to that library entry.

Including our library

Just because we added a library to our theme doesn't mean it will automatically be added to our website. In order for us to add Bootstrap to our theme, we need to include it in our tweet.info.yml configuration file. We can add Bootstrap by following these steps:

Open tweet.info.yml
Add a libraries reference to bootstrap at the bottom of our configuration:

libraries:
  - tweet/bootstrap

Save tweet.info.yml.

Make sure to clear Drupal's cache to allow our changes to be added to the theme registry. Finally, navigate to our home page and refresh the browser so that we can preview our changes:

If we inspect the HTML using Chrome's developer tools, we should see that the Twitter Bootstrap library has been included along with the rest of our files. Both the CSS and JS files are loaded in the proper flow of our document.

Summary

Whether a starter theme or subthemes, they are all just different variations of the same techniques. The level of effort required to create each type of theme may vary, but as we saw, there was a lot of repetition. We began with a discussion around starter themes and learned what steps were involved in working with libraries.
Resources for Article: Further resources on this subject: Using JavaScript with HTML [article] Custom JavaScript and CSS and tokens [article] Concurrency Principles [article]
Setup Routine for an Enterprise Spring Application

Packt
14 Jan 2016
6 min read
In this article by Alex Bretet, author of the book Spring MVC Cookbook, you will learn to install Eclipse for Java EE developers and Java SE 8.

(For more resources related to this topic, see here.)

Introduction

The choice of the Eclipse IDE needs to be discussed, as there is some competition in this domain. Eclipse is popular in the Java community for being an active open source product; it is consequently accessible online to anyone with no restrictions. It also provides, among other uses, very good support for web implementations, particularly for MVC approaches.

Why use the Spring Framework? The Spring Framework and its community have also contributed to pulling the Java platform forward for more than a decade. Presenting the whole framework in detail would require us to write more than an article. However, the core functionality, based on the principles of Inversion of Control and Dependency Injection through performant access to the bean repository, allows massive reusability. Staying lightweight, the Spring Framework secures great scaling capabilities and could probably suit all modern architectures.

The following recipe is about downloading and installing the Eclipse IDE for JEE developers and downloading and installing JDK 8 Oracle Hotspot.

Getting ready

This first sequence could appear redundant or unnecessary with regard to your education or experience, but it has its benefits. For instance, you will be sure to stay away from unidentified bugs (integration or development). You will also be assured of experiencing the same interfaces as the presented screenshots and figures. Also, because third-party products are living projects, you will not have to face the surprise of encountering unexpected screens or windows.

How to do it...

You need to perform the following steps to install the Eclipse IDE:

Download a distribution of the Eclipse IDE for Java EE developers. We will be using an Eclipse Luna distribution in this article. We recommend you install this version, which can be found at https://www.eclipse.org/downloads/packages/eclipse-ide-java-ee-developers/lunasr1, so that you can follow along with our guidelines and screenshots completely. Download a Luna distribution for the OS and environment of your choice:

The product to be downloaded is not a binary installer but a ZIP archive. If you feel confident enough to use another (more recent) version of the Eclipse IDE for Java EE developers, all of them can be found at https://www.eclipse.org/downloads. For the upcoming installations on Windows, a few target locations are suggested at the root of the C: drive. To avoid permission-related issues, it would be better if your Windows user is configured to be a local administrator. If you can't be part of this group, feel free to target installation directories you have write access to.

Extract the downloaded archive into an eclipse directory:

    If you are on Windows, extract it into the C:\Users\{system.username}\eclipse directory
    If you are using Linux, extract it into the /home/usr/{system.username}/eclipse directory
    If you are using Mac OS X, extract it into the /Users/{system.username}/eclipse directory

Select and download a JDK 8. We suggest you download the Oracle Hotspot JDK. Hotspot is a performant JVM implementation that was originally built by Sun Microsystems. Now owned by Oracle, the Hotspot JRE and JDK are downloadable for free. Choose the product corresponding to your machine through the Oracle website's link, http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html.
To avoid a compatibility issue later on, do stay consistent with the architecture choice (32 or 64 bits) that you made earlier for the Eclipse archive. Install the JDK 8. On Windows, perform the following steps:

    Execute the downloaded file and wait until you reach the next installation step.
    On the installation-step window, pay attention to the destination directory and change it to C:\java\jdk1.8.X_XX (X_XX being the latest current version; we will be using jdk1.8.0_25 in this article).
    Also, it won't be necessary to install an external JRE, so uncheck the Public JRE feature.

On a Linux/Mac OS, perform the following steps:

    Download the tar.gz archive corresponding to your environment.
    Change the current directory to where you want to install Java. For easier instructions, let's agree on the /usr/java directory.
    Move the downloaded tar.gz archive to this current directory.
    Unpack the archive with the following command line, targeting the name of your archive: tar zxvf jdk-8u25-linux-i586.tar.gz (this example is for a binary archive corresponding to a Linux x86 machine). You must end up with the /usr/java/jdk1.8.0_25 directory structure that contains the subdirectories /bin, /db, /jre, /include, and so on.

How it works…

Eclipse for Java EE developers

We have installed the Eclipse IDE for Java EE developers. Compared to the Eclipse IDE for Java developers, it comes with some additional packages, such as Java EE Developer Tools, Data Tools Platform, and JavaScript Development Tools. This version is appreciated for its ability to manage development servers as part of the IDE itself, to customize Project Facets, and to support JPA. The Luna version is officially Java SE 8 compatible, which has been a decisive factor here.

Choosing a JVM

The choice of JVM implementation can be debated in terms of performance, memory management, garbage collection, and optimization capabilities. There are lots of different JVM implementations, among them a couple of open source solutions, such as OpenJDK and IcedTea (RedHat). The choice really depends on the application's requirements. We have chosen Oracle Hotspot from experience and from reference implementations deployed in production; it can be trusted for a wide range of generic purposes. Hotspot also behaves very well when running Java UI applications, and Eclipse is one of them.

Java SE 8

If you haven't already played with Scala or Clojure, it is time to take the functional programming train! With Java SE 8, lambda expressions dramatically reduce the amount of code, with improved readability and maintainability. Lambdas are not the only Java 8 feature we will implement but, being probably the most popular one, they must be highlighted, as they have lent massive credibility to this paradigm change. It is important nowadays to be familiar with these patterns.

Summary

In this article, you learned how to install Eclipse for Java EE developers and Java SE 8.

Resources for Article:

Further resources on this subject:

Support for Developers of Spring Web Flow 2 [article]
Design with Spring AOP [article]
Using Spring JMX within Java Applications [article]
Debugging

Packt
14 Jan 2016
11 min read
In this article by Brett Ohland and Jayant Varma, the authors of Xcode 7 Essentials (Second Edition), we learn about the debugging process. The term itself has a remarkable history, tied to the career of Grace Hopper. She not only became a Rear Admiral in the U.S. Navy but also contributed to the field of computer science in many important ways during her lifetime. She was one of the first programmers on an early computer called Mark I during World War II. She invented the first compiler and was the head of a group that created the FLOW-MATIC language, which would later be extended to create the COBOL programming language.

How does this relate to debugging? Well, in 1947, she was leading a team of engineers who were creating the successor of the Mark I computer (called Mark II) when the computer stopped working correctly. The culprit turned out to be a moth that had managed to fly into, and block the operation of, an important relay inside the computer; she remarked that they had successfully debugged the system, and she went on to popularize the term. The moth and the log book page that it is attached to are on display in the Smithsonian Museum of American History in Washington D.C.

While physical bugs in your systems are an astronomically rare occurrence, software bugs are extremely common. No developer sets out to write a piece of code that doesn't act as expected or crashes, but bugs are inevitable. Luckily, Xcode has an assortment of tools and utilities to help you become a great detective and exterminator.

In this article, we will cover the following topics:

Breakpoints
The LLDB console
Debugging the view hierarchy
Tooltips and a Quick Look

(For more resources related to this topic, see here.)

Breakpoints

The typical development cycle of writing code, compiling, and then running your app on a device or in the simulator doesn't give you much insight into the internal state of the program. Clicking the stop button will, as the name suggests, stop the program and remove it from memory. If your app crashes, or if you notice that a string is formatted incorrectly or the wrong photo is loaded into a view, you have no idea what code caused the issue. Surely, you could use print statements in your code to output values on the console, but as your application involves more and more moving parts, this becomes unmanageable.

A better option is to create breakpoints. A breakpoint is simply a point in your source code at which the execution of your program will pause and put your IDE into a debugging mode. While in this mode, you can inspect any objects that are in memory, see a trace of the program's execution flow, and even step the program into or over instructions in your source code.

Creating a breakpoint is simple. In the standard editor, there is a gutter shown to the immediate left of your code. Most often, there are line numbers in this area, and clicking on a line number will add a blue arrow to that line. That is a breakpoint. If you don't see the line numbers in your gutter, simply open Xcode's settings by going to Xcode | Settings (or pressing the Cmd + , keyboard shortcut) and toggling the line numbers option in the editor section.

Clicking on the breakpoint again will dim the arrow and disable the breakpoint. You can remove the breakpoint completely by clicking, dragging, and releasing the arrow indicator well outside of the gutter. A dust ball animation will let you know that it has been removed. You can also delete breakpoints by right-clicking on the arrow indicator and selecting Delete.
The Standard Editor showing a breakpoint on line 14

Listing breakpoints

You can see a list of all active or inactive breakpoints in your app by opening the breakpoint navigator in the sidebar to the left (Cmd + 7). The list will show the file and line number of each breakpoint. Selecting any of them will quickly take the source editor to that location. Using the + icon at the bottom of the sidebar will let you set many more types of advanced breakpoints, such as the Exception, Symbolic, OpenGL Errors, and Test Failure breakpoints. More information about these types can be found on Apple's Developer site at https://developer.apple.com.

The debug area

When Xcode reaches a breakpoint, its debugging mode will become active. The debug navigator will appear in the sidebar on the left, and if you've printed any information onto the console, the debug area will automatically appear below the editor area:

The buttons in the top bar of the debug area are as follows (from left to right):

Hide/Show debug area: Toggles the visibility of the debug area
Activate/Deactivate breakpoints: This icon activates or deactivates all breakpoints
Step Over: Execute the current line or function and pause at the next line
Step Into: This executes the current line and jumps to any functions that are called
Step Out: This exits the current function and places you at the line of code that called the current function
Debug view hierarchy: This shows you a stacked representation of the view hierarchy
Location: Since the simulator doesn't have GPS, you can set its location here (Apple HQ, London, and City Bike Ride are some of the options)
Stack Frame selector: This lets you choose a frame in which the current breakpoint is running

The main window of the debug area can be split into two panes: the variables view and the LLDB console.

The variables view

The variables view shows all the variables and constants that are in memory and within the current scope of your code. If the instance is a value type, it will show the value, and if it's an object type, you'll see a memory address, as shown in the following screenshot:

For collection types (arrays and dictionaries), you have the ability to drill down to the contents of the collection by toggling the arrow indicator on the left-hand side of each instance. In the bottom toolbar, there are three sections:

Scope selector: This lets you toggle between showing the current scope, showing the global scope, or setting it to auto to select for you
Information icons: The Quick Look icon will show quick look information in a popup, while the print information button will print the instance's description on the console
Filter area: You can filter the list of values using this standard filter input box

The console area

The console area is where the system places all system-generated messages as well as any messages that are printed using the print or NSLog statements. These messages will be displayed only while the application is running. While you are in debug mode, the console area becomes an interactive console for the LLDB debugger. This interactive mode lets you print the values of instances in memory, run debugger commands, inspect code, evaluate code, step through, and even skip code.
As Xcode has matured over the years, more and more of the advanced information available to you in the console has become accessible in the GUI. Two important and useful commands for use within the console are p and po:

po: Prints the description of the object
p: Prints the value of an object

Depending on the type of variable or constant, p and po may give different information. As an example, let's take a UITableViewCell that was created in our showcase app and place a breakpoint in the tableView:cellForRowAtIndexPath method of the ExampleTableView class:

(lldb) p cell
(UITableViewCell) $R0 = 0x7a8f0000 {
  UIView = {
    UIResponder = {
      NSObject = {
        isa = 0x7a8f0000
      }
      _hasAlternateNextResponder = ' '
      _hasInputAssistantItem = ' '
    }
    ... removed 100 lines
(lldb) po cell
<UITableViewCell: 0x7a8f0000; frame = (0 0; 320 44); text = 'Helium'; clipsToBounds = YES; autoresize = W; layer = <CALayer: 0x7a6a02c0>>

The p command has printed out a lot of detail about the object, while po has displayed a curated list of information. It's good to know and use each of these commands; each one displays different information.

The debug navigator

To open the debug navigator, you can click on its icon in the navigator sidebar or use the Cmd + 6 keyboard shortcut. This navigator will show data only while you are in debug mode:

The navigator has several groups of information:

The gauges area: At a glance, this shows information about how your application is using the CPU, Memory, Disk, Network, and iCloud (only when your app is using it) resources. Clicking on each gauge will show more details in the standard editor area.
The processes area: This lists all currently active threads and their processes. This is important information for advanced debugging.

The information in the gauges area used to be accessible only if you attached the running process to the separate Instruments application. Because of that extra step, Apple started including this information inside Xcode itself, starting from Xcode 6. This information is invaluable for spotting issues in your application, such as memory leaks, or spotting inefficient code by watching the CPU usage.

The preceding screenshot shows the Memory Report screen. Because this is running in the simulator, the amount of RAM available for your application is the amount available on your system. The gauge on the left-hand side shows the current usage as a percentage of the available RAM. The Usage Comparison area shows how much RAM your application is using compared to other processes and the available free space. Finally, the bottom area shows a graph of memory usage over time. Each of these pieces of information is useful when debugging your application for crashes and performance issues.

Quick Look

Earlier, we were able to see the contents of variables and constants from within the standard editor. While Xcode is in debug mode, we have this very ability. Simply hover the mouse over most instance variables, and a Quick Look popup will appear to show you a graphical representation, like this:

Quick Look knows how to handle built-in classes, such as CGRect or UIImage, but what about custom classes that you create? Let's say that you create a class representation of an Employee object. What information would be the best way to visualize an instance? You could show the employee's photo, their name, their employee number, and so on.
Since the system can't judge what information is important, it will rely on the developer to decide. The debugQuickLookObject() method can be added to any class, and any object or value that is returned will show up within the Quick Look popup:

func debugQuickLookObject() -> AnyObject {
    return "This is the custom preview text"
}

Suppose we were to add that code to the ViewController class and put a breakpoint on the super.viewDidLoad() line:

import UIKit
class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func debugQuickLookObject() -> AnyObject {
        return "I am a quick look value"
    }

}

Now, in the variables list in the debug area, you can click on the Quick Look icon, and the following screen will appear:

Summary

Debugging is an art as well as a science. Because of this, it can easily take entire books to cover the subject, and often does. The best way to learn the techniques and tools is by experimenting with your own code.

Resources for Article:

Further resources on this subject:

Introduction to GameMaker: Studio [article]
Exploring Swift [article]
Using Protocols and Protocol Extensions [article]
Optimizing jQuery Applications

Packt
13 Jan 2016
19 min read
This article, by Thodoris Greasidis, the author of jQuery Design Patterns, presents some optimization techniques that can be used to improve the performance of jQuery applications, especially when they become large and complex. We will start with simple practices to write performant JavaScript code and learn how to write efficient CSS selectors in order to improve the page's rendering speed and DOM traversals with jQuery. We will continue with jQuery-specific practices, such as the caching of jQuery Composite Collection Objects, how to minimize DOM manipulations, and how to use the Delegated Event Observer pattern as a good example of the Flyweight pattern.

(For more resources related to this topic, see here.)

Optimizing the common JavaScript code

In this section, we will analyze some performance tips that are not jQuery-specific and can be applied to most JavaScript implementations.

Writing better for loops

When iterating over the items of an array or an array-like collection with a for loop, a simple way to improve the performance of the iteration is to avoid accessing the length property on every loop. This can easily be done by storing the iteration length in a separate variable, which is declared just before the loop or even along with it, as shown in the following code:

for (var i = 0, len = myArray.length; i < len; i++) {
    var item = myArray[i];
    /*...*/
}

Moreover, if we need to iterate over the items of an array that does not contain "falsy" values, we can use an even better pattern, which is commonly applied when iterating over arrays that contain objects:

var objects = [{ }, { }, { }];
for (var i = 0, item; item = objects[i]; i++) {
    console.log(item);
}

In this case, instead of relying on the length property of the array, we are exploiting the fact that access to an out-of-bounds position of the array returns undefined, which is "falsy" and stops the iteration. Another sample case in which this trick can be used is when iterating over Node Lists or jQuery Composite Collection Objects, as shown in the following code:

var anchors = $('a'); // or document.getElementsByTagName('a');
for (var i = 0, anchor; anchor = anchors[i]; i++) {
    console.log(anchor.href);
}

For more information on the "truthy" and "falsy" JavaScript values, you can visit https://developer.mozilla.org/en-US/docs/Glossary/Truthy and https://developer.mozilla.org/en-US/docs/Glossary/Falsy.

Using performant CSS selectors

Even though Sizzle (jQuery's selector engine) hides the complexity of DOM traversals that are based on a complex CSS selector, we should have an idea of how our selectors perform. Understanding how CSS selectors are matched against the elements of the DOM can help us write more efficient selectors, which will perform better when used with jQuery.

The key characteristic of efficient CSS selectors is specificity. According to this, ID and class selectors will always be more efficient than selectors with many results, such as div and *. When writing complex CSS selectors, keep in mind that they are evaluated from right to left, and a selector gets rejected after recursively testing it against every parent element until the root of the DOM is reached.
As a result, try to be as specific as possible with the right-most part of the selector in order to cut down the matched elements as early as possible during the execution of the selector:

// initially matches all the anchors of the page
// and then removes those that are not children of the container
$('.container a');

// performs better, since it matches fewer elements
// in the first step of the selector's evaluation
$('.container .mySpecialLinks');

Another performance tip is to use the Child Selector (parent > child) wherever applicable, in an effort to eliminate the recursion over all the hierarchies of the DOM tree. A great example of this can be applied in cases where the target elements can be found at a specific descendant level of a common ancestor element:

// initially matches all the div's of the page, which is bad
$('.container div');

// a lot better, since it avoids the recursion
// until the root of the DOM tree
$('.container > div');

// best of all, but can't be used always
$('.container > .specialDivs');

The same tips can also be applied to CSS selectors that are used to style pages. Even though browsers have been trying to optimize any given CSS selector, the tips mentioned earlier can greatly reduce the time that is required to render a web page. For more information on jQuery CSS selector performance, you can visit http://learn.jquery.com/performance/optimize-selectors/.

Writing efficient jQuery code

Let's now proceed and analyze the most important jQuery-specific performance tips. For the most up-to-date performance tips about jQuery, you can go to the respective page of jQuery's Learning Center at http://learn.jquery.com/performance.

Minimizing DOM traversals

Since jQuery has made DOM traversals such a simple task, a large number of web developers have started to overuse the $() function everywhere, even in subsequent lines of code, making their implementations slower by executing unnecessary code. One of the main reasons that the complexity of the operation is so often overlooked is the elegant and minimalistic syntax that jQuery uses. Despite the fact that JavaScript browser engines have become much faster, with performance comparable to that of many compiled languages, the DOM API is still one of their slowest parts, and as a result, developers have to minimize their interactions with it.

Caching jQuery objects

Storing the result of the $() function in a local variable and subsequently using it to operate on the retrieved elements is the simplest way to eliminate unnecessary executions of the same DOM traversals:

var $element = $('.Header');
if ($element.css('position') === 'static') {
    $element.css({ position: 'relative' });
}
$element.height('40px');
$element.wrapInner('<b>');

It is also highly suggested that you store Composite Collection Objects of important page elements as properties of your modules and reuse them everywhere in your application:

window.myApp = window.myApp || {};
myApp.$container = null;
myApp.init = function() {
    myApp.$container = $('.myAppContainer');
};
$(document).ready(myApp.init);

Caching retrieved elements on modules is a very good practice when the elements are not going to be removed from the page. Keep in mind that when dealing with elements with shorter lifespans, in order to avoid memory leaks, you either need to ensure that you clear their references when the elements are removed from the page, or have a fresh reference retrieved when required and cache it only inside your functions.
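As a minimal sketch of this cleanup, assuming a dialog element cached on the hypothetical myApp module from the previous snippet, the cached reference is cleared at the same time the element leaves the page:

myApp.$dialog = $('.myAppDialog'); // cached while the element is on the page

function closeDialog() {
    myApp.$dialog.remove(); // remove the element along with its handlers and data
    myApp.$dialog = null;   // clear the cached reference so it can be garbage-collected
}

Nulling the stored Composite Collection Object ensures that our module does not keep the detached DOM nodes alive after they have been removed.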
Scoping element traversals

Instead of writing complex CSS selectors for your traversals, as follows:

$('.myAppContainer .myAppSection');

you can get the same result in a more efficient way by using an already retrieved ancestor element to scope the DOM traversal. This way, you are not only using simpler CSS selectors that are faster to match against page elements, but you are also reducing the number of elements that need to be checked. Moreover, the resulting implementations have fewer code repetitions (they are DRYer), and the CSS selectors used are simpler and more readable:

var $container = $('.myAppContainer');
$container.find('.myAppSection');

Additionally, this practice works even better with module-wide cached elements:

var $sections = myApp.$container.find('.myAppSection');

Chaining jQuery methods

One of the characteristics of all jQuery APIs is that they are fluent interface implementations that enable you to chain several method invocations on a single Composite Collection Object:

$('.Content').html('')
    .append('<a href="#">')
    .height('40px')
    .wrapInner('<b>');

Chaining allows us to reduce the number of variables used and leads to more readable implementations with fewer code repetitions.

Don't overdo it

Keep in mind that jQuery also provides the $.fn.end() method (http://api.jquery.com/end/) as a way to move back from a chained traversal:

$('.box')
    .filter(':even')
    .find('.Header')
    .css('background-color', '#0F0')
    .end()
    .end() // undo the filter and find traversals
    .filter(':odd') // applied on the initial .box results
    .find('.Header')
    .css('background-color', '#F00');

Even though this is a handy method for many cases, you should avoid overusing it, since it can affect the readability of your code. In many cases, using cached element collections instead of $.fn.end() can result in faster and more readable implementations.

Improving DOM manipulations

Extensive use of the DOM API is one of the most common things that makes an application slower, especially when it is used to manipulate the state of the DOM tree. In this section, we will showcase some tips that can reduce the performance hit when manipulating the DOM tree.

Creating DOM elements

The most efficient way to create DOM elements is to construct an HTML string and append it to the DOM tree using the $.fn.html() method. Additionally, since this can be too restrictive for some use cases, you can also use the $.fn.append() and $.fn.prepend() methods, which are slightly slower but can be better matches for your implementation.
Ideally, if multiple elements need to be created, you should try to minimize the invocation of these methods by creating an HTML string that defines all the elements and then inserting it into the DOM tree, as follows:

var finalHtml = '';
for (var i = 0, len = questions.length; i < len; i++) {
    var question = questions[i];
    finalHtml += '<div><label><span>' + question.title + ':</span>' +
        '<input type="checkbox" name="' + question.name + '" />' +
        '</label></div>';
}
$('form').html(finalHtml);

Another way to achieve the same result is to use an array to store the HTML for each intermediate element and then join them right before inserting them into the DOM tree:

var parts = [];
for (var i = 0, len = questions.length; i < len; i++) {
    var question = questions[i];
    parts.push('<div><label><span>' + question.title + ':</span>' +
    '<input type="checkbox" name="' + question.name + '" />' +
    '</label></div>');
}
$('form').html(parts.join(''));

This is a commonly used pattern, since until recently it performed better than concatenating the intermediate results with "+=".

Styling and animating

Whenever possible, try using CSS classes for your styling manipulations by utilizing the $.fn.addClass() and $.fn.removeClass() methods, instead of manually manipulating the style of elements with the $.fn.css() method. This is especially beneficial when you need to style a large number of elements, since this is the main use case of CSS classes and the browsers have already spent years optimizing it. As an extra optimization step to minimize the number of manipulated elements, you can try to apply CSS classes on a single common ancestor element and use a descendant CSS selector to apply your styling (https://developer.mozilla.org/en-US/docs/Web/CSS/Descendant_selectors).

When you still need to use the $.fn.css() method, for example, when your implementation needs to be imperative, prefer using the invocation overload that accepts object parameters: http://api.jquery.com/css/#css-properties. This way, when applying multiple styles on elements, the required method invocations are minimized, and your code also gets better organized.

Moreover, we need to avoid mixing methods that manipulate the DOM with methods that read from the DOM, since this will force a reflow of the page so that the browser can calculate the new positions of the page elements. Instead of doing something like this:

$('h1').css('padding-left', '2%');
$('h1').css('padding-right', '2%');
$('h1').append('<b>!!</b>');
var h1OuterWidth = $('h1').outerWidth();

$('h1').css('margin-top', '5%');
$('body').prepend('<b>--!!--</b>');
var h1Offset = $('h1').offset();

We will prefer grouping the nonconflicting manipulations together, like this:

$('h1').css({
    'padding-left': '2%',
    'padding-right': '2%',
    'margin-top': '5%'
}).append('<b>!!</b>');
$('body').prepend('<b>--!!--</b>');

var h1OuterWidth = $('h1').outerWidth();
var h1Offset = $('h1').offset();

This way, the browser can try to skip some rerenderings of the page, resulting in fewer pauses in the execution of your code. For more information on reflows, you can refer to https://developers.google.com/speed/articles/reflow.

Lastly, note that all jQuery-generated animations in v1.x and v2.x are implemented using the setTimeout() function. This is going to change in v3.x of jQuery, which is designed to use the requestAnimationFrame() function, a better match for creating imperative animations.
Until then, you can use the jQuery-requestAnimationFrame plugin (https://github.com/gnarf/jquery-requestAnimationFrame), which monkey-patches jQuery to use the requestAnimationFrame() function for its animations when it is available.

Manipulating detached elements

Another way to avoid unnecessary repaints of the page while manipulating DOM elements is to detach the element from the page and reattach it after completing your manipulations. Working with a detached, in-memory element is much faster and does not cause reflows of the page. In order to achieve this, we will use the $.fn.detach() method, which, in contrast to $.fn.remove(), preserves all event handlers and jQuery data on the detached element:

var $h1 = $('#pageHeader');
var $h1Cont = $h1.parent();
$h1.detach();

$h1.css({
    'padding-left': '2%',
    'padding-right': '2%',
    'margin-top': '5%'
}).append('<b>!!</b>');

$h1Cont.append($h1);

Additionally, in order to be able to place the manipulated element back in its original position, we can create and insert a hidden "placeholder" element into the DOM. This empty and hidden element does not affect the rendering of the page and is removed right after the original item is placed back in its original position:

var $h1PlaceHolder = $('<div style="display: none;"></div>');
var $h1 = $('#pageHeader');
$h1PlaceHolder.insertAfter($h1);

$h1.detach();

$h1.css({
    'padding-left': '2%',
    'padding-right': '2%',
    'margin-top': '5%'
}).append('<b>!!</b>');

$h1.insertAfter($h1PlaceHolder);
$h1PlaceHolder.remove();
$h1PlaceHolder = null;

For more information on the $.fn.detach() method, you can visit its documentation page at http://api.jquery.com/detach/.

Using the Flyweight pattern

In computer science, a Flyweight is an object that is used to reduce the memory consumption of an implementation by providing functionality and/or data that is shared with other object instances. The prototypes of JavaScript constructor functions can be characterized as Flyweights to some degree, since every object instance can use all the methods and properties that are defined on its prototype, until it overrides them. On the other hand, classical Flyweights are separate objects from the object family that they are used with and often hold the shared data and functionality in special data structures.

Using Delegated Event Observers

A great example of Flyweights in jQuery applications are Delegated Event Observers, which can greatly reduce the memory requirements of an implementation by acting as a centralized event handler for a large group of elements. This way, we can avoid the cost of setting up separate observers and event handlers for every element, and instead utilize the browser's event bubbling mechanism to observe them on a single common ancestor element and filter their origin. Moreover, this pattern can also simplify our implementation when we need to deal with dynamically constructed elements, since it removes the need to attach extra event handlers for each created element.

For example, the following code attaches a single observer on the common ancestor element of several <button> elements. Whenever a click happens on one of the <button> elements, the event will bubble up to the parent element with the buttonsContainer CSS class, and the attached handler will be fired.
Even if we add extra buttons later to that container, clicking on them will still fire the original handler:

$('.buttonsContainer').on('click', 'button', function() {
    var $button = $(this);
    alert($button.text());
});

The actual Flyweight object is the event handler along with the callback that is attached to the ancestor element.

Using $.noop()

The jQuery library offers the $.noop() method, which is actually an empty function that can be shared between implementations. Using empty functions as default callback values can simplify an implementation and increase its readability by reducing the number of required if statements. This can be greatly beneficial for jQuery plugins that encapsulate complex functionality:

function doLater(callbackFn) {
    setTimeout(function() {
        if (callbackFn) {
            callbackFn();
        }
    }, 500);
}

// with $.noop
function doLater(callbackFn) {
    callbackFn = callbackFn || $.noop;
    setTimeout(function() {
        callbackFn();
    }, 500);
}

Note that we assign the function reference $.noop itself as the fallback value; invoking it as $.noop() would yield undefined instead of a callable function. In such situations, where the implementation requirements or the personal taste of the developer lead to using empty functions, the $.noop method can help lower the memory consumption by sharing a single empty function instance among all the different parts of an implementation. As an added benefit of using $.noop for every part of an implementation, we can also check whether a passed function reference is actually the empty function by simply checking whether callbackFn is equal to $.noop. For more information, you can visit its documentation page at http://api.jquery.com/jQuery.noop/.

Using the $.single plugin

Another simple example of the Flyweight pattern in a jQuery application is the jQuery.single plugin, as described by James Padolsey in his article 76 bytes for faster jQuery, which tries to eliminate the creation of new jQuery objects in cases where we need to apply jQuery methods on a single page element. The implementation is quite small and creates a single jQuery Composite Collection Object that is returned on every invocation of the jQuery.single() method, containing the page element that was used as an argument:

jQuery.single = (function() {
    var collection = jQuery([1]); // Fill with 1 item, to make sure length === 1
    return function(element) {
        collection[0] = element; // Give the collection the element
        return collection; // Return the collection
    };
}());

The jQuery.single plugin can be quite useful when used in observers, such as $.fn.on(), and in iterations with methods, such as $.each():

$buttonsContainer.on('click', '.button', function() {
    // var $button = $(this);
    var $button = $.single(this); // this is not creating any new object
    alert($button.text());
});

The advantages of using the jQuery.single plugin originate from the fact that we are creating fewer objects, and as a result, the browser's Garbage Collector will also have less work to do when freeing the memory of short-lived objects.
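As a small sketch of the iteration case, which is not part of the original example (the selector and class name below are arbitrary assumptions):

$('li').each(function() {
    // Reuse the plugin's single collection object instead of
    // creating a new jQuery object on every iteration
    var $item = $.single(this);
    $item.toggleClass('longText', $item.text().length > 20);
});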
Keep in mind the side effects of having a single jQuery object returned by every invocation of the $.single() method: every invocation returns the same collection object, and the last invocation argument is stored in it until the next invocation of the method:

var buttons = document.getElementsByTagName('button');
var $btn0 = $.single(buttons[0]);
var $btn1 = $.single(buttons[1]);
$btn0 === $btn1 // true, both variables reference the same collection object

Also, if you use something such as $btn1.remove(), the element will not be freed until the next invocation of the $.single() method, which will remove it from the plugin's internal collection object. Another similar but more extensive plugin is the jQuery.fly plugin, which also supports being invoked with arrays and jQuery objects as parameters. For more information on jQuery.single and jQuery.fly, you can visit http://james.padolsey.com/javascript/76-bytes-for-faster-jquery/ and https://github.com/matjaz/jquery.fly.

On the other hand, the jQuery implementation that handles the invocation of the $() method with a single page element is not complex at all and only creates a single simple object:

jQuery = function( selector, context ) {
    return new jQuery.fn.init( selector, context );
};
/*...*/
jQuery = jQuery.fn.init = function( selector, context ) {
    /*... else */
    if ( selector.nodeType ) {
        this.context = this[0] = selector;
        this.length = 1;
        return this;
    } /* ... */
};

Moreover, the JavaScript engines of modern browsers have already become quite efficient at dealing with short-lived objects, since such objects are commonly passed around an application as method invocation parameters.

Summary

In this article, we learned some optimization techniques that can be used to improve the performance of jQuery applications, especially when they become large and complex. We started with simple practices for writing performant JavaScript code and learned how to write efficient CSS selectors in order to improve the page's rendering speed and the performance of DOM traversals with jQuery. We continued with jQuery-specific practices, such as caching jQuery Composite Collection Objects and ways to minimize DOM manipulations. Lastly, we saw some representatives of the Flyweight pattern and took a look at an example of the Delegated Event Observer pattern.

Resources for Article:

Further resources on this subject:

Working with Events [article]
Learning jQuery [article]
Preparing Your First jQuery Mobile Project [article]

A Test-Driven Data Model

Packt
13 Jan 2016
17 min read
In this article by Dr. Dominik Hauser, author of Test-Driven Development with Swift, we will cover the following topics:

Implementing a To-Do item
Implementing the location

iOS apps are often developed using a design pattern called Model-View-Controller (MVC). In this pattern, each class (or struct or enum) is either a model object, a view, or a controller. Model objects are responsible for storing data. They should be independent of the kind of presentation. For example, it should be possible to use the same model object for an iOS app and a command-line tool on Mac.

View objects are the presenters of data. They are responsible for making the objects visible (or, in the case of a VoiceOver-enabled app, audible) for users. Views are specific to the device that the app is executed on. In the case of a cross-platform application, view objects cannot be shared. Each platform needs its own implementation of the view layer.

Controller objects communicate between the model and view objects. They are responsible for making the model objects presentable.

We will use MVC for our to-do app because it is one of the easiest design patterns, and it is commonly used by Apple in their sample code. This article starts with the test-driven development of the model layer of our application.

(For more resources related to this topic, see here.)

Implementing the To-Do item

A to-do app needs a model class/struct to store information for to-do items. We start by adding a new test case to the test target. Open the To-Do project and select the ToDoTests group. Navigate to File | New | File, go to iOS | Source | Unit Test Case Class, and click on Next. Put in the name ToDoItemTests, make it a subclass of XCTestCase, select Swift as the language, and click on Next. In the next window, create a new folder, called Model, and click on Create.

Now, delete the ToDoTests.swift template test case. At the time of writing this article, if you delete ToDoTests.swift before you add the first test case in a test target, you will see a pop-up from Xcode, telling you that adding the Swift file will create a mixed Swift and Objective-C target. This is a bug in Xcode 7.0. It seems that when adding the first Swift file to a target, Xcode assumes that there have to be Objective-C files already. Click on Don't Create if this happens to you, because we will not use Objective-C in our tests.

Adding a title property

Open ToDoItemTests.swift, and add the following import expression right below import XCTest:

@testable import ToDo

This is needed to be able to test the ToDo module. The @testable keyword makes internal methods of the ToDo module accessible to the test case. Remove the two testExample() and testPerformanceExample() template test methods.

The title of a to-do item is required. Let's write a test to ensure that an initializer that takes a title string exists. Add the following test method at the end of the test case (but within the ToDoItemTests class):

func testInit_ShouldTakeTitle() {
    ToDoItem(title: "Test title")
}

The static analyzer built into Xcode will complain about the use of unresolved identifier 'ToDoItem'. We cannot compile this code because Xcode cannot find the ToDoItem identifier. Remember that not compiling a test is equivalent to a failing test, and as soon as we have a failing test, we need to write implementation code to make the test pass. To add a file to the implementation code, first click on the ToDo group in Project navigator. Otherwise, the added file will be put into the test group.
Go to File | New | File, navigate to the iOS | Source | Swift File template, and click on Next. Create a new folder called Model. In the Save As field, put in the name ToDoItem.swift, make sure that the file is added to the ToDo target and not to the ToDoTests target, and click on Create. Open ToDoItem.swift in the editor, and add the following code:

struct ToDoItem {
}

This code is a complete implementation of a struct named ToDoItem. So, Xcode should now be able to find the ToDoItem identifier. Run the test by either going to Product | Test or using the ⌘U shortcut. The code does not compile because there is Extra argument 'title' in call. This means that, at this stage, we could initialize an instance of ToDoItem like this:

let item = ToDoItem()

But we want to have an initializer that takes a title. We need to add a property, named title, of the String type to store the title:

struct ToDoItem {
    let title: String
}

Run the test again. It should pass. We have implemented the first micro feature of our to-do app using TDD. And it wasn't even hard. But first, we need to check whether there is anything to refactor in the existing test and implementation code. The tests and code are clean and simple. There is nothing to refactor yet. Always remember to check whether refactoring is needed after you have made the tests green.

But there are a few things to note about the test. First, Xcode shows a warning that Result of initializer is unused. To make this warning go away, assign the result of the initializer to an underscore: _ = ToDoItem(title: "Test title"). This tells Xcode that we know what we are doing. We want to call the initializer of ToDoItem, but we do not care about its return value. Secondly, there is no XCTAssert function call in the test. To add an assert, we could rewrite the test as follows:

func testInit_ShouldTakeTitle() {
    let item = ToDoItem(title: "Test title")
    XCTAssertNotNil(item, "item should not be nil")
}

But in Swift, a non-failable initializer cannot return nil. It always returns a valid instance. This means that the XCTAssertNotNil() method is useless. We do not need it to ensure that we have written enough code to implement the tested micro feature. It is not needed to drive the development, and it does not make the code better. In the following tests, we will omit the XCTAssert functions when they are not needed in order to make a test fail.

Before we proceed to the next tests, let's set up the editor in a way that makes the TDD workflow easier and faster. Open ToDoItemTests.swift in the editor. Open Project navigator, and hold down the option key while clicking on ToDoItem.swift in the navigator to open it in the assistant editor. Depending on the size of your screen and your preferences, you might prefer to hide the navigator again. With this setup, you have the tests and code side by side, and switching from a test to code and vice versa takes no time. In addition, as the relevant test is visible while you write the code, it can guide the implementation.

Adding an item description property

A to-do item can have a description. We would like to have an initializer that also takes a description string. To drive the implementation, we need a failing test for the existence of that initializer:

func testInit_ShouldTakeTitleAndDescription() {
    _ = ToDoItem(title: "Test title",
        itemDescription: "Test description")
}

Again, this code does not compile because there is Extra argument 'itemDescription' in call.
To make this test pass, we add an itemDescription property of type String? to ToDoItem:

struct ToDoItem {
    let title: String
    let itemDescription: String?
}

Run the tests. The testInit_ShouldTakeTitleAndDescription() test fails (that is, it does not compile) because there is Missing argument for parameter 'itemDescription' in call. The reason for this is that we are using a feature of Swift where structs get an automatic initializer whose arguments set their properties. The initializer in the first test only has one argument, and, therefore, the test fails. To make the two tests pass again, replace the initializer call in testInit_ShouldTakeTitle() with this:

_ = ToDoItem(title: "Test title", itemDescription: nil)

Run the tests to check whether all the tests pass again. But now the initializer in the first test looks bad. We would like to be able to have a short initializer with only one argument in case the to-do item only has a title. So, the code needs refactoring. To have more control over the initialization, we have to implement it ourselves. Add the following code to ToDoItem:

init(title: String, itemDescription: String? = nil) {
    self.title = title
    self.itemDescription = itemDescription
}

This initializer has two arguments. The second argument has a default value, so we do not need to provide both arguments. When the second argument is omitted, the default value is used. Before we refactor the tests, run the tests to make sure that they still pass. Then, remove the second argument from the initializer call in testInit_ShouldTakeTitle():

func testInit_ShouldTakeTitle() {
    _ = ToDoItem(title: "Test title")
}

Run the tests again to make sure that everything still works.

Removing a hidden source of bugs

To be able to use a short initializer, we need to define it ourselves. But this also introduces a new source of potential bugs. We can remove the two micro features we have implemented and still have both tests pass. To see how this works, open ToDoItem.swift, and comment out the properties and the assignments in the initializer:

struct ToDoItem {
    //let title: String
    //let itemDescription: String?

    init(title: String, itemDescription: String? = nil) {
        //self.title = title
        //self.itemDescription = itemDescription
    }
}

Run the tests. Both tests still pass. The reason for this is that they do not check whether the values of the initializer arguments are actually assigned to the ToDoItem properties. We can easily extend the tests to make sure that the values are set. First, let's change the name of the first test to testInit_ShouldSetTitle(), and replace its contents with the following code:

let item = ToDoItem(title: "Test title")
XCTAssertEqual(item.title, "Test title",
    "Initializer should set the item title")

This test does not compile because ToDoItem does not have a title property (it is commented out). This shows us that the test is now testing our intention. Remove the comment signs for the title property and the assignment of the title in the initializer, and run the tests again. All the tests pass. Now, replace the second test with the following code:

func testInit_ShouldSetTitleAndDescription() {
    let item = ToDoItem(title: "Test title",
        itemDescription: "Test description")

    XCTAssertEqual(item.itemDescription, "Test description",
        "Initializer should set the item description")
}

Remove the remaining comment signs in ToDoItem, and run the tests again.
Both tests pass again, and they now test whether the initializer works.

Adding a timestamp property

A to-do item can also have a due date, which is represented by a timestamp. Add the following test to make sure that we can initialize a to-do item with a title, a description, and a timestamp:

func testInit_ShouldSetTitleAndDescriptionAndTimestamp() {
    let item = ToDoItem(title: "Test title",
        itemDescription: "Test description",
        timestamp: 0.0)

    XCTAssertEqual(0.0, item.timestamp,
        "Initializer should set the timestamp")
}

Again, this test does not compile because there is an extra argument in the initializer call. From the implementation of the other properties, we know that we have to add a timestamp property to ToDoItem and set it in the initializer:

struct ToDoItem {
    let title: String
    let itemDescription: String?
    let timestamp: Double?

    init(title: String,
        itemDescription: String? = nil,
        timestamp: Double? = nil) {

        self.title = title
        self.itemDescription = itemDescription
        self.timestamp = timestamp
    }
}

Run the tests. All the tests pass. The tests are green, and there is nothing to refactor.

Adding a location property

The last property that we would like to be able to set in the initializer of ToDoItem is its location. The location has a name and can optionally have a coordinate. We will use a struct to encapsulate this data into its own type. Add the following code to ToDoItemTests:

func testInit_ShouldSetTitleAndDescriptionAndTimestampAndLocation() {
    let location = Location(name: "Test name")
}

The test is not finished, but it already fails because Location is an unresolved identifier. There is no class, struct, or enum named Location yet. Open Project navigator, add a Swift File with the name Location.swift, and add it to the Model folder. From our experience with the ToDoItem struct, we already know what is needed to make the test green. Add the following code to Location.swift:

struct Location {
    let name: String
}

This defines a Location struct with a name property and makes the test code compilable again. But the test is not finished yet. Complete testInit_ShouldSetTitleAndDescriptionAndTimestampAndLocation() so that it looks like this:

func testInit_ShouldSetTitleAndDescriptionAndTimestampAndLocation() {
    let location = Location(name: "Test name")
    let item = ToDoItem(title: "Test title",
        itemDescription: "Test description",
        timestamp: 0.0,
        location: location)

    XCTAssertEqual(location.name, item.location?.name,
        "Initializer should set the location")
}

Unfortunately, we cannot use location itself yet to check for equality, so the following assert does not work:

XCTAssertEqual(location, item.location,
    "Initializer should set the location")

The reason for this is that the first two arguments of XCTAssertEqual() have to conform to the Equatable protocol, and Location does not conform to it yet.

Again, this does not compile because the initializer of ToDoItem does not have an argument called location. Add the location property and the initializer argument to ToDoItem. The result should look like this:

struct ToDoItem {
    let title: String
    let itemDescription: String?
    let timestamp: Double?
    let location: Location?

    init(title: String,
        itemDescription: String? = nil,
        timestamp: Double? = nil,
        location: Location? = nil) {

        self.title = title
        self.itemDescription = itemDescription
        self.timestamp = timestamp
        self.location = location
    }
}

Run the tests again. All the tests pass, and there is nothing to refactor. We have now implemented a struct to hold the to-do items using TDD.

Implementing the location

In the previous section, we added a struct to hold the location information. We will now add tests to make sure Location has the needed properties and initializer. The tests could be added to ToDoItemTests, but they are easier to maintain when the test classes mirror the implementation classes/structs. So, we need a new test case class.

Open Project navigator, select the ToDoTests group, and add a unit test case class with the name LocationTests. Make sure to go to iOS | Source | Unit Test Case Class, because we want to test the iOS code and Xcode sometimes preselects OS X | Source. Choose to store the file in the Model folder we created previously. Set up the editor to show LocationTests.swift on the left-hand side and Location.swift in the assistant editor on the right-hand side. In the test class, add @testable import ToDo, and remove the testExample() and testPerformanceExample() template tests.

Adding a coordinate property

To drive the addition of a coordinate property, we need a failing test. Add the following test to LocationTests:

func testInit_ShouldSetNameAndCoordinate() {
    let testCoordinate = CLLocationCoordinate2D(latitude: 1,
        longitude: 2)
    let location = Location(name: "",
        coordinate: testCoordinate)

    XCTAssertEqual(location.coordinate?.latitude,
        testCoordinate.latitude,
        "Initializer should set latitude")
    XCTAssertEqual(location.coordinate?.longitude,
        testCoordinate.longitude,
        "Initializer should set longitude")
}

First, we create a coordinate and use it to create an instance of Location. Then, we assert that the latitude and the longitude of the location's coordinate are set to the correct values. We use the values 1 and 2 in the initializer of CLLocationCoordinate2D because it also has an initializer that takes no arguments (CLLocationCoordinate2D()) and sets the longitude and latitude to zero. We need to make sure in the test that the initializer of Location assigns the coordinate argument to its property.

The test does not compile because CLLocationCoordinate2D is an unresolved identifier. We need to import CoreLocation in LocationTests.swift:

import XCTest
@testable import ToDo
import CoreLocation

The test still does not compile because Location does not have a coordinate property yet. Like ToDoItem, we would like to have a short initializer for locations that only have a name argument. Therefore, we need to implement the initializer ourselves and cannot use the one provided by Swift. Replace the contents of Location.swift with the following code:

import CoreLocation

struct Location {
    let name: String
    let coordinate: CLLocationCoordinate2D?

    init(name: String,
        coordinate: CLLocationCoordinate2D? = nil) {

        self.name = ""
        self.coordinate = coordinate
    }
}

Note that we have intentionally set the name in the initializer to an empty string. This is the easiest implementation that makes the tests pass. But it is clearly not what we want. The initializer should set the name of the location to the value in the name argument. So, we need another test to make sure that the name is set correctly.
Add the following test to LocationTests:

func testInit_ShouldSetName() {
    let location = Location(name: "Test name")
    XCTAssertEqual(location.name, "Test name",
        "Initializer should set the name")
}

Run the test to make sure it fails. To make the test pass, change self.name = "" in the initializer of Location to self.name = name. Run the tests again to check that all the tests now pass. There is nothing to refactor in the tests and implementation. Let's move on.

Summary

In this article, we covered the implementation of a to-do item by adding a title property, an item description property, a timestamp property, and more. We also covered the implementation of a location using the coordinate property.

Resources for Article:

Further resources on this subject:

Share and Share Alike [article]
Introducing Test-driven Machine Learning [article]
Testing a UI Using WebDriverJS [article]