
How-To Tutorials - Web Development


Writing Modules

Packt
14 Aug 2017
15 min read
In this article by David Mark Clements, the author of the book Node.js Cookbook, we will cover the following points to introduce you to Node's module system: initializing a module, writing a module, tooling around modules, publishing modules, setting up a private module repository, and best practices. (For more resources related to this topic, see here.)

In idiomatic Node, the module is the fundamental unit of logic. Any typical application or system consists of generic code and application code. As a best practice, generic shareable code should be held in discrete modules, which can be composed together at the application level with minimal amounts of domain-specific logic. In this article, we'll learn how Node's module system works, how to create modules for various scenarios, and how we can reuse and share our code.

Scaffolding a module

Let's begin our exploration by setting up a typical file and directory structure for a Node module. At the same time, we'll learn how to automatically generate a package.json file (we refer to this throughout as initializing a folder as a package) and how to configure npm (Node's package managing tool) with some defaults, which can then be used as part of the package generation process. In this recipe, we'll create the initial scaffolding for a full Node module.

Getting ready

Installing Node
If we don't already have Node installed, we can go to https://nodejs.org to pick up the latest version for our operating system.

If Node is on our system, then so is the npm executable; npm is the default package manager for Node. It's useful for creating, managing, installing, and publishing modules.

Before we run any commands, let's tweak the npm configuration a little:

npm config set init.author.name "<name here>"

This will speed up module creation and ensure that each package we create has a consistent author name, thus avoiding typos and variations of our name.

npm stands for...
Contrary to popular belief, npm is not an acronym for Node Package Manager; in fact, it stands for npm is Not An Acronym, which is why it's not called NINAA.

How to do it…

Let's say we want to create a module that converts HSL (hue, saturation, luminosity) values into a hex-based RGB representation, such as would be used in CSS (for example, #fb4a45). The name hsl-to-hex seems good, so let's make a new folder for our module and cd into it:

mkdir hsl-to-hex
cd hsl-to-hex

Every Node module must have a package.json file, which holds metadata about the module. Instead of manually creating a package.json file, we can simply execute the following command in our newly created module folder:

npm init

This will ask a series of questions. We can hit enter for every question without supplying an answer. Note how the default module name corresponds to the current working directory, and the default author is the init.author.name value we set earlier.

Upon completion, we should have a package.json file that looks something like the following:

{
  "name": "hsl-to-hex",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "David Mark Clements",
  "license": "MIT"
}

How it works…

When Node is installed on our system, npm comes bundled with it. The npm executable is written in JavaScript and runs on Node. The npm config command can be used to permanently alter settings.
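Other init defaults can be set in the same way. As a quick sketch (init.author.email and init.license are also standard npm configuration keys; the email address below is just a placeholder), we could run:

npm config set init.author.email "us@example.com"
npm config set init.license "MIT"

Whatever we set here simply becomes the pre-filled answer the next time npm init prompts us.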
In our case, we changed the init.author.name setting so that npm init would reference it for the default during a module's initialization. We can list all the current configuration settings with npm config ls.

Config Docs
Refer to https://docs.npmjs.com/misc/config for all possible npm configuration settings.

When we run npm init, the answers to the prompts are stored in an object, serialized as JSON, and then saved to a newly created package.json file in the current directory.

There's more…

Let's find out some more ways to automatically manage the content of the package.json file via the npm command.

Reinitializing

Sometimes additional metadata becomes available after we've created a module. A typical scenario arises when we initialize our module as a git repository and add a remote endpoint after creating the module.

Git and GitHub
If we've not used the git tool and GitHub before, we can refer to http://help.github.com to get started. If we don't have a GitHub account, we can head to http://github.com to get a free account.

To demonstrate, let's create a GitHub repository for our module. Head to GitHub and click on the plus symbol in the top-right, then select New repository. Specify the name as hsl-to-hex and click on Create Repository.

Back in the Terminal, inside our module folder, we can now run this:

echo -e "node_modules\n*.log" > .gitignore
git init
git add .
git commit -m '1st'
git remote add origin http://github.com/<username>/hsl-to-hex
git push -u origin master

Now here comes the magic part; let's initialize again (simply press enter for every question):

npm init

This time the Git remote we just added was detected and became the default answer for the git repository question. Accepting this default answer meant that the repository, bugs, and homepage fields were added to package.json.

A repository field in package.json is an important addition when it comes to publishing open source modules, since it will be rendered as a link on the module's information page at http://npmjs.com. A repository link enables potential users to peruse the code prior to installation. Modules that can't be viewed before use are far less likely to be considered viable.

Versioning

The npm tool supplies other functionality to help with the module creation and management workflow. For instance, the npm version command allows us to manage our module's version number according to SemVer semantics.

SemVer
SemVer is a versioning standard. A version consists of three numbers separated by a dot, for example, 2.4.16. The position of a number denotes specific information about the version in comparison to the other versions. The three positions are known as MAJOR.MINOR.PATCH. The PATCH number is increased when changes have been made that don't break the existing functionality or add any new functionality; for instance, a bug fix will be considered a patch. The MINOR number should be increased when new backward-compatible functionality is added, for instance, the addition of a method. The MAJOR number increases when backwards-incompatible changes are made. Refer to http://semver.org/ for more information.

If we were to fix a bug, we would want to increase the PATCH number. We can either manually edit the version field in package.json, setting it to 1.0.1, or we can execute the following:

npm version patch

This will increase the version field in one command.
Additionally, if our module is a Git repository, it will add a commit based on the version (in our case, v1.0.1), which we can then immediately push. When we ran the command, npm output the new version number. However, we can double-check the version number of our module without opening package.json:

npm version

This will output something similar to the following:

{ 'hsl-to-hex': '1.0.1',
  npm: '2.14.17',
  ares: '1.10.1-DEV',
  http_parser: '2.6.2',
  icu: '56.1',
  modules: '47',
  node: '5.7.0',
  openssl: '1.0.2f',
  uv: '1.8.0',
  v8: '4.6.85.31',
  zlib: '1.2.8' }

The first field is our module along with its version number. If we added new backward-compatible functionality, we can run this:

npm version minor

Now our version is 1.1.0. Finally, we can run the following for a major version bump:

npm version major

This sets our module's version to 2.0.0. Since we're just experimenting and didn't make any changes, we should set our version back to 1.0.0. We can do this via the npm command as well:

npm version 1.0.0

See also

Refer to the following recipes: Writing module code, Publishing a module.

Installing dependencies

In most cases, it's wise to compose a module out of other modules. In this recipe, we will install a dependency.

Getting ready

For this recipe, all we need is a Command Prompt open in the hsl-to-hex folder from the Scaffolding a module recipe.

How to do it…

Our hsl-to-hex module can be implemented in two steps:

Convert the hue degrees, saturation percentage, and luminosity percentage to corresponding red, green, and blue numbers between 0 and 255.
Convert the RGB values to HEX.

Before we tear into writing an HSL to RGB algorithm, we should check whether this problem has already been solved. The easiest way to check is to head to http://npmjs.com and perform a search. Oh, look! Somebody already solved this. After some research, we decide that the hsl-to-rgb-for-reals module is the best fit. Ensuring that we are in the hsl-to-hex folder, we can now install our dependency with the following:

npm install --save hsl-to-rgb-for-reals

Now let's take a look at the bottom of package.json:

tail package.json #linux/osx
type package.json #windows

Tail output should give us this:

  "bugs": {
    "url": "https://github.com/davidmarkclements/hsl-to-hex/issues"
  },
  "homepage": "https://github.com/davidmarkclements/hsl-to-hex#readme",
  "description": "",
  "dependencies": {
    "hsl-to-rgb-for-reals": "^1.1.0"
  }
}

We can see that the dependency we installed has been added to a dependencies object in the package.json file.

How it works…

The top two results of the npm search are hsl-to-rgb and hsl-to-rgb-for-reals. The first result is unusable because the author of the package forgot to export it and is unresponsive to fixing it. The hsl-to-rgb-for-reals module is a fixed version of hsl-to-rgb. This situation serves to illustrate the nature of the npm ecosystem. On the one hand, there are over 200,000 modules and counting; on the other, many of these modules are of low value. Nevertheless, the system is also self-healing in that if a module is broken and not fixed by the original maintainer, a second developer often assumes responsibility and publishes a fixed version of the module.

When we run npm install in a folder with a package.json file, a node_modules folder is created (if it doesn't already exist). Then, the package is downloaded from the npm registry and saved into a subdirectory of node_modules (for example, node_modules/hsl-to-rgb-for-reals).
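Before writing any of our own code, it can be reassuring to smoke-test the freshly installed dependency from the command line. This is only a sketch and assumes the module exports a single function taking hue, saturation, and luminosity and returning an array of RGB values; adjust it to whatever the module's README actually documents:

node -e "console.log(require('hsl-to-rgb-for-reals')(133, 1, 0.5))"

If this prints three numbers between 0 and 255, the dependency is wired up correctly.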
npm 2 vs npm 3

Our installed module doesn't have any dependencies of its own. However, if it did, the sub-dependencies would be installed differently depending on whether we're using version 2 or version 3 of npm. Essentially, npm 2 installs dependencies in a tree structure, for instance, node_modules/dep/node_modules/sub-dep-of-dep/node_modules/sub-dep-of-sub-dep. Conversely, npm 3 follows a maximally flat strategy where sub-dependencies are installed in the top-level node_modules folder when possible, for example, node_modules/dep, node_modules/sub-dep-of-dep, and node_modules/sub-dep-of-sub-dep. This results in fewer downloads and less disk space usage; npm 3 resorts to a tree structure in cases where there are two versions of a sub-dependency, which is why it's called a maximally flat strategy. Typically, if we've installed Node 4 or above, we'll be using npm version 3.

There's more…

Let's explore development dependencies, creating module management scripts, and installing global modules without requiring root access.

Installing development dependencies

We usually need some tooling to assist with the development and maintenance of a module or application. The ecosystem is full of programming support modules, from linting to testing to browser bundling to transpilation. In general, we don't want consumers of our module to download dependencies they don't need. Similarly, if we're deploying a system built in Node, we don't want to burden the continuous integration and deployment processes with superfluous, pointless work. So, we separate our dependencies into production and development categories. When we use npm install --save <dep>, we're installing a production module. To install a development dependency, we use --save-dev. Let's go ahead and install a linter.

JavaScript Standard Style
standard is a JavaScript linter that enforces an unconfigurable ruleset. The premise of this approach is that we should stop using up precious time bikeshedding about syntax.

All the code in this article uses the standard linter, so we'll install that:

npm install --save-dev standard

semistandard
If the absence of semicolons is abhorrent, we can choose to install semistandard instead of standard at this point. The lint rules match those of standard, with the obvious exception of requiring semicolons. Further, any code written using standard can be reformatted to semistandard using the semistandard-format command line tool. Simply run npm -g i semistandard-format to get started with it.

Now, let's take a look at the package.json file:

{
  "name": "hsl-to-hex",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "David Mark Clements",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "git+ssh://git@github.com/davidmarkclements/hsl-to-hex.git"
  },
  "bugs": {
    "url": "https://github.com/davidmarkclements/hsl-to-hex/issues"
  },
  "homepage": "https://github.com/davidmarkclements/hsl-to-hex#readme",
  "description": "",
  "dependencies": {
    "hsl-to-rgb-for-reals": "^1.1.0"
  },
  "devDependencies": {
    "standard": "^6.0.8"
  }
}

We now have a devDependencies field alongside the dependencies field. When our module is installed as a sub-dependency of another package, only the hsl-to-rgb-for-reals module will be installed, while the standard module will be ignored since it's irrelevant to our module's actual implementation.
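To see at a glance what actually ended up in node_modules after these installs, we can ask npm for the top level of the dependency tree:

npm ls --depth=0

Both hsl-to-rgb-for-reals and standard should be listed; how their own sub-dependencies are laid out beneath them depends on whether npm 2 or npm 3 is in use, as described earlier.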
If this package.json file represented a production system, we could run the install step with the --production flag, as shown:

npm install --production

Alternatively, this can be set in the production environment with the following command:

npm config set production true

Currently, we can run our linter using the executable installed in the node_modules/.bin folder. Consider this example:

./node_modules/.bin/standard

This is ugly and not at all ideal. Refer to Using npm run scripts for a more elegant approach.

Using npm run scripts

Our package.json file currently has a scripts property that looks like this:

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1"
},

Let's edit the package.json file and add another field, called lint, as follows:

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "lint": "standard"
},

Now, as long as we have standard installed as a development dependency of our module (refer to Installing development dependencies), we can run the following command to run a lint check on our code:

npm run-script lint

This can be shortened to the following:

npm run lint

When we run an npm script, the current directory's node_modules/.bin folder is appended to the execution context's PATH environment variable. This means that even if we don't have the standard executable in our usual system PATH, we can reference it in an npm script as if it were in our PATH.

Some consider lint checks to be a precursor to tests. Let's alter the scripts.test field, as illustrated:

"scripts": {
  "test": "npm run lint",
  "lint": "standard"
},

Chaining commands
Later, we can append other commands to the test script using the double ampersand (&&) to run a chain of checks. For instance, "test": "npm run lint && tap test".

Now, let's run the test script:

npm run test

Since the test script is special, we can simply run this:

npm test

Eliminating the need for sudo

The npm executable can install both local and global modules. Global modules are mostly installed so as to allow command-line utilities to be used system-wide. On OS X and Linux, the default npm setup requires sudo access to install a module. For example, the following will fail on a typical OS X or Linux system with the default npm setup:

npm -g install cute-stack # <-- oh oh needs sudo

This is unsuitable for several reasons: forgetting to use sudo becomes frustrating, we're trusting npm with root access, and accidentally using sudo for a local install causes permission problems (particularly with the npm local cache).

The prefix setting stores the location for globally installed modules; we can view this with the following:

npm config get prefix

Usually, the output will be /usr/local. To avoid the use of sudo, all we have to do is set ownership permissions on any subfolders in /usr/local used by npm:

sudo chown -R $(whoami) $(npm config get prefix)/{lib/node_modules,bin,share}

Now we can install global modules without root access:

npm -g install cute-stack # <-- now works without sudo

If changing ownership of system folders isn't feasible, we can use a second approach, which involves changing the prefix setting to a folder in our home path:

mkdir ~/npm-global
npm config set prefix ~/npm-global

We'll also need to set our PATH:

export PATH=$PATH:~/npm-global/bin
source ~/.profile

The source command essentially refreshes the Terminal environment to reflect the changes we've made.
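The export line on its own only affects the current shell session, and the source ~/.profile step assumes the same line has been added to that file. A minimal sketch of making it permanent, assuming a Bash login shell that reads ~/.profile:

echo 'export PATH=$PATH:~/npm-global/bin' >> ~/.profile
source ~/.profile

Users of other shells (zsh, for example) would add the same line to their shell's own startup file instead.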
See also Scaffolding a module Writing module code Publishing a module Resources for Article: Further resources on this subject: Understanding and Developing Node Modules [article] Working with Pluginlib, Nodelets, and Gazebo Plugins [article] Basic Website using Node.js and MySQL database [article]


Introduction to Moodle 3

Packt
17 Jul 2017
13 min read
In this article by Ian Wild, the author of the book Moodle 3.x Developer's Guide, we will be introducing you to Moodle 3.

For any organization considering implementing an online learning environment, Moodle is often the number one choice. Key to its success is the free, open source ethos that underpins it. Not only is the Moodle source code fully available to developers, but Moodle itself has been developed to allow for the inclusion of third-party plugins. Everything from how users access the platform, the kinds of teaching interactions that are available, through to how attendance and success can be reported – in fact, all the key Moodle functionality – can be adapted and enhanced through plugins.

(For more resources related to this topic, see here.)

What is Moodle?

There are three reasons why Moodle has become so important, and much talked about, in the world of online learning: one technical, one philosophical, and the third educational. From a technical standpoint, Moodle – an acronym for Modular Object Oriented Dynamic Learning Environment – is highly customizable. As a Moodle developer, always remember that the 'M' in Moodle stands for modular. If you are faced with a client feature request that demands a feature Moodle doesn't support, then don't panic. The answer is simple: we create a new custom plugin to implement it. Check out the Moodle Plugins Directory (https://moodle.org/plugins/) for a comprehensive library of supported third-party plugins that Moodle developers have created and given back to the community. And this leads to the philosophical reason why Moodle dominates.

Free open source software for education

Moodle is grounded firmly in a community-based, open source philosophy (see https://en.wikipedia.org/wiki/Open-source_model). But what does this mean for developers? Fundamentally, it means that we have complete access to the source code and, within reason, unfettered access to the people who develop it. Access to the application itself is free – you don't need to pay to download it and you don't need to pay to run it. But be aware of what 'free' means in this context. Hosting and administration, for example, take time and resources and are very unlikely to be free.

As an educational tool, Moodle was developed to support social constructionism (see https://docs.moodle.org/31/en/Pedagogy) – if you are not familiar with this concept, it essentially suggests that building an understanding of a concept or idea can best be achieved by interacting with a broad community. The impact on us as Moodle plugin developers is that there is a highly active group of users and developers. Before you begin developing any Moodle plugins, come and join us at https://moodle.org.

Plugin Development – Authentication

In this article, we will be developing a novel plugin that seamlessly integrates Moodle and the WordPress content management system. Our plugin will authorize users via WordPress when they click on a link to Moodle from a WordPress page. The plugin discussed in this article has already been released to the Moodle community – check out the Moodle Plugins Directory at https://moodle.org/plugins/auth_wordpress for details. Let us start by learning what Moodle authentication is and how new user accounts are created.

Authentication

Moodle supports a range of different authentication methods out of the box, each one supported by its own plugin.
To go to the list of available plugins, from the Administration block, click on Site administration, then Plugins, then Authentication, and finally click on Manage authentication. The list of currently installed authentication plugins is displayed.

Each plugin interfaces with an internal Application Programming Interface (API), the Access API – see the Moodle developer documentation for details here: https://docs.moodle.org/dev/Access_API

Getting logged in

There are two ways of prompting the Moodle authentication process:

Attempting to log in from the login page.
Clicking on a link to a protected resource (i.e. a page or file that you can't view or download without logging in).

For an overview of the process, take a look in the developer documentation at https://docs.moodle.org/dev/Authentication_plugins#Overview_of_Moodle_authentication_process. After checks to determine if an upgrade is required (or if we are partway through the upgrade process), there is a short fragment of code that loads the configured authentication plugins and for each one calls a special method called loginpage_hook():

$authsequence = get_enabled_auth_plugins(true); // auths, in sequence
foreach($authsequence as $authname) {
    $authplugin = get_auth_plugin($authname);
    $authplugin->loginpage_hook();
}

The loginpage_hook() function gives each authentication plugin the chance to intercept the login. Assuming that the login has not been intercepted, the process then continues with a check to ensure the supplied username conforms to the configured standard before calling authenticate_user_login() which, if successful, returns a $user object.

OAuth Overview

The OAuth authentication mechanism provides secure delegated access. OAuth supports a number of scenarios, including:

A client requests access from a server and the server responds with either a 'confirm' or 'deny'. This is called two-legged authentication.
A client requests access from a server; the server then pops up a confirmation dialog so that the user can authorize the access, and finally responds with either a 'confirm' or 'deny'. This is called three-legged authentication.

In this article, we will be implementing such a mechanism. In practice, this means that:

An authentication server will only talk to configured clients.
No passwords are exchanged between server and client – only tokens are exchanged, which are meaningless on their own.
By default, users need to give permission before resources are accessed.

Having given an overview, here is the process again described in a little more detail:

A new client is configured in the authentication server. A client is allocated a unique client key, along with a secret token (referred to as the client secret).
The client POSTs an HTTP request to the server (identifying itself using the client key and client secret) and the server responds with a temporary access token. This token is used to request authorization to access protected resources from the server. In this case, 'protected resources' means the WordPress API. Access to the WordPress API will allow us to determine details of the currently logged in user.
The server responds not with an HTTP response but by POSTing new permanent tokens back to the client via a callback URI (i.e. the server talks to the client directly in order to ensure security).
The process ends with the client possessing permanent authorization tokens that can be used to access WP-API functions.
Obviously, the most effective way of learning about this process is to implement it, so let's go ahead and do that now.

Installing the WordPress OAuth 1.0a server

The first step will be to add the OAuth 1.0a server plugin to WordPress. Why not the more recent OAuth 2.0 server plugin? This is because 2.0 only supports https:// and not http://. Also, internally (at least at the time of writing), WordPress will only authenticate internally using either OAuth 1.0a or cookies.

Log into WordPress as an administrator and, from the Dashboard, hover the mouse over the Plugins menu item and click on Installed Plugins. The Plugins page is displayed. At the top of the page, press the Add New button. As described previously, ensure that you install version 1.0a and not 2.0.

Once installed, we need to configure a new client. From the Dashboard menu, hover the mouse over Users and you will see a new Applications menu item has been added. Click on this to display the Registered Applications page. Click the Add New button to display the Add Application page. The Consumer Name is the title for our client that will appear in the Applications page, and Description is a brief explanation of that client to aid with identification. The Callback is the URI that WordPress will talk to (refer to the outline of the OAuth authentication steps). As we have not yet developed the Moodle/OAuth client end, you can specify oob in Callback (this stands for 'out of band'). Once configured, WordPress will generate new OAuth credentials: a Client Key and a Client Secret.

Having installed and configured the server end, now it's time to develop the client.

Creating a new Moodle auth plugin

Before we begin, download the finished plugin from https://github.com/iandavidwild/moodle-auth_wordpress and install it in your local development Moodle instance. The development of a new authentication plugin is described in the developer documentation at https://docs.moodle.org/dev/Authentication_plugins. As described there, let us start by copying the none plugin (the no login authentication method) and using this as a template for our new plugin.

In Eclipse, I'm going to copy the none plugin to a new authentication method called wordpress. That done, we need to update the occurrences of auth_none to auth_wordpress. Firstly, rename /auth/wordpress/lang/en/auth_none.php to auth_wordpress.php. Then, in auth.php we need to rename the class auth_plugin_none to auth_plugin_wordpress. The Eclipse Find/Replace function is great for updating the scripts.

Next, we need to update the version information in version.php. Update all the relevant names, descriptions, and dates. Finally, we can check that Moodle recognises our new plugin by navigating to the Site administration menu and clicking on Notifications. If installation is successful, our new plugin will be listed on the Available authentication plugins page.

Configuration

Let us start by considering the plugin configuration. We will need to allow a Moodle administrator to configure the following:

The URL of the WordPress installation
The client key and client secret provided by WordPress

There is very little flexibility in the design of an authentication plugin configuration page, so at this stage, rather than creating a wireframe drawing and having it agreed with the client, we can simply go ahead and write the code. The configuration page is defined in /config.html. Remember to start declaring the relevant language strings in /lang/en/auth_wordpress.php.
Configuration settings themselves will be managed by the Moodle framework by calling our plugin's process_config() method. Here is the declaration:

/**
 * Processes and stores configuration data for this authentication plugin.
 *
 * @return bool
 */
function process_config($config) {
    // Set to defaults if undefined
    if (!isset($config->wordpress_host)) {
        $config->wordpress_host = '';
    }
    if (!isset($config->client_key)) {
        $config->client_key = '';
    }
    if (!isset($config->client_secret)) {
        $config->client_secret = '';
    }

    set_config('wordpress_host', trim($config->wordpress_host), 'auth/wordpress');
    set_config('client_key', trim($config->client_key), 'auth/wordpress');
    set_config('client_secret', trim($config->client_secret), 'auth/wordpress');

    return true;
}

Having dealt with configuration, now let us start managing the actual OAuth process.

Handling OAuth calls

Rather than go into the details of how we can send HTTP requests to WordPress, let's use a third-party library to do this work. The code I'm going to use is based on Abraham Williams' twitteroauth library (see https://github.com/abraham/twitteroauth). In Eclipse, take a look at the files OAuth.php and BasicOAuth.php for details. To use the library, you will need to add the following lines to the top of /wordpress/auth.php:

require_once($CFG->dirroot . '/auth/wordpress/OAuth.php');
require_once($CFG->dirroot . '/auth/wordpress/BasicOAuth.php');

use OAuth1\BasicOauth;

Let's now start work on handling the Moodle login event.

Handling the Moodle login event

When a user clicks on a link to a protected resource, Moodle calls loginpage_hook() in each enabled authentication plugin. To handle this, let us first implement loginpage_hook(). We need to add the following lines to auth.php:

/**
 * Will get called before the login page is shown.
 *
 */
function loginpage_hook() {
    $client_key = $this->config->client_key;
    $client_secret = $this->config->client_secret;
    $wordpress_host = $this->config->wordpress_host;

    if( (strlen($wordpress_host) > 0) && (strlen($client_key) > 0) && (strlen($client_secret) > 0) ) {
        // kick off the authentication process
        $connection = new BasicOAuth($client_key, $client_secret);

        // strip the trailing slashes from the end of the host URL
        // to avoid any confusion (and to make the code easier to read)
        $wordpress_host = rtrim($wordpress_host, '/');

        $connection->host = $wordpress_host . "/wp-json";
        $connection->requestTokenURL = $wordpress_host . "/oauth1/request";

        $callback = $CFG->wwwroot . '/auth/wordpress/callback.php';
        $tempCredentials = $connection->getRequestToken($callback);

        // Store temporary credentials in the $_SESSION
    }// if
}

This implements the first leg of the authentication process, and the variable $tempCredentials will now contain a temporary access token. We will need to store these temporary credentials and then call on the server to ask the user to authorize the connection (leg two). Add the following lines immediately after the // Store temporary credentials in the $_SESSION comment:

$_SESSION['oauth_token'] = $tempCredentials['oauth_token'];
$_SESSION['oauth_token_secret'] = $tempCredentials['oauth_token_secret'];

$connection->authorizeURL = $wordpress_host . "/oauth1/authorize";

$redirect_url = $connection->getAuthorizeURL($tempCredentials);

header('Location: ' . $redirect_url);
die;

Next, we need to implement the OAuth callback.
Create a new script called callback.php. The callback.php script will need to:

Sanity check the data being passed back from WordPress and fail gracefully if there is an issue
Get the wordpress authentication plugin instance (an instance of auth_plugin_wordpress)
Call on a handler method that will perform the authentication (which we will then need to implement)

The script is simple, short, and available here: https://github.com/iandavidwild/moodle-auth_wordpress/blob/MOODLE_31_STABLE/callback.php

Now, in the auth.php script, we need to implement the callback_handler() method of auth_plugin_wordpress. You can check out the code on GitHub: visit https://github.com/iandavidwild/moodle-auth_wordpress/blob/MOODLE_31_STABLE/auth.php and scroll down to the callback_handler() method.

Lastly, let us add a fragment of code to the loginpage_hook() method that allows us to turn off WordPress authentication in config.php. Add the following to the very beginning of the loginpage_hook() function:

global $CFG;
if(isset($CFG->disablewordpressauth) && ($CFG->disablewordpressauth == true)) {
    return;
}

Summary

In this article, we introduced the Moodle learning platform, investigated the open source philosophy that underpins it, and saw how Moodle's functionality can be extended and enhanced through plugins. We took a pre-existing plugin and used it as the starting point for a new WordPress authentication module. This allows a user logged into WordPress to automatically log into Moodle. To do so, we implemented three-legged OAuth 1.0a WordPress to Moodle authentication. Check out the complete code at https://github.com/iandavidwild/moodle-auth_wordpress/blob/MOODLE_31_STABLE/callback.php. More information on the plugin described in this article is available from the main Moodle website at https://moodle.org/plugins/auth_wordpress.

Resources for Article:

Further resources on this subject:

Introduction to Moodle [article]
Moodle for Online Communities [article]
An Introduction to Moodle 3 and MoodleCloud [article]


Chart Model and Draggable and Droppable Directives

Packt
06 Jul 2017
9 min read
In this article by Sudheer Jonna and Oleg Varaksin, the authors of the book Learning Angular UI Development with PrimeNG, we will see how to work with the chart model and learn about the draggable and droppable directives.

(For more resources related to this topic, see here.)

Working with the chart model

The chart component provides a visual representation of data using charts on a web page. PrimeNG chart components are based on the Chart.js 2.x library (as a dependency), which is an HTML5 open source library. The chart model is based on the UIChart class and is represented by the element name p-chart. The chart components work by attaching the chart library file (Chart.js) to the project's entry point – in this case, the index.html file. It can be configured as either a CDN resource, a local resource, or a CLI configuration.

CDN resource configuration:

<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.5.0/Chart.bundle.min.js"></script>

Angular CLI configuration:

"scripts": [
  "../node_modules/chart.js/dist/Chart.js",
  //..others
]

More about the chart configuration and options is available in the official documentation of the Chart.js library (http://www.chartjs.org/).

Chart types

The chart type is defined through the type property. It supports six types of charts: pie, bar, line, doughnut, polarArea, and radar. Each type has its own format of data, which can be supplied through the data property. For example, in a doughnut chart, the type should refer to doughnut and the data property should bind to the data options as shown here:

<p-chart type="doughnut" [data]="doughnutdata"></p-chart>

The component class has to define the data options with labels and datasets as follows:

this.doughnutdata = {
  labels: ['PrimeNG', 'PrimeUI', 'PrimeReact'],
  datasets: [
    {
      data: [3000, 1000, 2000],
      backgroundColor: [
        "#6544a9",
        "#51cc00",
        "#5d4361"
      ],
      hoverBackgroundColor: [
        "#6544a9",
        "#51cc00",
        "#5d4361"
      ]
    }
  ]
};

Along with the labels and data options, other properties related to skinning can be applied too. The legends are closable by default; that is, if you want to visualize only particular data variants, you can collapse the legends that are not required. A collapsed legend is represented with a strike-through line, and the corresponding data set disappears after clicking on its legend entry.

Customization

Each series is customized on a dataset basis, but you can customize the general or common options via the options attribute. For example, a line chart that customizes the default options would be as follows:

<p-chart type="line" [data]="linedata" [options]="options"></p-chart>

The component class needs to define the chart options with customized title and legend properties as follows:

this.options = {
  title: {
    display: true,
    text: 'PrimeNG vs PrimeUI',
    fontSize: 16
  },
  legend: {
    position: 'bottom'
  }
};

As per the preceding example, the title option is customized with a dynamic title, font size, and conditional display of the title, whereas the legend attribute is used to place the legend in the top, left, bottom, or right position. The default legend position is top. In this example, the legend position is bottom.
The Chart API also supports a couple of utility methods, shown here:

refresh – Redraws the graph with new data
reinit – Destroys the existing graph and then creates it again
generateLegend – Returns an HTML string of a legend for that chart

Events

The chart component provides a click event on data sets to process the selected data using the onDataSelect event callback. Let us take a line chart example with an onDataSelect event callback, passing an event object as follows:

<p-chart type="line" [data]="linedata" (onDataSelect)="selectData($event)"></p-chart>

In the component class, an event callback is used to display the selected data information in a message format as shown:

selectData(event: any) {
  this.msgs = [];
  this.msgs.push({
    severity: 'info',
    summary: 'Data Selected',
    'detail': this.linedata.datasets[event.element._datasetIndex].data[event.element._index]
  });
}

In the preceding event callback (onDataSelect), we used an index of the dataset to display information. There are also many other options available from the event object:

event.element = Selected element
event.dataset = Selected dataset
event.element._datasetIndex = Index of the dataset in data
event.element._index = Index of the data in dataset

Learning Draggable and Droppable directives

Drag and drop is an action that means grabbing an object and dragging it to a different location. Components capable of being dragged and dropped enrich the web and make a solid base for modern UI patterns. The drag and drop utilities in PrimeNG allow us to create draggable and droppable user interfaces efficiently. They abstract away the implementation details at the browser level so that developers don't have to deal with them.

In this section, we will learn about the pDraggable and pDroppable directives. We will introduce a DataGrid component containing some imaginary documents and make these documents draggable in order to drop them onto a recycle bin. The recycle bin is implemented as a DataTable component which shows the properties of the dropped documents. The complete demo application with instructions is available on GitHub at https://github.com/ova2/angular-development-with-primeng/tree/master/chapter9/dragdrop.

Draggable

pDraggable is attached to an element to add a drag behavior. The value of the pDraggable attribute is required – it defines the scope to match draggables with droppables. By default, the whole element is draggable. We can restrict the draggable area by applying the dragHandle attribute. The value of dragHandle can be any CSS selector. In the DataGrid with available documents, we only made the panel's header draggable:

<p-dataGrid [value]="availableDocs">
  <p-header>
    Available Documents
  </p-header>
  <ng-template let-doc pTemplate="item">
    <div class="ui-g-12 ui-md-4" pDraggable="docs"
         dragHandle=".ui-panel-titlebar" dragEffect="move"
         (onDragStart)="dragStart($event, doc)" (onDragEnd)="dragEnd($event)">
      <p-panel [header]="doc.title" [style]="{'text-align':'center'}">
        <img src="/assets/data/images/docs/{{doc.extension}}.png">
      </p-panel>
    </div>
  </ng-template>
</p-dataGrid>

The draggable element can fire three events when the dragging process begins, proceeds, and ends. These are onDragStart, onDrag, and onDragEnd respectively.
In the component class, we buffer the dragged document at the beginning and reset it at the end of the dragging process. This task is done in two callbacks: dragStart and dragEnd.

class DragDropComponent {
  availableDocs: Document[];
  deletedDocs: Document[];
  draggedDoc: Document;

  constructor(private docService: DocumentService) { }

  ngOnInit() {
    this.deletedDocs = [];
    this.docService.getDocuments().subscribe((docs: Document[]) => this.availableDocs = docs);
  }

  dragStart(event: any, doc: Document) {
    this.draggedDoc = doc;
  }

  dragEnd(event: any) {
    this.draggedDoc = null;
  }
  ...
}

In the shown code, we used the Document interface with the following properties:

interface Document {
  id: string;
  title: string;
  size: number;
  creator: string;
  creationDate: Date;
  extension: string;
}

In the demo application, we set the cursor to move when the mouse is moved over any panel's header. This trick provides better visual feedback for the draggable area:

body .ui-panel .ui-panel-titlebar {
  cursor: move;
}

We can also set the dragEffect attribute to specify the effect that is allowed for a drag operation. Possible values are none, copy, move, link, copyMove, copyLink, linkMove, and all. Refer to the official documentation for more details at https://developer.mozilla.org/en-US/docs/Web/API/DataTransfer/effectAllowed.

Droppable

pDroppable is attached to an element to add a drop behavior. The value of the pDroppable attribute should have the same scope as pDraggable. The droppable scope can also be an array to accept multiple droppables. The droppable element can fire four events:

onDragEnter – Invoked when a draggable element enters the drop area
onDragOver – Invoked when a draggable element is being dragged over the drop area
onDrop – Invoked when a draggable is dropped onto the drop area
onDragLeave – Invoked when a draggable element leaves the drop area

In the demo application, the whole code of the droppable area looks as follows:

<div pDroppable="docs" (onDrop)="drop($event)" [ngClass]="{'dragged-doc': draggedDoc}">
  <p-dataTable [value]="deletedDocs">
    <p-header>Recycle Bin</p-header>
    <p-column field="title" header="Title"></p-column>
    <p-column field="size" header="Size (bytes)"></p-column>
    <p-column field="creator" header="Creator"></p-column>
    <p-column field="creationDate" header="Creation Date">
      <ng-template let-col let-doc="rowData" pTemplate="body">
        {{doc[col.field].toLocaleDateString()}}
      </ng-template>
    </p-column>
  </p-dataTable>
</div>

Whenever a document is dragged and dropped into the recycle bin, the dropped document is removed from the list of all available documents and added to the list of deleted documents. This happens in the onDrop callback:

drop(event: any) {
  if (this.draggedDoc) {
    // add draggable element to the deleted documents list
    this.deletedDocs = [...this.deletedDocs, this.draggedDoc];
    // remove draggable element from the available documents list
    this.availableDocs = this.availableDocs.filter((e: Document) => e.id !== this.draggedDoc.id);
    this.draggedDoc = null;
  }
}

Both available and deleted documents are updated by creating new arrays instead of manipulating the existing arrays. This is necessary in data iteration components to force Angular to run change detection. Manipulating existing arrays would not run change detection and the UI would not be updated. The Recycle Bin area gets a red border while dragging any panel with a document. We have achieved this highlighting by setting ngClass as follows: [ngClass]="{'dragged-doc': draggedDoc}".
The style class dragged-doc is enabled when the draggedDoc object is set. The style class is defined as follows:

.dragged-doc {
  border: solid 2px red;
}

Summary

In this article, we started with the chart components: first the chart model API, and then how to create charts programmatically using various chart types such as pie, bar, line, doughnut, polar area, and radar charts. We also learned the features of the Draggable and Droppable directives.

Resources for Article:

Further resources on this subject:

Building Components Using Angular [article]
Get Familiar with Angular [article]
Writing a Blog Application with Node.js and AngularJS [article]


Tangled Web? Not At All!

Packt
22 Jun 2017
20 min read
In this article by Clif Flynt, the author of the book Linux Shell Scripting Cookbook - Third Edition, we see a collection of shell-scripting recipes that talk to services on the Internet. This article is intended to help readers understand how to interact with the Web using shell scripts to automate tasks such as collecting and parsing data from web pages. This is discussed using POST and GET requests to web pages and writing clients to web services.

(For more resources related to this topic, see here.)

In this article, we will cover the following recipes:

Downloading a web page as plain text
Parsing data from a website
Image crawler and downloader
Web photo album generator
Twitter command-line client
Tracking changes to a website
Posting to a web page and reading the response
Downloading a video from the Internet

The Web has become the face of technology and the central access point for data processing. The primary interface to the web is via a browser that's designed for interactive use. That's great for searching and reading articles on the web, but you can also do a lot to automate your interactions with shell scripts. For instance, instead of checking a website daily to see if your favorite blogger has added a new blog, you can automate the check and be informed when there's new information. Similarly, Twitter is the current hot technology for getting up-to-the-minute information. But if I subscribe to my local newspaper's Twitter account because I want the local news, Twitter will send me all the news, including high-school sports that I don't care about. With a shell script, I can grab the tweets and customize my filters to match my desires, not rely on their filters.

Downloading a web page as plain text

Web pages are simply text with HTML tags, JavaScript, and CSS. The HTML tags define the content of a web page, which we can parse for specific content. Bash scripts can parse web pages. An HTML file can be viewed in a web browser to see it properly formatted. Parsing a text document is simpler than parsing HTML data because we aren't required to strip off the HTML tags. Lynx is a command-line web browser which can download a web page as plain text.

Getting ready

Lynx is not installed in all distributions, but is available via the package manager:

# yum install lynx

or

# apt-get install lynx

How to do it...

Let's download the webpage view, in ASCII character representation, into a text file by using the -dump flag with the lynx command:

$ lynx URL -dump > webpage_as_text.txt

This command will list all the hyperlinks (<a href="link">) separately under a heading References, as the footer of the text output. This lets us parse links separately with regular expressions. For example:

$ lynx -dump http://google.com > plain_text_page.txt

You can see the plain-text version of the page by using the cat command:

$ cat plain_text_page.txt
   Search [1]Images [2]Maps [3]Play [4]YouTube [5]News [6]Gmail [7]Drive
   [8]More »
   [9]Web History | [10]Settings | [11]Sign in
   [12]St. Patrick's Day 2017
   _______________________________________________________
   Google Search  I'm Feeling Lucky  [13]Advanced search
   [14]Language tools
   [15]Advertising Programs [16]Business Solutions [17]+Google [18]About Google
   © 2017 - [19]Privacy - [20]Terms
   References
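The numbered References section at the end of a lynx dump is useful on its own. As a small sketch (the exact layout can vary a little between lynx versions), we can slice out just that section with sed:

$ lynx -dump http://google.com | sed -n '/^References$/,$p' > links.txt

The sed -n '/^References$/,$p' command prints everything from the line containing only References through to the end of the dump, leaving us with a numbered list of URLs to process further.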
Parsing data from a website

The lynx, sed, and awk commands can be used to mine data from websites.

How to do it...

Let's go through the commands used to parse details of actresses from the website:

$ lynx -dump -nolist http://www.johntorres.net/BoxOfficefemaleList.html | grep -o "Rank-.*" | sed -e 's/ *Rank-\([0-9]*\) *\(.*\)/\1\t\2/' | sort -nk 1 > actresslist.txt

The output is:

# Only 3 entries shown. All others omitted due to space limits
1   Keira Knightley
2   Natalie Portman
3   Monica Bellucci

How it works...

Lynx is a command-line web browser—it can dump a text version of a website as we would see it in a web browser, instead of returning the raw html as wget or cURL do. This saves the step of removing HTML tags. The -nolist option shows the links without numbers. Parsing and formatting the lines that contain Rank is done with sed:

sed -e 's/ *Rank-\([0-9]*\) *\(.*\)/\1\t\2/'

These lines are then sorted according to the ranks.

See also

The Downloading a web page as plain text recipe in this article explains the lynx command.

Image crawler and downloader

Image crawlers download all the images that appear in a web page. Instead of going through the HTML page by hand to pick the images, we can use a script to identify the images and download them automatically.

How to do it...

This Bash script will identify and download the images from a web page:

#!/bin/bash
#Desc: Images downloader
#Filename: img_downloader.sh

if [ $# -ne 3 ]; then
  echo "Usage: $0 URL -d DIRECTORY"
  exit -1
fi

while [ -n "$1" ]
do
  case $1 in
  -d) shift; directory=$1; shift ;;
   *) url=$1; shift;;
  esac
done

mkdir -p $directory;
baseurl=$(echo $url | egrep -o "https?://[a-z.-]+")

echo Downloading $url
curl -s $url | egrep -o "<img src=[^>]*>" |
  sed 's/<img src="\([^"]*\).*/\1/g' |
  sed "s,^/,$baseurl/," > /tmp/$$.list

cd $directory;

while read filename;
do
  echo Downloading $filename
  curl -s -O "$filename" --silent
done < /tmp/$$.list

An example usage is:

$ ./img_downloader.sh http://www.flickr.com/search/?q=linux -d images

How it works...

The image downloader script reads an HTML page, strips out all tags except <img>, parses src="URL" from the <img> tags, and downloads them to the specified directory. This script accepts a web page URL and the destination directory as command-line arguments.

The [ $# -ne 3 ] statement checks whether the total number of arguments to the script is three; otherwise, it exits and prints a usage example. Otherwise, this code parses the URL and destination directory:

while [ -n "$1" ]
do
  case $1 in
  -d) shift; directory=$1; shift ;;
   *) url=${url:-$1}; shift;;
  esac
done

The while loop runs until all the arguments are processed. The shift command shifts arguments to the left so that $1 will take the next argument's value; that is, $2, and so on. Hence, we can evaluate all the arguments through $1 itself. The case statement checks the first argument ($1). If that matches -d, the next argument must be a directory name, so the arguments are shifted and the directory name is saved. If the argument is any other string, it is a URL. The advantage of parsing arguments in this way is that we can place the -d argument anywhere in the command line:

$ ./img_downloader.sh -d DIR URL

Or:

$ ./img_downloader.sh URL -d DIR

The egrep -o "<img src=[^>]*>" code will print only the matching strings, which are the <img> tags including their attributes. The phrase [^>]* matches all the characters except the closing >, that is, <img src="image.jpg">. sed 's/<img src="\([^"]*\).*/\1/g' extracts the URL from the string src="url".

There are two types of image source paths—relative and absolute. Absolute paths contain full URLs that start with http:// or https://.
Relative URLs start with / or with the image name itself. An example of an absolute URL is http://example.com/image.jpg. An example of a relative URL is /image.jpg. For relative URLs, the starting / should be replaced with the base URL to transform it into http://example.com/image.jpg. The script initializes the baseurl by extracting it from the initial url with the command:

baseurl=$(echo $url | egrep -o "https?://[a-z.-]+")

The output of the previously described sed command is piped into another sed command to replace a leading / with the baseurl, and the results are saved in a file named for the script's PID: /tmp/$$.list.

sed "s,^/,$baseurl/," > /tmp/$$.list

The final while loop iterates through each line of the list and uses curl to download the images. The --silent argument is used with curl to avoid extra progress messages from being printed on the screen.

Web photo album generator

Web developers frequently create photo albums of full-sized and thumbnail images. When a thumbnail is clicked, a large version of the picture is displayed. This requires resizing and placing many images. These actions can be automated with a simple bash script. The script creates thumbnails, places them in exact directories, and generates the code fragment for <img> tags automatically.

Getting ready

This script uses a for loop to iterate over every image in the current directory. The usual Bash utilities such as cat and convert (from the ImageMagick package) are used. These will generate an HTML album, using all the images, in index.html.

How to do it...

This Bash script will generate an HTML album page:

#!/bin/bash
#Filename: generate_album.sh
#Description: Create a photo album using images in current directory

echo "Creating album.."
mkdir -p thumbs
cat <<EOF1 > index.html
<html>
<head>
<style>
body
{
  width:470px;
  margin:auto;
  border: 1px dashed grey;
  padding:10px;
}
img
{
  margin:5px;
  border: 1px solid black;
}
</style>
</head>
<body>
<center><h1> #Album title </h1></center>
<p>
EOF1

for img in *.jpg;
do
  convert "$img" -resize "100x" "thumbs/$img"
  echo "<a href=\"$img\">" >> index.html
  echo "<img src=\"thumbs/$img\" title=\"$img\" /></a>" >> index.html
done

cat <<EOF2 >> index.html
</p>
</body>
</html>
EOF2

echo Album generated to index.html

Run the script as follows:

$ ./generate_album.sh
Creating album..
Album generated to index.html

How it works...

The initial part of the script is used to write the header part of the HTML page. The following script redirects all the contents up to EOF1 to index.html:

cat <<EOF1 > index.html
contents...
EOF1

The header includes the HTML and CSS styling. for img in *.jpg; iterates over the file names and evaluates the body of the loop. convert "$img" -resize "100x" "thumbs/$img" creates images of 100 px width as thumbnails.
The following statements generate the required <img> tag and append it to index.html:

echo "<a href=\"$img\">" >> index.html
echo "<img src=\"thumbs/$img\" title=\"$img\" /></a>" >> index.html

Finally, the footer HTML tags are appended with cat, as done in the first part of the script.

Twitter command-line client

Twitter is the hottest micro-blogging platform, as well as the latest buzz of online social media. We can use the Twitter API to read tweets on our timeline from the command line! Let's see how to do it.

Getting ready

Recently, Twitter stopped allowing people to log in by using plain HTTP authentication, so we must use OAuth to authenticate ourselves. Perform the following steps:

Download the bash-oauth library from https://github.com/livibetter/bash-oauth/archive/master.zip, and unzip it to any directory.
Go to that directory and then, inside the subdirectory bash-oauth-master, run make install-all as root.
Go to https://apps.twitter.com/ and register a new app. This will make it possible to use OAuth.
After registering the new app, go to your app's settings and change Access type to Read and Write.
Now, go to the Details section of the app and note two things—Consumer Key and Consumer Secret—so that you can substitute these in the script we are going to write.

Great, now let's write the script that uses this.

How to do it...

This Bash script uses the OAuth library to read tweets or send your own updates.

#!/bin/bash
#Filename: twitter.sh
#Description: Basic twitter client

oauth_consumer_key=YOUR_CONSUMER_KEY
oauth_consumer_secret=YOUR_CONSUMER_SECRET

config_file=~/.$oauth_consumer_key-$oauth_consumer_secret-rc

if [[ "$1" != "read" ]] && [[ "$1" != "tweet" ]];
then
  echo -e "Usage: $0 tweet status_message\n   OR\n      $0 read\n"
  exit -1;
fi

#source /usr/local/bin/TwitterOAuth.sh
source bash-oauth-master/TwitterOAuth.sh
TO_init

if [ ! -e $config_file ]; then
  TO_access_token_helper
  if (( $? == 0 )); then
    echo oauth_token=${TO_ret[0]} > $config_file
    echo oauth_token_secret=${TO_ret[1]} >> $config_file
  fi
fi

source $config_file

if [[ "$1" = "read" ]];
then
  TO_statuses_home_timeline '' 'YOUR_TWEET_NAME' '10'
  echo $TO_ret | sed 's/,"/\n/g' | sed 's/":/~/' | awk -F~ '{} {if ($1 == "text") {txt=$2;} else if ($1 == "screen_name") printf("From: %s\n Tweet: %s\n\n", $2, txt);} {}' | tr '"' ' '
elif [[ "$1" = "tweet" ]];
then
  shift
  TO_statuses_update '' "$@"
  echo 'Tweeted :)'
fi

Run the script as follows:

$ ./twitter.sh read
Please go to the following link to get the PIN: https://api.twitter.com/oauth/authorize?oauth_token=LONG_TOKEN_STRING
PIN: PIN_FROM_WEBSITE
Now you can create, edit and present Slides offline. - by A Googler
$ ./twitter.sh tweet "I am reading Packt Shell Scripting Cookbook"
Tweeted :)
$ ./twitter.sh read | head -2
From: Clif Flynt
 Tweet: I am reading Packt Shell Scripting Cookbook

How it works...

First of all, we use the source command to include the TwitterOAuth.sh library, so we can use its functions to access Twitter. The TO_init function initializes the library. Every app needs to get an OAuth token and token secret the first time it is used. If these are not present, we use the library function TO_access_token_helper to acquire them. Once we have the tokens, we save them to a config file so we can simply source it the next time the script is run. The library function TO_statuses_home_timeline fetches the tweets from Twitter.
This data is returned as a single long string in JSON format, which starts like this: [{"created_at":"Thu Nov 10 14:45:20 +0000 2016","id":7...9,"id_str":"7...9","text":"Dining... Each tweet starts with the created_at tag and includes a text and a screen_name tag. The script will extract the text and screen name data and display only those fields. The script assigns the long string to the variable TO_ret. The JSON format uses quoted strings for the key and may or may not quote the value. The key/value pairs are separated by commas, and the key and value are separated by a colon :. The first sed command replaces each ," character sequence with a newline, making each key/value pair a separate line. These lines are piped to another sed command to replace each occurrence of ": with a tilde ~, which creates a line like screen_name~"Clif_Flynt" The final awk script reads each line. The -F~ option splits the line into fields at the tilde, so $1 is the key and $2 is the value. The if command checks for text or screen_name. The text is first in the tweet, but it's easier to read if we report the sender first, so the script saves a text return until it sees a screen_name, then prints the current value of $2 and the saved value of the text. The TO_statuses_update library function generates a tweet. The empty first parameter defines our message as being in the default format, and the message is a part of the second parameter. Tracking changes to a website Tracking website changes is useful to both web developers and users. Checking a website manually is impractical, but a change tracking script can be run at regular intervals. When a change occurs, it generates a notification. Getting ready Tracking changes in terms of Bash scripting means fetching websites at different times and taking the difference by using the diff command. We can use curl and diff to do this. How to do it... This Bash script combines different commands to track changes in a web page: #!/bin/bash #Filename: track_changes.sh #Desc: Script to track changes to webpage if [ $# -ne 1 ]; then echo -e "Usage: $0 URL\n" exit 1; fi first_time=0 # Not first time if [ ! -e "last.html" ]; then first_time=1 # Set the flag for the first run fi curl --silent $1 -o recent.html if [ $first_time -ne 1 ]; then changes=$(diff -u last.html recent.html) if [ -n "$changes" ]; then echo -e "Changes:\n" echo "$changes" else echo -e "\nWebsite has no changes" fi else echo "[First run] Archiving.." fi cp recent.html last.html Let's look at the output of the track_changes.sh script on a website you control. First we'll see the output when a web page is unchanged, and then after making changes. Note that you should change MyWebSite.org to your website name. First, run the following command: $ ./track_changes.sh http://www.MyWebSite.org [First run] Archiving.. Second, run the command again. $ ./track_changes.sh http://www.MyWebSite.org Website has no changes Third, run the following command after making changes to the web page: $ ./track_changes.sh http://www.MyWebSite.org Changes: --- last.html 2010-08-01 07:29:15.000000000 +0200 +++ recent.html 2010-08-01 07:29:43.000000000 +0200 @@ -1,3 +1,4 @@ +added line :) data How it works... The script checks whether it is running for the first time by using [ ! -e "last.html" ];. If last.html doesn't exist, it means that this is the first time and the web page must be downloaded and saved as last.html. If it is not the first time, it downloads the new copy, recent.html, and checks the difference with the diff utility.
Any changes will be displayed as diff output. Finally, recent.html is copied to last.html. Note that changing the website you're checking will generate a huge diff file the first time you examine it. If you need to track multiple pages, you can create a folder for each website you intend to watch. Posting to a web page and reading the response POST and GET are two types of requests in HTTP to send information to, or retrieve information from, a website. In a GET request, we send parameters (name-value pairs) through the webpage URL itself. The POST command places the key/value pairs in the message body instead of the URL. POST is commonly used when submitting long forms or to conceal the information submitted from a casual glance. Getting ready For this recipe, we will use the sample guestbook website included in the tclhttpd package. You can download tclhttpd from http://sourceforge.net/projects/tclhttpd and then run it on your local system to create a local webserver. The guestbook page requests a name and URL, which it adds to a guestbook to show who has visited a site when the user clicks the Add me to your guestbook button. This process can be automated with a single curl (or wget) command. How to do it... Download the tclhttpd package and cd to the bin folder. Start the tclhttpd daemon with this command: tclsh httpd.tcl The format to POST and read the HTML response from a generic website resembles this: $ curl URL -d "postvar=postdata2&postvar2=postdata2" Consider the following example: $ curl http://127.0.0.1:8015/guestbook/newguest.html -d "name=Clif&url=www.noucorp.com&http=www.noucorp.com" curl prints a response page like this: <HTML> <Head> <title>Guestbook Registration Confirmed</title> </Head> <Body BGCOLOR=white TEXT=black> <a href="www.noucorp.com">www.noucorp.com</a> <DL> <DT>Name <DD>Clif <DT>URL <DD> </DL> www.noucorp.com </Body> -d is the argument used for posting. The string argument for -d is similar to the GET request semantics. var=value pairs are to be delimited by &. You can POST the data using wget by using --post-data "string". For example: $ wget http://127.0.0.1:8015/guestbook/newguest.cgi --post-data "name=Clif&url=www.noucorp.com&http=www.noucorp.com" -O output.html Use the same format as cURL for name-value pairs. The text in output.html is the same as that returned by the cURL command. The string to the post arguments (for example, to -d or --post-data) should always be given in quotes. If quotes are not used, & is interpreted by the shell to indicate that this should be a background process. How it works... If you look at the website source (use the View Source option from the web browser), you will see an HTML form defined, similar to the following code: <form action="newguest.cgi" method="post"> <ul> <li> Name: <input type="text" name="name" size="40"> <li> Url: <input type="text" name="url" size="40"> <input type="submit"> </ul> </form> Here, newguest.cgi is the target URL. When the user enters the details and clicks on the Submit button, the name and url inputs are sent to newguest.cgi as a POST request, and the response page is returned to the browser. Downloading a video from the internet There are many reasons for downloading a video. If you are on a metered service, you might want to download videos during off-hours when the rates are cheaper. You might want to watch videos where the bandwidth doesn't support streaming, or you might just want to make certain that you always have that video of cute cats to show your friends.
Getting ready One program for downloading videos is youtube-dl. This is not included in most distributions and the repositories may not be up to date, so it's best to go to the youtube-dl main site: http://yt-dl.org You'll find links and information on that page for downloading and installing youtube-dl. How to do it… Using youtube-dl is easy. Open your browser and find a video you like. Then copy/paste that URL to the youtube-dl command line: youtube-dl https://www.youtube.com/watch?v=AJrsl3fHQ74 While youtube-dl is downloading the file, it will generate a status line on your terminal. How it works… The youtube-dl program works by sending a GET message to the server, just as a browser would do. It masquerades as a browser so that YouTube or other video providers will send the video as if the device were streaming it. The --list-formats (-F) option will list the formats a video is available in, and the --format (-f) option will specify which format to download. This is useful if you want to download a higher-resolution video than your internet connection can reliably stream. Summary In this article we learned how to download and parse website data, send data to forms, and automate website-usage tasks and similar activities. We can automate many activities that we perform interactively through a browser with a few lines of scripting. Resources for Article: Further resources on this subject: Linux Shell Scripting – various recipes to help you [article] Linux Shell Script: Tips and Tricks [article] Linux Shell Script: Monitoring Activities [article]
What are Microservices?

Packt
20 Jun 2017
12 min read
In this article written by Gaurav Kumar Aroraa, Lalit Kale, Kanwar Manish, authors of the book Building Microservices with .NET Core, we will start with a brief introduction. Then, we will define its predecessors: monolithic architecture and service-oriented architecture (SOA). After this, we will see how microservices fare against both SOA and the monolithic architecture. We will then compare the advantages and disadvantages of each one of these architectural styles. This will enable us to identify the right scenario for these styles. We will understand the problems that arise from having a layered monolithic architecture. We will discuss the solutions available to these problems in the monolithic world. At the end, we will be able to break down a monolithic application into a microservice architecture. We will cover the following topics in this article: Origin of microservices Discussing microservices (For more resources related to this topic, see here.) Origin of microservices The term microservices was used for the first time in mid-2011 at a workshop of software architects. In March 2012, James Lewis presented some of his ideas about microservices. By the end of 2013, various groups from the IT industry started having discussions on microservices, and by 2014, it had become popular enough to be considered a serious contender for large enterprises. There is no official introduction available for microservices. The understanding of the term is purely based on the use cases and discussions held in the past. We will discuss this in detail, but before that, let's check out the definition of microservices as per Wikipedia (https://en.wikipedia.org/wiki/Microservices), which sums it up as: Microservices is a specialization of and implementation approach for SOA used to build flexible, independently deployable software systems. In 2014, James Lewis and Martin Fowler came together and provided a few real-world examples and presented microservices (refer to http://martinfowler.com/microservices/) in their own words and further detailed it as follows: The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies. It is very important that you see all the attributes James and Martin defined here. They defined it as an architectural style that developers could utilize to develop a single application with the business logic spread across a bunch of small services, each having their own persistent storage functionality. Also, note its attributes: it can be independently deployable, can run in its own process, is a lightweight communication mechanism, and can be written in different programming languages. We want to emphasize this specific definition since it is the crux of the whole concept. And as we move along, it will come together by the time we finish this book. Discussing microservices Until now, we have gone through a few definitions of microservices; now, let's discuss microservices in detail. In short, a microservice architecture removes most of the drawbacks of SOA architectures.  
Slicing your application into a number of services is neither SOA nor microservices. However, combining service design and best practices from the SOA world along with a few emerging practices, such as isolated deployment, semantic versioning, providing lightweight services, and service discovery in polyglot programming, is microservices. We implement microservices to satisfy business features and implement them with reduced time to market and greater flexibility. Before we move on to understand the architecture, let's discuss the two important architectures that have led to its existence: The monolithic architecture style SOA Most of us would be aware of the scenario where during the life cycle of an enterprise application development, a suitable architectural style is decided. Then, at various stages, the initial pattern is further improved and adapted with changes that cater to various challenges, such as deployment complexity, large code base, and scalability issues. This is exactly how the monolithic architecture style evolved into SOA, further leading up to microservices. Monolithic architecture The monolithic architectural style is a traditional architecture type and has been widely used in the industry. The term "monolithic" is not new and is borrowed from the Unix world. In Unix, most of the commands exist as a standalone program whose functionality is not dependent on any other program. As seen in the succeeding image, we can have different components in the application such as: User interface: This handles all of the user interaction while responding with HTML or JSON or any other preferred data interchange format (in the case of web services). Business logic: All the business rules applied to the input being received in the form of user input, events, and database exist here. Database access: This houses the complete functionality for accessing the database for the purpose of querying and persisting objects. A widely accepted rule is that it is utilized through business modules and never directly through user-facing components. Software built using this architecture is self-contained. We can imagine a single .NET assembly that contains various components, as described in the following image: As the software is self-contained here, its components are interconnected and interdependent. Even a simple code change in one of the modules may break a major functionality in other modules. This would result in a scenario where we'd need to test the whole application. With the business depending critically on its enterprise application frameworks, this amount of time could prove to be very critical. Having all the components tightly coupled poses another challenge: whenever we execute or compile such software, all the components should be available or the build will fail; refer to the preceding image that represents a monolithic architecture and is a self-contained or a single .NET assembly project. However, monolithic architectures might also have multiple assemblies. This means that even though a business layer (assembly, data access layer assembly, and so on) is separated, at run time, all of them will come together and run as one process.  A user interface depends on other components' direct sale and inventory in a manner similar to all other components that depend upon each other. In this scenario, we will not be able to execute this project in the absence of any one of these components. 
The process of upgrading any one of these components will be more complex as we may have to consider other components that require code changes too. This results in more development time than required for the actual change. Deploying such an application will become another challenge. During deployment, we will have to make sure that each and every component is deployed properly; otherwise, we may end up facing a lot of issues in our production environments. If we develop an application using the monolithic architecture style, as discussed previously, we might face the following challenges: Large code base: This is a scenario where the code lines outnumber the comments by a great margin. As components are interconnected, we will have to bear with a repetitive code base. Too many business modules: This is in regard to modules within the same system. Code base complexity: This results in a higher chance of code breaking due to the fix required in other modules or services. Complex code deployment: You may come across minor changes that would require whole system deployment. One module failure affecting the whole system: This is in regard to modules that depend on each other. Scalability: This is required for the entire system and not just the modules in it. Intermodule dependency: This is due to tight coupling. Spiraling development time: This is due to code complexity and interdependency. Inability to easily adapt to a new technology: In this case, the entire system would need to be upgraded. As discussed earlier, if we want to reduce development time, ease of deployment, and improve maintainability of software for enterprise applications, we should avoid the traditional or monolithic architecture. Service-oriented architecture In the previous section, we discussed the monolithic architecture and its limitations. We also discussed why it does not fit into our enterprise application requirements. To overcome these issues, we should go with some modular approach where we can separate the components such that they should come out of the self-contained or single .NET assembly. The main difference between SOA & monolithic is not one or multiple assembly. But as the service in SOA runs as separate process, SOA scales better compared to monolithic. Let's discuss the modular architecture, that is, SOA. This is a famous architectural style using which the enterprise applications are designed with a collection of services as its base. These services may be RESTful or ASMX Web services. To understand SOA in more detail, let's discuss "service" first. What is service? Service, in this case, is an essential concept of SOA. It can be a piece of code, program, or software that provides some functionality to other system components. This piece of code can interact directly with the database or indirectly through another service. Furthermore, it can be consumed by clients directly, where the client may either be a website, desktop app, mobile app, or any other device app. Refer to the following diagram: Service refers to a type of functionality exposed for consumption by other systems (generally referred to as clients/client applications). As mentioned earlier, it can be represented by a piece of code, program, or software. Such services are exposed over the HTTP transport protocol as a general practice. However, the HTTP protocol is not a limiting factor, and a protocol can be picked as deemed fit for the scenario. 
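To make the idea of a service concrete, the sketch below exposes one small piece of functionality over HTTP for other components to consume. It is only an illustration and is not taken from the book (which targets .NET Core); the port number and the /products endpoint are assumptions, and Node.js is used here simply because it keeps the example short.

// products-service.js - a minimal, self-contained service (illustrative only)
const http = require('http');

// The service owns its own data; a real service might query a database instead.
const products = [
  { id: 1, name: 'Keyboard', price: 25 },
  { id: 2, name: 'Monitor', price: 140 }
];

const server = http.createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/products') {
    // Expose the functionality to any client (web, desktop, or mobile) over HTTP
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(products));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(8080);

A client consuming this endpoint needs to know nothing about how the data is stored, which is exactly the role the services play in the scenarios described next.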
In the following image, Service – direct selling is directly interacting with Database, and three different clients, namely Web, Desktop, and Mobile, are consuming the service. On the other hand, we have clients consuming Service – partner selling, which is interacting with Service – channel partners for database access. A product selling service is a set of services that interacts with client applications and provides database access directly or through another service, in this case, Service – Channel partner.  In the case of Service – direct selling, shown in the preceding example, it is providing some functionality to a Web Store, a desktop application, and a mobile application. This service is further interacting with the database for various tasks, namely fetching data, persisting data, and so on. Normally, services interact with other systems via some communication channel, generally the HTTP protocol. These services may or may not be deployed on the same or single servers. In the preceding image, we have projected an SOA example scenario. There are many fine points to note here, so let's get started. Firstly, our services can be spread across different physical machines. Here, Service-direct selling is hosted on two separate machines. It is a possible scenario that instead of the entire business functionality, only a part of it will reside on Server 1 and the remaining on Server 2. Similarly, Service – partner selling appears to be having the same arrangement on Server 3 and Server 4. However, it doesn't stop Service – channel partners being hosted as a complete set on both the servers: Server 5 and Server 6. A system that uses a service or multiple services in a fashion mentioned in the preceding figure is called an SOA. We will discuss SOA in detail in the following sections. Let's recall the monolithic architecture. In this case, we did not use it because it restricts code reusability; it is a self-contained assembly, and all the components are interconnected and interdependent. For deployment, in this case, we will have to deploy our complete project after we select the SOA (refer to preceding image and subsequent discussion). Now, because of the use of this architectural style, we have the benefit of code reusability and easy deployment. Let's examine this in the wake of the preceding figure: Reusability: Multiple clients can consume the service. The service can also be simultaneously consumed by other services. For example, OrderService is consumed by web and mobile clients. Now, OrderService can also be used by the Reporting Dashboard UI. Stateless: Services do not persist any state between requests from the client, that is, the service doesn't know, nor care, that the subsequent request has come from the client that has/hasn't made the previous request. Contract-based: Interfaces make it technology-agnostic on both sides of implementation and consumption. It also serves to make it immune to the code updates in the underlying functionality. Scalability: A system can be scaled up; SOA can be individually clustered with appropriate load balancing. Upgradation: It is very easy to roll out new functionalities or introduce new versions of the existing functionality. The system doesn't stop you from keeping multiple versions of the same business functionality. Summary In this article, we discussed what the microservice architectural style is in detail, its history, and how it differs from its predecessors: monolithic and SOA. 
We further defined the various challenges that monolithic faces when dealing with large systems. Scalability and reusability are some definite advantages that SOA provides over monolithic. We also discussed the limitations of the monolithic architecture, including scaling problems, by implementing a real-life monolithic application. The microservice architecture style resolves all these issues by reducing code interdependency and isolating the dataset size that any one of the microservices works upon. We utilized dependency injection and database refactoring for this. We further explored automation, CI, and deployment. These easily allow the development team to let the business sponsor choose what industry trends to respond to first. This results in cost benefits, better business response, timely technology adoption, effective scaling, and removal of human dependency. Resources for Article: Further resources on this subject: Microservices and Service Oriented Architecture [article] Breaking into Microservices Architecture [article] Microservices – Brave New World [article]
CORS in Node.js

Packt
20 Jun 2017
14 min read
In this article by Randall Goya and Rajesh Gunasundaram, the authors of the book CORS Essentials, Node.js is a cross-platform JavaScript runtime environment that executes JavaScript code on the server side. This makes it possible to have a unified language across web application development; JavaScript becomes the language that runs both on the client side and on the server side. (For more resources related to this topic, see here.) In this article we will learn about: Node.js is a JavaScript platform for developing server-side web applications. Node.js can provide the web server for other frameworks including Express.js, AngularJS, Backbone.js, Ember.js and others. Some other JavaScript frameworks such as ReactJS, Ember.js and Socket.IO may also use Node.js as the web server. Isomorphic JavaScript can add server-side functionality for client-side frameworks. JavaScript frameworks are evolving rapidly. This article reviews some of the current techniques, and syntax specific to some frameworks. Make sure to check the documentation for the project to discover the latest techniques. Once you understand the CORS concepts, you may create your own solutions, because JavaScript is a loosely structured language. All the examples are based on the fundamentals of CORS, with allowed origin(s), methods, and headers such as Content-Type, or preflight, that may be required according to the CORS specification. JavaScript frameworks are very popular JavaScript is sometimes called the lingua franca of the Internet, because it is cross-platform and supported by many devices. It is also a loosely-structured language, which makes it possible to craft solutions for many types of applications. Sometimes an entire application is built in JavaScript. Frequently JavaScript provides a client-side front-end for applications built with Symfony, Content Management Systems such as Drupal, and other back-end frameworks. Node.js is server-side JavaScript and provides a web server as an alternative to Apache, IIS, Nginx and other traditional web servers. Introduction to Node.js Node.js is an open-source and cross-platform library that enables the development of server-side web applications. Applications written in JavaScript for Node.js can run on many operating systems, including OS X, Microsoft Windows, Linux, and many others. Node.js provides non-blocking I/O and an event-driven architecture designed to optimize an application's performance and scalability for real-time web applications. The biggest difference between PHP and Node.js is that PHP is a blocking language, where commands execute only after the previous command has completed, while Node.js is a non-blocking language where commands execute in parallel and use callbacks to signal completion. Node.js can move files, payloads from services, and data asynchronously, without waiting for some command to complete, which improves performance. Most JS frameworks that work with Node.js use the concept of routes to manage pages and other parts of the application. Each route may have its own set of configurations. For example, CORS may be enabled only for a specific page or route. Node.js loads modules for extending functionality via the npm package manager. The developer selects which packages to load with npm, which reduces bloat. The developer community creates a large number of npm packages for specific functions. JXcore is a fork of Node.js targeting mobile devices and IoTs (Internet of Things devices).
JXcore can use both Google V8 and Mozilla SpiderMonkey as its JavaScript engine. JXcore can run Node applications on iOS devices using Mozilla SpiderMonkey. MEAN is a popular JavaScript software stack with MongoDB (a NoSQL database), Express.js and AngularJS, all of which run on a Node.js server. JavaScript frameworks that work with Node.js Node.js provides a server for other popular JS frameworks, including AngularJS, Express.js. Backbone.js, Socket.IO, and Connect.js. ReactJS was designed to run in the client browser, but it is often combined with a Node.js server. As we shall see in the following descriptions, these frameworks are not necessarily exclusive, and are often combined in applications. Express.js is a Node.js server framework Express.js is a Node.js web application server framework, designed for building single-page, multi-page, and hybrid web applications. It is considered the "standard" server framework for Node.js. The package is installed with the command npm install express –save. AngularJS extends static HTML with dynamic views HTML was designed for static content, not for dynamic views. AngularJS extends HTML syntax with custom tag attributes. It provides model–view–controller (MVC) and model–view–viewmodel (MVVM) architectures in a front-end client-side framework.  AngularJS is often combined with a Node.js server and other JS frameworks. AngularJS runs client-side and Express.js runs on the server, therefore Express.js is considered more secure for functions such as validating user input, which can be tampered client-side. AngularJS applications can use the Express.js framework to connect to databases, for example in the MEAN stack. Connect.js provides middleware for Node.js requests Connect.js is a JavaScript framework providing middleware to handle requests in Node.js applications. Connect.js provides middleware to handle Express.js and cookie sessions, to provide parsers for the HTML body and cookies, and to create vhosts (virtual hosts) and error handlers, and to override methods. Backbone.js often uses a Node.js server Backbone.js is a JavaScript framework with a RESTful JSON interface and is based on the model–view–presenter (MVP) application design. It is designed for developing single-page web applications, and for keeping various parts of web applications (for example, multiple clients and the server) synchronized. Backbone depends on Underscore.js, plus jQuery for use of all the available fetures. Backbone often uses a Node.js server, for example to connect to data storage. ReactJS handles user interfaces ReactJS is a JavaScript library for creating user interfaces while addressing challenges encountered in developing single-page applications where data changes over time. React handles the user interface in model–view–controller (MVC) architecture. ReactJS typically runs client-side and can be combined with AngularJS. Although ReactJS was designed to run client-side, it can also be used server-side in conjunction with Node.js. PayPal and Netflix leverage the server-side rendering of ReactJS known as Isomorphic ReactJS. There are React-based add-ons that take care of the server-side parts of a web application. Socket.IO uses WebSockets for realtime event-driven applications Socket.IO is a JavaScript library for event-driven web applications using the WebSocket protocol ,with realtime, bi-directional communication between web clients and servers. It has two parts: a client-side library that runs in the browser, and a server-side library for Node.js. 
Although it can be used as simply a wrapper for WebSocket, it provides many more features, including broadcasting to multiple sockets, storing data associated with each client, and asynchronous I/O. Socket.IO provides better security than WebSocket alone, since allowed domains must be specified for its server. Ember.js can use Node.js Ember is another popular JavaScript framework with routing that uses Moustache templates. It can run on a Node.js server, or also with Express.js. Ember can also be combined with Rack, a component of Ruby On Rails (ROR). Ember Data is a library for  modeling data in Ember.js applications. CORS in Express.js The following code adds the Access-Control-Allow-Origin and Access-Control-Allow-Headers headers globally to all requests on all routes in an Express.js application. A route is a path in the Express.js application, for example /user for a user page. app.all sets the configuration for all routes in the application. Specific HTTP requests such as GET or POST are handled by app.get and app.post. app.all('*', function(req, res, next) { res.header("Access-Control-Allow-Origin", "*"); res.header("Access-Control-Allow-Headers", "X-Requested-With"); next(); }); app.get('/', function(req, res, next) { // Handle GET for this route }); app.post('/', function(req, res, next) { // Handle the POST for this route }); For better security, consider limiting the allowed origin to a single domain, or adding some additional code to validate or limit the domain(s) that are allowed. Also, consider limiting sending the headers only for routes that require CORS by replacing app.all with a more specific route and method. The following code only sends the CORS headers on a GET request on the route/user, and only allows the request from http://www.localdomain.com. app.get('/user', function(req, res, next) { res.header("Access-Control-Allow-Origin", "http://www.localdomain.com"); res.header("Access-Control-Allow-Headers", "X-Requested-With"); next(); }); Since this is JavaScript code, you may dynamically manage the values of routes, methods, and domains via variables, instead of hard-coding the values. CORS npm for Express.js using Connect.js middleware Connect.js provides middleware to handle requests in Express.js. You can use Node Package Manager (npm) to install a package that enables CORS in Express.js with Connect.js: npm install cors The package offers flexible options, which should be familiar from the CORS specification, including using credentials and preflight. It provides dynamic ways to validate an origin domain using a function or a regular expression, and handler functions to process preflight. Configuration options for CORS npm origin: Configures the Access-Control-Allow-Origin CORS header with a string containing the full URL and protocol making the request, for example http://localdomain.com. Possible values for origin: Default value TRUE uses req.header('Origin') to determine the origin and CORS is enabled. When set to FALSE CORS is disabled. It can be set to a function with the request origin as the first parameter and a callback function as the second parameter. It can be a regular expression, for example /localdomain.com$/, or an array of regular expressions and/or strings to match. methods: Sets the Access-Control-Allow-Methods CORS header. Possible values for methods: A comma-delimited string of HTTP methods, for example GET, POST An array of HTTP methods, for example ['GET', 'PUT', 'POST'] allowedHeaders: Sets the Access-Control-Allow-Headers CORS header. 
Possible values for allowedHeaders: A comma-delimited string of allowed headers, for example 'Content-Type, Authorization' An array of allowed headers, for example ['Content-Type', 'Authorization'] If unspecified, it defaults to the value specified in the request's Access-Control-Request-Headers header exposedHeaders: Sets the Access-Control-Expose-Headers header. Possible values for exposedHeaders: A comma-delimited string of exposed headers, for example 'Content-Range, X-Content-Range' An array of exposed headers, for example ['Content-Range', 'X-Content-Range'] If unspecified, no custom headers are exposed credentials: Sets the Access-Control-Allow-Credentials CORS header. Possible values for credentials: TRUE: passes the header for preflight FALSE or unspecified: omit the header, no preflight maxAge: Sets the Access-Control-Max-Age header. Possible values for maxAge: An integer value in seconds for the TTL to cache the request If unspecified, the request is not cached preflightContinue: Passes the CORS preflight response to the next handler. The default configuration without setting any values allows all origins and methods without preflight. Keep in mind that complex CORS requests other than GET, HEAD, POST will fail without preflight, so make sure you enable preflight in the configuration when using them. Without setting any values, the configuration defaults to: { "origin": "*", "methods": "GET,HEAD,PUT,PATCH,POST,DELETE", "preflightContinue": false } Code examples for CORS npm These examples demonstrate the flexibility of CORS npm for specific configurations. Note that the express and cors packages are always required. Enable CORS globally for all origins and all routes The simplest implementation of CORS npm enables CORS for all origins and all requests. The following example enables CORS for an arbitrary route "/product/:id" for a GET request by telling the entire app to use CORS for all routes: var express = require('express') , cors = require('cors') , app = express(); app.use(cors()); // this tells the app to use CORS for all requests and all routes app.get('/product/:id', function(req, res, next){ res.json({msg: 'CORS is enabled for all origins'}); }); app.listen(80, function(){ console.log('CORS is enabled on the web server listening on port 80'); }); Allow CORS for dynamic origins for a specific route The following example uses corsOptions to check whether the domain making the request is in the whitelist array; the callback receives false when no match is found. This CORS option is passed to the route "product/:id", which is the only route that has CORS enabled. The allowed origins can be dynamic by changing the value of the variable "whitelist."
var express = require('express') , cors = require('cors') , app = express(); // define the whitelisted domains and set the CORS options to check them var whitelist = ['http://localdomain.com', 'http://localdomain-other.com']; var corsOptions = { origin: function(origin, callback){ var originWhitelisted = whitelist.indexOf(origin) !== -1; callback(null, originWhitelisted); } }; // add the CORS options to a specific route /product/:id for a GET request app.get('/product/:id', cors(corsOptions), function(req, res, next){ res.json({msg: 'A whitelisted domain matches and CORS is enabled for route product/:id'}); }); // log that CORS is enabled on the server app.listen(80, function(){ console.log('CORS is enabled on the web server listening on port 80'); }); You may set different CORS options for specific routes, or sets of routes, by defining the options assigned to unique variable names, for example "corsUserOptions." Pass the specific configuration variable to each route that requires that set of options. Enabling CORS preflight CORS requests that use an HTTP method other than GET, HEAD, POST (for example DELETE), or that use custom headers, are considered complex and require a preflight request before proceeding with the CORS requests. Enable preflight by adding an OPTIONS handler for the route: var express = require('express') , cors = require('cors') , app = express(); // add the OPTIONS handler app.options('/products/:id', cors()); // options is added to the route /products/:id // use the OPTIONS handler for the DELETE method on the route /products/:id app.del('/products/:id', cors(), function(req, res, next){ res.json({msg: 'CORS is enabled with preflight on the route /products/:id for the DELETE method for all origins!'}); }); app.listen(80, function(){ console.log('CORS is enabled on the web server listening on port 80'); }); You can enable preflight globally on all routes with the wildcard: app.options('*', cors()); Configuring CORS asynchronously One of the reasons to use NodeJS frameworks is to take advantage of their asynchronous abilities, handling multiple tasks at the same time. Here we use a callback function corsDelegateOptions and add it to the cors parameter passed to the route /products/:id. The callback function can handle multiple requests asynchronously. var express = require('express') , cors = require('cors') , app = express(); // define the allowed origins stored in a variable var whitelist = ['http://example1.com', 'http://example2.com']; // create the callback function var corsDelegateOptions = function(req, callback){ var corsOptions; if(whitelist.indexOf(req.header('Origin')) !== -1){ corsOptions = { origin: true }; // the requested origin in the CORS response matches and is allowed }else{ corsOptions = { origin: false }; // the requested origin in the CORS response doesn't match, and CORS is disabled for this request } callback(null, corsOptions); // callback expects two parameters: error and options }; // add the callback function to the cors parameter for the route /products/:id for a GET request app.get('/products/:id', cors(corsDelegateOptions), function(req, res, next){ res.json({msg: 'A whitelisted domain matches and CORS is enabled for route product/:id'}); }); app.listen(80, function(){ console.log('CORS is enabled on the web server listening on port 80'); }); Summary We have learned the important aspects of applying CORS in Node.js.
Let us have a quick recap of what we have learnt: Node.js provides a web server built with JavaScript, and can be combined with many other JS frameworks as the application server. Although some frameworks have specific syntax for implementing CORS, they all follow the CORS specification by specifying allowed origin(s) and method(s). More robust frameworks allow custom headers such as Content-Type, and preflight when required for complex CORS requests. JavaScript frameworks may depend on the jQuery XHR object, which must be configured properly to allow Cross-Origin requests. JavaScript frameworks are evolving rapidly. The examples here may become outdated. Always refer to the project documentation for up-to-date information. With knowledge of the CORS specification, you may create your own techniques using JavaScript based on these examples, depending on the specific needs of your application. https://en.wikipedia.org/wiki/Node.js Resources for Article: Further resources on this subject: An Introduction to Node.js Design Patterns [article] Five common questions for .NET/Java developers learning JavaScript and Node.js [article] API with MongoDB and Node.js [article]
Understanding the Basics of Gulp

Packt
19 Jun 2017
15 min read
In this article written by Travis Maynard, author of the book Getting Started with Gulp - Second Edition, we will take a look at the basics of gulp and how it works. Understanding some of the basic principles and philosophies behind the tool, it's plugin system will assist you as you begin writing your own gulpfiles. We'll start by taking a look at the engine behind gulp and then follow up by breaking down the inner workings of gulp itself. By the end of this article, you will be prepared to begin writing your own gulpfile. (For more resources related to this topic, see here.) Installing node.js and npm As you learned in the introduction, node.js and npm are the engines that work behind the scenes that allow us to operate gulp and keep track of any plugins we decide to use. Downloading and installing node.js For Mac and Windows, the installation is quite simple. All you need to do is navigate over to http://nodejs.org and click on the big green install button. Once the installer has finished downloading, run the application and it will install both node.js and npm. For Linux, there are a couple more steps, but don't worry; with your newly acquired command-line skills, it should be relatively simple. To install node.js and npm on Linux, you'll need to run the following three commands in Terminal: sudo add-apt-repository ppa:chris-lea/node.js sudo apt-get update sudo apt-get install nodejs The details of these commands are outside the scope of this book, but just for reference, they add a repository to the list of available packages, update the total list of packages, and then install the application from the repository we added. Verify the installation To confirm that our installation was successful, try the following command in your command line: node -v If node.js is successfully installed, node -v will output a version number on the next line of your command line. Now, let's do the same with npm: npm -v Like before, if your installation was successful, npm -v should output the version number of npm on the next line. The versions displayed in this screenshot reflect the latest Long Term Support (LTS) release currently available as of this writing. This may differ from the version that you have installed depending on when you're reading this. It's always suggested that you use the latest LTS release when possible. The -v  command is a common flag used by most command-line applications to quickly display their version number. This is very useful to debug version issues while using command-line applications. Creating a package.json file Having npm in our workflow will make installing packages incredibly easy; however, we should look ahead and establish a way to keep track of all the packages (or dependencies) that we use in our projects. Keeping track of dependencies is very important to keep your workflow consistent across development environments. Node.js uses a file named package.json to store information about your project, and npm uses this same file to manage all of the package dependencies your project requires to run properly. In any project using gulp, it is always a great practice to create this file ahead of time so that you can easily populate your dependency list as you are installing packages or plugins. 
To create the package.json file, we will need to run npm's built in init action using the following command: npm init Now, using the preceding command, the terminal will show the following output: Your command line will prompt you several times asking for basic information about the project, such as the project name, author, and the version number. You can accept the defaults for these fields by simply pressing the Enter key at each prompt. Most of this information is used primarily on the npm website if a developer decides to publish a node.js package. For our purposes, we will just use it to initialize the file so that we can properly add our dependencies as we move forward. The screenshot for the preceding command is as follows: Installing gulp With npm installed and our package.json file created, we are now ready to begin installing node.js packages. The first and most important package we will install is none other than gulp itself. Locating gulp Locating and gathering information about node.js packages is very simple, thanks to the npm registry. The npm registry is a companion website that keeps track of all the published node.js modules, including gulp and gulp plugins. You can find this registry at http://npmjs.org. Take a moment to visit the npm registry and do a quick search for gulp. The listing page for each node.js module will give you detailed information on each project, including the author, version number, and dependencies. Additionally, it also features a small snippet of command-line code that you can use to install the package along with readme information that will outline basic usage of the package and other useful information. Installing gulp locally Before we install gulp, make sure you are in your project's root directory, gulp-book, using the cd and ls commands you learned earlier. If you ever need to brush up on any of the standard commands, feel free to take a moment to step back and review as we progress through the book. To install packages with npm, we will follow a similar pattern to the ones we've used previously. Since we will be covering both versions 3.x and 4.x in this book, we'll demonstrate installing both: For installing gulp 3.x, you can use the following: npm install --save-dev gulp For installing gulp 4.x, you can use the following: npm install --save-dev gulpjs/gulp#4.0 This command is quite different from the 3.x command because this command is installing the latest development release directly from GitHub. Since the 4.x version is still being actively developed, this is the only way to install it at the time of writing this book. Once released, you will be able to run the previous command to without installing from GitHub. Upon executing the command, it will result in output similar to the following: To break this down, let's examine each piece of this command to better understand how npm works: npm: This is the application we are running install: This is the action that we want the program to run. In this case, we are instructing npm to install something in our local folder --save-dev: This is a flag that tells npm to add this module to the dev dependencies list in our package.json file gulp: This is the package we would like to install Additionally, npm has a –-save flag that saves the module to the list of dependencies instead of devDependencies. These dependency lists are used to separate the modules that a project depends on to run, and the modules a project depends on when in development. 
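As a rough illustration, after this install completes the devDependencies section of package.json might look something like the following sketch (the exact version number will vary, and a module installed with --save would appear under a separate dependencies section instead):

"devDependencies": {
  "gulp": "^3.9.1"
}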
Since we are using gulp to assist us in development, we will always use the --save-dev flag throughout the book. So, this command will use npm to contact the npm registry, and it will install gulp to our local gulp-book directory. After using this command, you will note that a new folder has been created that is named node_modules. It is where node.js and npm store all of the installed packages and dependencies of your project. Take a look at the following screenshot: Installing gulp-cli globally For many of the packages that we install, this will be all that is needed. With gulp, we must install a companion module gulp-cli globally so that we can use the gulp command from anywhere in our filesystem. To install gulp-cli globally, use the following command: npm install -g gulp-cli In this command, not much has changed compared to the original command where we installed the gulp package locally. We've only added a -g flag to the command, which instructs npm to install the package globally. On Windows, your console window should be opened under an administrator account in order to install an npm package globally. At first, this can be a little confusing, and for many packages it won't apply. Similar build systems actually separate these usages into two different packages that must be installed separately; once that is installed globally for command-line use and another installed locally in your project. Gulp was created so that both of these usages could be combined into a single package, and, based on where you install it, it could operate in different ways. Anatomy of a gulpfile Before we can begin writing tasks, we should take a look at the anatomy and structure of a gulpfile. Examining the code of a gulpfile will allow us to better understand what is happening as we run our tasks. Gulp started with four main methods:.task(), .src(), .watch(), and .dest(). The release of version 4.x introduced additional methods such as: .series() and .parallel(). In addition to the gulp API methods, each task will also make use of the node.js .pipe() method. This small list of methods is all that is needed to understand how to begin writing basic tasks. They each represent a specific purpose and will act as the building blocks of our gulpfile. The task() method The .task() method is the basic wrapper for which we create our tasks. Its syntax is .task(string, function). It takes two arguments—string value representing the name of the task and a function that will contain the code you wish to execute upon running that task. The src() method The .src() method is our input or how we gain access to the source files that we plan on modifying. It accepts either a single glob string or an array of glob strings as an argument. Globs are a pattern that we can use to make our paths more dynamic. When using globs, we can match an entire set of files with a single string using wildcard characters as opposed to listing them all separately. The syntax is for this method is .src(string || array).  The watch() method The .watch() method is used to specifically look for changes in our files. This will allow us to keep gulp running as we code so that we don't need to rerun gulp any time we need to process our tasks. This syntax is different between the 3.x and 4.x version. For version 3.x the syntax is—.watch(string || array, array) with the first argument being our paths/globs to watch and the second argument being the array of task names that need to be run when those files change. 
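For instance, under 3.x a pair of watch calls might look like the following minimal sketch, assuming tasks named styles and scripts are defined elsewhere in the gulpfile:

// gulp 3.x style: re-run the named tasks whenever a matching file changes
gulp.watch('css/*.css', ['styles']);
gulp.watch('js/*.js', ['scripts']);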
For version 4.x the syntax has changed a bit to allow for two new methods that provide more explicit control of the order in which tasks are executed. When using 4.x instead of passing in an array as the second argument, we will use either the .series() or .parallel() method like so—.watch(string || array, gulp.series() || gulp.parallel()). The dest() method The dest() method is used to set the output destination of your processed file. Most often, this will be used to output our data into a build or dist folder that will be either shared as a library or accessed by your application. The syntax for this method is .dest(string). The pipe() method The .pipe() method will allow us to pipe together smaller single-purpose plugins or applications into a pipechain. This is what gives us full control of the order in which we would need to process our files. The syntax for this method is .pipe(function). The parallel() and series() methods The parallel and series methods were added in version 4.x as a way to easily control whether your tasks are run together all at once or in a sequence one after the other. This is important if one of your tasks requires that other tasks complete before it can be ran successfully. When using these methods the arguments will be the string names of your tasks separated by a comma. The syntax for these methods is—.series(tasks) and .parallel(tasks); Understanding these methods will take you far, as these are the core elements of building your tasks. Next, we will need to put these methods together and explain how they all interact with one another to create a gulp task. Including modules/plugins When writing a gulpfile, you will always start by including the modules or plugins you are going to use in your tasks. These can be both gulp plugins or node.js modules, based on what your needs are. Gulp plugins are small node.js applications built for use inside of gulp to provide a single-purpose action that can be chained together to create complex operations for your data. Node.js modules serve a broader purpose and can be used with gulp or independently. Next, we can open our gulpfile.js file and add the following code: // Load Node Modules/Plugins var gulp = require('gulp'); var concat = require('gulp-concat'); var uglify = require('gulp-uglify'); The gulpfile.js file will look as shown in the following screenshot: In this code, we have included gulp and two gulp plugins: gulp-concat and gulp-uglify. As you can now see, including a plugin into your gulpfile is quite easy. After we install each module or plugin using npm, you simply use node.js' require() function and pass it in the name of the module. You then assign it to a new variable so that you can use it throughout your gulpfile. This is node.js' way of handling modularity, and because a gulpfile is essentially a small node.js application, it adopts this practice as well. Writing a task All tasks in gulp share a common structure. Having reviewed the five methods at the beginning of this section, you will already be familiar with most of it. Some tasks might end up being larger than others, but they still follow the same pattern. To better illustrate how they work, let's examine a bare skeleton of a task. This skeleton is the basic bone structure of each task we will be creating. Studying this structure will make it incredibly simple to understand how parts of gulp work together to create a task. 
An example of a sample task is as follows: gulp.task(name, function() { return gulp.src(path) .pipe(plugin) .pipe(plugin) .pipe(gulp.dest(path)); }); In the first line, we use the new gulp variable that we created a moment ago and access the .task() method. This creates a new task in our gulpfile. As you learned earlier, the task method accepts two arguments: a task name as a string and a callback function that will contain the actions we wish to run when this task is executed. Inside the callback function, we reference the gulp variable once more and then use the .src() method to provide the input to our task. As you learned earlier, the source method accepts a path or an array of paths to the files that we wish to process. Next, we have a series of three .pipe() methods. In each of these pipe methods, we will specify which plugin we would like to use. This grouping of pipes is what we call our pipechain. The data that we have provided gulp with in our source method will flow through our pipechain to be modified by each piped plugin that it passes through. The order of the pipe methods is entirely up to you. This gives you a great deal of control in how and when your data is modified. You may have noticed that the final pipe is a bit different. At the end of our pipechain, we have to tell gulp to move our modified file somewhere. This is where the .dest() method comes into play. As we mentioned earlier, the destination method accepts a path that sets the destination of the processed file as it reaches the end of our pipechain. If .src() is our input, then .dest() is our output. Reflection To wrap up, take a moment to look at a finished gulpfile and reflect on the information that we just covered. This is the completed gulpfile that we will be creating from scratch, so don't worry if you still feel lost. This is just an opportunity to recognize the patterns and syntaxes that we have been studying so far. We will begin creating this file step by step: // Load Node Modules/Plugins var gulp = require('gulp'); var concat = require('gulp-concat'); var uglify = require('gulp-uglify'); // Process Styles gulp.task('styles', function() {     return gulp.src('css/*.css')         .pipe(concat('all.css'))         .pipe(gulp.dest('dist/')); }); // Process Scripts gulp.task('scripts', function() {     return gulp.src('js/*.js')         .pipe(concat('all.js'))         .pipe(uglify())         .pipe(gulp.dest('dist/')); }); // Watch Files For Changes gulp.task('watch', function() {     gulp.watch('css/*.css', 'styles');     gulp.watch('js/*.js', 'scripts'); }); // Default Task gulp.task('default', gulp.parallel('styles', 'scripts', 'watch')); The gulpfile.js file will look as follows: Summary In this article, you installed node.js and learned the basics of how to use npm and understood how and why to install gulp both locally and globally. We also covered some of the core differences between the 3.x and 4.x versions of gulp and how they will affect your gulpfiles as we move forward. To wrap up the article, we took a small glimpse into the anatomy of a gulpfile to prepare us for writing our own gulpfiles from scratch. Resources for Article: Further resources on this subject: Performing Task with Gulp [article] Making a Web Server in Node.js [article] Developing Node.js Web Applications [article]

WordPress as a Web Application Framework

Packt
15 Jun 2017
20 min read
In this article, written by Rakhitha Ratanayake, author of the book WordPress Web Application Development - Third Edition, you will learn that WordPress has matured from the most popular blogging platform into the most popular content management system. Thousands of developers around the world are making a living from WordPress design and development. As more and more people are interested in using WordPress, the dream of using this amazing framework for web application development is becoming possible. The future seems bright, as WordPress has already got dozens of built-in features that can be easily adapted to web application development with slight modifications.

Since you are already reading this article, you have to be someone who is really excited to see how WordPress fits into web application development. Throughout this article, we will learn how we can inject the best practices of web development into the WordPress framework to build web applications in a rapid process. Basically, this article will be important for developers from two different perspectives. On one hand, beginner- to intermediate-level WordPress developers can gain knowledge of cutting-edge web development technologies and techniques to build complex applications. On the other hand, web development experts who are already familiar with popular PHP frameworks can learn WordPress for rapid application development. So, let's get started!

In this article, we will cover the following topics:
WordPress as a CMS
WordPress as a web application framework
Simplifying development with built-in features
Identifying the components of WordPress
Making a development plan for a forum management application
Understanding limitations and sticking with guidelines
Building a question-answer interface
Enhancing features of the questions plugin

(For more resources related to this topic, see here.)

In order to work with this article, you should be familiar with WordPress themes, plugins, and its overall process. Developers who are experienced in PHP frameworks can work with this article while using the reference sources to learn WordPress. By the end of this article, you will have the ability to make the decision to choose WordPress for web development.

WordPress as a CMS
Way back in 2003, WordPress released its first version as a simple blogging platform and continued to improve until it became the most popular blogging tool. Later, it continued to improve as a CMS and now has a reputation for being the most popular CMS for over 5 years. These days, everyone sees WordPress as a CMS rather than just a blogging tool.

Now the question is, where will it go next? Recent versions of WordPress have included popular web development libraries such as Backbone.js and Underscore.js, and developers are building different types of applications with WordPress. Also, the recent introduction of the REST API is a major indication that WordPress is moving in the direction of building web applications. The combination of the REST API and modern JavaScript frameworks will enable developers to build complex web applications with WordPress. Before we consider the application development aspects of WordPress, it's ideal to figure out the reasons for it being such a popular CMS.
The following are some of the reasons behind the success of WordPress as a CMS:
A plugin-based architecture for adding independent features, and the existence of over 40,000 open source plugins
The ability to create unlimited free websites at www.wordpress.com and use the basic WordPress features
A super simple and easy-to-access administration interface
A fast learning curve and comprehensive documentation for beginners
A rapid development process involving themes and plugins
An active development community with awesome support
Flexibility in building websites with its themes, plugins, widgets, and hooks
The availability of large premium theme and plugin marketplaces, where developers can sell advanced plugins/themes and users can build advanced sites with those premium plugins/themes without needing a developer

These reasons prove why WordPress is the top CMS for website development. However, experienced developers who work with full stack web applications don't believe that WordPress has a future in web application development. While it's up for debate, we'll see what WordPress has to offer for web development. Once you complete reading this article, you will be able to decide whether WordPress has a future in web applications. I have been working with full stack frameworks for several years, and I certainly believe in the future of WordPress for web development.

WordPress as a web application framework
In practice, the decision to choose a development framework depends on the complexity of your application. Developers will tend to go for frameworks in most scenarios. It's important to figure out why we go with frameworks for web development. Here's a list of possible reasons why frameworks become a priority in web application development:
Frameworks provide stable foundations for building custom functionalities
Usually, stable frameworks have a large development community with active support
They have built-in features to address the common aspects of application development, such as routing, language support, form validation, user management, and more
They have a large number of utility functions to address repetitive tasks

Full stack development frameworks such as Zend, CodeIgniter, and CakePHP adhere to the points mentioned in the preceding section, which in turn makes them the framework of choice for most developers. However, we have to keep in mind that WordPress is an application where we build applications on top of existing features. On the other hand, traditional frameworks are foundations used for building applications such as WordPress. Now, let's take a look at how WordPress fits into the boots of a web application framework.

The MVC versus event-driven architecture
A vast majority of web development frameworks are built to work with the Model-View-Controller (MVC) architecture, where an application is separated into independent layers called models, views, and controllers. In MVC, we have a clear understanding of what goes where and when each of the layers will be integrated in the process. So, the first thing most developers will look at is the availability of MVC in WordPress. Unfortunately, WordPress is not built on top of the MVC architecture. This is one of the main reasons why developers refuse to choose it as a development framework. Even though it is not MVC, we can create a custom execution process to make it work like an MVC application.
Also, we can find frameworks such as WP MVC, which can be used to take advantage of WordPress's native functionality and its vast plugin library, along with the many advantages of an MVC framework. Unlike other frameworks, it won't have the full capabilities of MVC. However, the unavailability of the MVC architecture doesn't mean that we cannot develop quality applications with WordPress. There are many other ways to separate concerns in WordPress applications.

WordPress, on the other hand, relies on a procedural event-driven architecture with its action hooks and filters system. Once a user makes a request, these actions will get executed in a certain order to provide the response to the user. You can find the complete execution procedure at http://codex.wordpress.org/Plugin_API/Action_Reference. In the event-driven architecture, both model and controller code gets scattered throughout the theme and plugin files.

Simplifying development with built-in features
As we discussed in the previous section, the quality of a framework depends on its core features. The better the quality of the core, the better it will be for developing quality and maintainable applications. It's surprising to see the number of WordPress features directly related to web development, even though it is meant to create websites. Let's get a brief introduction to the WordPress core features to see how they fit into web application development.

User management
The built-in user management features are quite advanced, in order to cater to the most common requirements of any web application. Its user roles and capability handling make it much easier to control access to specific areas of your application. We can separate users into multiple levels using roles and then use capabilities to define the permitted functionality for each user level. Most full stack frameworks don't have built-in user management features, and hence this can be considered an advantage of using WordPress.

Media management
File uploading and management is a common and time-consuming task in web applications. The media uploader, which comes built into WordPress, can be effectively used to automate file-related tasks without writing much source code. A super-simple interface makes it easy for application users to handle file-related tasks. Also, WordPress offers built-in functions for directly uploading media files without the media uploader. These functions can be used effectively to handle advanced media uploading requirements without spending much time.

Template management
WordPress offers a simple template management system for its themes. It is not as complex or fully featured as a typical template engine. However, it offers a wide range of capabilities from a CMS development perspective, which we can extend to suit web applications.

Database management
In most scenarios, we will be using the existing database table structure for our application development. WordPress database management functionalities offer a quick and easy way of working with existing tables with its own style of functions. Unlike other frameworks, WordPress provides a built-in database structure, and hence most of the functionalities can be used to work directly with these tables without writing custom SQL queries.

Routing
Comprehensive support for routing is provided through permalinks. WordPress makes it simple to change the default routing and choose your own routing, in order to build search engine friendly URLs.
XML-RPC API
Building an API is essential for allowing third-party access to our application. WordPress provides a built-in API for accessing CMS-related functionality through its XML-RPC interface. Also, developers are allowed to create custom API functions through plugins, making it highly flexible for complex applications.

REST API
The REST API makes it possible to give third parties access to the application data, similar to the XML-RPC API. This API uses easy-to-understand HTTP requests and the JSON format, making it easier to communicate with WordPress applications. JavaScript is becoming the modern trend in developing applications, so the availability of JSON in the REST API will allow external users to access and manipulate WordPress data within their JavaScript-based applications. A short JavaScript sketch of calling this API appears at the end of this section.

Caching
Caching in WordPress can be categorized into two sections, called persistent and non-persistent cache. Non-persistent caching is provided by the WordPress cache object, while persistent caching is provided through its Transient API. Caching techniques in WordPress are simple compared to other frameworks, but they are powerful enough to cater to complex web applications.

Scheduling
As developers, you might have worked with cron jobs for executing certain tasks at specified intervals. WordPress offers the same scheduling functionality through built-in functions, similar to a cron job. However, WordPress cron execution is slightly different from normal cron jobs. In WordPress, cron won't be executed unless someone visits the site. Typically, it's used for scheduling future posts. However, it can be extended to cater to complex scheduling functionality.

Plugins and widgets
The power of WordPress comes from its plugin mechanism, which allows us to dynamically add or remove functionality without interrupting other parts of the application. Widgets can be considered a part of the plugin architecture and will be discussed in detail further in this article.

Themes
The design of a WordPress site comes through the theme. A theme offers many built-in template files to cater to the default functionality. Themes can be easily extended for custom functionality. Also, the design of the site can be changed instantly by switching to a compatible theme.

Actions and filters
Actions and filters are part of the WordPress hook system. Actions are events that occur during a request. We can use WordPress actions to execute certain functionalities after a specific event is completed. On the other hand, filters are functions that are used to filter, modify, and return the data. Flexibility is one of the key reasons for the higher popularity of WordPress compared to other CMSs. WordPress has its own way of extending the functionality of custom features as well as core features through actions and filters. These actions and filters allow developers to build advanced applications and plugins, which can be easily extended with minor code changes. As a WordPress developer, it's a must to know the perfect use of these actions and filters in order to build highly flexible systems.

The admin dashboard
WordPress offers a fully featured backend for administrators as well as normal users. These interfaces can be easily customized to adapt to custom applications. All the application-related lists, settings, and data can be handled through the admin section. The overall collection of features provided by WordPress can be effectively used to match the core functionalities provided by full stack PHP frameworks.
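To make the REST API point concrete from a client's perspective, here is a minimal JavaScript sketch. It assumes a standard WordPress installation with the core REST routes enabled under /wp-json, and the site URL is a placeholder:

// Fetch the five latest posts through the core REST route /wp-json/wp/v2/posts
fetch('https://example.com/wp-json/wp/v2/posts?per_page=5')
  .then(function (response) { return response.json(); })
  .then(function (posts) {
    // Each post object exposes its rendered title under title.rendered
    posts.forEach(function (post) {
      console.log(post.id, post.title.rendered);
    });
  })
  .catch(function (error) { console.error('Request failed:', error); });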
Identifying the components of WordPress
WordPress comes with a set of prebuilt components, which are intended to provide different features and functionality for an application. A flexible theme and powerful admin features act as the core of WordPress websites, while plugins and widgets extend the core with application-specific features. As a CMS, we all have a pretty good understanding of how these components fit into a WordPress website. Here, our goal is to develop web applications with WordPress, and hence it is important to identify the functionality of these components from the perspective of web applications. So, we will look at each of the following components, how they fit into web applications, and how we can take advantage of them to create flexible applications through a rapid development process:
The role of WordPress themes
The role of the admin dashboard
The role of plugins
The role of widgets

The role of WordPress themes
Most of us are used to seeing WordPress as a CMS. In its default view, a theme is a collection of files used to skin your web application layouts. In web applications, it's recommended to separate different components into layers such as models, views, and controllers. WordPress doesn't adhere to the MVC architecture. However, we can easily visualize themes or templates as the presentation layer of WordPress. In simple terms, views should contain the HTML needed to generate the layout, and all the data they need should be passed to the views. WordPress is built to create content management systems, and hence it doesn't focus on separating views from its business logic. Themes contain views, also known as template files, as a mix of both HTML code and PHP logic. As web application developers, we need to alter the behavior of existing themes in order to limit the logic inside templates and use plugins to pass the necessary model data to views.

Structure of a WordPress page layout
Typically, posts or pages created in WordPress consist of five common sections. Most of these components will be common across all the pages in the website. In web applications, we also separate the common layout content into separate views to be included inside other views. It's important for us to focus on how we can adapt the layout into a web application-specific structure. Let's visualize the common layout of WordPress using the following diagram:

Having looked at the structure, it's obvious that the Header, Footer, and Main Content area are mandatory even for web applications. However, the Footer and Comments sections will play a less important role in web applications, compared to web pages. The Sidebar is important in web applications, even though it won't be used with the same meaning. It can be quite useful as a dynamic widget area.

Customizing the application layout
Web applications can be categorized as projects and products. A project is something we develop targeting the specific requirements of a client. On the other hand, a product is an application created based on a common set of requirements for a wide range of users. Therefore, customizations to the layouts of your product will be required for different clients. WordPress themes make it simple to customize the layout and features using child themes. We can make the necessary modifications in the child theme while keeping the core layout in the parent theme. This will prevent any code duplication when customizing layouts. Also, the ability to switch themes is a powerful feature that eases layout customization.
The role of the admin dashboard
The administration interface of an application plays one of the most important roles behind the scenes. WordPress offers one of the most powerful and easy-to-access admin areas among competing frameworks. Most of you should be familiar with using the admin area for CMS functionalities. However, we will have to understand how each component in the admin area suits the development of web applications.

The admin dashboard
The dashboard is the location all users get redirected to once they log in to the admin area. Usually, it contains dynamic widget areas with the most important data of your application. The dashboard can play a major role in web applications, compared to blogging or CMS functionality. The dashboard contains a set of default widgets that are mainly focused on the main WordPress features, such as posts, pages, and comments. In web applications, we can remove the existing widgets related to the CMS and add application-specific widgets to create a powerful dashboard. WordPress offers a well-defined API to create custom admin dashboard widgets, and hence we can create a very powerful dashboard using custom widgets for custom requirements in web applications.

Posts and pages
Posts in WordPress are built for creating content such as articles and tutorials. In web applications, posts will be the most important section for creating different types of data. Often, we will choose custom post types instead of normal posts for building advanced data creation sections. On the other hand, pages are typically used to provide the static content of the site. Usually, we have static pages such as About Us, Contact Us, Services, and so on.

Users
User management is a must-use section for any kind of web application. User roles, capabilities, and profiles will be managed in this section by the authorized users.

Appearance
Themes and application configurations will be managed in this section. Widgets and theme options will be the important sections related to web applications. Generally, widgets are used in the sidebars of WordPress sites to display information such as recent members, comments, posts, and so on. However, in web applications, widgets can play a much bigger role, as we can use widgets to split the main template into multiple sections. Also, these types of widgetized areas come in handy in applications where the majority of features are implemented with AJAX. The theme options panel can be used as the general settings panel of web applications, where we define the settings related to templates and generic site-specific configurations.

Settings
This section involves general application settings. Most of the prebuilt items in this section are suited to blogs and websites. We can customize this section to add new configuration areas related to our plugins, used in web application development. There are some other sections, such as links, pages, and comments, which will not be used frequently in complex web application development. The ability to add new sections is one of the key reasons for its flexibility.

The role of plugins
In normal circumstances, WordPress developers use functions that involve application logic scattered across theme files and plugins. Some developers even change the core files of WordPress. Altering WordPress core files or third-party theme or plugin files is considered a bad practice, since we lose all the modifications on version upgrades and it may break the compatibility of other parts of WordPress. In web applications, we need to be much more organized.
In the Role of WordPress themes section, we discussed the purpose of having a theme for web applications. Plugins will be, and should be, used to provide the main logic and content of your application. The plugin architecture is a powerful way to add or remove features without affecting the core. Also, we have the ability to separate independent modules into their own plugins, making them easier to maintain. On top of this, plugins have the ability to extend other plugins. Since there are over 40,000 free plugins and a large number of premium plugins, sometimes you don't have to develop anything for WordPress applications. You can just use a number of plugins and integrate them properly to build advanced applications.

The role of widgets
The official documentation of WordPress refers to widgets as components that add content and features to your sidebar. From a typical blogging or CMS user's perspective, it's a completely valid statement. Actually, widgets offer more in web applications by going beyond the content that populates sidebars. Modern WordPress themes provide a wide range of built-in widgets for advanced functionality, making it much easier to build applications. The following screenshot shows a typical widgetized sidebar of a website:

We can use dynamic widgetized areas to include complex components as widgets, making it easy to add or remove features without changing source code. The following screenshot shows a sample dynamic widgetized area. We can use the same technique for developing applications with WordPress.

Throughout these sections, we covered the main components of WordPress and how they fit into actual web application development. Now, we have a good understanding of the components in order to plan the application developed throughout this article.

A development plan for the forum management application
In this article, our main goal is to learn how we can build full stack web applications using built-in WordPress features. Therefore, I thought of building a complete application, explaining each and every aspect of web development. We will develop an online forum management system for creating public forums or managing a support forum for a specific product or service. This application can be considered a mini version of a powerful forum system like bbPress. We will be starting the development of this application. Planning is a crucial task in web development, through which we will save a lot of time and avoid potential risks in the long run. First, we need to get a basic idea about the goal of this application, its features and functionalities, and the structure of its components to see how it fits into WordPress.

Application goals and target audience
Anyone who uses the Internet on a day-to-day basis knows the importance of online discussion boards, also known as forums. These forums allow us to participate in a large community and discuss common matters, either related to a specific subject or a product. The application developed throughout this article is intended to provide a simple and flexible forum management application using a WordPress plugin, with the goals of:
Learning to develop a forum application
Learning to use the features of various online forums
Learning to manage a forum for your product or service

This application will be targeted towards all the people who have participated in an online forum or used a support system for a product they purchased.
I believe that both the output of this application and its contents will be ideal for PHP developers who want to jump into WordPress application development.

Summary
Our main goal was to find out how WordPress fits into web application development. We started this article by identifying the CMS functionalities of WordPress. We explored the features and functionalities of popular full stack frameworks and compared them with the existing functionalities of WordPress. Then, we looked at the existing components and features of WordPress and how each of those components fits into a real-world web application. We also planned the forum management application requirements and identified the limitations in using WordPress for web applications. Finally, we converted the default interface into a question-answer interface in a rapid process using existing functionalities, without interrupting the default behavior of WordPress and themes. By now, you should be able to decide whether to choose WordPress for your web application, visualize how your requirements fit into the components of WordPress, and identify and minimize the limitations.

Resources for Article:
Further resources on this subject:
Creating Your Own Theme—A Wordpress Tutorial [article]
Introduction to a WordPress application's frontend [article]
Wordpress: Buddypress Courseware [article]

Hands on with Service Fabric

Packt
06 Apr 2017
12 min read
In this article by Rahul Rai and Namit Tanasseri, authors of the book Microservices with Azure, we will see that Service Fabric, as a platform, supports multiple programming models, each of which is best suited for specific scenarios. Each programming model offers different levels of integration with the underlying management framework. Better integration leads to more automation and fewer overheads. Picking the right programming model for your application or services is the key to efficiently utilizing the capabilities of Service Fabric as a hosting platform. Let's take a deeper look into these programming models.

(For more resources related to this topic, see here.)

To start with, let's look at the least integrated hosting option: Guest Executables. Native Windows applications or application code using Node.js or Java can be hosted on Service Fabric as a guest executable. These executables can be packaged and pushed to a Service Fabric cluster like any other service. As the cluster manager has minimal knowledge about the executable, features like custom health monitoring, load reporting, state store, and endpoint registration cannot be leveraged by the hosted application. However, from a deployment standpoint, a guest executable is treated like any other service. This means that for a guest executable, the Service Fabric cluster manager takes care of high availability, application lifecycle management, rolling updates, automatic failover, high-density deployment, and load balancing.

As an orchestration service, Service Fabric is responsible for deploying and activating an application or application services within a cluster. It is also capable of deploying services within a container image. This programming model is addressed as Guest Containers. The concept of containers is best explained as an implementation of operating system-level virtualization. They are encapsulated deployable components running on isolated process boundaries sharing the same kernel. Deployed applications and their runtime dependencies are bundled within the container with an isolated view of all operating system constructs. This makes containers highly portable and secure. The guest container programming model is usually chosen when this level of isolation is required for the application. As containers don't have to boot an operating system, they have fast boot-up times and are comparatively small in size.

A prime benefit of using Service Fabric as a platform is the fact that it supports heterogeneous operating environments. Service Fabric supports two types of containers to be deployed as guest containers: Docker containers on Linux and Windows Server containers.

Container images for Docker containers are stored in Docker Hub, and Docker APIs are used to create and manage the containers deployed on the Linux kernel. Service Fabric supports two different types of containers in Windows Server 2016 with different levels of isolation: Windows Server containers and Windows Hyper-V containers. Windows Server containers are similar to Docker containers in terms of the isolation they provide. Windows Hyper-V containers offer a higher degree of isolation and security by not sharing the operating system kernel across instances. These are ideally used when a higher level of security isolation is required, such as systems requiring hostile multi-tenant hosting. The following figure illustrates the different isolation levels achieved by using these containers.
Container isolation levels

The Service Fabric application model treats containers as an application host, which can in turn host service replicas. There are three ways of utilizing containers within the Service Fabric application model. Existing applications, such as Node.js or JavaScript applications, or other executables, can be hosted within a container and deployed on Service Fabric as a Guest Container. A Guest Container is treated similarly to a Guest Executable by the Service Fabric runtime. The second scenario supports deploying stateless services inside a container hosted on Service Fabric. Stateless services using Reliable Services and Reliable Actors can be deployed within a container. The third option is to deploy stateful services in containers hosted on Service Fabric. This model also supports Reliable Services and Reliable Actors. Service Fabric offers several features to manage containerized Microservices. These include container deployment and activation, resource governance, repository authentication, port mapping, container discovery and communication, and the ability to set environment variables.

While containers offer a good level of isolation, they are still heavy in terms of deployment footprint. Service Fabric offers a simpler, powerful programming model to develop your services, which it calls Reliable Services. Reliable Services let you develop stateful and stateless services which can be directly deployed on Service Fabric clusters. For stateful services, the state can be stored close to the compute by using Reliable Collections. High availability of the state store and replication of the state are taken care of by the Service Fabric cluster management services. This contributes substantially to the performance of the system by improving the latency of data access. Reliable Services come with a built-in pluggable communication model which supports HTTP with Web API, WebSockets, and custom TCP protocols out of the box.

A Reliable Service is addressed as stateless if it does not maintain any state within it, or if the scope of the state stored is limited to a service call and is entirely disposable. This means that a stateless service does not need to persist, synchronize, or replicate state. A good example of this kind of service is a weather service like the MSN weather service. A weather service can be queried to retrieve the weather conditions associated with a specific geographical location. The response is totally based on the parameters supplied to the service. This service does not store any state. Although stateless services are simpler to implement, most of the services in real life are not stateless. They either store state in an external state store or an internal one. Web front ends hosting APIs or web applications are good use cases to be hosted as stateless services.

A stateful service persists state. The outcome of a service call made to a stateful service is usually influenced by the state persisted by the service. A service exposed by a bank to return the balance on an account is a good example of a stateful service. The state may be stored in an external data store such as Azure SQL Database, Azure Blobs, or Azure Table storage. Most services prefer to store the state externally, considering the challenges around the reliability, availability, scalability, and consistency of the data store. With Service Fabric, state can be stored close to the compute by using Reliable Collections. To make things more lightweight, Service Fabric also offers a programming model based on the Virtual Actor pattern.
This programming model is called Reliable Actors. The Reliable Actors programming model is built on top of Reliable Services. This guarantees the scalability and reliability of the services. An Actor can be defined as an isolated, independent unit of compute and state with single-threaded execution. Actors can be created, managed, and disposed of independently of each other. A large number of actors can coexist and execute at a time. Service Fabric Reliable Actors are a good fit for systems that are highly distributed and dynamic by nature. Every actor is defined as an instance of an actor type, the same way an object is an instance of a class. Each actor is uniquely identified by an actor ID. The lifetime of Service Fabric Actors is not tied to their in-memory state. As a result, Actors are automatically created the first time a request for them is made. The Reliable Actors garbage collector takes care of disposing of unused Actors in memory.

Now that we understand the programming models, let's take a look at how the services deployed on Service Fabric are discovered and how the communication between services takes place.

Service Fabric discovery and communication
An application built on top of Microservices is usually composed of multiple services, each of which runs multiple replicas. Each service is specialized in a specific task. To achieve an end-to-end business use case, multiple services will need to be stitched together. This requires services to communicate with each other. A simple example would be a web front end service communicating with the middle tier services, which in turn connect to the back end services to handle a single user request. Some of these middle tier services can also be invoked by external applications. Services deployed on Service Fabric are distributed across multiple nodes in a cluster of virtual machines. The services can move across nodes dynamically. This distribution of services can either be triggered by a manual action or be the result of the Service Fabric cluster manager re-balancing services to achieve optimal resource utilization. This makes communication a challenge, as services are not tied to a particular machine. Let's understand how Service Fabric solves this challenge for its consumers.

Service protocols
Service Fabric, as a hosting platform for Microservices, does not interfere in the implementation of the service. On top of this, it also lets services decide on the communication channels they want to open. These channels are addressed as service endpoints. During service initiation, Service Fabric provides the opportunity for the services to set up the endpoints for incoming requests on any protocol or communication stack. The endpoints are defined according to common industry standards, that is, IP:Port. It is possible that multiple service instances share a single host process, in which case they either have to use different ports or a port-sharing mechanism. This will ensure that every service instance is uniquely addressable.

Service endpoints

Service discovery
Service Fabric can rebalance services deployed on a cluster as a part of orchestration activities. This can be caused by resource balancing activities, failovers, upgrades, scale-outs, or scale-ins. This will result in changes in service endpoint addresses as the services move across different virtual machines.

Service distribution

The Service Fabric Naming Service is responsible for abstracting this complexity from the consuming service or application.
The Naming Service takes care of service discovery and resolution. All service instances in Service Fabric are identified by a unique URL like fabric:/MyMicroServiceApp/AppService1. This name stays constant across the lifetime of the service, although the endpoint addresses which physically host the service may change. Internally, Service Fabric manages a map between the service names and the physical location where the service is hosted. This is similar to the DNS service, which is used to resolve website URLs to IP addresses. The following figure illustrates the name resolution process for a service hosted on Service Fabric:

Name resolution

Connections from applications external to Service Fabric
Service communications to or between services hosted in Service Fabric can be categorized as internal or external. Internal communication among services hosted on Service Fabric is easily achieved using the Naming Service. External communication, originating from an application or a user outside the boundaries of Service Fabric, will need some extra work. To understand how this works, let's dive deeper into the logical network layout of a typical Service Fabric cluster. A Service Fabric cluster is always placed behind an Azure Load Balancer. The Load Balancer acts like a gateway for all traffic that needs to pass to the Service Fabric cluster. The Load Balancer is aware of every port open on every node of a cluster. When a request hits the Load Balancer, it identifies the port the request is looking for and randomly routes the request to one of the nodes which has the requested port open. The Load Balancer is not aware of the services running on the nodes or the ports associated with the services. The following figure illustrates request routing in action.

Request routing

Configuring ports and protocols
The protocol and the ports to be opened by a Service Fabric cluster can be easily configured through the portal. Let's take an example to understand the configuration in detail. If we need a web application to be hosted on a Service Fabric cluster, which should have port 80 opened on HTTP to accept incoming traffic, the following steps should be performed.

Configuring the service manifest
Once a service listening on port 80 is authored, we need to configure port 80 in the service manifest to open a listener in the service. This can be done by editing the ServiceManifest.xml:

<Resources>
  <Endpoints>
    <Endpoint Name="WebEndpoint" Protocol="http" Port="80" />
  </Endpoints>
</Resources>

Configuring a custom endpoint
On the Service Fabric cluster, configure port 80 as a custom endpoint. This can be easily done through the Azure Management portal.

Configuring custom port

Configure Azure Load Balancer
Once the cluster is configured and created, the Azure Load Balancer can be instructed to forward the traffic to port 80. If the Service Fabric cluster is created through the portal, this step is automatically taken care of for every port which is configured in the cluster configuration.

Configuring Azure Load Balancer

Configure health check
Azure Load Balancer probes the ports on the nodes for their availability to ensure the reliability of the service. The probes can be configured on the Azure portal. This is an optional step, as a default probe configuration is applied for each endpoint when a cluster is created.

Configuring probe

Built-in Communication API
Service Fabric offers many built-in communication options to support inter-service communications. Service Remoting is one of them.
This option allows strongly typed remote procedure calls between Reliable Services and Reliable Actors. This option is very easy to set up and operate with, as Service Remoting handles the resolution of service addresses, connection, retry, and error handling. Service Fabric also supports HTTP for language-agnostic communication. The Service Fabric SDK exposes the ICommunicationClient and ServicePartitionClient classes for service resolution, HTTP connections, and retry loops. WCF is also supported by Service Fabric as a communication channel to enable legacy workloads to be hosted on it. The SDK exposes WcfCommunicationListener for the server side and the WcfCommunicationClient and ServicePartitionClient classes for the client side to ease programming hurdles.

Resources for Article:
Further resources on this subject:
Installing Neutron [article]
Designing and Building a vRealize Automation 6.2 Infrastructure [article]
Insight into Hyper-V Storage [article]

Building Components Using Angular

Packt
06 Apr 2017
11 min read
In this article by Shravan Kumar Kasagoni, the author of the book Angular UI Development, we will learn how to use the new features of the Angular framework to build web components. After going through this article, you will understand the following:
What are web components
How to set up the project for Angular application development
Data binding in Angular

(For more resources related to this topic, see here.)

Web components
In today's web world, if we need to use any of the UI components provided by libraries like jQuery UI, the YUI library, and so on, we write a lot of imperative JavaScript code; we can't use them simply in a declarative fashion like HTML markup. There are fundamental problems with this approach:
There is no way to define custom HTML elements to use them in a declarative fashion
The JavaScript and CSS code inside UI components can accidentally modify other parts of our web pages, and our code can also accidentally modify UI components, which is unintended
There is no standard way to encapsulate code inside these UI components

Web Components provide a solution to all these problems. Web Components are a set of specifications for building reusable UI components. The Web Components specification is composed of four parts:
Templates: Allow us to declare fragments of HTML that can be cloned and inserted in the document by script
Shadow DOM: Solves the DOM tree encapsulation problem
Custom elements: Allow us to define custom HTML tags for UI components
HTML imports: Allow us to add UI components to a web page using an import statement

More information on web components can be found at: https://www.w3.org/TR/components-intro/.

Components are the fundamental building blocks of any Angular application. Components in Angular are built on top of the web components specification. The web components specification is still under development and might change in the future, and not all browsers support it. But Angular provides a very high level of abstraction so that we don't need to deal with multiple technologies in web components. Even if the specification changes, Angular can take care of it internally; it provides a much simpler API to write web components.

Getting started with Angular
We know Angular is completely re-written from scratch, so everything is new in Angular. In this article we will discuss a few important features like data binding, the new templating syntax, and built-in directives. We are going to use a more practical approach to learn these new features. In the next section we are going to look at a partially implemented Angular application. We will incrementally use Angular's new features to implement this application. Follow the instructions specified in the next section to set up the sample application.

Project Setup
Here is a sample application with the required Angular configuration and some sample code.

Application Structure
Create the directory structure and files as mentioned below, and copy the code into the files from the next section.

Source Code

package.json
We are going to use npm as our package manager to download the libraries and packages required for our application development. Copy the following code to the package.json file.
{ "name": "display-data", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "tsc": "tsc", "tsc:w": "tsc -w", "lite": "lite-server", "start": "concurrent "npm run tsc:w" "npm run lite" " }, "author": "Shravan", "license": "ISC", "dependencies": { "angular2": "^2.0.0-beta.1", "es6-promise": "^3.0.2", "es6-shim": "^0.33.13", "reflect-metadata": "^0.1.2", "rxjs": "^5.0.0-beta.0", "systemjs": "^0.19.14", "zone.js": "^0.5.10" }, "devDependencies": { "concurrently": "^1.0.0", "lite-server": "^1.3.2", "typescript": "^1.7.5" } } The package.json file holds metadata for npm, in the preceding code snippet there are two important sections: dependencies: It holds all the packages required for an application to run devDependencies: It holds all the packages required only for development Once we add the preceding package.json file to our project we should run the following command at the root of our application. $ npm install The preceding command will create node_modules directory in the root of project and downloads all the packages mentioned in dependencies, devDependencies sections into node_modules directory. There is one more important section, that is scripts. We will discuss about scripts section, when we are ready to run our application. tsconfig.json Copy the below code to tsconfig.json file. { "compilerOptions": { "target": "es5", "module": "system", "moduleResolution": "node", "sourceMap": true, "emitDecoratorMetadata": true, "experimentalDecorators": true, "removeComments": false, "noImplicitAny": false }, "exclude": [ "node_modules" ] } We are going to use TypeScript for developing our Angular applications. The tsconfig.json file is the configuration file for TypeScript compiler. Options specified in this file are used while transpiling our code into JavaScript. This is totally optional, if we don't use it TypeScript compiler use are all default flags during compilation. But this is the best way to pass the flags to TypeScript compiler. Following is the expiation for each flag specified in tsconfig.json: target: Specifies ECMAScript target version: 'ES3' (default), 'ES5', or 'ES6' module: Specifies module code generation: 'commonjs', 'amd', 'system', 'umd' or 'es6' moduleResolution: Specifies module resolution strategy: 'node' (Node.js) or 'classic' (TypeScript pre-1.6) sourceMap: If true generates corresponding '.map' file for .js file emitDecoratorMetadata: If true enables the output JavaScript to create the metadata for the decorators experimentalDecorators: If true enables experimental support for ES7 decorators removeComments: If true, removes comments from output JavaScript files noImplicitAny: If true raise error if we use 'any' type on expressions and declarations exclude: If specified, the compiler will not compile the TypeScript files in the containing directory and subdirectories index.html Copy the following code to index.html file. 
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Top 10 Fastest Cars in the World</title>
  <link rel="stylesheet" href="app/site.css">
  <script src="node_modules/angular2/bundles/angular2-polyfills.js"></script>
  <script src="node_modules/systemjs/dist/system.src.js"></script>
  <script src="node_modules/rxjs/bundles/Rx.js"></script>
  <script src="node_modules/angular2/bundles/angular2.dev.js"></script>
  <script>
    System.config({
      transpiler: 'typescript',
      typescriptOptions: {emitDecoratorMetadata: true},
      map: {typescript: 'node_modules/typescript/lib/typescript.js'},
      packages: {
        'app' : { defaultExtension: 'ts' }
      }
    });
    System.import('app/boot').then(null, console.error.bind(console));
  </script>
</head>
<body>
  <cars-list>Loading...</cars-list>
</body>
</html>

This is the startup page of our application; it contains the required Angular scripts and the SystemJS configuration for module loading. The body tag contains the <cars-list> tag, which renders the root component of our application. However, I want to point out one specific statement: the System.import('app/boot') statement will import the boot module from the app package. Physically, it loads the boot file under the app folder.

car.ts
Copy the following code to the car.ts file.

export interface Car {
  make: string;
  model: string;
  speed: number;
}

We are defining a car model using a TypeScript interface; we are going to use this car model object in our components.

app.component.ts
Copy the following code to the app.component.ts file.

import {Component} from 'angular2/core';

@Component({
  selector: 'cars-list',
  template: ''
})
export class AppComponent {
  public heading = "Top 10 Fastest Cars in the World";
}

Important points about the AppComponent class:
The AppComponent class is our application root component; it has one public property named 'heading'
The AppComponent class is decorated with the @Component() function, with selector and template properties in its configuration object
The @Component() function is imported using ES2015 module import syntax from the 'angular2/core' module in the Angular library
We are also exporting the AppComponent class as a module using the export keyword
Other modules in the application can also import the AppComponent class using the module name (app.component – file name without extension) using ES2015 module import syntax

boot.ts
Copy the following code to the boot.ts file.

import {bootstrap} from 'angular2/platform/browser'
import {AppComponent} from './app.component';

bootstrap(AppComponent);

In this file we are importing the bootstrap() function from the 'angular2/platform/browser' module and the AppComponent class from the 'app.component' module. Next, we are invoking the bootstrap() function with the AppComponent class as a parameter; this will instantiate an Angular application with the AppComponent as the root component.

site.css
Copy the following code to the site.css file.

* {
  font-family: 'Segoe UI Light', 'Helvetica Neue', 'Segoe UI', 'Segoe';
  color: rgb(51, 51, 51);
}

This file contains some basic styles for our application.

Working with data in Angular
In any typical web application, we need to display data on an HTML page and read data from input controls on an HTML page. In Angular everything is a component; an HTML page is represented as a template and it is always associated with a component class. Application data lives on the component class's properties. We either push values to the template or pull values from it; to do this, we need to bind the properties of the component class to the controls on the template. This mechanism is known as data binding.
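As a quick preview of the syntax we will use shortly, here is a minimal sketch of the binding forms Angular offers: interpolation to read a property, property binding to push a value into an element, and event binding to pull a value back. The selector and class name below are hypothetical and are not part of our sample application:

import {Component} from 'angular2/core';

@Component({
  selector: 'binding-demo',
  template: `
    <!-- Interpolation: read the component property into the view -->
    <h2>{{heading}}</h2>
    <!-- Property binding pushes the value in; event binding pulls the typed value back -->
    <input type="text" [value]="heading" (input)="heading = $event.target.value" />
  `
})
export class BindingDemoComponent {
  public heading = 'Hello, bindings';
}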
Data binding in Angular allows us to use a simple syntax to push or pull data. When we bind the properties of a component class to the controls on the template, if the data on the properties changes, Angular will automatically update the template to display the latest data, and vice versa. We can also control the direction of data flow (from component to template, or from template to component).

Displaying Data using Interpolation
If we go back to our AppComponent class in the sample application, we have a heading property. We need to display this heading property on the template. Here is the revised AppComponent class:

app/app.component.ts

import {Component} from 'angular2/core';

@Component({
  selector: 'cars-list',
  template: '<h1>{{heading}}</h1>'
})
export class AppComponent {
  public heading = "Top 10 Fastest Cars in the World";
}

In the @Component() function we updated the template property with the expression {{heading}} surrounded by an h1 tag. The double curly braces are the interpolation syntax in Angular. For any property on the class that we need to display on the template, we use the property name surrounded by double curly braces. Angular will automatically render the value of the property on the browser screen.

Let's run our application. Go to the command line, navigate to the root of the application structure, and then run the following command.

$ npm start

The preceding start command is part of the scripts section in the package.json file. It invokes two other commands: npm run tsc:w and npm run lite.

npm run tsc:w: This command is performing the following actions:
It is invoking the TypeScript compiler in watch mode
The TypeScript compiler will compile all our TypeScript files to JavaScript using the configuration mentioned in tsconfig.json
The TypeScript compiler will not exit after the compilation is over; it will wait for changes in the TypeScript files
Whenever we modify any TypeScript file, the compiler will compile it to JavaScript on the fly

npm run lite: This command will start a lightweight Node.js web server and launch our application in the browser.

Now we can continue to make changes in our application. Changes are detected and the browser will refresh automatically with the updates. Output in the browser:

Let's further extend this simple application. We are going to bind the heading property to a textbox; here is the revised template:

template: `
  <h1>{{heading}}</h1>
  <input type="text" value="{{heading}}"/>
`

If we notice the template, it is a multi-line string and it is surrounded by ` (backquote/backtick) symbols instead of single or double quotes. The backtick (``) symbols are the new multi-line string syntax in ECMAScript 2015. We don't need to start our application again; as mentioned earlier, the browser will refresh automatically with the updated output until we stop the npm start command at the command line. Output in the browser:

Now the textbox is also displaying the same value as the heading property. Let's change the value in the textbox by typing something, then hit the Tab key. We don't see any changes happening in the browser. But, as mentioned earlier in data binding, whenever we change the value of any control on the template which is bound to a property of the component class, it should update the property value. Then any other controls bound to the same property should also display the updated value. In the browser, the h1 tag should also display the same text we typed in the textbox, but it won't happen.

Summary
We started this article by covering an introduction to web components. Next we discussed a sample application, which is the foundation for this article.
Then we discussed how to write components using new features in Angular, like data binding and the new templating syntax, using lots of examples. By the end of this article, you should have a good understanding of Angular's new concepts and should be able to write basic components.

Resources for Article:
Further resources on this subject:
Get Familiar with Angular [article]
Gearing Up for Bootstrap 4 [article]
Angular's component architecture [article]

Using React Router for Client Side Redirecting

Antonio Cucciniello
05 Apr 2017
6 min read
If you are using React in the front end of your web application and you would like to use React Router in order to handle routing, then you have come to the right place. Today, we will learn how to have your client side redirect to another page after you have completed some processing. Let's get started!

Installs
First we will need to make sure we have a couple of things installed. The first thing here is to make sure you have Node and NPM installed. In order to make this as simple as possible, we are going to use create-react-app to get React fully working in our application. Install this by doing:

$ npm install -g create-react-app

Now create a directory for this example and enter it; here we will call it client-redirect.

$ mkdir client-redirect
$ cd client-redirect

Once in that directory, initialize create-react-app in the repo.

$ create-react-app client

Once it is done, test that it is working by running:

$ npm start

You should see something like this:

The last thing you must install is react-router:

$ npm install --save react-router

Now that you are all set up, let's start with some code.

Code
First we will need to edit the main JavaScript file, in this case located at client-redirect/client/src/App.js.

App.js
App.js is where we handle all the routes for our application, and it acts as the main js file. Here is what the code looks like for App.js:

import React, { Component } from 'react'
import { Router, Route, browserHistory } from 'react-router'
import './App.css'
import Result from './modules/result'
import Calculate from './modules/calculate'

class App extends Component {
  render () {
    return (
      <Router history={browserHistory}>
        <Route path='/' component={Calculate} />
        <Route path='/result' component={Result} />
      </Router>
    )
  }
}

export default App

If you are familiar with React, you should not be too confused as to what is going on. At the top, we are importing all of the files and modules we need. We are importing React, react-router, our App.css file for styles, and the result.js and calculate.js files (do not worry, we will show the implementation for these shortly).

The next part is where we do something different. We are using react-router to set up our routes. We chose to use a history of type browserHistory. History in react-router listens to the browser's address bar for any changes, parses the URL from the browser, and stores it in a location object so the router can match the specified route and render the different components for that path.

We then use <Route> tags in order to specify what path we would like a component to be rendered on. In this case, we are using the '/' path for the components in calculate.js and the '/result' path for the components in result.js. Let's define what those pages will look like and see how the client can redirect using browserHistory.

Calculate.js
This page is a basic page with two text boxes and a button. Each text box should receive a number, and when the button is clicked, we are going to calculate the sum of the two numbers given.
Here is what that looks like: import React, { Component } from 'react' import { browserHistory } from 'react-router' export default class Calculate extends Component { render () { return ( <div className={'Calculate-page'} > <InputBox type='text' name='first number' id='firstNum' /> <InputBox type='text' name='second number' id='secondNum' /> <CalculateButton type='button' value='Calculate' name='Calculate' onClick='result()' /> </div> ) } } var InputBox = React.createClass({ render: function () { return <div className={'input-field'}> <input type={this.props.type} value={this.props.value} name={this.props.name} id={this.props.id} /> </div> } }) var CalculateButton = React.createClass({ result: function () { var firstNum = document.getElementById('firstNum').value var secondNum = document.getElementById('secondNum').value var sum = Number(firstNum) + Number(secondNum) if (sum !== undefined) { const path = '/result' browserHistory.push(path) } window.sessionStorage.setItem('sum', sum) return console.log(sum) }, render: function () { return <div className={'calculate-button'}> <button type={this.props.type} value={this.props.value} name={this.props.name} onClick={this.result} > Calculate </button> </div> } }) The important part we want to focus on is inside the result function of the CalculateButton class. We take the two numbers and sum them. Once we have the sum, we create a path variable to hold the route we would like to go to next. Then browserHistory.push(path) redirects the client to a new path of localhost:3000/result. We then store the sum in sessionStorage in order to retrieve it on the next page. result.js This is simply a page that will display your result from the calculation, but it serves as the page you redirected to with react-router. Here is the code: import React, { Component } from 'react' export default class Result extends Component { render () { return ( <div className={'result-page'} > Result : <DisplayNumber id='result' /> </div> ) } } var DisplayNumber = React.createClass({ componentDidMount () { document.getElementById('result').innerHTML = window.sessionStorage.getItem('sum') }, render: function () { return ( <div className={'display-number'}> <p id={this.props.id} /> </div> ) } }) We simply create a class that wraps a paragraph tag. It also has a componentDidMount() function, which allows us to access the sessionStorage for the sum once the component output has been rendered by the DOM. We update the innerHTML of the paragraph element with the sum's value. Test Let's get back to the client directory of our application. Once we are there, we can run: $ npm start This should open a tab in your web browser at localhost:3000. This is what it should look like when you add numbers in the text boxes (here I added 17 and 17). This should redirect you to another page: Conclusion There you go! You now have a web app that utilizes client-side redirecting! To summarize what we did, here is a quick list of what happened: Installed the prerequisites: node, npm Installed create-react-app Created a create-react-app Installed react-router Added our routing to our App.js Created a module that calculated the sum of two numbers and redirected to a new page Displayed the number on the new redirected page Check out the code for this tutorial on GitHub. Possible Resources Check out my GitHub View my personal blog GitHub pages for: react-router create-react-app About the author Antonio Cucciniello is a software engineer with a background in C, C++, and JavaScript (Node.js). 
He is from New Jersey, USA. His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software and reading books on self-help and improvement, finance, and entrepreneurship. To contact Antonio, e-mail him at [email protected], follow him on Twitter at @antocucciniello, and follow him on GitHub at https://github.com/acucciniello.
Microservices and Service Oriented Architecture

Packt
09 Mar 2017
6 min read
Microservices are an architecture style and an approach to software development that satisfies modern business demands. They are not a new invention as such; they are instead an evolution of previous architecture styles. Many organizations today use them - they can improve organizational agility, speed of delivery, and the ability to scale. Microservices give you a way to develop more physically separated, modular applications. This tutorial has been taken from Spring 5.0 Microservices - Second Edition. Microservices are similar to conventional service-oriented architectures. In this article, we will see how microservices are related to SOA. The emergence of microservices Many organizations, such as Netflix, Amazon, and eBay, successfully used what is known as the 'divide and conquer' technique to functionally partition their monolithic applications into smaller atomic units. Each one performs a single function - a 'service'. These organizations solved a number of prevailing issues they were experiencing with their monolithic applications. Following the success of these organizations, many other organizations started adopting this as a common pattern to refactor their monolithic applications. Later, evangelists termed this pattern microservices architecture. Microservices originated from the idea of Hexagonal Architecture, coined by Alistair Cockburn back in 2005. Hexagonal Architecture, or the Hexagonal pattern, is also known as the Ports and Adapters pattern. Cockburn defined microservices as: "...an architectural style or an approach for building IT systems as a set of business capabilities that are autonomous, self contained, and loosely coupled." The following diagram depicts a traditional N-tier application architecture with a presentation layer, a business layer, and a database layer: Modules A, B, and C represent three different business capabilities. The layers in the diagram represent the separation of architectural concerns. Each layer holds all three business capabilities pertaining to that layer. The presentation layer has the web components of all three modules, the business layer has the business components of all three modules, and the database hosts the tables of all three modules. In most cases, layers can be physically distributed, whereas modules within a layer are hardwired. Let's now examine a microservice-based architecture: As we can see in the preceding diagram, the boundaries are inversed in the microservices architecture. Each vertical slice represents a microservice. Each microservice will have its own presentation layer, business layer, and database layer. Microservices are aligned with business capabilities, so changes to one microservice do not impact the others. There is no standard for communication or transport mechanisms for microservices. In general, microservices communicate with each other using widely adopted lightweight protocols, such as HTTP and REST, or messaging protocols, such as JMS or AMQP. In specific cases, one might choose more optimized communication protocols, such as Thrift, ZeroMQ, Protocol Buffers, or Avro. As microservices are more closely aligned with business capabilities and have independently manageable lifecycles, they are the ideal choice for enterprises embarking on DevOps and cloud. DevOps and cloud are two facets of microservices. How do microservices compare to Service Oriented Architectures? One of the common questions that arises when dealing with the microservices architecture is: how is it different from SOA? SOA and microservices follow similar concepts.
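Before comparing the two styles, it may help to see how small a single microservice can be. The following sketch is not from the book; it is a minimal illustration in plain Node.js of one business capability (a made-up customer lookup) exposed over HTTP with JSON, the kind of lightweight communication described above. The service name, port, route, and data are invented for the example.

// customer-service.js -- a hypothetical, minimal "customer" microservice.
// It owns its own data and exposes one business capability over HTTP/JSON.
const http = require('http')

// In a real service this would live in the service's own database.
const customers = {
  '1': { id: '1', name: 'Ada Lovelace' },
  '2': { id: '2', name: 'Grace Hopper' }
}

const server = http.createServer((req, res) => {
  // Only one route is supported: GET /customers/<id>
  const match = req.url.match(/^\/customers\/(\w+)$/)
  if (req.method === 'GET' && match) {
    const customer = customers[match[1]]
    if (customer) {
      res.writeHead(200, { 'Content-Type': 'application/json' })
      return res.end(JSON.stringify(customer))
    }
    res.writeHead(404, { 'Content-Type': 'application/json' })
    return res.end(JSON.stringify({ error: 'customer not found' }))
  }
  res.writeHead(400, { 'Content-Type': 'application/json' })
  res.end(JSON.stringify({ error: 'unsupported request' }))
})

// The port is arbitrary; each microservice runs and scales independently.
server.listen(3001, () => console.log('customer service listening on 3001'))

Another service (orders, billing, and so on) would run as a separate process with its own storage and would call this endpoint over HTTP rather than sharing the customer tables directly.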
Earlier in this article, we saw that microservices evolved from SOA and that many service characteristics are common to both approaches. However, are they the same or different? Because microservices evolved from SOA, many characteristics of microservices are similar to those of SOA. Let's first examine the definition of SOA. The Open Group definition of SOA is as follows: "SOA is an architectural style that supports service-orientation. Service-orientation is a way of thinking in terms of services and service-based development and the outcomes of services. A service: is self-contained; may be composed of other services; is a "black box" to consumers of the service." You have learned similar aspects in microservices as well. So, in what way are microservices different? The answer is: it depends. The answer to the previous question could be yes or no, depending upon the organization and its adoption of SOA. SOA is a broader term, and different organizations have approached SOA differently to solve different organizational problems. The difference between microservices and SOA therefore depends on how an organization approaches SOA. In order to get clarity, a few cases will be examined here. Service oriented integration Service-oriented integration refers to a service-based integration approach used by many organizations: Many organizations would have used SOA primarily to solve their integration complexities, also known as integration spaghetti. Generally, this is termed Service Oriented Integration (SOI). In such cases, applications communicate with each other through a common integration layer using standard protocols and message formats, such as SOAP/XML-based web services over HTTP or Java Message Service (JMS). These types of organizations focus on Enterprise Integration Patterns (EIP) to model their integration requirements. This approach relies strongly on a heavyweight Enterprise Service Bus (ESB), such as TIBCO Business Works, WebSphere ESB, Oracle ESB, and the like. Most ESB vendors also packaged a set of related products, such as Rules Engines, Business Process Management Engines, and so on, as a SOA suite. Such organizations' integrations are deeply rooted in these products. They either write heavy orchestration logic in the ESB layer or put the business logic itself in the service bus. In both cases, all enterprise services are deployed and accessed through the ESB. These services are managed through an enterprise governance model. For such organizations, microservices are altogether different from SOA. Legacy modernization SOA is also used to build service layers on top of legacy applications, which is shown in the following diagram: Another category of organizations would have used SOA in transformation projects or legacy modernization projects. In such cases, the services are built and deployed in the ESB, connecting to backend systems using ESB adapters. For these organizations, microservices are different from SOA. Service oriented application Some organizations would have adopted SOA at an application level: In this approach, as shown in the preceding diagram, lightweight integration frameworks, such as Apache Camel or Spring Integration, are embedded within applications to handle service-related cross-cutting capabilities, such as protocol mediation, parallel execution, orchestration, and service integration.
As some of these lightweight integration frameworks have native Java object support, such applications would even have used native Plain Old Java Object (POJO) services for integration and data exchange between services. As a result, all services have to be packaged as one monolithic web archive. Such organizations could see microservices as the next logical step of their SOA. Monolithic migration using SOA The following diagram represents Logical System Boundaries: The last possibility is transforming a monolithic application into smaller units after hitting the breaking point with the monolithic system. Such organizations would have broken the application into smaller, physically deployable subsystems, similar to the Y-axis scaling approach explained earlier, and deployed them as web archives on web servers or as JARs on some home-grown containers. These subsystems, exposed as services, would have used web services or other lightweight protocols to exchange data between services. They would also have used SOA and service design principles to achieve this. Such organizations may tend to think that microservices are the same old wine in a new bottle. Further resources on this subject: Building Scalable Microservices [article] Breaking into Microservices Architecture [article] A capability model for microservices [article]
Unit Testing and End-To-End Testing

Packt
09 Mar 2017
4 min read
In this article by Andrea Passaglia, the author of the book Vue.js 2 Cookbook, we will cover stubbing external API calls with Sinon.JS. (For more resources related to this topic, see here.) Stub external API calls with Sinon.JS Normally, when you do end-to-end testing and integration testing, you would have the backend server running and ready to respond to you. I think there are many situations in which this is not desirable. As a frontend developer you take every opportunity to blame the backend guys. Getting started No particular skills are required to complete this recipe, except that you should install Jasmine as a dependency. How to do it... First of all, let's install some dependencies, starting with Jasmine, as we are going to use it to run the whole thing. Also install Sinon.JS and axios before continuing; you just need to add the .js files. We are going to build an application that retrieves a post at the click of a button. In the HTML part write the following: <div id="app"> <button @click="retrieve">Retrieve Post</button> <p v-if="post">{{post}}</p> </div> The JavaScript part, instead, is going to look like the following: const vm = new Vue({ el: '#app', data: { post: undefined }, methods: { retrieve () { axios .get('https://jsonplaceholder.typicode.com/posts/1') .then(response => { console.log('setting post') this.post = response.data.body }) } } }) If you launch your application now, you should be able to see it working. Now we want to test the application, but we don't want to connect to the real server. This would take additional time and it would not be reliable; instead, we are going to take a sample, correct response from the server and use that. Sinon.JS has the concept of a sandbox. It means that whenever a test starts, some dependencies, such as axios, are overwritten. After each test, we can discard the sandbox and everything returns to normal. An empty test with Sinon.JS looks like the following (add it after the Vue instance): describe('my app', () => { let sandbox beforeEach(() => sandbox = sinon.sandbox.create()) afterEach(() => sandbox.restore()) }) We want to stub the call to the get function for axios: describe('my app', () => { let sandbox beforeEach(() => sandbox = sinon.sandbox.create()) afterEach(() => sandbox.restore()) it('should save the returned post body', done => { const resolved = new Promise(resolve => resolve({ data: { body: 'Hello World' } }) ) sandbox.stub(axios, 'get').returns(resolved) ... done() }) }) We are overwriting axios here. We are saying that now the get method should return the resolved promise: describe('my app', () => { let sandbox beforeEach(() => sandbox = sinon.sandbox.create()) afterEach(() => sandbox.restore()) it('should save the returned post body', done => { const promise = new Promise(resolve => resolve({ data: { body: 'Hello World' } }) ) sandbox.stub(axios, 'get').returns(promise) vm.retrieve() promise.then(() => { expect(vm.post).toEqual('Hello World') done() }) }) }) Since we are returning a promise (and we need to return a promise because the retrieve method calls then on it), we need to wait until it resolves. We can launch the page and see that it works: How it works... In our case, we used the sandbox to stub a method of one of our dependencies. This way, the get method of axios never actually fires, and we receive an object similar to what the backend would give us. Stubbing the API responses will get you isolated from the backend and its quirks.
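The sandbox stub can also tell us how it was called. The following fragment is not part of the original recipe; it is a small, hedged sketch that reuses the same vm, sandbox, axios, and Jasmine setup from above and checks that the stubbed get method was invoked with the expected URL, using Sinon's calledWith helper.

it('should request the expected endpoint', () => {
  // Stub get inside this test's sandbox and return a canned response,
  // because retrieve() calls .then() on whatever get returns.
  const promise = Promise.resolve({ data: { body: 'Hello World' } })
  sandbox.stub(axios, 'get').returns(promise)
  vm.retrieve()
  // After stubbing, axios.get is the Sinon stub, so we can inspect its calls.
  expect(axios.get.calledWith('https://jsonplaceholder.typicode.com/posts/1')).toBe(true)
})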
If something goes wrong on the backend you won't mind, and moreover you can run your test without relying on the backend being up and running correctly. There are many libraries and techniques to stub API calls in general, not only those related to HTTP. Hopefully this recipe has given you a head start. Summary In this article we covered how we can stub an external API call with Sinon.JS. Resources for Article: Further resources on this subject: Installing and Using Vue.js [article] Introduction to JavaScript [article] JavaScript Execution with Selenium [article]
Developer Workflow

Packt
02 Mar 2017
7 min read
In this article by Chaz Chumley and William Hurley, the authors of the book Mastering Drupal 8, we will decide on a local AMP stack and look at the role of Composer. (For more resources related to this topic, see here.) Deciding on a local AMP stack Any developer workflow begins with having an AMP (Apache, MySQL, PHP) stack installed and configured on a Windows, OSX, or *nix based machine. Depending on the operating system, there are a lot of different methods that one can take to set up an ideal environment. However, when it comes down to choices there are really only three: Native AMP stack: This option refers to systems that generally either come preconfigured with Apache, MySQL, and PHP or have a generally easy install path to download and configure these three requirements. There are plenty of great tutorials on how to achieve this workflow, but it does require familiarity with the operating system. Packaged AMP stacks: This option refers to third-party solutions such as MAMP (https://www.mamp.info/en/), WAMP (http://www.wampserver.com/en/), or Acquia Dev Desktop (https://dev.acquia.com/downloads). These solutions come with an installer that generally works on Windows and OSX and provide a self-contained AMP stack allowing for general web server development. Out of these three, only Acquia Dev Desktop is Drupal specific. Virtual machine: This option is often the best solution, as it closely represents the actual development, staging, and production web servers. However, it can also be the most complex to initially set up and requires some knowledge of how to configure specific parts of the AMP stack. That being said, there are a few really well documented VMs available that can help reduce the experience needed. Two great virtual machines to look at are Drupal VM (https://www.drupalvm.com/) and Vagrant Drupal Development (VDD) (https://www.drupal.org/project/vdd). In the end, my recommendation is to choose an environment that is flexible enough to quickly install, set up, and configure Drupal instances. The above choices are all good to start with, and by no means is any single solution a bad choice. If you are a single-person developer, then a packaged AMP stack such as MAMP may be the perfect choice. However, if you are in a team environment, I would strongly recommend one of the VM options above, or look into creating your own VM environment that can be distributed to your team. We will discuss virtualized environments in more detail, but before we do, we need to have a basic understanding of how to work with three very important command-line interfaces: Composer, Drush, and Drupal Console. The role of Composer Drupal 8 and each minor version introduce new features and functionality, everything from moving the most commonly used third-party modules into core to the introduction of an object-oriented PHP framework. These improvements also introduced the Symfony framework, which brings along the ability to use a dependency management tool called Composer. Composer (https://getcomposer.org/) is a dependency manager for PHP that allows us to perform a multitude of tasks, everything from creating a Drupal project to declaring libraries and even installing contributed modules, just to name a few. The advantage of using Composer is that it allows us to quickly install and update dependencies by simply running a few commands. These configurations are then stored within a composer.json file that can be shared with other developers to quickly set up identical Drupal instances.
If you are new to Composer then let’s take a moment to discuss how to go about installing Composer for the first time within a local environment. Installing Composer locally Composer can be installed on Windows, Linux, Unix, and OSX. For this example, we will be following the install found at https://getcomposer.org/download/. Make sure to take a look at the Getting Started documentation that corresponds with your operating system. Begin by opening a new terminal window. By default, our terminal window should place us in the user directory. We can then continue by executing the following four commands: Download Composer installer to local directory: php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" Verify the installer php -r "if (hash_file('SHA384', 'composer-setup.php') === 'e115a8dc7871f15d853148a7fbac7da27d6c0030b848d9b3dc09e2a0388afed865e6a3d6b3c0fad45c48e2b5fc1196ae') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" Since Composer versions are often updated it is important to refer back to these directions to ensure the hash file above is the most current one. Run the installer: php composer-setup.php Remove the installer: php -r "unlink('composer-setup.php');" Composer is now installed locally and we can verify this by executing the following command within a terminal window: php composer.phar Composer should now present us with a list of available commands: The challenge with having Composer installed locally is that it restricts us from using it outside the current user directory. In most cases, we will be creating projects outside of our user directory, so having the ability to globally use Composer quickly becomes a necessity. Installing Composer globally Moving the composer.phar file from its current location to a global directory can be achieved by executing the following command within a terminal window: mv composer.phar /usr/local/bin/composer We can now execute Composer commands globally by typing composer in the terminal window. Using Composer to create a Drupal project One of the most common uses for Composer is the ability to create a PHP project. The create-project command takes several arguments including the type of PHP project we want to build, the location of where we want to install the project, and optionally, the package version. Using this command, we no longer need to manually download Drupal and extract the contents into an install directory. We can speed up the entire process by using one simple command. Begin by opening a terminal window and navigating to a folder where we want to install Drupal. Next we can use Composer to execute the following command: composer create-project drupal/drupal mastering The create-project command tells Composer that we want to create a new Drupal project within a folder called mastering. Once the command is executed, Composer locates the current version of Drupal and installs the project along with any additional dependencies that it needs During the install process Composer will prompt us to remove any existing version history that the Drupal repository stores.  It is generally a good idea to choose yes to remove these files as we will be wanting to manage our own repository with our own version history. Once Composer has completed creating our Drupal project, we can navigate to the mastering folder and review the composer.json file the Composer creates to store project specific configurations. 
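To give a feel for what ends up in that file, here is a hedged, illustrative sketch of a composer.json. It is not the exact file the drupal/drupal project template generates (that one is considerably larger), and the package names and version constraints shown here are only examples, not requirements of the article.

{
  "name": "example/mastering",
  "description": "Illustrative Drupal 8 project managed with Composer",
  "type": "project",
  "require": {
    "php": ">=5.5.9",
    "drupal/token": "^1.0"
  },
  "config": {
    "sort-packages": true
  }
}

The real file may also include autoload settings and additional package repositories, so treat the above purely as a reference for its general shape.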
As soon as the composer.json file is created, our Drupal project can be referred to as a package. We can version the file, distribute it to a team, and they can run composer install to generate an identical Drupal 8 code base. With Composer installed globally, we can take a look at another command-line tool that will assist us in making Drupal development much easier. Summary In this article, you learned how to decide on a local AMP stack and how to install Composer both locally and globally. We also saw a bit about how to use Composer to create a Drupal project. Resources for Article: Further resources on this subject: Drupal 6 Performance Optimization Using Throttle and Devel Module [article] Product Cross-selling and Layout using Panels with Drupal and Ubercart 2.x [article] Setting up an online shopping cart with Drupal and Ubercart [article]
How to Setup PostgreSQL with Node.js

Antonio Cucciniello
14 Feb 2017
7 min read
Have you ever wanted to add a PostgreSQL database to the backend of your web application? If so, by the end of this tutorial, you should have a PostgreSQL database up and running with your Node.js web application. PostgreSQL is a popular open source relational database. This tutorial assumes that you have Node and NPM installed on your machine; if you need help installing that, check out this link. First, let's download PostgreSQL. PostgreSQL I am writing and testing this tutorial on a Mac, so it will primarily cater to Mac users, but I will include links in the reference section for downloading PostgreSQL on select Linux distributions and Windows as well. If you are on a Mac, however, you can follow these steps. First, you must have Homebrew. If you do not have it, you may install it by following the directions here. Once Homebrew is installed and working, you can run the following: $ brew update $ brew install postgres This downloads and installs PostgreSQL for you. Command-line Setup Now, open a new instance of a terminal by pressing Command+T. Once you have the new tab, you can start a PostgreSQL server with the command: postgres -D /usr/local/var/postgres. This allows you to use Postgres locally and gives you a logger for all of the commands you run on your databases. Next, open a new instance of a terminal with Command+T and enter $ psql. This is similar to a command center for Postgres. It allows you to create things in your database and plenty more. You can manually enter commands here to set up your environment. For this example, we will create a database called example. To do that, while in the terminal tab with psql running, enter CREATE DATABASE example;. To confirm that the database was made, you should see CREATE DATABASE as the output. Also, to list all databases, you would use \l. Then, we will want to connect to our new database with the command \connect example. This should give you the following message telling you that you are connected: You are connected to database "example" as user In order to store things in this database, we need to create a table. Enter the following: CREATE TABLE numbers( age integer ); So, this format is probably confusing if you have never seen it before. This is telling Postgres to create a table in this database called numbers, with one column called age, and all items in the age column will be of the data type integer. Now, this should give us the output CREATE TABLE, but if you want to list all tables in a database, you should use the \dt command. For the sake of this example, we are going to add a row to the table so that we have some data to play with and prove that this works. When you want to add something to a database in PostgreSQL, you use the INSERT command. Enter this command to have the first row in the table equal to 732: INSERT INTO numbers VALUES (732); This should give you an output of INSERT 0 1. To check the contents of the table, simply type TABLE numbers;. Now that we have a database up and running with a table and a value, we can set up our code to access this table and pull the value from it. Code Setup In order to follow this example, you will need to make sure that you have the following packages: pg, pg-format, and express. Enter the project directory you plan on working in (and where you have Node and NPM installed). For pg, use npm install pg. This is a Postgres client for Node. For pg-format, use npm install pg-format. This allows us to safely make dynamic SQL queries.
For express,use npm install express --save.This allows us to create a quick and basic server. Now that those packages are installed, we can code! Actual Code Let's create a file called app.js for this as the main point in our program. At the top, establish your variables: const express = require('express') const app = express() var pg = require('pg') var format = require('pg-format') var PGUSER = 'yourUserName' var PGDATABASE = 'example' var age = 732 The first two lines allow us to use the package express and help us make our server. The next two lines allow us to use the packages pg and pg-format. PGUSER is a variable that holds the user to your database. Enter your username here in place of yourUserName. PGDATABASE is a variable to hold the database name that we wouldlike to connect to. Then, the last variable is to hold the number that we stored in the database. Next, add this: var config = {   user: PGUSER, // name of the user account   database: PGDATABASE, // name of the database   max: 10, // max number of clients in the pool   idleTimeoutMillis: 30000 // how long a client is allowed to remain idle before being closed } var pool = new pg.Pool(config) var myClient Here, we establish a config object that allows pg to know that we want to connect to the database specified as the user specified, with a maximum of 10 clients in a pool of clients with a time out of 30,000 milliseconds of how long a client can be idle before disconnected from the database. Then, we create a new pool of clients according to that config file. Afterwards, we create a variable called myClient to store the client we get from the database in the next step. Now, enter the last bit of code here: pool.connect(function (err, client, done) { if (err) console.log(err) app.listen(3000, function () { console.log('listening on 3000') }) myClient = client var ageQuery = format('SELECT * from numbers WHERE age = %L', age) myClient.query(ageQuery, function (err, result) { if (err) { console.log(err) } console.log(result.rows[0]) }) }) This tries to connect to the database with one of the clients from the pool. If a client successfully connects to the database, we start our server by listening on a port (here, I use 3000). Then, we get access to our client. Next,we create a variable called ageQuery to make a dynamic SQL query. A query is a command to a database. Here, we are making a SELECT query to the database, checking all rows in the table called numbers where the age column is equal to 732. If that is a successful query (meaning, it finds a row with 732 as the value), then we will log the answer. It's now time to test all your hard work! Save the file and run the command in a terminal: node app.js Your output should look like this: listening on 3000 { age: 732 } Conclusion There you go! You now have a PostgreSQL database connected to your web app. To summarize our work, here is a quick breakdown of what happened: We installed PostgreSQL through Homebrew. We started our Local PostgreSQL server. We opened psql in a terminal to use commands manually. We created a database called example. We created a table in that database called numbers. We added a value to that table. We installed pg, pg-format, and express. We used Express to create a server. We created a pool of clients using a config object to access the database. We queried the table in the database for 732. We logged the value. Check out the code for this tutorial on GitHub. 
About the Author Antonio Cucciniello is a software engineer with a background in C, C++, and JavaScript (Node.js) from New Jersey. His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software and reading books on self-help and improvement, finance, and entrepreneurship. You can find Antonio on Twitter @antocucciniello and on GitHub.