How-To Tutorials - Web Development


Angular 2 Components: What You Need to Know

David Meza
08 Jun 2016
10 min read
From the 7th to the 13th of November you can save up to 80% on some of our very best Angular content - along with our hottest React eBooks and video courses. If you're curious about the cutting edge of modern web development, we think you should click here and invest in your skills...

The Angular team introduced quite a few changes in version 2 of the framework, and components are one of the most important. If you are familiar with Angular 1 applications, components are essentially a form of directive extended with template-oriented features. In addition, components are optimized for better performance and simpler configuration than directives, as Angular doesn't support all of a directive's features in them. Also, while a component is technically a directive, it is so distinctive and central to Angular 2 applications that you'll often find it treated as a separate ingredient in an application's architecture.

So, what is a component? In simple words, a component is a building block of an application that controls a part of your screen real estate, or "view". It does one thing, and it does it well. For example, you may have a component that displays a list of active chats in a messaging app (which, in turn, may have child components to display the details of a chat or the actual conversation). Or you may have an input field that uses Angular's two-way data binding to keep your markup in sync with your JavaScript code. Or, at the most elementary level, you may have a component that substitutes an HTML template with no special functionality, just because you wanted to break something complex down into smaller, more manageable parts.

Now, I don't believe too much in learning something by only reading about it, so let's get your hands dirty and write your own component to see some sample usage. I will assume that you already have Typescript installed and have done the initial configuration required for any Angular 2 app. If you haven't, you can check out how to do so by clicking on this link.

You may have already seen a component at its most basic level:

    import {Component} from 'angular2/core';

    @Component({
      selector: 'my-app',
      template: '<h1>{{ title }}</h1>'
    })
    export class AppComponent {
      title = 'Hello World!';
    }

That's it! That's all you really need to have a component. Three things are happening here:

1. You are importing the Component class from the Angular 2 core package.
2. You are using a Typescript decorator to attach some metadata to your AppComponent class. If you don't know what a decorator is, it is simply a function that extends your class with Angular code so that it becomes an Angular component; otherwise, it would just be a plain class with no relation to the Angular framework. In the options, you defined a selector, which is the tag name used in the HTML code so that Angular can find where to insert your component, and a template, which is applied to the inner contents of the selector tag. You may notice that we also used interpolation to bind the component data and display the value of the public variable in the template.
3. You are exporting your AppComponent class so that you can import it elsewhere (in this case, you would import it in your main script so that you can bootstrap your application).

That's a good start, but let's get into a more complex example that showcases other powerful features of Angular and Typescript/ES2015. In the following example, I've decided to stuff everything into one component.
However, if you'd like to stick to best practices and divide the code into different components and services, or if you get lost at any point, you can check out the finished, refactored example here.

Without any further ado, let's make a quick page that displays a list of products. Let's start with the index:

    <html>
    <head>
      <title>Products</title>
      <meta name="viewport" content="width=device-width, initial-scale=1">
      <script src="node_modules/es6-shim/es6-shim.min.js"></script>
      <script src="node_modules/systemjs/dist/system-polyfills.js"></script>
      <script src="node_modules/angular2/bundles/angular2-polyfills.js"></script>
      <script src="node_modules/systemjs/dist/system.src.js"></script>
      <script src="node_modules/rxjs/bundles/Rx.js"></script>
      <script src="node_modules/angular2/bundles/angular2.dev.js"></script>
      <link rel="stylesheet" href="styles.css">
      <script>
        System.config({
          packages: {
            app: {
              format: 'register',
              defaultExtension: 'js'
            }
          }
        });
        System.import('app/main')
          .then(null, console.error.bind(console));
      </script>
    </head>
    <body>
      <my-app>Loading...</my-app>
    </body>
    </html>

There's nothing out of the ordinary going on here. You are just importing all of the necessary scripts for your application to work, as demonstrated in the quick-start.

The app/main.ts file should already look somewhat similar to this:

    import {bootstrap} from 'angular2/platform/browser';
    import {AppComponent} from './app.component';

    bootstrap(AppComponent);

Here, we imported the bootstrap function from the Angular 2 package and the AppComponent class from the local directory. Then, we initialized the application.

First, create a product class that defines the type of any products made. Create app/product.ts, as follows:

    export class Product {
      id: number;
      price: number;
      name: string;
    }

Next, you will create an app.component.ts file, which is where the magic happens. I've decided to stuff everything in here for demonstration purposes, but ideally, you would want to extract the products array into its own service, the HTML template into its own file, and the product details into their own component.
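The extracted service isn't shown in this post, but as a rough sketch of what it might look like (assuming a promise-based API like the one consumed by the refactored component at the end of this article; the finished example linked above may differ):

    // app/product.service.ts - an illustrative sketch, not the finished example.
    import {Injectable} from 'angular2/core';
    import {Product} from './product';

    @Injectable()
    export class ProductService {
      getProducts(): Promise<Product[]> {
        // A real service would fetch this over HTTP; here we resolve mock data.
        return Promise.resolve([
          { id: 1, price: 45.12, name: 'TV Stand' },
          { id: 2, price: 25.12, name: 'BBQ Grill' }
        ]);
      }
    }

For now, though, let's keep everything in one place.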
This is how the component will look:

    import {Component} from 'angular2/core';
    import {Product} from './product';

    @Component({
      selector: 'my-app',
      template: `
        <h1>{{title}}</h1>
        <ul class="products">
          <li *ngFor="#product of products"
            [class.selected]="product === selectedProduct"
            (click)="onSelect(product)">
            <span class="badge">{{product.id}}</span> {{product.name}}
          </li>
        </ul>
        <div *ngIf="selectedProduct">
          <h2>{{selectedProduct.name}} details!</h2>
          <div><label>id: </label>{{selectedProduct.id}}</div>
          <div><label>Price: </label>{{selectedProduct.price | currency: 'USD': true }}</div>
          <div>
            <label>name: </label>
            <input [(ngModel)]="selectedProduct.name" placeholder="name"/>
          </div>
        </div>
      `,
      styleUrls: ['app/app.component.css']
    })
    export class AppComponent {
      title = 'My Products';
      products = PRODUCTS;
      selectedProduct: Product;
      onSelect(product: Product) {
        this.selectedProduct = product;
      }
    }

    const PRODUCTS: Product[] = [
      { "id": 1, "price": 45.12, "name": "TV Stand" },
      { "id": 2, "price": 25.12, "name": "BBQ Grill" },
      { "id": 3, "price": 43.12, "name": "Magic Carpet" },
      { "id": 4, "price": 12.12, "name": "Instant liquidifier" },
      { "id": 5, "price": 9.12, "name": "Box of puppies" },
      { "id": 6, "price": 7.34, "name": "Laptop Desk" },
      { "id": 7, "price": 5.34, "name": "Water Heater" },
      { "id": 8, "price": 4.34, "name": "Smart Microwave" },
      { "id": 9, "price": 93.34, "name": "Circus Elephant" },
      { "id": 10, "price": 87.34, "name": "Tinted Window" }
    ];

The app/app.component.css file will look something similar to this:

    .selected { background-color: #CFD8DC !important; color: white; }
    .products { margin: 0 0 2em 0; list-style-type: none; padding: 0; width: 15em; }
    .products li {
      position: relative; min-height: 2em; cursor: pointer; left: 0;
      background-color: #EEE; margin: .5em; padding: .3em 0; border-radius: 4px;
      font-size: 16px; overflow: hidden; white-space: nowrap; text-overflow: ellipsis;
      color: #3F51B5; display: block; width: 100%;
      -webkit-transition: all 0.3s ease; -moz-transition: all 0.3s ease;
      -o-transition: all 0.3s ease; -ms-transition: all 0.3s ease; transition: all 0.3s ease;
    }
    .products li.selected:hover { background-color: #BBD8DC !important; color: white; }
    .products li:hover {
      color: #607D8B; background-color: #DDD; left: .1em; color: #3F51B5;
      text-decoration: none; font-size: 1.2em; background-color: rgba(0,0,0,0.01);
    }
    .products .text { position: relative; top: -3px; }
    .products .badge {
      display: inline-block; font-size: small; color: white;
      padding: 0.8em 0.7em 0 0.7em; background-color: #607D8B; line-height: 1em;
      position: relative; left: -1px; top: 0; height: 2em; margin-right: .8em;
      border-radius: 4px 0 0 4px;
    }

I'll explain what is happening:

- We imported Component so that we can decorate our new component, and imported Product so that we can create an array of products and have access to Typescript type inferences.
- We decorated our component with a "my-app" selector property, which finds <my-app></my-app> tags and inserts our component there.
- I decided to define the template in this file instead of using a URL so that I can demonstrate how handy the ES2015 template string syntax is (no more long strings or plus-separated strings).
- Finally, the styleUrls property uses an absolute file path, and any styles applied will only affect the template in this scope.

The actual component only has a few properties outside of the decorator configuration.
It has a title that you can bind to the template, a products array that we will iterate over in the markup, a selectedProduct variable (a scope variable that starts out undefined), and an onSelect method that runs every time you click on a list item. Finally, we define a constant PRODUCTS array (const because it is hardcoded and won't change at runtime) to mock the object that would usually be returned by a service after an external request.

Also worth noting are the following:

- As you are using Typescript, you can make inferences about what type of data your variables will hold. For example, you may have noticed that I specified the Product type whenever I knew that this is the only kind of object I want to allow for a variable or to be passed to a function.
- Angular 2 has different property prefixes, and if you would like to learn when to use each one, you can check out this Stack Overflow question.

That's it! You now have a somewhat more complex component with a particular piece of functionality. As I previously mentioned, this could be refactored, and that would look something similar to this:

    import {Component, OnInit} from 'angular2/core';
    import {Product} from './product';
    import {ProductDetailComponent} from './product-detail.component';
    import {ProductService} from './product.service';

    @Component({
      selector: 'my-app',
      templateUrl: 'app/app.component.html',
      styleUrls: ['app/app.component.css'],
      directives: [ProductDetailComponent],
      providers: [ProductService]
    })
    export class AppComponent implements OnInit {
      title = 'Products';
      products: Product[];
      selectedProduct: Product;

      constructor(private _productService: ProductService) { }

      getProducts() {
        this._productService.getProducts().then(products => this.products = products);
      }

      ngOnInit() {
        this.getProducts();
      }

      onSelect(product: Product) {
        this.selectedProduct = product;
      }
    }

In this example, you get your product data from a service and separate the product detail template into a child component, which is much more modular. I hope you've enjoyed reading this post.

About this author

David Meza is an AngularJS developer at the City of Raleigh. He is passionate about software engineering and learning new programming languages and frameworks. He is most familiar working with Ruby, Rails, and PostgreSQL on the backend and HTML5, CSS3, JavaScript, and AngularJS on the frontend. He can be found here.


Fine Tune Your Web Application by Profiling and Automation

Packt
07 Jun 2016
17 min read
In this article, James Singleton, author of the book ASP.NET Core 1.0 High Performance, sheds some light on how to improve the performance of your web application by profiling and testing it. We will cover writing automated tests to monitor performance, along with adding these to a Continuous Integration (CI) and deployment system that constantly checks for regressions.

(For more resources related to this topic, see here.)

Profiling and measurement

It's impossible to overstate how important profiling, measuring, and analyzing reliable evidence is, especially when dealing with web application performance. Maybe you have used Glimpse or MiniProfiler to provide insights into the running of your web application, or perhaps you are familiar with the Visual Studio diagnostics tools and the Application Insights Software Development Kit (SDK).

There's another tool that's worth mentioning, and that's the Prefix profiler, which you can get at prefix.io. Prefix is a free, web-based ASP.NET profiler that supports ASP.NET Core. However, it doesn't yet support .NET Core (although this is planned), so you'll need to run ASP.NET Core on .NET Framework 4.6 for now. There's a live demo on their website (at demo.prefix.io) if you want to quickly check it out.

You may also want to look at the PerfView performance analysis tool from Microsoft, which is used in the development of .NET Core. You can download PerfView from https://www.microsoft.com/en-us/download/details.aspx?id=28567 as a ZIP file that you can just extract and run. It is useful for analyzing the memory of .NET applications, among other things. You can use PerfView for many debugging activities, for example, to snapshot the heap or force GC runs. We don't have space for a detailed walkthrough here, but the included instructions are good, and there are blogs on MSDN with guides and many video tutorials on Channel 9 at channel9.msdn.com/Series/PerfView-Tutorial if you need more information. Sysinternals tools (technet.microsoft.com/sysinternals) can also be helpful, but as they are not focused on .NET, they are less useful in this context.

While tools such as these are great, what would be even better is building performance monitoring into your development workflow. Automate everything that you can and make performance checks transparent, routine, and run by default. Manual processes are bad because steps can be skipped and errors can easily be made. You wouldn't dream of developing software by e-mailing files around or editing code directly on a production server, so why not automate your performance tests too?

Change control processes exist to ensure consistency and reduce errors. This is why using a Source Control Management (SCM) system, such as git or Team Foundation Server (TFS), is essential. It's also extremely useful to have a build server and perform Continuous Integration (CI) or even fully automated deployments. If the code that is deployed in production differs from what you have on your local workstation, then you have very little chance of success. This is one of the reasons why SQL Stored Procedures (SPs/sprocs) are difficult to work with, at least without rigorous version control. It's far too easy to modify an old version of an SP on a development database, accidentally revert a bug fix, and end up with a regression. If you must use sprocs, then you will need a versioning system, such as ReadyRoll (which Redgate has now acquired).
If you practice Continuous Delivery (CD), then you'll have a build server, such as JetBrains TeamCity, ThoughtWorks GoCD, or CruiseControl.NET, or a cloud service, such as AppVeyor. Perhaps you even automate your deployments using a tool such as Octopus Deploy, and have your own internal NuGet feeds using software such as The Motley Fool's Klondike or a cloud service such as MyGet (which also supports npm, bower, and VSIX packages). Bypassing processes and doing things manually will cause problems, even if you follow a script. If it can be automated, then it probably should be, and this includes testing.

Automated testing

As previously mentioned, the key to improving almost everything is automation. Tests that are only run manually on developer workstations add very little value. It should of course be possible to run the tests on desktops, but this shouldn't be the official result, because there's no guarantee that they will pass on a server (where correct functioning matters more). Although automation usually occurs on servers, it can be useful to automate tests running on developer workstations too. One way of doing this in Visual Studio is to use a plugin such as NCrunch. This runs your tests as you work, which can be very useful if you practice Test-Driven Development (TDD) and write your tests before your implementations. You can read more about NCrunch and see the pricing at ncrunch.net, or there's a similar open source project at continuoustests.com.

One way of enforcing testing is to use gated check-ins in TFS, but this can be a little draconian, and if you use an SCM like git, then it's easier to work on branches and simply block merges until all of the tests pass. You want to encourage developers to check in early and often, because this makes merges easier. Therefore, it's a bad idea to have features in progress sitting on workstations for a long time (generally no longer than a day).

Continuous integration

CI systems automatically build and test all of your branches, and they feed this information back to your version control system. For example, using the GitHub API, you can block the merging of pull requests until the build server has reported success for the merge result. Both Bitbucket and GitLab offer free CI systems called pipelines, so you may not need any extra systems in addition to the one you use for source control, because everything is in one place. GitLab also offers an integrated Docker container registry, and there is an open source version that you can install locally. Docker is well supported by .NET Core and the new version of Visual Studio. You can do something similar with Visual Studio Team Services for CI builds and unit testing. Visual Studio also has git services built into it.

This process works well for unit testing because unit tests must be quick, so that you get feedback early. Shortening the iteration cycle is a good way of increasing productivity, and you'll want the lag to be as small as possible. However, running tests on each build isn't suitable for all types of testing, because not all tests can be quick. In this case, you'll need an additional strategy so as not to slow down your feedback loop.

There are many unit testing frameworks available for .NET, for example NUnit, xUnit, and MSTest (Microsoft's unit test framework), along with multiple graphical ways of running tests locally, such as the Visual Studio Test Explorer and the ReSharper plugin.
People have their favorites, but it doesn't really matter what you choose, because most CI systems will support all of them.

Slow testing

Some tests are slow, but even if each test is fast, they can easily add up to a lengthy run time if you have a lot of them. This is especially true if they can't be parallelized and need to be run in sequence. Therefore, you should always aim to have each test stand on its own, without any dependencies on others. It's good practice to divide your tests into rings of importance so that you can at least run a subset of the most crucial ones on every CI build. However, if you have a large test suite or some tests that are unavoidably slow, then you may choose to only run these once a day (perhaps overnight) or every week (maybe over the weekend).

Some testing is simply slow by nature, and performance testing can often fall into this category, for example, load testing or User Interface (UI) testing. These are usually classed as integration testing, rather than unit testing, because they require your code to be deployed to an environment for testing, and the tests can't simply exercise the binaries. To make use of such automated testing, you will need an automated deployment system in addition to your CI system. If you have enough confidence in your test system, then you can even have live deployments happen automatically. This works well if you also use feature switching to control the rollout of new features.

Realistic environments

Using a test environment that is as close to production (or as live-like) as possible is a good step toward ensuring reliable results. You can try to use a smaller set of servers and then scale your results up to get an estimate of live performance, but this assumes that you have an intimate knowledge of how your application scales and what hardware constraints will be the bottlenecks.

A better option is to use your live environment, or rather what will become your production stack. You first create a staging environment that is identical to live, then you deploy your code to it and run your full test suite, including a comprehensive performance test, ensuring that it behaves correctly. Once you are happy, you simply swap staging and production, perhaps using DNS or Azure staging slots. Your old live environment now either becomes your test environment, or, if you use immutable cloud instances, you can simply terminate it and spin up a new staging system. This concept is known as blue-green deployment. You don't necessarily have to move all users across at once in a big bang. You can move a few over first to test whether everything is correct.

Web UI testing tools

One of the most popular web testing tools is Selenium, which allows you to easily write tests and automate web browsers using WebDriver. Selenium is useful for many other tasks apart from testing, and you can read more about it at docs.seleniumhq.org. WebDriver is a protocol for remotely controlling web browsers, and you can read about it at w3c.github.io/webdriver/webdriver-spec.html.

Selenium uses real browsers, the same versions your users will access your web application with. This makes it excellent for getting representative results, but it can cause issues if it runs from the command line in an unattended fashion. For example, you may find your test server's memory full of dead browser processes that have timed out.
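To give a flavor of what driving a real browser through WebDriver looks like from Node.js, here is a minimal sketch using the selenium-webdriver npm package (not code from this article; the URL, expected title, and browser choice are placeholders, and a browser driver such as geckodriver is assumed to be installed):

    // ui-check.js - a rough WebDriver sketch, illustrative only.
    const {Builder, until} = require('selenium-webdriver');

    const driver = new Builder().forBrowser('firefox').build();

    driver.get('http://localhost:5000/')                    // placeholder URL
      .then(() => driver.wait(until.titleContains('Home'), 5000))
      .then(() => driver.getTitle())
      .then(title => console.log('Loaded page with title:', title))
      .then(() => driver.quit())
      .catch(err => {
        console.error('UI check failed:', err);
        process.exitCode = 1;                               // let the CI build fail
        return driver.quit();
      });

In a CI pipeline, a non-zero exit code from a script like this is enough to fail the build and block a merge.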
You may find it easier to use a dedicated headless test browser, which, while not exactly the same as what your users will see, is more suitable for automation. The best approach is of course to use a combination of both, perhaps running headless tests first and then running the same tests on real browsers with WebDriver.

One of the most well-known headless test browsers is PhantomJS. This is based on the WebKit engine, so it should give similar results to Chrome and Safari. PhantomJS is useful for many things apart from testing, such as capturing screenshots, and many different testing frameworks can drive it. As the name suggests, JavaScript can control PhantomJS, and you can read more about it at phantomjs.org.

WebKit is an open source engine for web browsers, which was originally part of the KDE Linux desktop environment. It is mainly used in Apple's Safari browser, but a fork called Blink is used in Google Chrome, Chromium, and Opera. You can read more at webkit.org.

Other automatable testing browsers based on different engines are available, but they have some limitations. For example, SlimerJS (slimerjs.org) is based on the Gecko engine used by Firefox, but it is not fully headless.

You probably want to use a higher-level testing utility rather than scripting browser engines directly. One such utility that provides many useful abstractions is CasperJS (casperjs.org), which supports running on both PhantomJS and SlimerJS. Another library is Capybara, which allows you to easily simulate user interactions in Ruby. It supports Selenium, WebKit, Rack, and PhantomJS (via Poltergeist), although it's more suitable for Rails apps. You can read more at jnicklas.github.io/capybara.

There is also TrifleJS (triflejs.org), which uses the .NET WebBrowser class (the Internet Explorer Trident engine), but this is a work in progress. Additionally, there's Watir (watir.com), which is a set of Ruby libraries that target Internet Explorer and WebDriver. However, neither has been updated in a while, and IE has changed a lot recently. Microsoft Edge (codenamed Spartan) is the new version of IE, and the Trident engine has been forked to EdgeHTML. The JavaScript engine (Chakra) has been open sourced as ChakraCore (github.com/Microsoft/ChakraCore).

It shouldn't matter too much what browser engine you use, and PhantomJS will work fine as a first pass for automated tests. You can always test with real browsers after using a headless one, perhaps with Selenium or with PhantomJS using WebDriver. When we refer to browser engines (WebKit/Blink, Gecko, and Trident/EdgeHTML), we generally mean only the rendering and layout engine, not the JavaScript engine (SFX/Nitro/FTL/B3, V8, SpiderMonkey, and Chakra/ChakraCore).

You'll probably still want to use a utility such as CasperJS to make writing tests easier, and you'll likely need a test framework, such as Jasmine (jasmine.github.io) or QUnit (qunitjs.com), too. You can also use a test runner that supports both Jasmine and QUnit, such as Chutzpah (mmanela.github.io/chutzpah). You can integrate your automated tests with many different CI systems, for example, Jenkins or JetBrains TeamCity. If you prefer a cloud-hosted option, then there's Travis CI (travis-ci.org) and AppVeyor (appveyor.com), which is also suitable for building .NET apps. You may prefer to run your integration and UI tests from your deployment system, for example, to verify a successful deployment in Octopus Deploy.
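As a rough illustration of the headless approach (again, not from the book), a CasperJS test driving PhantomJS might look like the following; the URL, expected title, and test name are placeholders:

    // smoke.test.js - run with: casperjs test smoke.test.js
    // Assumes PhantomJS and CasperJS are installed; assertions are illustrative.
    casper.test.begin('Home page smoke test', 2, function suite(test) {
      casper.start('http://localhost:5000/', function() {
        test.assertHttpStatus(200, 'home page returns 200');
        test.assertTitle('Home Page - My ASP.NET Application', 'title matches');
      });

      casper.run(function() {
        test.done();
      });
    });

Running casperjs test prints a pass/fail summary and sets the process exit code, which a CI system can pick up in the same way as any other test runner.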
There are also dedicated, cloud-based web application UI testing services available, such as BrowserStack (browserstack.com).

Automating UI performance tests

Automated UI tests are clearly great for checking functional regressions, but they are also useful for testing performance. You have programmatic access to the same information provided by the network inspector in the browser developer tools. You can integrate the YSlow (yslow.org) performance analyzer with PhantomJS, enabling your CI system to check for common web performance mistakes on every commit. YSlow came out of Yahoo!, and it provides rules used to identify bad practices that can slow down web applications for users. It's a similar idea to Google's PageSpeed Insights service (which can be automated via its API). However, YSlow is pretty old, and things have moved on in web development recently, for example, HTTP/2. A modern alternative is "the coach" from sitespeed.io, and you can read more at github.com/sitespeedio/coach. You should check out their other open source tools too, such as the dashboard at dashboard.sitespeed.io, which uses Graphite and Grafana.

You can also export the network results (in the industry standard HAR format) and analyze them however you like, for example, visualizing them graphically in waterfall format, as you might do manually with your browser developer tools. The HTTP Archive (HAR) format is a standard way of representing the content of monitored network data so that it can be exported to other software. You can copy or save as HAR in some browser developer tools by right-clicking on a network request.

DevOps

When using automation and techniques such as feature switching, it is essential to have a good view of your environments so that you know the utilization of all the hardware. Good tooling is important to perform this monitoring, and you want to be able to easily see the vital statistics of every server. This will consist of at least the CPU, memory, and disk space consumption, but it may include more, and you will want alarms set up to alert you if any of these stray outside allowed bands.

The practice of DevOps is the culmination of all of the automation that we covered previously, with development, operations, and quality assurance testing teams all collaborating. The only missing pieces left now are provisioning and configuring infrastructure and then monitoring it while in use. Although DevOps is a culture, there is plenty of tooling that can help.

DevOps tooling

One of the primary themes of DevOps tooling is defining infrastructure as code. The idea is that you shouldn't manually perform a task, such as setting up a server, when you can create software to do it for you. You can then reuse these provisioning scripts, which will not only save you time but will also ensure that all of the machines are consistent and free of mistakes or missed steps.

Provisioning

There are many systems available to commission and configure new machines. Some popular configuration management automation tools are Ansible (ansible.com), Chef (chef.io), and Puppet (puppet.com). Not all of these tools work great on Windows servers, partly because Linux is easier to automate. However, you can run ASP.NET Core on Linux and still develop on Windows using Visual Studio, while testing in a VM. Developing for a VM is a great idea because it solves the problems of setting up environments and the issues where something "works on my machine" but not in production. Vagrant (vagrantup.com) is a great command line tool to manage developer VMs.
It allows you to easily create, spin up, and share developer environments. The successor to Vagrant, Otto (ottoproject.io), takes this a step further and abstracts deployment too. Therefore, you can push to multiple cloud providers without worrying about the intricacies of CloudFormation, OpsWorks, or anything else. If you create your infrastructure as code, then your scripts can be versioned and tested, just like your application code. We'll stop before we get too far off-topic, but the point is that if you have reliable environments, which you can easily verify, instantiate, and perform testing on, then CI is a lot easier.

Monitoring

Monitoring is essential, especially for web applications, and there are many tools available to help with it. A popular open source infrastructure monitoring system is Nagios (nagios.org). Another, more modern, open source alerting and metrics tool is Prometheus (prometheus.io). If you use a cloud platform, then there will be monitoring built in, for example AWS CloudWatch or Azure Diagnostics. There are also cloud services to directly monitor your website, such as Pingdom (pingdom.com), UptimeRobot (uptimerobot.com), Datadog (datadoghq.com), and PagerDuty (pagerduty.com).

You probably already have a system in place to measure availability, but you can also use the same systems to monitor performance. This is not only helpful to ensure a responsive user experience, but it can also provide early warning signs that a failure is imminent. If you are proactive and take preventative action, then you can save yourself a lot of trouble reactively fighting fires.

It helps to consider application support requirements at design time. Development, testing, and operations aren't competing disciplines, and you will succeed more often if you work as one team rather than simply throwing an application over the fence and saying it "worked in test, ops problem now".

Summary

In this article, we saw how we can integrate automated testing into a CI system in order to monitor for performance regressions. We also learned some strategies to roll out changes and ensure that tests accurately reflect real life. We also briefly covered some options for DevOps practices and cloud-hosting providers, which together make continuous performance testing much easier.

Resources for Article:

Further resources on this subject:
- Designing your very own ASP.NET MVC Application [article]
- Creating a NHibernate session to access database within ASP.NET [article]
- Working With ASP.NET DataList Control [article]


Webhooks in Slack

Packt
01 Jun 2016
11 min read
In this article by Paul Asjes, the author of the book Building Slack Bots, we'll have a look at webhooks in Slack.

(For more resources related to this topic, see here.)

Slack is a great way of communicating in your work environment: it's easy to use, intuitive, and highly extensible. Did you know that you can make Slack do even more for you and your team by developing your own bots? This article will teach you how to implement incoming and outgoing webhooks for Slack, supercharging your Slack team to even greater levels of productivity. The programming language we'll use here is JavaScript; however, webhooks can be programmed with any language capable of making HTTP requests.

Webhooks

First, let's talk basics: a webhook is a way of altering or augmenting a web application through HTTP methods. Webhooks allow us to post messages to and from Slack using regular HTTP requests with a JSON payload. What makes a webhook a bot is its ability to post messages to Slack as if it were a bot user. These webhooks can be divided into incoming and outgoing webhooks, each with their own purposes and uses.

Incoming webhooks

An example of an incoming webhook is a service that relays information from an external source to a Slack channel without being explicitly requested, such as the GitHub Slack integration:

[Figure: The GitHub integration posts messages about repositories we are interested in]

In the preceding screenshot, we see how a message was sent to Slack after a new branch was made on a repository this team was watching. This data wasn't explicitly requested by a team member but was automatically sent to the channel as a result of the incoming webhook. Other popular examples include the Jenkins integration, where infrastructure changes can be monitored in Slack (for example, if a server watched by Jenkins goes down, a warning message can be posted immediately to a relevant Slack channel).

Let's start by setting up an incoming webhook that sends a simple "Hello world" message:

1. First, navigate to the Custom Integration Slack team page, as shown in the following screenshot (https://my.slack.com/apps/build/custom-integration):

[Figure: The various flavors of custom integrations]

2. Select Incoming WebHooks from the list and then select which channel you'd like your webhook app to post messages to:

[Figure: Webhook apps will post to a channel of your choosing]

3. Once you've clicked on the Add Incoming WebHooks integration button, you will be presented with this options page, which allows you to customize your integration a little further:

[Figure: Names, descriptions, and icons can be set from this menu]

4. Set a customized icon for your integration (for this example, the wave emoji was used) and copy down the webhook URL, which has the following format:

    https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX

This generated URL is unique to your team, meaning that any JSON payloads sent via this URL will only appear in your team's Slack channels.

Now, let's throw together a quick test of our incoming webhook in Node. Start a new Node project (remember: you can use npm init to create your package.json file) and install the superagent AJAX library by running the following command in your terminal:

    npm install superagent --save

Create a file named index.js and paste the following JavaScript code within it:

    const WEBHOOK_URL = [YOUR_WEBHOOK_URL];
    const request = require('superagent');

    request
      .post(WEBHOOK_URL)
      .send({
        text: 'Hello! I am an incoming Webhook bot!'
      })
      .end((err, res) => {
        console.log(res);
      });

Remember to replace [YOUR_WEBHOOK_URL] with your newly generated URL, and then run the program by executing the following command:

    nodemon index.js

Two things should happen now: firstly, a long response should be logged in your terminal, and secondly, you should see a message like the following in the Slack client:

[Figure: The incoming webhook equivalent of "hello world"]

The res object we logged in our terminal is the response from the AJAX request. Taking the form of a large JavaScript object, it displays information about the HTTP POST request we made to our webhook URL. Looking at the message received in the Slack client, notice how the name and icon are the same ones we set in our integration setup on the team admin site. Remember that the default icon, name, and channel are used if none are provided, so let's see what happens when we change that. Replace your request AJAX call in index.js with the following:

    request
      .post(WEBHOOK_URL)
      .send({
        username: "Incoming bot",
        channel: "#general",
        icon_emoji: ":+1:",
        text: 'Hello! I am different from the previous bot!'
      })
      .end((err, res) => {
        console.log(res);
      });

Save the file, and nodemon will automatically restart the program. Switch over to the Slack client and you should see a message like the following pop up in your #general channel:

[Figure: New name, icon, and message]

In place of icon_emoji, you could also use icon_url to link to a specific image of your choosing. If you wish your message to be sent to only one user, you can supply a username as the value for the channel property:

    channel: "@paul"

This will cause the message to be sent from within the Slackbot direct message. The message's icon and username will match either what you configured in the setup or what you set in the body of the POST request.

Finally, let's look at sending links in our integration. Replace the text property with the following and save index.js:

    text: 'Hello! Here is a fun link: <http://www.github.com|Github is great!>'

Slack will automatically parse any links it finds, whether they're in the http://www.example.com or www.example.com format. By enclosing the URL in angled brackets and using the | character, we can specify what we would like the URL to be shown as:

[Figure: Formatted links are easier to read than long URLs]

For more information on message formatting, visit https://api.slack.com/docs/formatting.

Note that as this is a custom webhook integration, we can change the name, icon, and channel of the integration. If we were to package the integration as a Slack app (an app installable by other teams), then it would not be possible to override the default channel, username, and icon we set.

Incoming webhooks are triggered by external sources; an example would be when a new user signs up to your service or a product is sold. The goal of the incoming webhook is to provide information to your team that is easy to reach and comprehend. The opposite of this would be if you wanted users to get data out of Slack, which can be done via the medium of outgoing webhooks.

Outgoing webhooks

Outgoing webhooks differ from the incoming variety in that they send data out of Slack to a service of your choosing, which in turn can respond with a message to the Slack channel. To set up an outgoing webhook, visit the custom integration page of your Slack team's admin page again (https://my.slack.com/apps/build/custom-integration), and this time select the Outgoing WebHooks option. On the next screen, be sure to select a channel, name, and icon.
Notice how there is a target URL field to be filled in; we will fill this out shortly. When an outgoing webhook is triggered in Slack, an HTTP POST request is made to the URL (or URLs, as you can specify multiple ones) you provide. So first, we need to build a server that can accept our webhook. In index.js, paste the following code:

    'use strict';

    const http = require('http');

    // create a simple server with node's built in http module
    http.createServer((req, res) => {
      res.writeHead(200, {'Content-Type': 'text/plain'});

      // get the data embedded in the POST request
      req.on('data', (chunk) => {
        // chunk is a buffer, so first convert it to
        // a string and split it to make it more legible as an array
        console.log('Body:', chunk.toString().split('&'));
      });

      // create a response
      let response = JSON.stringify({
        text: 'Outgoing webhook received!'
      });

      // send the response to Slack as a message
      res.end(response);
    }).listen(8080, '0.0.0.0');

    console.log('Server running at http://0.0.0.0:8080/');

Notice how we require the http module despite not installing it with NPM. This is because the http module is a core Node dependency and is automatically included with your installation of Node.

In this block of code, we start a simple server on port 8080 and listen for incoming requests. In this example, we set our server to run at 0.0.0.0 rather than localhost. This is important, as Slack is sending a request to our server, so it needs to be accessible from the Internet. Setting the IP of your server to 0.0.0.0 tells Node to use your computer's network-assigned IP address. Therefore, if you set the IP of your server to 0.0.0.0, Slack can reach your server by hitting your IP on port 8080 (for example, http://123.456.78.90:8080). If you are having trouble with Slack reaching your server, it is most likely because you are behind a router or firewall. To circumvent this issue, you can use a service such as ngrok (https://ngrok.com/). Alternatively, look at the port forwarding settings for your router or firewall.

Let's update our outgoing webhook settings accordingly:

[Figure: The outgoing webhook settings, with a destination URL]

Save your settings and run your Node app; test whether the outgoing webhook works by typing a message into the channel you specified in the webhook's settings. You should then see something like this in Slack:

[Figure: We built a spam bot]

Well, the good news is that our server is receiving requests and returning a message to send to Slack each time. The issue here is that we skipped over the Trigger Word(s) field in the webhook settings page. Without a trigger word, any message sent to the specified channel will trigger the outgoing webhook. This causes our webhook to be triggered by the message sent by the outgoing webhook in the first place, creating an infinite loop. To fix this, we could do one of two things:

- Refrain from returning a message to the channel when listening to all of the channel's messages.
- Specify one or more trigger words to ensure we don't spam the channel.

Returning a message is optional yet encouraged to ensure a better user experience. Even a confirmation message such as "Message received!" is better than no message, as it confirms to the user that their message was received and is being processed. Let's therefore presume we prefer the second option, and add a trigger word:

[Figure: Trigger words keep our webhooks organized]

Let's try that again, this time sending a message with the trigger word at the beginning of the message.
Restart your Node app and send a new message:

[Figure: Our outgoing webhook app now functions a lot like our bots from earlier]

Great, now switch over to your terminal and see what that message logged:

    Body: [ 'token=KJcfN8xakBegb5RReelRKJng',
      'team_id=T000001',
      'team_domain=buildingbots',
      'service_id=34210109492',
      'channel_id=C0J4E5SG6',
      'channel_name=bot-test',
      'timestamp=1460684994.000598',
      'user_id=U0HKKH1TR',
      'user_name=paul',
      'text=webhook+hi+bot%21',
      'trigger_word=webhook' ]

This array contains the body of the HTTP POST request sent by Slack; in it, we have some useful data, such as the user's name, the message sent, and the team ID. We can use this data to customize the response or to perform some validation to make sure the user is authorized to use this webhook.

In our response, we simply sent back a "Message received" string; however, as with incoming webhooks, we can set our own username and icon. The channel cannot be different from the channel specified in the webhook's settings, however. The same restrictions apply when the webhook is not a custom integration. This means that if the webhook was installed as a Slack app for another team, it can only post messages with the username and icon specified in the setup screen.

An important thing to note is that webhooks, either incoming or outgoing, can only be set up in public channels. This is predominantly to discourage abuse and uphold privacy, as we've seen that it's simple to set up a webhook that can record all the activity on a channel.

Summary

In this article, you learned what webhooks are and how you can use them to get data in and out of Slack. You learned how to send messages as a bot user and how to interact with your users in the native Slack client.

Resources for Article:

Further resources on this subject:
- Keystone – OpenStack Identity Service [article]
- A Sample LEMP Stack [article]
- Implementing Stacks using JavaScript [article]


Programming Raspberry-Pi Robots with JavaScript

Anna Gerber
16 May 2016
6 min read
The Raspberry Pi Foundation recently announced a smaller, cheaper single-board computer: the Raspberry Pi Zero. Priced at $5 and measuring about half the size of the Model A+, the new Pi Zero is ideal for embedded applications and robotics projects. Johnny-Five supports programming Raspberry Pi-based robots via a Firmata-compatible interface, implemented by the raspi-io IO Plugin for Node.js. This post steps you through building a robot with the Raspberry Pi and Johnny-Five.

What you'll need

- Raspberry Pi (for example, B+, 2, or Zero)
- Robot chassis. We're using a laser-cut acrylic "Smart Robot Car" kit that includes two DC motors with wheels and a castor. You can find these on eBay for around $10.
- 5V power supply (USB battery packs used for charging mobile phones are ideal)
- 4 x AA battery holder for the motor
- Texas Instruments L293NE Motor Driver IC
- Solderless breadboard and jumper wires
- USB keyboard and mouse
- Monitor or TV with HDMI cable
- USB Wi-Fi adapter
- For Pi Zero only: mini HDMI to HDMI adaptor or cable, USB on-the-go connector, and powered USB hub

[Figure: A laser cut robot chassis]

Attach peripherals

If you are using a Raspberry Pi B+ or 2, you can attach a monitor or TV screen via HDMI, and plug in a USB keyboard, a USB Wi-Fi adapter, and a mouse directly. The Raspberry Pi Zero doesn't have as many ports as the older Raspberry Pi models, so you'll need to use a USB on-the-go cable and a powered USB hub to attach the peripherals. You'll also need a micro-HDMI-to-HDMI cable (or micro-HDMI-to-HDMI adapter) for the monitor. The motors for the robot wheels will be connected via the GPIO pins, but first we'll install the operating system.

Prepare the micro SD card

Raspberry Pi runs the Linux operating system, which you can install on an 8 GB or larger micro SD card:

1. Use a USB adapter or built-in SD card reader to format your micro SD card using SD Formatter for Windows or Mac.
2. Download the "New Out Of the Box Software" install manager (NOOBS), unzip it, and copy the extracted files to your micro SD card.
3. Remove the micro SD card from your PC and insert it into the Raspberry Pi.

Power

The Raspberry Pi requires 5V power supplied via the micro-USB power port. If the power supplied drops suddenly, the Pi may restart, which can lead to corruption of the micro SD card. Use a 5V power bank or an external USB power adaptor to ensure that there will be an uninterrupted supply. When we plug in the motors, we'll use separate batteries so that they don't draw power from the board, which could potentially damage the Raspberry Pi.

Install Raspbian OS

Power up the Raspberry Pi and follow the on-screen prompts to install Raspbian Linux. This process takes about half an hour, and the Raspberry Pi will reboot after the OS has finished installing. The latest version of Raspbian should log you in automatically and launch the graphical UI by default. If not, sign in using the username pi and password raspberry, then type startx at the command prompt to start the X windows graphical UI.

Set up networking

The Raspberry Pi will need to be online to install the Johnny-Five framework. Connect the Wi-Fi adapter, select your access point from the network menu at the top right of the graphical UI, and then enter your network password and connect. We'll be running the Raspberry Pi headless (without a screen) for the robot, so if you want to be able to connect to your Raspberry Pi desktop later, now would be a good time to enable remote access via VNC.
Make sure you have the latest version of the installed packages by running the following commands from the terminal:

    sudo apt-get update
    sudo apt-get upgrade

Install Node.js and Johnny-Five

Raspbian comes with a legacy version of Node.js installed, but we'll need a more recent version. Launch a terminal to remove the legacy version, then download and update to the latest by running the following commands:

    sudo apt-get remove nodejs-legacy
    cd ~
    wget http://node-arm.herokuapp.com/node_latest_armhf.deb
    sudo dpkg -i node_latest_armhf.deb

If npm is not installed, you can install it with sudo apt-get install npm.

Create a folder for your code and install the johnny-five framework and the raspi-io IO Plugin from npm:

    mkdir ~/myrobot
    cd myrobot
    npm install johnny-five
    npm install raspi-io

Make the robot move

A motor converts electricity into movement. You can control the speed by changing the voltage supplied, and control the direction by switching the polarity of the voltage. Connect the motors as shown, with an H-bridge circuit. Pins 32 and 35 support PWM, so we'll use these to control the motor speed. We can use any of the digital IO pins to control the direction for each motor, in this case pins 13 and 15. See the Raspi-io Pin Information for more details on pins.

Use a text editor (for example, nano myrobot.js) to create the JavaScript program:

    var raspi = require('raspi-io');
    var five = require('johnny-five');

    var board = new five.Board({
      io: new raspi()
    });

    board.on('ready', function() {
      var leftMotor = new five.Motor({
        pins: {pwm: "P1-35", dir: "P1-13"},
        invertPWM: true
      });
      var rightMotor = new five.Motor({
        pins: {pwm: "P1-32", dir: "P1-15"},
        invertPWM: true
      });
      board.repl.inject({
        l: leftMotor,
        r: rightMotor
      });
      leftMotor.forward(150);
      rightMotor.forward(150);
    });

Accessing GPIO requires root permissions, so run the program using sudo: sudo node myrobot.js.

Use differential drive to propel the robot by controlling the motors on either side of the chassis independently. Experiment with driving each wheel using the Motor API functions (stop, start, forward, and reverse, providing different speed parameters) via the REPL. If both motors have the same speed and direction, the robot will move in a straight line. You can turn the robot by moving the wheels at different rates.

Go wireless

Now you can unplug the screen, keyboard, and mouse from the Raspberry Pi. You can attach it, the batteries, and the breadboard to the chassis using double-sided tape. Power the Raspberry Pi using the 5V power pack. Connect to your Raspberry Pi via ssh or VNC over Wi-Fi to run or modify the program. Eventually, you might want to add sensors and program some line-following or obstacle-avoidance behavior to make the robot autonomous. The raspi-io plugin supports 3.3V digital and I2C sensors.

About the author

Anna Gerber is a full-stack developer with 15 years of experience in the university sector. She was a technical project manager at The University of Queensland (ITEE eResearch). She specializes in digital humanities and is a research scientist at the Distributed System Technology Centre (DSTC). Anna is a JavaScript robotics enthusiast and maker who enjoys tinkering with soft circuits and 3D printers.


Playing Tic-Tac-Toe against an AI

Packt
11 May 2016
30 min read
In this article by Ivo Gabe de Wolff, author of the book TypeScript Blueprints, we will build a game in which the computer plays well. The game is called Tic-Tac-Toe. It is played by two players on a grid, usually three by three. The players try to place their symbols three in a row (horizontal, vertical, or diagonal). The first player places crosses, the second player places circles. If the board is full and no one has three symbols in a row, it is a draw.

(For more resources related to this topic, see here.)

The game is usually played on a three by three grid and the target is to have three symbols in a row. To make the application more interesting, we will make the dimensions and the row length variable. We will not create a graphical interface for this application. We will only build the game mechanics and the artificial intelligence (AI). An AI is a player controlled by the computer. If implemented correctly, the computer should never lose on a standard three by three grid. When the computer plays against the computer, the game will result in a draw. We will also write various unit tests for the application.

We will build the game as a command line application. That means you can play the game in a terminal, and you can interact with the game only through text input:

    It's player one's turn! Choose one out of these options:

    1
    X|X|
    -+-+-
     |O|
    -+-+-
     | |

    2
    X| |X
    -+-+-
     |O|
    -+-+-
     | |

    3
    X| |
    -+-+-
    X|O|
    -+-+-
     | |

    4
    X| |
    -+-+-
     |O|X
    -+-+-
     | |

    5
    X| |
    -+-+-
     |O|
    -+-+-
    X| |

    6
    X| |
    -+-+-
     |O|
    -+-+-
     |X|

    7
    X| |
    -+-+-
     |O|
    -+-+-
     | |X

Creating the project structure

We will locate the source files in lib and the tests in lib/test. We use gulp to compile the project and AVA to run tests. We can install the dependencies of our project with NPM:

    npm init -y
    npm install ava gulp gulp-typescript --save-dev

In gulpfile.js, we configure gulp to compile our TypeScript files:

    var gulp = require("gulp");
    var ts = require("gulp-typescript");

    var tsProject = ts.createProject("./lib/tsconfig.json");

    gulp.task("default", function() {
      return tsProject.src()
        .pipe(ts(tsProject))
        .pipe(gulp.dest("dist"));
    });

Configure TypeScript

We can download type definitions for NodeJS with NPM:

    npm install @types/node --save-dev

We must exclude browser files in TypeScript. In lib/tsconfig.json, we add the configuration for TypeScript:

    {
      "compilerOptions": {
        "target": "es6",
        "module": "commonjs"
      }
    }

For applications that run in the browser, you will probably want to target ES5, since ES6 is not supported in all browsers. However, this application will only be executed in NodeJS, so we do not have such limitations. You have to use NodeJS 6 or later for ES6 support.

Adding utility functions

Since we will work a lot with arrays, we can use some utility functions. First, we create a function that flattens a two-dimensional array into a one-dimensional array:

    export function flatten<U>(array: U[][]) {
      return (<U[]>[]).concat(...array);
    }

Next, we create a function that replaces a single element of an array with a specified value. We will use functional programming in this article, so we must use immutable data structures. We can use map for this, since this function provides both the element and the index to the callback. With this index, we can determine whether the element should be replaced.

    export function arrayModify<U>(array: U[], index: number, newValue: U) {
      return array.map((oldValue, currentIndex) => currentIndex === index ?
        newValue : oldValue);
    }

We also create a function that returns a random integer under a certain upper bound:

    export function randomInt(max: number) {
      return Math.floor(Math.random() * max);
    }

We will use these functions in the next sections.

Creating the models

In lib/model.ts, we will create the model for our game. The model should contain the game state. We start with the player. The game is played by two players. Each field of the grid contains the symbol of a player or no symbol. We will model the grid as a two-dimensional array, where each field can contain a player.

    export type Grid = Player[][];

A player is either Player1, Player2, or no player.

    export enum Player {
      Player1 = 1,
      Player2 = -1,
      None = 0
    }

We have given these members values so we can easily get the opponent of a player.

    export function getOpponent(player: Player): Player {
      return -player;
    }

We create a type to represent an index of the grid. Since the grid is two-dimensional, such an index requires two values.

    export type Index = [number, number];

We can use this type to create two functions that get or update one field of the grid. We use functional programming in this article, so we will not modify the grid. Instead, we return a new grid with one field changed.

    export function get(grid: Grid, [rowIndex, columnIndex]: Index) {
      const row = grid[rowIndex];
      if (!row) return undefined;
      return row[columnIndex];
    }
    export function set(grid: Grid, [row, column]: Index, value: Player) {
      return arrayModify(grid, row,
        arrayModify(grid[row], column, value)
      );
    }

Showing the grid

To show the game to the user, we must convert a grid to a string. First, we will create a function that converts a player to a string, then a function that uses the previous function to show a row, and finally a function that uses these functions to show the complete grid. The string representation of a grid should have lines between the fields. We create these lines with standard characters (+, -, and |). This gives the following result:

    X|X|O
    -+-+-
     |O|
    -+-+-
    X| |

To convert a player to a string, we must get his symbol. For Player1, that is a cross, and for Player2, a circle. If a field of the grid contains no symbol, we return a space to keep the grid aligned.

    function showPlayer(player: Player) {
      switch (player) {
        case Player.Player1:
          return "X";
        case Player.Player2:
          return "O";
        default:
          return " ";
      }
    }

We can use this function to show the tokens of all fields in a row. We add a separator between these fields.

    function showRow(row: Player[]) {
      return row.map(showPlayer).reduce((previous, current) => previous + "|" + current);
    }

Since we must do the same later on, but with a different separator, we create a small helper function that does this concatenation based on a separator.

    const concat = (separator: string) => (left: string, right: string) => left + separator + right;

This function requires the separator and returns a function that can be passed to reduce. We can now use this function in showRow.

    function showRow(row: Player[]) {
      return row.map(showPlayer).reduce(concat("|"));
    }

We can also use this helper function to show the entire grid. First we must compose the separator, which is almost the same as showing a single row. Next, we can show the grid with this separator.

    export function showGrid(grid: Grid) {
      const separator = "\n" + grid[0].map(() => "-").reduce(concat("+")) + "\n";
      return grid.map(showRow).reduce(concat(separator));
    }

Creating operations on the grid

We will now create some functions that do operations on the grid.
These functions check whether the board is full, whether someone has won, and what options a player has. We can check whether the board is full by looking at all fields. If no field exists that has no symbol on it, the board is full, as every field has a symbol. export function isFull(grid: Grid) { for (const row of grid) { for (const field of row) { if (field === Player.None) return false; } } return true; } To check whether a user has won, we must get a list of all horizontal, vertical and diagonal rows. For each row, we can check whether a row consists of a certain amount of the same symbols on a row. We store the grid as an array of the horizontal rows, so we can easily get those rows. We can also get the vertical rows relatively easily. function allRows(grid: Grid) { return [ ...grid, ...grid[0].map((field, index) => getVertical(index)), ... ];   function getVertical(index: number) { return grid.map(row => row[index]); } } Getting a diagonal row requires some more work. We create a helper function that will walk on the grid from a start point, in a certain direction. We distinguish two different kinds of diagonals: a diagonal that goes to the lower-right and a diagonal that goes to the lower-left. For a standard three by three game, only two diagonals exist. However, a larger grid may have more diagonals. If the grid is 5 by 5, and the users should get three in a row, ten diagonals with a length of at least three exist: 0, 0 to 4, 40, 1 to 3, 40, 2 to 2, 41, 0 to 4, 32, 0 to 4, 24, 0 to 0, 43, 0 to 0, 32, 0 to 0, 24, 1 to 1, 44, 2 to 2, 4 The diagonals that go toward the lower-right, start at the first column or at the first horizontal row. Other diagonals start at the last column or at the first horizontal row. In this function, we will just return all diagonals, even if they only have one element, since that is easy to implement. We implement this with a function that walks the grid to find the diagonal. That function requires a start position and a step function. The step function increments the position for a specific direction. function allRows(grid: Grid) { return [ ...grid, ...grid[0].map((field, index) => getVertical(index)), ...grid.map((row, index) => getDiagonal([index, 0], stepDownRight)), ...grid[0].slice(1).map((field, index) => getDiagonal([0, index + 1], stepDownRight)), ...grid.map((row, index) => getDiagonal([index, grid[0].length - 1], stepDownLeft)), ...grid[0].slice(1).map((field, index) => getDiagonal([0, index], stepDownLeft)) ];   function getVertical(index: number) { return grid.map(row => row[index]); }   function getDiagonal(start: Index, step: (index: Index) => Index) { const row: Player[] = []; for (let index = start; get(grid, index) !== undefined; index = step(index)) { row.push(get(grid, index)); } return row; } function stepDownRight([i, j]: Index): Index { return [i + 1, j + 1]; } function stepDownLeft([i, j]: Index): Index { return [i + 1, j - 1]; } function stepUpRight([i, j]: Index): Index { return [i - 1, j + 1]; } } To check whether a row has a certain amount of the same elements on a row, we will create a function with some nice looking functional programming. The function requires the array, the player, and the index at which the checking starts. That index will usually be zero, but during recursion we can set it to a different value. originalLength contains the original length that a sequence should have. The last parameter, length, will have the same value in most cases, but in recursion we will change the value. 
We start with some base cases. Every row contains a sequence of zero symbols, so we can always return true in such a case. function isWinningRow(row: Player[], player: Player, index: number, originalLength: number, length: number): boolean { if (length === 0) { return true; } If the row does not contain enough elements to form a sequence, the row will not have such a sequence and we can return false. if (index + length > row.length) { return false; } For other cases, we use recursion. If the current element contains a symbol of the provided player, this row forms a sequence if the next length—1 fields contain the same symbol. if (row[index] === player) { return isWinningRow(row, player, index + 1, originalLength, length - 1); } Otherwise, the row should contain a sequence of the original length in some other position. return isWinningRow(row, player, index + 1, originalLength, originalLength); } If the grid is large enough, a row could contain a long enough sequence after a sequence that was too short. For instance, XXOXXX contains a sequence of length three. This function handles these rows correctly with the parameters originalLength and length. Finally, we must create a function that returns all possible sets that a player can do. To implement this function, we must first find all indices. We filter these indices to indices that reference an empty field. For each of these indices, we change the value of the grid into the specified player. This results in a list of options for the player. export function getOptions(grid: Grid, player: Player) { const rowIndices = grid.map((row, index) => index); const columnIndices = grid[0].map((column, index) => index);   const allFields = flatten(rowIndices.map( row => columnIndices.map(column =><Index> [row, column]) ));   return allFields .filter(index => get(grid, index) === Player.None) .map(index => set(grid, index, player)); } The AI will use this to choose the best option and a human player will get a menu with these options. Creating the grid Before the game can be started, we must create an empty grid. We will write a function that creates an empty grid with the specified size. export function createGrid(width: number, height: number) { const grid: Grid = []; for (let i = 0; i < height; i++) { grid[i] = []; for (let j = 0; j < width; j++) { grid[i][j] = Player.None; } } return grid; } In the next section, we will add some tests for the functions that we have written. These functions work on the grid, so it will be useful to have a function that can parse a grid based on a string. We will separate the rows of a grid with a semicolon. Each row contains tokens for each field. For instance, "XXO; O ;X  " results in this grid: X|X|O-+-+- |O|-+-+-X| | We can implement this by splitting the string into an array of lines. For each line, we split the line into an array of characters. We map these characters to a Player value. export function parseGrid(input: string) { const lines = input.split(";"); return lines.map(parseLine);   function parseLine(line: string) { return line.split("").map(parsePlayer); } function parsePlayer(character: string) { switch (character) { case "X": return Player.Player1; case "O": return Player.Player2; default: return Player.None; } } } In the next section we will use this function to write some tests. Adding tests We will use AVA to write tests for our application. Since the functions do not have side effects, we can easily test them. In lib/test/winner.ts, we test the findWinner function. 
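The findWinner function itself is not listed in this excerpt. A minimal sketch, assuming it lives in lib/model.ts next to allRows and isWinningRow and simply looks for a long enough sequence of either player in any row, could look like this:

export function findWinner(grid: Grid, rowLength: number): Player {
  for (const row of allRows(grid)) {
    for (const player of [Player.Player1, Player.Player2]) {
      // A row wins if it contains rowLength consecutive symbols of this player.
      if (isWinningRow(row, player, 0, rowLength, rowLength)) {
        return player;
      }
    }
  }
  // Nobody has a long enough sequence yet.
  return Player.None;
}

This matches how the function is used in the tests below and in the minimax function later on: it takes the grid and the required sequence length, and returns Player.None when there is no winner.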
First, we check whether the function recognizes the winner in some simple cases. import test from "ava"; import { Player, parseGrid, findWinner } from "../model";   test("player winner", t => { t.is(findWinner(parseGrid("   ;XXX;   "), 3), Player.Player1); t.is(findWinner(parseGrid("   ;OOO;   "), 3), Player.Player2); t.is(findWinner(parseGrid("   ;   ;   "), 3), Player.None); }); We can also test all possible three-in-a-row positions in the three by three grid. With this test, we can find out whether horizontal, vertical, and diagonal rows are checked correctly. test("3x3 winner", t => { const grids = [     "XXX;   ;   ",     "   ;XXX;   ",     "   ;   ;XXX",     "X  ;X  ;X  ",     " X ; X ; X ",     "  X;  X;  X",     "X  ; X ;  X",     "  X; X ;X  " ]; for (const grid of grids) { t.is(findWinner(parseGrid(grid), 3), Player.Player1); } });   We must also test that the function does not claim that someone won too often. In the next test, we validate that the function does not return a winner for grids that do not have a winner. test("3x3 no winner", t => { const grids = [     "XXO;OXX;XOO",     "   ;   ;   ",     "XXO;   ;OOX",     "X  ;X  ; X " ]; for (const grid of grids) { t.is(findWinner(parseGrid(grid), 3), Player.None); } }); Since the game also supports other dimensions, we should check these too. We check that all diagonals of a four by three grid are checked correctly, where the length of a sequence should be two. test("4x3 winner", t => { const grids = [     "X   ; X  ;    ",     " X  ;  X ;    ",     "  X ;   X;    ",     "    ;X   ; X  ",     "  X ;   X;    ",     " X  ;  X ;    ",     "X   ; X  ;    ",     "    ;   X;  X " ]; for (const grid of grids) { t.is(findWinner(parseGrid(grid), 2), Player.Player1); } }); You can of course add more test grids yourself. Add tests before you fix a bug. These tests should show the wrong behavior related to the bug. When you have fixed the bug, these tests should pass. This prevents the bug returning in a future version. Random testing Instead of running the test on some predefined set of test cases, you can also write tests that run on random data. You cannot compare the output of a function directly with an expected value, but you can check some properties of it. For instance, getOptions should return an empty list if and only if the board is full. We can use this property to test getOptions and isFull. First, we create a function that randomly chooses a player. To higher the chance of a full grid, we add some extra weight on the players compared to an empty field. import test from "ava"; import { createGrid, Player, isFull, getOptions } from "../model"; import { randomInt } from "../utils";   function randomPlayer() { switch (randomInt(4)) { case 0: case 1: return Player.Player1; case 2: case 3: return Player.Player2; default: return Player.None; } } We create 10000 random grids with this function. The dimensions and the fields are chosen randomly. test("get-options", t => { for (let i = 0; i < 10000; i++) { const grid = createGrid(randomInt(10) + 1, randomInt(10) + 1) .map(row => row.map(randomPlayer)); Next, we check whether the property that we describe holds for this grid. const options = getOptions(grid, Player.Player1) t.is(isFull(grid), options.length === 0); We also check that the function does not give the same option twice. for (let i = 1; i < options.length; i++) { for (let j = 0; j < i; j++) { t.notSame(options[i], options[j]); } } } }); Depending on how critical a function is, you can add more tests. 
In this case, you could check that only one field is modified in an option, or that only an empty field can be modified in an option.

Now you can run the tests using gulp && ava dist/test. You can add this to your package.json file. In the scripts section, you can add commands that you want to run. With npm run xxx, you can run task xxx. npm test was added as a shorthand for npm run test, since the test command is used so often.

{
  "name": "article-7",
  "version": "1.0.0",
  "scripts": {
    "test": "gulp && ava dist/test"
  },
  ...

Implementing the AI using Minimax

We create an AI based on Minimax. The computer cannot know what its opponent will do in the next steps, but it can check what it can achieve in the worst case. The algorithm maximizes the minimum outcome of these worst cases; this behavior has given Minimax its name.

To learn how Minimax works, we will take a look at the value or score of a grid. If the game is finished, we can easily define its value: if you won, the value is 1; if you lost, -1; and if it is a draw, 0. Thus, for player 1 the next grid has value 1 and for player 2 the value is -1.

X|X|X
-+-+-
O|O| 
-+-+-
X|O| 

We will also define the value of a grid for a game that has not been finished. We take a look at the following grid:

X| |X
-+-+-
O|O| 
-+-+-
O|X| 

It is player 1's turn. He can place his stone on the top row, and he would win, resulting in a value of 1. He can also choose to place his stone on the second row; then the game will end in a draw (score 0), assuming player 2 plays well. If he chooses to place his stone on the last row, player 2 can win, resulting in -1. We assume that player 1 is smart and will go for the first option. Thus, we could say that the value of this unfinished game is 1.

We will now formalize this. In the previous paragraph, we summed up all options for the player. For each option, we calculated the minimum value that the player could get if he chose that option. Minimax chooses the option with the highest of these values.

Implementing Minimax in TypeScript

As you can see, the definition of Minimax suggests that you can implement it with recursion. We create a function that returns both the best option and the value of the game. A function can only return a single value, but multiple values can be combined into a single value in a tuple, which is an array with these values. First, we handle the base cases. If the game is finished, the player has no options and the value can be calculated directly.

import { Player, Grid, findWinner, isFull, getOpponent, getOptions } from "./model";

export function minimax(grid: Grid, rowLength: number, player: Player): [Grid, number] {
  const winner = findWinner(grid, rowLength);
  if (winner === player) {
    return [undefined, 1];
  } else if (winner !== Player.None) {
    return [undefined, -1];
  } else if (isFull(grid)) {
    return [undefined, 0];

Otherwise, we list all options. For each option, we calculate the value. The value of an option is the opposite of the value of that option for the opponent. Finally, we choose the option with the best value.

  } else {
    let options = getOptions(grid, player);
    const opponent = getOpponent(player);
    return options.map<[Grid, number]>(
      option => [option, -(minimax(option, rowLength, opponent)[1])]
    ).reduce(
      (previous, current) => previous[1] < current[1] ? current : previous
    );
  }
}

When you use tuple types, you should explicitly add a type definition for it.
Since tuples are arrays too, an array type is automatically inferred. When you add the tuple as return type, expressions after the return keyword will be inferred as these tuples. For options.map, you can mention the union type as a type argument or by specifying it in the callback function (options.map((option): [Grid, number] => ...);). You can easily see that such an AI can also be used for other kinds of games. Actually, the minimax function has no direct reference to Tic-Tac-Toe, only findWinner, isFull and getOptions are related to Tic-Tac-Toe. Optimizing the algorithm The Minimax algorithm can be slow. Choosing the first set, especially, takes a long time since the algorithm tries all ways of playing the game. We will use two techniques to speed up the algorithm. First, we can use the symmetry of the game. When the board is empty it does not matter whether you place a stone in the upper-left corner or the lower-right corner. Rotating the grid around the center 180 degrees gives an equivalent board. Thus, we only need to take a look at half the options when the board is empty. Secondly, we can stop searching for options if we found an option with value 1. Such an option is already the best thing to do. Implementing these techniques gives the following function: import { Player, Grid, findWinner, isFull, getOpponent, getOptions } from "./model";   export function minimax(grid: Grid, rowLength: number, player: Player): [Grid, number] { const winner = findWinner(grid, rowLength); if (winner === player) { return [undefined, 1]; } else if (winner !== Player.None) { return [undefined, -1]; } else if (isFull(grid)) { return [undefined, 0]; } else { let options = getOptions(grid, player); const gridSize = grid.length * grid[0].length; if (options.length === gridSize) { options = options.slice(0, Math.ceil(gridSize / 2)); } const opponent = getOpponent(player); let best: [Grid, number]; for (const option of options) { const current: [Grid, number] = [option, -(minimax(option, rowLength, opponent)[1])]; if (current[1] === 1) { return current; } else if (best === undefined || current[1] > best[1]) { best = current; } } return best; } } This will speed up the AI. In the next sections we will implement the interface for the game and we will write some tests for the AI. Creating the interface NodeJS can be used to create servers. You can also create tools with a command line interface (CLI). For instance, gulp, NPM and typings are command line interfaces built with NodeJS. We will use NodeJS to create the interface for our game. Handling interaction The interaction from the user can only happen by text input in the terminal. When the game starts, it will ask the user some questions about the configuration: width, height, row length for a sequence, and the player(s) that are played by the computer. The highlighted lines are the input of the user. Tic-Tac-Toe Width3 Height3 Row length2 Who controls player 1?1You 2Computer1 Who controls player 2?1You 2Computer1 During the game, the game asks the user which of the possible options he wants to do. All possible steps are shown on the screen, with an index. The user can type the index of the option he wants. X| |-+-+-O|O|-+-+- |X|   It's player one's turn! Choose one out of these options: 1X|X|-+-+-O|O|-+-+- |X|   2X| |X-+-+-O|O|-+-+- |X|   3X| |-+-+-O|O|X-+-+- |X|   4X| |-+-+-O|O|-+-+-X|X|   5X| |-+-+-O|O|-+-+- |X|X A NodeJS application has three standard streams to interact with the user. Standard input, stdin, is used to receive input from the user. 
Standard output, stdout, is used to show text to the user. Standard error, stderr, is used to show error messages to the user. You can access these streams with process.stdin, process.stdout and process.stderr. You have probably already used console.log to write text to the console. This function writes the text to stdout. We will use console.log to write text to stdout and we will not use stderr. We will create a helper function that reads a line from stdin. This is an asynchronous task, the function starts listening and resolves when the user hits enter. In lib/cli.ts, we start by importing the types and function that we have written. import { Grid, Player, getOptions, getOpponent, showGrid, findWinner, isFull, createGrid } from "./model"; import { minimax } from "./ai"; We can listen to input from stdin using the data event. The process sends either the string or a buffer, an efficient way to store binary data in memory. With once, the callback will only be fired once. If you want to listen to all occurrences of the event, you can use on. function readLine() { return new Promise<string>(resolve => { process.stdin.once("data", (data: string | Buffer) => resolve(data.toString())); }); } We can easily use readLinein async functions. For instance, we can now create a function that reads, parses and validates a line. We can use this to read the input of the user, parse it to a number, and finally check that the number is within a certain range. This function will return the value if it passes the validator. Otherwise it shows a message and retries. async function readAndValidate<U>(message: string, parse: (data: string) => U, validate: (value: U) => boolean): Promise<U> { const data = await readLine(); const value = parse(data); if (validate(value)) { return value; } else { console.log(message); return readAndValidate(message, parse, validate); } } We can use this function to show a question where the user has various options. The user should type the index of his answer. This function validates that the index is within bounds. We will show indices starting at 1 to the user, so we must carefully handle these. async function choose(question: string, options: string[]) { console.log(question); for (let i = 0; i < options.length; i++) { console.log((i + 1) + "t" + options[i].replace(/n/g, "nt")); console.log(); } return await readAndValidate( `Enter a number between 1 and ${ options.length }`, parseInt, index => index >= 1 && index <= options.length ) - 1; } Creating players A player could either be a human or the computer. We create a type that can contain both kinds of players. type PlayerController = (grid: Grid) => Grid | Promise<Grid>; Next we create a function that creates such a player. For a user, we must first know whether he is the first or the second player. Then we return an async function that asks the player which step he wants to do. const getUserPlayer = (player: Player) => async (grid: Grid) => { const options = getOptions(grid, player); const index = await choose("Choose one out of these options:", options.map(showGrid)); return options[index]; }; For the AI player, we must know the player index and the length of a sequence. We use these variables and the grid of the game to run the Minimax algorithm. const getAIPlayer = (player: Player, rowLength: number) => (grid: Grid) => minimax(grid, rowLength, player)[0]; Now we can create a function that asks the player whether a player should be played by the user or the computer. 
async function getPlayer(index: number, player: Player, rowLength: number): Promise<PlayerController> { switch (await choose(`Who controls player ${ index }?`, ["You", "Computer"])) { case 0: return getUserPlayer(player); default: return getAIPlayer(player, rowLength); } } We combine these functions in a function that handles the whole game. First, we must ask the user to provide the width, height and length of a sequence. export async function game() { console.log("Tic-Tac-Toe"); console.log(); console.log("Width"); const width = await readAndValidate("Enter an integer", parseInt, isFinite); console.log("Height"); const height = await readAndValidate("Enter an integer", parseInt, isFinite); console.log("Row length"); const rowLength = await readAndValidate("Enter an integer", parseInt, isFinite); We ask the user which players should be controlled by the computer. const player1 = await getPlayer(1, Player.Player1, rowLength); const player2 = await getPlayer(2, Player.Player2, rowLength); The user can now play the game. We do not use a loop, but we use recursion to give the player their turns. return play(createGrid(width, height), Player.Player1);   async function play(grid: Grid, player: Player): Promise<[Grid, Player]> { In every step, we show the grid. If the game is finished, we show which player has won. console.log(); console.log(showGrid(grid)); console.log();   const winner = findWinner(grid, rowLength); if (winner === Player.Player1) { console.log("Player 1 has won!"); return <[Grid, Player]> [grid, winner]; } else if (winner === Player.Player2) { console.log("Player 2 has won!"); return <[Grid, Player]>[grid, winner]; } else if (isFull(grid)) { console.log("It's a draw!"); return <[Grid, Player]>[grid, Player.None]; } If the game is not finished, we ask the current player or the computer which set he wants to do. console.log(`It's player ${ player === Player.Player1 ? "one's" : "two's" } turn!`);   const current = player === Player.Player1 ? player1 : player2; return play(await current(grid), getOpponent(player)); } } In lib/index.ts, we can start the game. When the game is finished, we must manually exit the process. import { game } from "./cli";   game().then(() => process.exit()); We can compile and run this in a terminal: gulp && node --harmony_destructuring dist At the time of writing, NodeJS requires the --harmony_destructuring flag to allow destructuring, like [x, y] = z. In future versions of NodeJS, this flag will be removed and you can run it without it. Testing the AI We will add some tests to check that the AI works properly. For a standard three by three game, the AI should never lose. That means when an AI plays against an AI, it should result in a draw. We can add a test for this. In lib/test/ai.ts, we import AVA and our own definitions. import test from "ava"; import { createGrid, Grid, findWinner, isFull, getOptions, Player } from "../model"; import { minimax } from "../ai"; import { randomInt } from "../utils"; We create a function that simulates the whole gameplay. type PlayerController = (grid: Grid) => Grid; function run(grid: Grid, a: PlayerController, b: PlayerController): Player { const winner = findWinner(grid, 3); if (winner !== Player.None) return winner; if (isFull(grid)) return Player.None; return run(a(grid), b, a); } We write a function that executes a step for the AI. 
const aiPlayer = (player: Player) => (grid: Grid) =>
  minimax(grid, 3, player)[0];

Now we create the test that validates that a game where the AI plays against the AI results in a draw.

test("AI vs AI", t => {
  const result = run(createGrid(3, 3), aiPlayer(Player.Player1), aiPlayer(Player.Player2));
  t.is(result, Player.None);
});

Testing with a random player

We can also test what happens when the AI plays against a random player or when a random player plays against the AI. The AI should win, or the game should result in a draw. We run these tests multiple times, which is what you should always do when you use randomization in your tests. We create a function that creates the random player.

const randomPlayer = (player: Player) => (grid: Grid) => {
  const options = getOptions(grid, player);
  return options[randomInt(options.length)];
};

We write two tests that each run 20 games with a random player and an AI.

test("random vs AI", t => {
  for (let i = 0; i < 20; i++) {
    const result = run(createGrid(3, 3), randomPlayer(Player.Player1), aiPlayer(Player.Player2));
    t.not(result, Player.Player1);
  }
});

test("AI vs random", t => {
  for (let i = 0; i < 20; i++) {
    const result = run(createGrid(3, 3), aiPlayer(Player.Player1), randomPlayer(Player.Player2));
    t.not(result, Player.Player2);
  }
});

We have written different kinds of tests:

Tests that check the exact results of a single function
Tests that check a certain property of the results of a function
Tests that check a big component

Always start writing tests for small components. If the AI tests fail, that could be caused by a mistake in findWinner, isFull or getOptions, so it is hard to find the location of the error. Only testing small components is not enough; bigger tests, such as the AI tests, are closer to what the user will do. Bigger tests are harder to create, especially when you want to test the user interface. You must also not forget that tests cannot guarantee that your code runs correctly; they only guarantee that your test cases pass.

Summary

In this article, we have written an AI for Tic-Tac-Toe. With the command line interface, you can play this game against the AI or another human. You can also see how the AI plays against the AI. We have written various tests for the application.

You have learned how Minimax works for turn-based games. You can apply this to other turn-based games as well. If you want to know more about strategies for such games, you can take a look at game theory, the mathematical study of these games.

Resources for Article:

Further resources on this subject:

Basic Website using Node.js and MySQL database [article]
Data Science with R [article]
Web Typography [article]

Exploring Performance Issues in Node.js/Express Applications

Packt
11 May 2016
16 min read
Node.js is an exciting new platform for developing web applications, application servers, any sort of network server or client, and general purpose programming. It is designed for extreme scalability in networked applications through an ingenious combination of server-side JavaScript, asynchronous I/O, asynchronous programming, built around JavaScript anonymous functions, and a single execution thread event-driven architecture. Companies—large and small—are adopting Node.js, for example, PayPal is one of the companies converting its application stack over to Node.js. An up-and-coming leading application model, the MEAN stack, combines MongoDB (or MySQL) with Express, AngularJS and, of course, Node.js. A look through current job listings demonstrates how important the MEAN stack and Node.js in general have become. It's claimed that Node.js is a lean, low-overhead, software platform. The excellent performance is supposedly because Node.js eschews the accepted wisdom of more traditional platforms, such as JavaEE and its complexity. Instead of relying on a thread-oriented architecture to fill the CPU cores of the server, Node.js has a simple single-threaded architecture, avoiding the overhead and complexity of threads. Using threads to implement concurrency often comes with admonitions like these: expensive and error-prone, the error-prone synchronization primitives of Java, or designing concurrent software can be complex and error prone. The complexity comes from the access to shared variables and various strategies to avoid deadlock and competition between threads. The synchronization primitives of Java are an example of such a strategy, and obviously many programmers find them hard to use. There's the tendency to create frameworks, such as java.util.concurrent, to tame the complexity of threaded concurrency, but some might argue that papering over complexity does not make things simpler. Adopting Node.js is not a magic wand that will instantly make performance problems disappear forever. The development team must approach this intelligently, or else, you'll end up with one core on the server running flat out and the other cores twiddling their thumbs. Your manager will want to know how you're going to fully utilize the server hardware. And, because Node.js is single-threaded, your code must return from event handlers quickly, or else, your application will be frequently blocked and will provide poor performance. Your manager will want to know how you'll deliver the promised high transaction rate. In this article by David Herron, author of the book Node.JS Web Development - Third Edition, we will explore this issue. We'll write a program with an artificially heavy computational load. The naive Fibonacci function we'll use is elegantly simple, but is extremely recursive and can take a long time to compute. (For more resources related to this topic, see here.) Node.js installation Before launching into writing code, we need to install Node.js on our laptop. Proceed to the Node.js downloads page by going to http://nodejs.org/ and clicking on the downloads link. It's preferable if you can install Node.js from the package management system for your computer. While the Downloads page offers precompiled binary Node.js packages for popular computer systems (Windows, Mac OS X, Linux, and so on), installing from the package management system makes it easier to update the install later. The Downloads page has a link to instructions for using package management systems to install Node.js. 
Once you've installed Node.js, you can quickly test it by running a couple of commands: $ node –help Prints out helpful information about using the Node.js command-line tool: $ npm help Npm is the default package management system for Node.js, and is automatically installed along with Node.js. It lets us download Node.js packages from over the Internet, using them as the building blocks for our applications. Next, let's create a directory to develop an Express application within it to calculate Fibonacci numbers: $ mkdir fibonacci $ cd fibonacci $ npm install [email protected] $ ./node_modules/.bin/express . --ejs $ npm install The application will be written against the current Express version, version 4.x. Specifying the version number this way makes sure of compatibility. The express command generated for us a starting application. You can inspect the package.json file to see what will be installed, and the last command installs those packages. What we'll have in front of us is a minimal Express application. Our first stop is not to create an Express application, but to gather some basic data about computation-dominant code in Node.js. Heavy-weight computation Let's start the exploration by creating a Node.js module namedmath.js, containing: var fibonacci = exports.fibonacci = function(n) { if (n === 1) return 1; else if (n === 2) return 1; else return fibonacci(n-1) + fibonacci(n-2); } Then, create another file namedfibotimes.js containing this: var math = require('./math'); var util = require('util'); for (var num = 1; num < 80; num++) { util.log('Fibonacci for '+ num +' = '+ math.fibonacci(num)); } Running this script produces the following output: $ node fibotimes.js 31 Jan 14:41:28 - Fibonacci for 1 = 1 31 Jan 14:41:28 - Fibonacci for 2 = 1 31 Jan 14:41:28 - Fibonacci for 3 = 2 31 Jan 14:41:28 - Fibonacci for 4 = 3 31 Jan 14:41:28 - Fibonacci for 5 = 5 31 Jan 14:41:28 - Fibonacci for 6 = 8 31 Jan 14:41:28 - Fibonacci for 7 = 13 … 31 Jan 14:42:27 - Fibonacci for 38 = 39088169 31 Jan 14:42:28 - Fibonacci for 39 = 63245986 31 Jan 14:42:31 - Fibonacci for 40 = 102334155 31 Jan 14:42:34 - Fibonacci for 41 = 165580141 31 Jan 14:42:40 - Fibonacci for 42 = 267914296 31 Jan 14:42:50 - Fibonacci for 43 = 433494437 31 Jan 14:43:06 - Fibonacci for 44 = 701408733 This quickly calculates the first 40 or so members of the Fibonacci sequence. After the 40th member, it starts taking a couple seconds per result and quickly degrades from there. It isuntenable to execute code of this sort on a single-threaded system that relies on a quick return to the event loop. That's an important point because the Node.js design requires that event handlers quickly return to the event loop. The single-thread event-loop does everything in Node.js and event handlers that return quickly to the event loop keep it humming. A correctly written application can sustain a tremendous request throughput, but a badly written application can prevent Node.js from fulfilling that promise. This Fibonacci function demonstrates algorithms that churn away at their calculation without ever letting Node.js process the event loop. Calculating fibonacci(44) requires 16 seconds of calculation, which is an eternity for a modern web service. With any server that's bogged down like this, not processing events, the perceived performance is zilch. Your manager will be rightfully angry. This is a completely artificial example, because it's trivial to refactor the Fibonacci calculation for excellent performance. 
This is a stand-in for any algorithm that might monopolize the event loop. There are two general ways in Node.js to solve this problem: Algorithmic refactoring: Perhaps, like the Fibonacci function we chose, one of your algorithms is suboptimal and can be rewritten to be faster. Or, if not faster, it can be split into callbacks dispatched through the event loop. We'll look at one such method in a moment. Creating a backend service: Can you imagine a backend server dedicated to calculating Fibonacci numbers? Okay, maybe not, but it's quite common to implement backend servers to offload work from frontend servers, and we will implement a backend Fibonacci server at the end of this article. But first, we need to set up an Express application that demonstrates the impact on the event loop. An Express app to calculate Fibonacci numbers To see the impact of a computation-heavy application on Node.js performance, let's write a simple Express application to do Fibonacci calculations. Express is a key Node.js technology, so this will also give you a little exposure to writing an Express application. We've already created the blank application, so let's make a couple of small changes, so it uses our Fibonacci algorithm. Edit views/index.ejs to have this code: <!DOCTYPE html> <html> <head> <title><%= title %></title> <link rel='stylesheet' href='/stylesheets/style.css' /> </head> <body> <h1><%= title %></h1> <% if (typeof fiboval !== "undefined") { %> <p>Fibonacci for <%= fibonum %> is <%= fiboval %></p> <hr/> <% } %> <p>Enter a number to see its' Fibonacci number</p> <form name='fibonacci' action='/' method='get'> <input type='text' name='fibonum' /> <input type='submit' value='Submit' /> </form> </body> </html> This simple template sets up an HTML form where we can enter a number. This number designates the desired member of the Fibonacci sequences to calculate. This is written for the EJS template engine. You can see that <%= variable %> substitutes the named variable into the output, and JavaScript code is written in the template by enclosing it within <% %> delimiters. We use that to optionally print out the requested Fibonacci value if one is available. var express = require('express'); var router = express.Router(); var math = require('../math'); router.get('/', function(req, res, next) { if (req.query.fibonum) { res.render('index', { title: "Fibonacci Calculator", fibonum: req.query.fibonum, fiboval: math.fibonacci(req.query.fibonum) }); } else { res.render('index', { title: "Fibonacci Calculator", fiboval: undefined }); } }); module.exports = router; This router definition handles the home page for the Fibonacci calculator. The router.get function means this route handles HTTP GET operations on the / URL. If the req.query.fibonum value is set, that means the URL had a ?fibonum=# value which would be produced by the form in index.ejs. If that's the case, the fiboval value is calculated by calling math.fibonacci, the function we showed earlier. By using that function, we can safely predict ahead performance problems when requesting larger Fibonacci values. On the res.render calls, the second argument is an object defining variables that will be made available to the index.ejs template. Notice how the two res.render calls differ in the values passed to the template, and how the template will differ as a result. There are no changes required in app.js. You can study that file, and bin/www, if you're curious how Express applications work. 
In the meantime, you run it simply: $ npm start > [email protected] start /Users/david/fibonacci > node ./bin/www And this is what it'll look like in the browser—at http://localhost:3000: For small Fibonacci values, the result will return quickly. As implied by the timing results we looked at earlier, at around the 40th Fibonacci number, it'll take a few seconds to calculate the result. The 50th Fibonacci number will take 20 minutes or so. That's enough time to run a little experiment. Open two browser windows onto http://localhost:3000. You'll see the Fibonacci calculator in each window. In one, request the value for 45 or more. In the other, enter 10 that, in normal circumstances, we know would return almost immediately. Instead, the second window won't respond until the first one finishes. Unless, that is, your browser times out and throws an error. What's happening is the Node.js event loop is blocked from processing events because the Fibonacci algorithm is running and does not ever yield to the event loop. As soon as the Fibonacci calculation finishes, the event loop starts being processed again. It then receives and processes the request made from the second window. Algorithmic refactoring The problem here is the applications that stop processing events. We might solve the problem by ensuring events are handled while still performing calculations. In other words, let's look at algorithmic refactoring. To prove that we have an artificial problem on our hands, add this function to math.js: var fibonacciLoop = exports.fibonacciLoop = function(n) { var fibos = []; fibos[0] = 0; fibos[1] = 1; fibos[2] = 1; for (var i = 3; i <= n; i++) { fibos[i] = fibos[i-2] + fibos[i-1]; } return fibos[n]; } Change fibotimes.js to call this function, and the Fibonacci values will fly by so fast your head will spin. Some algorithms aren't so simple to optimize as this. For such a case, it is possible to divide the calculation into chunks and then dispatch the computation of those chunks through the event loop. Consider the following code: var fibonacciAsync = exports.fibonacciAsync = function(n, done) { if (n === 0) done(undefined, 0); else if (n === 1 || n === 2) done(undefined, 1); else { setImmediate(function() { fibonacciAsync(n-1, function(err, val1) { if (err) done(err); else setImmediate(function() { fibonacciAsync(n-2, function(err, val2) { if (err) done(err); else done(undefined, val1+val2); }); }); }); }); } }; This converts the fibonacci function from a synchronous function to an asynchronous function one with a callback. By using setImmediate, each stage of the calculation is managed through Node.js's event loop, and the server can easily handle other requests while churning away on a calculation. It does nothing to reduce the computation required; this is still the silly inefficient Fibonacci algorithm. All we've done is spread the computation through the event loop. To use this new Fibonacci function, we need to change the router function in routes/index.js to the following: exports.index = function(req, res) { if (req.query.fibonum) { math.fibonacciAsync(req.query.fibonum, function(err,fiboval){ res.render('index', { title: "Fibonacci Calculator", fibonum: req.query.fibonum, fiboval: fiboval }); }); } else { res.render('index', { title: "Fibonacci Calculator", fiboval: undefined }); } }; This makes an asynchronous call to fibonacciAsync, and when the calculation finishes, the result is sent to the browser. 
With this change, the server no longer freezes when calculating a large Fibonacci number. The calculation, of course, still takes a long time, because fibonacciAsync is still an inefficient algorithm. At least, other users of the application aren't blocked, because it regularly yields to the event loop. Repeat the same test used earlier. Open two or more browser windows to the Fibonacci calculator, make a large request in one window, and the requests in the other window will be promptly answered. Creating a backend REST service The next way to mitigate computationally intensive code is to push the calculation to a backend process. To do that, we'll request computations from a backend Fibonacci server. While Express has a powerful templating system, making it suitable for delivering HTML web pages to browsers, it can also be used to implement a simple REST service. Express supports parameterized URL's in route definitions, so it can easily receive REST API arguments, and Express makes it easy to return data encoded in JSON. Create a file named fiboserver.js containing this code: var math = require('./math'); var express = require('express'); var logger = require('morgan'); var util = require('util'); var app = express(); app.use(logger('dev')); app.get('/fibonacci/:n', function(req, res, next) { math.fibonacciAsync(Math.floor(req.params.n), function(err, val) { if (err) next('FIBO SERVER ERROR ' + err); else { util.log(req.params.n +': '+ val); res.send({ n: req.params.n, result: val }); } }); }); app.listen(3333); This is a stripped down Express application that gets right to the point of providing a Fibonacci calculation service. The one route it supports does the Fibonacci computation using the same fibonacciAsync function used earlier. The res.send function is a flexible way to send data responses. As used here, it automatically detects the object, formats it as JSON text, and sends it with the correct content-type. Then, in package.json, add this to the scripts section: "server": "node ./fiboserver" Now, let's run it: $ npm run server > [email protected] server /Users/david/fibonacci > node ./fiboserver Then, in a separate command window, use curl to request values from this service. $ curl -f http://localhost:3002/fibonacci/10 {"n":"10","result":55} Over in the window, where the service is running, we'll see a log of GET requests and how long each took to process. It's easy to create a small Node.js script to directly call this REST service. But let's instead move directly to changing our Fibonacci calculator application to do so. Make this change to routes/index.js: router.get('/', function(req, res, next) { if (req.query.fibonum) { var httpreq = require('http').request({ method: 'GET', host: "localhost", port: 3333, path: "/fibonacci/"+Math.floor(req.query.fibonum) }, function(httpresp) { httpresp.on('data', function(chunk) { var data = JSON.parse(chunk); res.render('index', { title: "Fibonacci Calculator", fibonum: req.query.fibonum, fiboval: data.result }); }); httpresp.on('error', function(err) { next(err); }); }); httpreq.on('error', function(err) { next(err); }); httpreq.end(); } else { res.render('index', { title: "Fibonacci Calculator", fiboval: undefined }); } }); Running the Fibonacci Calculator service now requires starting both processes. In one command window, we run: $ npm run server And in the other command window: $ npm start In the browser, we visit http://localhost:3000 and see what looks like the same application, because no changes were made to views/index.ejs. 
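If you want to try the backend on its own first, the small client script mentioned earlier might look like this; a minimal sketch of my own, using only Node's built-in http module and the port 3333 that fiboserver.js listens on:

var http = require('http');

// Ask the backend for the nth Fibonacci number (defaults to 30).
var n = process.argv[2] || 30;
http.get('http://localhost:3333/fibonacci/' + n, function(res) {
  var body = '';
  // Accumulate the chunks before parsing, in case the response arrives in pieces.
  res.on('data', function(chunk) { body += chunk; });
  res.on('end', function() {
    var data = JSON.parse(body);
    console.log('Fibonacci for ' + data.n + ' = ' + data.result);
  });
}).on('error', function(err) {
  console.error('Request failed: ' + err.message);
});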
As you make requests in the browser window, the Fibonacci service window prints a log of requests it receives and values it sent. You can, of course, repeat the same experiment as before. Open two browser windows, in one window request a large Fibonacci number, and in the other make smaller requests. You'll see, because the server uses fibonacciAsync, that it's able to respond to every request. Why did we go through this trouble when we could just directly call fibonacciAsync? We can now push the CPU load for this heavy-weight calculation to a separate server. Doing so would preserve CPU capacity on the frontend server, so it can attend to web browsers. The heavy computation can be kept separate, and you could even deploy a cluster of backend servers sitting behind a load balancer evenly distributing requests. Decisions like this are made all the time to create multitier systems. What we've demonstrated is that it's possible to implement simple multitier REST services in a few lines of Node.js and Express. Summary While the Fibonacci algorithm we chose is artificially inefficient, it gave us an opportunity to explore common strategies to mitigate performance problems in Node.js. Optimizing the performance of our systems is as important as correctness, fixing bugs, mobile friendliness, and usability. Inefficient algorithms means having to deploy more hardware to satisfy load requirements, costing more money, and creating a bigger environmental impact. For real-world applications, optimizing away performance problems won't be as easy as it would be for the Fibonacci calculator. We could have just used the fibonacciLoop function, since it provides all the performance we'd need. But we needed to explore more typical approaches to performing heavy-weight calculations in Node.js while still keeping the event loop rolling. The bottom line is that in Node.js the event loop must run. Resources for Article: Further resources on this subject: Learning Node.js for Mobile Application Development [article] Node.js Fundamentals [article] Making a Web Server in Node.js [article]

Managing Payment and Shipping with Magento 2

Packt
10 May 2016
24 min read
In this article by Bret Williams, author of the book Learning Magento 2 Administration, we will see how to manage payment gateways, shipping methods and orders with Magneto 2. E-commerce doesn't work unless customers actually purchase a product or service. In order to make that happen on your Magento store, you need to take payments, provide shipping solutions, collect any required taxes, and, of course, process orders. In this article , we're going to: Understand the checkout and payment process Discuss various payment methods you can offer your customers Configure table rate shipping and review other shipping options Manage the order process (For more resources related to this topic, see here.) It's extremely important that you take care to understand and manage these aspects of your online business, as this involves money — the customer's and yours. No matter how great your products or your pricing, if customers cannot purchase easily, understand your shipping and delivery, or feel in the least hesitant about completing their transaction, your customer leaves and neither they nor you achieve satisfactory results. Once an order is placed, you also have steps to take to process the purchase and make good on your obligation to fulfill your customer's request. Fortunately — as with many other aspects of online commerce — Magento has the features and tools in place to create a solid, efficient checkout experience. Understanding the checkout and payment process Since most people shopping online today have made at least one e-commerce purchase on a website, the general process of completing an order is fairly well established, although the exact steps will vary somewhat: Customer reviews their shopping cart, confirming the items they have decided to purchase. Customer enters their shipping destination information. Customer chooses a shipping method based on cost, method and time of delivery. Customer enters their payment information. Customer reviews their order and confirms their intent to purchase. The system (Magento, in our case) queries a payment processor for approval. The order is completed and ready for processing. Of course, as we'll explore in this article, there is much more detail related to this process. As online merchants, you want your customers to have a thorough, yet easy, purchasing experience and you want a valid order that can be fulfilled without complications. To achieve both ends, you have to prepare your Magento store to accurately process orders. So, let's jump in. Payment methods When a customer places an order on your Magento store, you'll naturally want to provide a means of capturing payment, whether it's immediate (credit card, PayPal, etc.) or delayed (COD, check, money order, credit). The payment methods you choose to provide, of course, are up to you, but you'll want to provide methods that: Reduce your risk of not getting paid. Provide convenience to your customers while fulfilling their payment expectations. Consumers expect to pay by credit card or through a third-party service such as PayPal. Wholesale buyers may expect to purchase using a Purchase Order or sending you a check before shipment. As with any business, you have to decide what will best benefit both you and your buyers. How Payment gateways work If you're new to online payments as a merchant, it's helpful to have an understanding of how payments are approved and captured in e-commerce. 
For this explanation, we're focusing on those payment gateways that allow you to accept credit and debit cards in your store. While PayPal Express and Standard works in a similar fashion, the three gateways that are included in the default Magento installation – PayPal Payments, Braintree and Authorize.net — process credit and debit cards similarly: Your customer enters their card information in your website during checkout. When the order is submitted, Magento sends a request to the gateway (PayPal Payments, Braintree or Authorize.net) for authorization of the card. The gateway submits the card information and order amount to a clearinghouse service that determines if the card is valid and the order amount does not exceed the credit limit of the cardholder. A success or failure code is returned to the gateway and on to the Magento store. If the intent is to capture the funds at time of purchase, the gateway will queue the capture into a batch for processing later in the day and notify Magento that the funds are "captured". A successful transaction will commit the order in Magento and a failure will result in a message to the purchaser. Other payment methods, such as PayPal Standard and PayPal Express, take the customer to the payment provider's website to complete the payment portion of the transaction. Once the payment is completed, the customer is returned to your Magento store front. When properly configured, integrated payment gateways will update Magento orders as they are authorized and/or captured. This automation means you spend less time managing orders and more time fulfilling shipments and satisfying your customers! PCI Compliance The protection of your customer's payment information is extremely important. Not only would a breach of security cause damage to your customer's credit and financial accounts, but the publicity of such a breach could be devastating to your business. Merchant account providers will require that your store meet stringent guidelines for PCI Compliance, a set of security requirements called Payment Card Industry Data Security Standard (PCI DSS). Your ability to be PCI compliant is based on the integrity of your hosting environment and by why methods you allow customers to enter credit card information on your site. Magento 2 no longer offers a Stored Credit Card payment method. It is highly unlikely that you could — or would want to — provide a server configuration secure enough to meet PCI DSS requirements for storing credit card information. You probably don't want the liability exposure, as well. You can, however, provide SSL Encryption that could satisfy PCI compliance as long as the credit card information is encrypted before being sent to your server, and then from your server to the credit card processor. As long as you're not storing the customer's credit card information on your server, you can meet PCI compliance as long as your hosting provider can assure compliance for server and database security. Even with SSL encryption, not all hosting environments will pass PCI DSS standards. It's vital that you work with a hosting company that has real Magento experience and can document proof of PCI compliance. Therefore, you should decide whether to provide onsite or offsite credit card payments. In other words, do you want to take payment information within your Magento checkout page or redirect the user to a payment service, such as PayPal, to complete their transaction? There are pros and cons of each method. 
Onsite transactions may be perceived as less secure and you do have to prove PCI compliance to your merchant account provider on an ongoing basis. However, onsite transactions mean that the customer can complete their transaction without leaving your website. This helps to preserve your brand experience for your customers. Fortunately, Magento is versatile enough to allow you to provide both options to your customers. Personally, we feel that offering multiple payment methods means you're more likely to complete a sale, while also showing your customers that you want to provide the most convenience in purchasing. Let's now review the various payment methods offered by default in Magento 2. Magento 2 comes with a host of the most popular and common payment methods. However, you should review other possibilities, such as Amazon Payments, Stripe and Moneybookers, depending on your target market. We anticipate that developers will be offering add-ons for these and other payment methods. Note that as you change the Merchant Location at the top of the Payment Methods panel, the payment methods available to you may change. PayPal all-in-one payment solutions While PayPal is commonly known for their quick and easy PayPal Express buttons — the ubiquitous yellow buttons you see throughout the web — PayPal can provide you with credit/debit card solutions that allow customers to use their cards without needing a PayPal account. To your customer, the checkout appears no different than if they were using a normal credit card checkout process. The big difference is that you have to set up a business account with PayPal before you can begin accepting non-PayPal account payments. Proceeds will go almost immediately into your PayPal account (you have to have a PayPal account), but your customers can pay by using a credit/debit card or their own PayPal account. With all-in-one solution, PayPal approves your application for a merchant account and allows you to accept all popular cards, including American Express, as a flat 2.9% rate, plus $0.30 per transaction. PayPal payments incur normal per transaction PayPal charges. We like this solution as it keeps all your online receipts in one account, while also giving you fast access to your sales income. PayPal also provides a debit card for its merchants that can earn back 1% on purchases. We use our PayPal debit card for all kinds of business purchases and receive a nice little cash back dividend each month. PayPal provide two ways to incorporate credit card payment capture on your website: PayPal Payments Advanced inserts a form on your site that is actually hosted from PayPal's highly secure servers. The form appears as part of your store, but you don't have any PCI compliance concerns. PayPal Payments Pro allows you to obtain payment information using the normal Magento form, then submit it to PayPal for approval. The difference to your customer is that for Advanced, there is a slight delay while the credit card form is inserted into the checkout page. You may also have some limitations in terms of styling. PayPal Standard, also a part of the all-in-one solution, takes your customer to a PayPal site for payment. Unlike PayPal Express, however, you can style this page to better reflect your brand image. Plus, customers do not have to have a PayPal account in order to use this checkout method. 
PayPal payment gateways

If you already have a merchant account for collecting online payments, you can still take advantage of the integration between PayPal and Magento by setting up a PayPal business account that is linked to your merchant account. Instead of paying PayPal a percentage of each transaction (you would pay this to your merchant account provider), you simply pay a small per-transaction fee.

PayPal Express

Offering PayPal Express is as easy as having a PayPal account. It does require some configuration of API credentials, but it provides the simplest means of offering payment services without setting up a merchant account. PayPal Express will add "Buy Now" buttons to your product pages and the cart page of your store, giving shoppers a quick and immediate way to check out using their PayPal account.

Braintree

PayPal recently acquired Braintree, a payment services company that adds additional services to merchants. While many of its offerings appear to overlap PayPal's, Braintree brings additional features to the marketplace, such as Bitcoin, Venmo, Android Pay, and Apple Pay payment methods, recurring billing, and fraud protection. Like PayPal Payments, Braintree charges 2.9% + $0.30 per transaction.

A Word about Merchant Fees

After operating our own e-commerce businesses for many years, we have used many different merchant accounts and gateways. At first glance, the 2.9% rate offered by PayPal, Braintree, and Stripe appears expensive. If you've been solicited by merchant account providers, you have no doubt been quoted rates as low as 1.7%. What is not often disclosed is that this rate only applies to basic cards that do not carry miles or other premiums. Rates for most cards you accept can be quite a bit higher, and American Express usually charges more than 3% on transactions. Once you factor in gateway costs, reporting, monthly account costs, and so on, you may find, as we did, that total merchant costs using a traditional merchant account average over 3.3%! One cost you may not think to factor in is the expense of set-up and integration. PayPal and Braintree have worked hard to create easy integrations with Magento (Stripe is not yet available for Magento 2 as of this writing). A quick back-of-the-envelope comparison of how the flat rate plays out at different order sizes follows at the end of this section.

Check / Money Order

If you have customers from whom you will accept payment by check and/or money order, you can enable this payment method. Be sure to enter all the information fields, especially Make Check Payable to and Send Check to. You will most likely want to keep the New Order Status as Pending, which means the order is not ready for fulfillment until you receive payment and update the order as Paid. As with any payment method, be sure to edit the Title of the method to reflect how you wish to communicate it to your customers. If you only wish to accept money orders, for instance, you might change Title to Money Orders (sorry, no checks).

Bank transfer payment

As with Check / Money Order, you can allow customers to wire money to your account by providing the necessary information to customers who choose this method.

Cash on Delivery payment

Likewise, you can offer COD payments. We still see this method being made available on wholesale shipments, but very rarely on B2C (Business-to-Consumer) sales. COD shipments usually cost more, so you will need to accommodate this added fee in your pricing or shipping methods. At present, there is no ability to add a COD fee using this payment method panel.
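To make the merchant-fee comparison above concrete, here is a rough back-of-the-envelope calculation of the effective rate for flat 2.9% + $0.30 pricing at a few order sizes. This is only an illustrative sketch using the published flat rates quoted above; your negotiated rates, card mix, and monthly fees may change the picture considerably.

def effective_rate(order_total, percent=0.029, per_txn_fee=0.30):
    # Total processing fee and the fee expressed as a percentage of the order.
    fee = order_total * percent + per_txn_fee
    return fee, (fee / order_total) * 100

for total in (10.00, 50.00, 100.00, 500.00):
    fee, rate = effective_rate(total)
    print("${:.2f} order -> ${:.2f} fee ({:.2f}%)".format(total, fee, rate))

Notice how the fixed $0.30 makes small orders relatively expensive (about 5.9% on a $10 order), while large orders approach the headline 2.9%. Average order size is therefore worth factoring into any rate comparison.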
Zero Subtotal Checkout If your customer, by use of discounts or credits, or selecting free items, owes nothing at checkout, enabling this method will cause Magento to hide payment methods during checkout. The content in the Title field will be displayed in these cases. Purchase order In B2B (Business-to-Business) sales, it's quite common to accept purchase order (PO) for customers with approved credit. If you enable this payment method, an additional field is presented to customers for entering their PO number when ordering. Authorize.net direct post Authorize.net — perhaps the largest payment gateway provider in the USA — provides an integrated payment capture mechanism that gives your customers the convenience of entering credit/debit card information on your site, but the actual form submission bypasses your server and goes directly to Authorize.net. This mechanism, as with PayPal Payments Advanced, lessens your responsibility for PCI compliance as the data is communicated directly between your customer and Authorize.net instead of passing through the Magento programming. In Magento 1.x, the regular Authorize.net gateway (AIM) was one of several default payment methods. We're not certain it will be added as a default in Magento 2, although we would imagine someone will build an extension. Regardless, we think Direct Post is a wonderful way to use Authorize.net and meet your PCI compliance obligations. Shipping methods Once you get paid for a sale, you need to fulfill the order and that means you have to ship the items purchased. How you ship products is largely a function of what shipping methods you make available to your customers. Shipping is one of the most complex aspects of e-commerce, and one where you can lose money if you're not careful. As you work through your shipping configurations, it's important to keep in mind: What you charge your customers for shipping does not have to be exactly what you're charged by your carriers. Just as you can offer free shipping, you can also charge flat rates based on weight or quantity, or add a surcharge to live rates. By default, Magento does not provide you with highly sophisticated shipping rate calculations, especially when it comes to dimensional shipping. Consider shipping rate calculations as estimates only. Consult with whomever is actually doing your shipping to determine if any rate adjustments should be made to accommodate dimensional shipping. Dimensional shipping refers to a recent change by UPS, FedEx and others to charge you the greater of two rates: the cost based on weight or the cost based on a formula to determine the equivalent weight of a package based on its size: (Length x Width x Height) ÷ 166 (for US domestic shipments; other factors apply for other countries and exports). Therefore, if you have a large package that doesn't weigh much, the live rate quoted in Magento might not be reflective of your actual cost once the dimensional weight is calculated. If your packages may be large and lightweight, consult your carrier representative or shipping fulfillment partner for guidance. If your shipping calculations need more sophistication than provided natively in Magento 2, consider an add-on. However, remember that what you charge to your customers does not have to be what you pay. For that reason — and to keep it simple for your customers — consider offering Table rates (as described later). 
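As a rough illustration of the dimensional-weight formula mentioned above, the sketch below compares actual weight against dimensional weight and bills the greater of the two. The divisor of 166 applies to US domestic shipments as described; carriers adjust divisors and rounding rules over time, so treat the exact numbers as assumptions and confirm them with your carrier or fulfillment partner.

def billable_weight(length_in, width_in, height_in, actual_lbs, divisor=166.0):
    # Dimensional weight: (Length x Width x Height) / divisor, per the US domestic formula above.
    dim_weight = (length_in * width_in * height_in) / divisor
    # Carriers bill whichever is greater: actual weight or dimensional weight.
    return max(actual_lbs, dim_weight)

# A large but light box: 24 x 18 x 12 inches weighing only 4 lbs
print(billable_weight(24, 18, 12, 4))  # roughly 31.2 lbs billable, not 4

A live rate quoted on actual weight alone would badly underestimate the real cost of shipping that box, which is why oversized, lightweight products deserve special attention.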
Each method you choose will be displayed to your customers if their cart and shipping destination match the conditions of the method. Take care not to confuse your customers with too many choices: simpler is better. Keeping these insights in mind, let's explore the various shipping methods available by default in Magento 2. Before we go over the shipping methods, let's go over some basic concepts that apply to most, if not all, of them.

Origin

Where you ship your products from will determine shipping rates, especially for carrier rates (e.g. UPS, FedEx). To set your origin, go to Stores | Configuration | Sales | Shipping Settings and expand the Origin panel. At the very least, enter the Country, Region/State, and ZIP/Postal Code fields. The others are optional for rate calculation purposes. At the bottom of this panel is the choice to Apply custom Shipping Policy. If enabled, a field will appear where you can enter text about your overall shipping policy. For instance, you may want to enter Orders placed by 12:00p CT will be processed for shipping on the same day. Applies only to orders placed Monday-Friday, excluding shipping holidays.

Handling fee

You can add an invisible handling fee to all shipping rate calculations. It is invisible in the sense that it does not appear as a separate line item charge to your customers. To add a handling fee to a shipping method:

Choose whether you wish to add a fixed amount or a percentage of the shipping cost.
If you choose to add a percentage, enter the amount as a decimal number instead of a percentage (for example, 0.06 instead of 6%).

Allowed countries

As you configure your shipping methods, don't forget to designate which countries you will ship to. If you only ship to the US and Canada, for instance, be sure to have only those countries selected. Otherwise, you'll have customers from other countries placing orders that you will have to cancel and refund.

Method not available

In some cases, the method you configured may not be applicable to a customer based on destination, type of product, weight, or any number of other factors. For these instances, you can choose to either:

Show the method (e.g. UPS, USPS, DHL, etc.), but with an error message that the method is not applicable, or
Not show the method at all.

Depending on your shipping destinations and target customers, you may want to show an error message just so the customer knows why no shipping solution is being displayed. If you don't show any error message and the customer doesn't qualify for any shipping method, the customer will be confused.

Free shipping

There are several ways to offer free shipping to your customers. If you want to display a Free Shipping option to all customers whose carts meet a minimum order amount (not including taxes or shipping), enable this panel. However, you may want to be more judicious in how and when you offer free shipping. Other alternatives include:

Creating Shopping Cart Promotions
Including a free shipping method in your Table Rates (see later in this section)
Designating a specific free shipping method and minimum qualifying amount within a carrier configuration (such as UPS and FedEx)

If you choose to use this panel, note that it will apply to all orders. Therefore, if you want to be more selective, consider one of the above methods.

Flat Rate

As with Free Shipping, above, the Flat Rate panel allows you to charge one single flat rate for all orders, regardless of weight or destination. You can apply the rate on a per-item or per-order basis as well.
Table Rates

While using live carrier rates can provide more accurate shipping quotes for your customers, you may find it more convenient to offer a series of rates at certain break points. For example, you might only need something as simple as the following for any domestic destination:

0-5 lbs, $5.99
6-10 lbs, $8.99
11+ lbs, $10.99

Let's assume you're a US-based shipper. While these rates will work for you when shipping to any of the contiguous 48 states, you need to charge more for shipments to Alaska and Hawaii. For our example, let's assume tiered pricing of $7.99, $11.99, and $14.99 at the same weight breaks. All of these conditions can be handled using the Table Rates shipping method. Based on our example, we would start by creating a spreadsheet (in Excel or Numbers) similar to the following:

Country   Region/State   Zip/Postal Code   Weight (and above)   Shipping Price
USA       *              *                 0                    5.99
USA       *              *                 6                    8.99
USA       *              *                 11                   10.99
USA       AK             *                 0                    7.99
USA       AK             *                 6                    11.99
USA       AK             *                 11                   14.99
USA       HI             *                 0                    7.99
USA       HI             *                 6                    11.99
USA       HI             *                 11                   14.99

Let's review the columns in this chart:

Country. Here, you would enter the 3-character country code (for a list of valid codes, see http://goo.gl/6A1woj).
Region/State. Enter the 2-character code for any state or province.
Zip/Postal Code. Enter any specific postal codes for which you wish the rate to apply.
Weight (and above). Enter the minimum applicable weight for the range. The assigned rate will apply until the combined weight of the cart products reaches a higher weight tier.
Shipping Price. Enter the shipping charge you wish to offer the customer. Do not include the currency prefix (example: "$" or "€").

Now, let's discuss the asterisk (*) and how to limit the scope of your rates. As you can see in the chart, we have only indicated rates for US destinations. That's because there are no rows for any other countries. We could easily add rates for all other countries simply by adding rows with an asterisk in the first column. By adding those rows, we're telling Magento to use the US rates if the customer's ship-to address is in the US, and to use the other rates for all other country destinations. Likewise for the states column: Magento will first look for matches for any state codes listed. If it can't find any, it will then look for any rates with an asterisk. If no asterisk is present for a qualifying weight, then no applicable rate will be offered to the customer. The asterisk in the Zip/Postal Code column means that the rates apply to all postal codes for all states.

To get a sample file with which to configure your rates, you can set your configuration scope to one of your Websites (Furniture or Sportswear in our examples) and click Export CSV in the Table Rates panel.

Quantity and price based rates

In the preceding example, we used the weight of the items in the cart to determine shipping rates. You can also configure table rates to use calculations based on the number of items in the cart or the total price of all items (less taxes and shipping). To set up your chart, simply rename the fourth column "Quantity (and above)" or "Subtotal (and above)."

Save your rate table

To upload your table rates, you'll need to save/export your spreadsheet as a CSV file. You can name it whatever you like. Save it to your computer where you can find it for the next steps.

Table rate settings

Before you upload your new rates, you should first set your Table Rates configurations. To do so, you can set your default settings at the Default configuration scope.
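Before moving on to the upload step, here is roughly what our example chart looks like once saved as a CSV file. Treat the header row as an assumption: the authoritative column names come from the sample file you download via Export CSV, so match those exactly when you build your own.

Country,Region/State,Zip/Postal Code,Weight (and above),Shipping Price
USA,*,*,0,5.99
USA,*,*,6,8.99
USA,*,*,11,10.99
USA,AK,*,0,7.99
USA,AK,*,6,11.99
USA,AK,*,11,14.99
USA,HI,*,0,7.99
USA,HI,*,6,11.99
USA,HI,*,11,14.99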
However, to upload your CSV file, you will need to switch your Store View to the appropriate Website scope. When changing to a Website scope, you will see the Export CSV button and the ability to upload your rate table file. You'll note that all other settings may have Use Default checked. You can, of course, uncheck this box beside any field and adjust the settings according to your preferences. Let's review the unique fields in this panel: Enabled. Set to "Yes" to enable table rates. Title. Enter the name you wish displayed to customers when they're presented with a table rate-based shipping charge in the checkout process. Method Name. This name is presented to the customer in the shopping cart. You should probably change the default "Table Rate" to something more descriptive, as this term is likely irrelevant to customers. We have used terms "Standard Ground," "Economy," or "Saver" as names. The Title should probably be the same, as well, so that the customer, during checkout, has a visual confirmation of their shipping choice. Condition. This allows you to choose the calculation method you want to use. Your choices, as we described earlier, are "Weight vs. Destination," "Price vs. Destination," and "# of items vs. Destination." Include Virtual Products in Price Calculation. Since virtual products have no weight, this will have no effect on rate calculations for weight-based rates. However, it will affect rate calculations for price or quantity-based rates. Once you have your settings, click Save Config. Upload Rate Table Once you have saved your settings, you can now click the button next to Import and upload your rate table. Be sure to test your rates to see that you have properly constructed your rate table. Carrier Methods The remaining shipping methods involve configuring UPS, USPS, FedEx and/or DHL to provide "live" rate calculations. UPS is the only one that is set to query for live rates without the need for you to have an account with the carrier. This is both good and bad. It's good, as you only have to enable the shipping method to have it begin querying rates for your customers. On the flip side, the rates that are returned are not negotiated rates. Negotiated rates are those you may have been offered as discounted rates based on your shipping volume. FedEx, USPS and DHL require account-specific information in order to activate. This connection with your account should provide rates based on any discounts you have established with your carrier. If you wish to use negotiated rates for UPS, you may have to find a Magento add-on that will accommodate or have your developer extend your Magento installation to make a modified rate query. If you have some history with shipping, you should negotiate rates with the carriers. We have found most are willing to offer some discount from "published rates." Shipping integrations Unless you have your own sophisticated warehouse operation, it may be wise to partner with a fulfillment provider that can not only store, pick, pack and ship your orders, but also offers deep discounts on shipping rates due to their large volumes. Amazon FBA (Fulfillment By Amazon) is a very popular solution. Shipping is a low flat rate based on weight (http://goo.gl/UKjg7). ShipWire is another fulfillment provider that is well integrated with Magento. In fact, their integration can provide real-time rate quotes for your customers based on the products selected, warehouse availability and destination (http://www.ShipWire.com). 
We have not heard if they have updated their integration for Magento 2, yet, but we suspect they will. Summary Selling is the primary purpose of building an online store. As you've seen in this article, Magento 2 arms you with a very rich array of features to help you give your customers the ability to purchase using a variety of payment methods. You're able to customize your shipping options and manage complex tax rules. All of this combines to make it easy for your customers to complete their online purchases. Resources for Article: Further resources on this subject: Social Media and Magento [article] Creating a Responsive Magento Theme with Bootstrap 3 [article] Magento 2 – the New E-commerce Era [article]

Quick User Authentication Setup with Django

In this article by Asad Jibran Ahmed, author of the book Django Project Blueprints, we are going to start with a simple blogging platform in Django. In recent years, Django has emerged as one of the clear leaders among web frameworks. When most people decide to start using a web framework, their searches lead them to either Ruby on Rails or Django. Both are mature, stable, and extensively used, and the decision to use one or the other seems to depend mostly on which programming language you're familiar with: Rubyists go with RoR, and Pythonistas go with Django. In terms of features, both can be used to achieve the same results, although they have different approaches to how things are done.

One of the most popular platforms these days is Medium, widely used by a number of high-profile bloggers. Its popularity stems from its elegant theme and simple-to-use interface. I'll walk you through creating a similar application in Django, with a few surprise features that most blogging platforms don't have. This will give you a taste of things to come and show you just how versatile Django can be. Before starting any software development project, it's a good idea to have a rough roadmap of what we would like to achieve. Here's a list of features that our blogging platform will have:

Users should be able to register an account and create their blogs
Users should be able to tweak the settings of their blogs
There should be a simple interface for users to create and edit blog posts
Users should be able to share their blog posts on other blogs on the platform

I know this seems like a lot of work, but Django comes with a couple of contrib packages that speed up our work considerably.

(For more resources related to this topic, see here.)

The contrib Packages

The contrib packages are the parts of Django that contain some very useful applications that the Django developers decided should be shipped with Django. The included applications provide an impressive set of features, including some that we'll be using in this application:

Admin: This is a full-featured CMS that can be used to manage the content of a Django site. The Admin application is an important reason for the popularity of Django. We'll use this to provide an interface for site administrators to moderate and manage the data in our application.
Auth: This provides user registration and authentication without requiring us to do any work. We'll be using this module to allow users to sign up, sign in, and manage their profiles in our application.
Sites: This framework provides utilities that help us run multiple Django sites using the same code base. We'll use this feature to allow each user to have their own blog with content that can be shared between multiple blogs.

There are a lot more goodies in the contrib module. I suggest you take a look at the complete list at https://docs.djangoproject.com/en/stable/ref/contrib/#contrib-packages. I usually end up using at least three of the contrib packages in all my Django projects. They provide often-required features such as user registration and management, and they free you to work on the core parts of your project, providing a solid foundation to build upon.

Setting up our development environment

Let's start by creating the directory structure for our project, setting up the virtual environment, and configuring some basic Django settings that need to be set up in every project. Let's call our blogging platform BlueBlog. To start a new project, you need to first open up your terminal program.
In Mac OS X, it is the built-in Terminal. In Linux, the terminal is named differently for each distribution, but you should not have trouble finding it; try searching your program list for the word terminal and something relevant should show up. In Windows, the terminal program is called the Command Line. You'll need to start the relevant program depending on your operating system. If you are using the Windows operating system, some things will need to be done differently from what the book shows. If you are using Mac OS X or Linux, the commands shown here should work without any problems.

Open the relevant terminal program for your operating system and start by creating the directory structure for our project, then cd into the root project directory using the following commands:

> mkdir -p blueblog
> cd blueblog

Next, let's create the virtual environment, install Django, and start our project:

> pyvenv blueblogEnv
> source blueblogEnv/bin/activate
> pip install django
> django-admin.py startproject blueblog src

With this out of the way, we're ready to start developing our blogging platform.

Database settings

Open up the settings file found at $PROJECT_DIR/src/blueblog/settings.py in your favorite editor and make sure that the DATABASES settings variable matches this:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}

In order to initialize the database file, run the following commands:

> cd src
> python manage.py migrate

Staticfiles settings

The last step in setting up our development environment is configuring the staticfiles contrib application. The staticfiles application provides a number of features that make it easy to manage the static files (CSS, images, and JavaScript) of your projects. While our usage will be minimal, you should look at the Django documentation for staticfiles in further detail, as it is used quite heavily in most real-world Django projects. You can find the documentation at https://docs.djangoproject.com/en/stable/howto/static-files/.

In order to set up the staticfiles application, we have to configure a few settings in the settings.py file. First, make sure that django.contrib.staticfiles is added to INSTALLED_APPS. Django should have done this by default. Next, set STATIC_URL to whatever URL you want your static files to be served from. I usually leave this at the default value, '/static/'. This is the URL that Django will put in your templates when you use the static template tag to get the path to a static file.

Base template

Next, let's set up a base template that all the other templates in our application will inherit from. I prefer to have templates that are used by more than one application of a project in a directory named templates in the project source folder. To set this up, add os.path.join(BASE_DIR, 'templates') to the DIRS array of the TEMPLATES configuration dictionary in the settings file, and then create a directory named templates in $PROJECT_ROOT/src. Next, using your favorite text editor, create a file named base.html in the new folder with the following content:

<html>
<head>
    <title>BlueBlog</title>
</head>
<body>
    {% block content %}
    {% endblock %}
</body>
</html>

Just as Python classes can inherit from other classes, Django templates can also inherit from other templates. And just as Python classes can have functions overridden by their subclasses, Django templates can also define blocks that child templates can override.
Our base.html template provides one block to inherit templates to override called content. The reason for using template inheritance is code reuse. We should put HTML that we want to be visible on every page of our site, such as headers, footers, copyright notices, meta tags, and so on, in the base template. Then, any template inheriting from it will automatically get all this common HTML included automatically, and we will only need to override the HTML code for the block that we want to customize. You’ll see this principal of creating and overriding blocks in base templates used throughout the projects in this book. User accounts With the database setup out of the way, let’s start creating our application. If you remember, the first thing on our list of features is to allow users to register accounts on our site. As I’ve mentioned before, we’ll be using the auth package from the Django contrib packages to provide user account features. In order to use the auth package, we’ll need to add it our INSTALLED_APPS list in the settings file (found at $PROJECT_ROOT/src/blueblog/settings.py). In the settings file, find the line defining INSTALLED_APPS and make sure that the ‘django.contrib.auth’ string is part of the list. It should be by default but if, for some reason, it’s not there, add it manually. You’ll see that Django has included the auth package and a couple of other contrib applications to the list by default. A new Django project includes these applications by default because almost all Django projects end up using these. If you need to add the auth application to the list, remember to use quotes to surround the application name. We also need to make sure that the MIDDLEWARE_CLASSES list contains django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.auth.middleware.SessionAuthenticationMiddleware. These middleware classes give us access to the logged in user in our views and also make sure that if I change the password for my account, I’m logged out from all other devices that I previously logged on to. As you learn more about the various contrib applications and their purpose, you can start removing any that you know you won’t need in your project. Now, let’s add the URLs, views, and templates that allow the users to register with our application. The user accounts app In order to create the various views, URLs, and templates related to user accounts, we’ll start a new application. To do so, type the following in your command line: > python manage.py startapp accounts This should create a new accounts folder in the src folder. We’ll add code that deals with user accounts in files found in this folder. To let Django know that we want to use this application in our project, add the application name (accounts) to the INSTALLED_APPS setting variable; making sure to surround it with quotes. Account registration The first feature that we will work on is user registration. Let’s start by writing the code for the registration view in accounts/views.py. Make the contents of views.py match what is shown here: from django.contrib.auth.forms import UserCreationForm from django.core.urlresolvers import reverse from django.views.generic import CreateView class UserRegistrationView(CreateView): form_class = UserCreationForm template_name = 'user_registration.html' def get_success_url(self): return reverse('home') I’ll explain what each line of this code is doing in a bit. 
First, I’d like you to get to a state where you can register a new user and see for yourself how the flow works. Next, we’ll create the template for this view. In order to create the template, you first need to create a new folder called templates in the accounts folder. The name of the folder is important as Django automatically searches for templates in folders of that name. To create this folder, just type the following: > mkdir accounts/templates Next, create a new file called user_registration.html in the templates folder and type in the following code: {% extends "base.html" %} {% block content %} <h1>Create New User</h1> <form action="" method="post">{% csrf_token %} {{ form.as_p }} <input type="submit" value="Create Account" /> </form> {% endblock %} Finally, remove the existing code in blueblog/urls.py and replace it with this: from django.conf.urls import include from django.conf.urls import url from django.contrib import admin from django.views.generic import TemplateView from accounts.views import UserRegistrationView urlpatterns = [ url(r'^admin/', include(admin.site.urls)), url(r'^$', TemplateView.as_view(template_name='base.html'), name='home'), url(r'^new-user/$', UserRegistrationView.as_view(), name='user_registration'), ] That’s all the code that we need to get user registration in our project! Let’s do a quick demonstration. Run the development server by typing as follows: > python manage.py runserver In your browser, visit http://127.0.0.1:8000/new-user/ and you’ll see a user registration form. Fill this in and click on submit. You’ll be taken to a blank page on successful registration. If there are some errors, the form will be shown again with the appropriate error messages. Let’s verify that our new account was indeed created in our database. For the next step, we will need to have an administrator account. The Django auth contrib application can assign permissions to user accounts. The user with the highest level of permission is called the super user. The super user account has free reign over the application and can perform any administrator actions. To create a super user account, run this command: > python manage.py createsuperuser As you already have the runserver command running in your terminal, you will need to quit it first by pressing Ctrl + C in the terminal. You can then run the createsuperuser command in the same terminal. After running the createsuperuser command, you’ll need to start the runserver command again to browse the site. If you want to keep the runserver command running and run the createsuperuser command in a new terminal window, you will need to make sure that you activate the virtual environment for this application by running the same source blueblogEnv/bin/activate command that we ran earlier when we created our new project. After you have created the account, visit http://127.0.0.1:8000/admin/ and log in with the admin account. You will see a link titled Users. Click on this, and you should see a list of users registered in our app. It will include the user that you just created. Congrats! In most other frameworks, getting to this point with a working user registration feature would take a lot more effort. Django, with it’s batteries included approach, allows us to do the same with a minimum of effort. Next, I’ll explain what each line of code that you wrote does. 
Generic views Here’s the code for the user registration view again: class UserRegistrationView(CreateView): form_class = UserCreationForm template_name = 'user_registration.html' def get_success_url(self): return reverse('home') Our view is pretty short for something that does such a lot of work. That’s because instead of writing code from scratch to handle all the work, we use one of the most useful features of Django, Generic Views. Generic views are base classes included with Django that provide functionality commonly required by a lot of web apps. The power of generic views comes from the ability to customize them to a great degree with ease. You can read more about Django generic views in the documentation available at https://docs.djangoproject.com/en/1.9/topics/class-based-views/. Here, we’re using the CreateView generic view. This generic view can display ModelForm using a template and, on submission, can either redisplay the page with errors if the form data was invalid or call the save method on the form and redirect the user to a configurable URL. CreateView can be configured in a number of ways. If you want ModelForm to be created automatically from some Django model, just set the model attribute to the model class, and the form will be generated automatically from the fields of the model. If you want to have the form only show certain fields from the model, use the fields attribute to list the fields that you want, exactly like you’d do while creating ModelForm. In our case, instead of having ModelForm generated automatically, we’re providing one of our own, UserCreationForm. We do this by setting the form_class attribute on the view. This form, which is part of the auth contrib app, provides the fields and a save method that can be used to create a new user. You’ll see that this theme of composing solutions from small reusable parts provided by Django is a common practice in Django web app development and, in my opinion, one of the best features of the framework. Finally, we define a get_success_url function that does a simple reverse URL and returns the generated URL. CreateView calls this function to get the URL to redirect the user to when a valid form is submitted and saved successfully. To get something up and running quickly, we left out a real success page and just redirected the user to a blank page. We’ll fix this later. Templates and URLs The template, which extends the base template that we created earlier, simply displays the form passed to it by CreateView using the form.as_p method, which you might have seen in the simple Django projects you may have worked on before. The urls.py file is a bit more interesting. You should be familiar with most of it—the parts where we include the admin site URLs and the one where we assign our view a URL. It’s the usage of TemplateView that I want to explain here. Like CreateView, TemplateView is another generic view provided to us by Django. As the name suggests, this view can render and display a template to the user. It has a number of customization options. The most important one is template_name, which tells it which template to render and display to the user. We could have created another view class that subclassed TemplateView and customized it by setting attributes and overriding functions like we did for our registration view. However, I wanted to show you another method of using a generic view in Django. 
If you only need to customize some basic parameters of a generic view—in this case, we only wanted to set the template_name parameter of the view—you can just pass the values as key=value pairs as function keyword arguments to the as_view method of the class when including it in the urls.py file. Here, we pass the template name that the view renders when the user accesses its URL. As we just needed a placeholder URL to redirect the user to, we simply use the blank base.html template. This technique of customizing generic views by passing key/value pairs only makes sense when you’re interested in customizing very basic attributes, like we do here. In case you want more complicated customizations, I advice you to subclass the view; otherwise, you will quickly get messy code that is difficult to maintain. Login and logout With registration out of the way, let’s write code to provide users with the ability to log in and log out. To start out, the user needs some way to go to the login and registration pages from any page on the site. To do this, we’ll need to add header links to our template. This is the perfect opportunity to demonstrate how template inheritance can lead to much cleaner and less code in our templates. Add the following lines right after the body tag to our base.html file: {% block header %} <ul> <li><a href="">Login</a></li> <li><a href="">Logout</a></li> <li><a href="{% url "user_registration"%}">Register Account</a></li> </ul> {% endblock %} If you open the home page for our site now (at http://127.0.0.1:8000/), you should see that we now have three links on what was previously a blank page. It should look similar to the following screenshot: Click on the Register Account link. You’ll see the registration form we had before and the same three links again. Note how we only added these links to the base.html template. However, as the user registration template extends the base template, it got those links without any effort on our part. This is where template inheritance really shines. You might have noticed that href for the login/logout links is empty. Let’s start with the login part. Login view Let’s define the URL first. In blueblog/urls.py, import the login view from the auth app: from django.contrib.auth.views import login Next, add this to the urlpatterns list: url(r'^login/$', login, {'template_name': 'login.html'}, name='login'), Then, create a new file in accounts/templates called login.html. Put in the following content: {% extends "base.html" %} {% block content %} <h1>Login</h1> <form action="{% url "login" %}" method="post">{% csrf_token %} {{ form.as_p }} <input type="hidden" name="next" value="{{ next }}" /> <input type="submit" value="Submit" /> </form> {% endblock %} Finally, open up blueblog/settings.py file and add the following line to the end of the file: LOGIN_REDIRECT_URL = '/' Let’s go over what we’ve done here. First, notice that instead of creating our own code to handle the login feature, we used the view provided by the auth app. We import it using from django.contrib.auth.views import login. Next, we associate it with the login/ URL. If you remember the user registration part, we passed the template name to the home page view as a keyword parameter in the as_view() function. This approach is used for class-based views. For old-style view functions, we can pass a dictionary to the url function that is passed as keyword arguments to the view. Here, we use the template that we created in login.html. 
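Circling back to the earlier point about subclassing versus passing keyword arguments to as_view(): if the home page ever needed more than the template_name tweak, the subclass form would be a sketch along these lines. HomePageView and the extra context value are made-up names for illustration, not something we actually add to BlueBlog here.

from django.views.generic import TemplateView


class HomePageView(TemplateView):
    template_name = 'base.html'  # same template we pass via as_view() in urls.py

    # Any further customization, such as extra template context, would go here:
    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context['headline'] = 'Welcome to BlueBlog'  # hypothetical extra value
        return context

# The urls.py entry would then read:
# url(r'^$', HomePageView.as_view(), name='home'),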
If you look at the documentation for the login view (https://docs.djangoproject.com/en/stable/topics/auth/default/#django.contrib.auth.views.login), you’ll see that on successfully logging in, it redirects the user to settings.LOGIN_REDIRECT_URL. By default, this setting has a value of /accounts/profile/. As we don’t have such a URL defined, we change the setting to point to our home page URL instead. Next, let’s define the logout view. Logout view In blueblog/urls.py, import the logout view: from django.contrib.auth.views import logout Add the following to the urlpatterns list: url(r'^logout/$', logout, {'next_page': '/login/'}, name='logout'), That’s it. The logout view doesn’t need a template; it just needs to be configured with a URL to redirect the user to after logging them out. We just redirect the user back to the login page. Navigation links Having added the login/logout view, we need to make the links we added in our navigation menu earlier take the user to those views. Change the list of links that we had in templates/base.html to the following: <ul> {% if request.user.is_authenticated %} <li><a href="{% url "logout" %}">Logout</a></li> {% else %} <li><a href="{% url "login" %}">Login</a></li> <li><a href="{% url "user_registration"%}">Register Account</a></li> {% endif %} </ul> This will show the Login and Register Account links to the user if they aren’t already logged in. If they are logged in, which we check using the request.user.is_authenticated function, they are only shown the Logout link. You can test all of these links yourself and see how little code was needed to make such a major feature of our site work. This is all possible because of the contrib applications that Django provides. Summary In this article we started with a simple blogging platform in Django. We also had a look at setting up the Database, Staticfiles and Base templates. We have also created a user account app with registration and navigation links in it. Resources for Article: Further resources on this subject: Setting up a Complete Django E-commerce store in 30 minutes [article] "D-J-A-N-G-O... The D is silent." - Authentication in Django [article] Test-driven API Development with Django REST Framework [article]

Up and Running with Views

 In this article by Gregg Marshall, the author of Mastering Drupal 8 Views, we will get introduced to the world of Views in Drupal. Drupal 8 was released November 19, 2015, after almost 5 years of development by over 3,000 members of the Drupal community. Drupal 8 is the largest refactoring in the project's history. One of the most important changes in Drupal 8 was the inclusion of the most popular contributed module, Views. Similar to including CCK in Drupal 7, adding Views to Drupal 8 influenced how Drupal operates as many of the administration pages, such as the content list page, are now Views that can be modified or extended by site builders. Every site builder needs to master the Views module to really take advantage of Drupal's content structuring capabilities by giving site builders the ability to create lists of content formatted in many different ways. A single piece of content can be used for different displays, and all the content in each View is dynamically created when a visitor comes to a page. It was the only contributed module included in the Acquia Site Builder certification examination for Drupal 7. In this article, we will discuss the following topics: Looking at the Views administration page Reviewing the general Views module settings Modifying one of the views from Drupal core to create a specialized administrative page (For more resources related to this topic, see here.) Drupal 8 is here, should I upgrade? "Jim, this is Lynn, how are things at Fancy Websites?" "I read that Drupal 8 is being released on November 19. From our conversations this year, I guess that means it is time to upgrade our current Drupal 6 site. Should I upgrade to Drupal 7 or Drupal 8?" "Lynn, we're really excited that Drupal 8 is finally ready. It is a game changer, and I can name 10 reasons why Drupal 8 is the way to go": Mobile device compatibility is built into Drupal 8's DNA. Analytics show that 32% of your site traffic is coming from buyers using phones, and that's up from only 19% compared to last year. Multilingual is baked in and really works, so we can go ahead and add the Spanish version of the site we have been talking about. There's a new theme engine that will make styling the new site much easier. It's time to update the look of your site; it's looking pretty outdated compared to the competition. Web services is built in. When you're ready to add an app for your customer's phones, Drupal 8 will be ready. There are lots of new fields, so we won't need to add half a dozen contributed modules to let you build your content types. Drupal 8 is built using industry standards. This was a huge change you won't see, but it means that our shop will be able to recruit new developers more easily. The configuration is now stored in code. Finally, we'll have a way for you to develop on your local computer and move your changes to staging and then to production without having to rebuild content types and Views manually over and over. The WYSIWYG editor is built in. The complex setup we went through to get the right buttons and make the output work won't be necessary in Drupal 8. There's a nice tour capability built in so that you can set up custom "how to" demonstrations for your new users. This should free up a lot of your time, which is good given how you are growing. I've saved the best for last. Your favorite module, Views, is now built into core! Between Fields in Drupal 7 and now Views in Drupal 8, you've got the tools to extend your site built right into core. 
The bottom line is I can't imagine not going ahead and upgrading to Drupal 8. Views in core is reason enough. Why don't I set up a Drupal 8 installation on your development server so that you can start playing with Drupal 8? We're not doing any development work on your site right now, and we still have staging to test any updates." "That sounds great, Jim! Let me know when I can log in." Less than an hour later, the e-mail arrived; the Drupal 8 development site was set up and ready for Lynn to start experimenting. Based on the existing Drupal 6 site, Lynn set up four content types with the same fields she had on the current site. Jim was able to use the built-in migrate module to move some of her data to the new site. Lynn was ready to start exploring Views in Drupal 8. Looking at the Views administration page That evening, Lynn logged into the new site. Clicking on the Manage menu item, she then clicked on the Structure submenu item, and at the bottom of the list displayed on the Structure page, she clicked on the Views option. About that time, Jackson came in and settled into his spot near her terminal. "Hi Jackson, ready to explore Views with me?" Looking at the Views administration page, Lynn noticed there were already a number of Views defined. Scanning the list, she said "Look Jackson, Drupal 8 uses Views for administration pages. This means we can customize them to fit our way of doing things. I like Drupal 8 already!". Jackson purred. Lynn studied the Views administration page shown here: Views administration page As Lynn looked at each view, the listing looked familiar; she had seen the same kind of listing on her Drupal 6 site. Trying the OPERATIONS pull-down menu on the first View, she saw that the options were Edit, Duplicate, Disable, and Delete. "That's pretty clear; I guess Duplicate is the same as Clone on my old version of Views. I can change a View, create a new one using this one as a template, make it temporarily unavailable, or wipe it completely off the face of the earth." "I wonder what kind of settings there are on the Settings tab of this listing page. Look, Jackson, there's a couple of subtabs hiding on the Settings page." As Lynn didn't want to mess up her new Drupal site, she called Jim. "Hi, Jim. Can you give me a quick rundown on the Views Settings tab?" "Sure," he replied. Views settings "Looking at the Views Settings tab, you'll notice two subtabs, Basic and Advanced. Select the advanced settings tab by clicking on Advanced to show the following display: The Views advanced settings configuration page Views advanced settings Let's look at the Advanced tab first since you'll probably never use these settings. The first option, Disable views data caching, shouldn't be checked unless you are having issues with Views not updating when the data changes. Even then, you should probably disable caching on a per-View basis using the caching setting in the View's edit page in the third column, labeled Advanced, near the bottom of the column. Disabling Views' data caching can really slow down the page loads on your site. You might actually use the Advanced settings tab if you need to clear all the Views' caches, which you would do by clicking on the Clear Views' cache button. Views basic settings The other advanced setting is DEBUGGING with a Add Views signature to all SQL queries checkbox. 
Unless you are using MySQL's logs to debug queries, which only an advanced developer would do, you aren't going to want this overhead added to Views queries, so just leave it unselected. Moving to the Basic tab, there are a number of settings that might be handy, and I'd recommend changing the default settings. Click on Basic to show the following display: The Views basic settings configuration page The first option, Always show the master (default) display, might or might not be useful. If you create a new View and don't select either create a page or create a block (or provide a REST export if this module is enabled), then a default View display is created called master. If you select either option or both, then page and/or block View displays are created, and generally, you won't see master. It's there; it's just hidden. Sometimes, it is handy to be able to edit or use the master display. While I don't like creating a lot of displays in each View, sometimes, I do create two or three if the content being displayed is very similar. An obvious example is when you want to display the same blog listing as either a page or in a block on other pages. The same teaser information is displayed, just in different ways. So, having the two displays in the same View makes sense. Just make sure when you customize each display that any changes you make are set to only apply to the current display and not all displays. Otherwise, you might make changes you hadn't planned on in the other displays. Most of the time, you will see a pull-down menu that defaults to All displays, but you can select This page (override) to have the setting change apply only to this display. Having the master display show lets you create the information that will be the same in all the displays you are creating; then, you can create and customize the different displays. Using our blog example, you may create a master display that has a basic list of titles, with the titles linking to the full blog post. Then, you can create a blog display page, and using the This page (override) option, you can add summaries, add more links, and set the results to 10 per page. Using the master display, you can go back and add a display block that shows only the last five blog posts without any pager, again applying each setting only to the block display. You might then go back to the master display and create a second block that uses the tags to select five blog posts that are related, again making sure that the changes are applied to the current block and not all displays. Finally, when you want to change something that will affect all the displays, make the change on the master display, and this time, use the All displays option to make sure the other displays are updated. In our blog example, you might decide to change the CSS class used to display the titles to apply formatting from the theme; you probably want this to look the same in every possible display of the blog posts. The next basic setting for Views is Allow embedded displays. You will not enable this option; it is for developers who will use Views-generated content in their custom code. However, if you see it enabled, don't disable it; doing this would likely break something on your site using this feature. The last setting before the LIVE PREVIEW SETTINGS field set is Label for "Any" value on non-required single-select exposed filters, which lets you pick either <Any> or -Any- as the format for exposed filters that would allow a user to ignore the filter. 
Live Preview Settings There are several LIVE PREVIEW SETTINGS field sets I like to enable because they make debugging your Views easier. If the LIVE PREVIEW SETTINGS field set is closed (that is, the options are not showing), click on the title next to the arrow, and it will open. It will look similar to this: Live Preview Settings I generally enable the Automatically update preview on changes option. This way, any change I make to the View when I edit it shows the results that would occur after each change. Seeing things change right away gives me a clue whether a change will have an effect I'm not expecting. A lot of Views options can be tricky to understand, so a bit of trial and error is often required. Hence, expect to make a change and not see what you expect; just change the setting back, rethink the problem, and try again. Almost always, you'll get the answer eventually. If you have a View that is really complex and very slow, you can always disable the live preview while you edit the View by selecting the Auto preview option in the grey Preview bar just under all the View's settings. The next two options control whether Views will display the SQL query generated by the Views options you selected in the edit screen. I like to display the SQL query, so I will select the Above the preview option under Show SQL query and then select the Show the SQL query checkbox that follows it. If you don't check the Show the SQL query option, it doesn't matter what you select for above or below the preview, and if you expect to see the SQL queries and don't, it is likely that you set one option and not the other. Showing the SQL query can be confusing at first, but after a while, you'll find it handy to figure out what is going on, especially if you have relationships (or should have relationships and don't realize it). And, of course, if you can't read the query, you can always e-mail me for a translation to English. The next option, Show performance statistics, is handy when trying to figure out why some Views-generated page is loading slowly. But usually, this isn't an issue you'd be thinking of, so I'd leave it off. You want to focus on getting the right information to display exactly the way you want without thinking about the performance. If we later decide it's too slow, the developer we'll assign to it will use this information and turn the option on in development. The same is true about Show other queries run during render during live preview. This information is handy to figure out performance issues and occasionally a display formatting issue during theming, but it isn't something you as a nonprogrammer should be worried about. Seeing all the extra queries can be confusing and intimidating, yet it doesn't really offer you any help creating a View. "Oh, don't forget to click on Save configuration if you change any settings. I don't know how many times I've forgotten to save a configuration change in Drupal and then wondered why my change hasn't stuck. Does this help?" "Thanks Jim, that is great. I owe you a coffee next time we get together." Hanging up the phone, Lynn said, "What do you think, Jackson? Let's start off by creating a property maintenance page for our salespeople to use? I think I'll get a quick win by modifying one of Drupal's core views." Adapting an existing View Lynn will use her knowledge from using Views on her existing Drupal site, and so move quickly. 
The existing content page provided by Views is general purpose and offers lots of options, and not all these options are appropriate for all content editors. This page looks similar to the following one: Drupal's standard content listing page Lynn started creating her property maintenance page by going to the Views listing page (Manage | Structure | Views) and selecting Duplicate from the OPERATIONS pull-down menu on the right-hand side of this row. On the next screen, she named the Property Maintenance view and clicked on the Duplicate button. When the View edit screen appeared, she was ready to adapt it to her need. First, she selected the Page display, assuming the Always show the master (default) display setting was already selected; otherwise, the Page display will be selected by default as it is the only display in this View. Remember that any change made in the View edit page isn't saved until you click on the Save button. Also, unsaved changes won't show up when the page/block is displayed. If you make a change, look at it using another browser or tab, and if you don't see the change reflected, it is likely that you didn't save the change you just made. The Property Maintenance screen before making any changes Editing the Property Maintenance view Starting with the left-hand side column of the View edit screen, Lynn changed the title by clicking on the Content link next to the Title label. She changed the title to Property Maintenance. Moving down the column, Lynn decided that the table display and settings were okay on the original screen and skipped them. Under the FIELDS section, Lynn decided to delete the Content: Node operations bulk form, Content: Type (Content Type), and (author) User: Name (Author) fields/columns as they weren't useful to the real estate salespeople who would be using this page. To do this, she clicked on Content: Node operations bulk form and then on the Remove link at the bottom of the Configure field modal that appeared. She repeated the removing of the field for the Content: Type (Content Type) and (author) User: Name (Author) fields. Lynn noted that the username field appeared to be the only field reference to the author entity, so she could delete the relationship later. Moving on to FILTER CRITERIA, Lynn was a bit confused by the first two filters. When she clicked on Content: Published status or admin user, the description said "Filters out unpublished content if the current user cannot view it". "This seems reasonable, let's keep this filter," she thought, and she clicked on Cancel. Next was Content: Publishing status (grouped), an exposed filter that lets the user filter by either published or unpublished. This seemed useful, so Lynn kept it and clicked on Cancel. The next filter, Content: Type (exposed), is necessary but shouldn't be selectable by the user, so Lynn clicked on it to edit the filter, unselected the Expose this filter to visitors option, and selected just the Property content type, making the filter only select content that are properties. The next filter, Content: Title (exposed), is handy, so Lynn left it as is. The final filter, Content: Translation language (exposed), isn't needed as Lynn's site isn't multilingual, so Lynn deleted the filter. Moving on to the center column of the View edit page, under the PAGE SETTINGS heading, Lynn changed the path for the View to /admin/property-maintenance by clicking on the existing /admin/content/node path, making the change, and clicking on the Apply button. 
Next in this column was the menu setting. Lynn doesn't want the property maintenance page to be part of the administration content page, so she clicked on Tab: Content and changed the menu type to Normal menu entry. This changed the fields displayed on the right-hand side of the modal, so Lynn changed the Menu link title to Property Maintenance, left the description blank, and left Show as expanded unselected. In the Parent pull-down menu, she selected the <Tools> menu. Tools is the default Drupal menu for site tools that is only shown to authorized users, who are logged into the site and can view the page linked to, which real estate salespeople will be able to view. She left the weight at -10, planning on reorganizing this menu when she has most of it configured. As this is the last option, she clicked on Apply to exit the modal. The last setting in the PAGE SETTINGS section is Access. Lynn knew she needed to change the required permission as she didn't plan on giving real estate salespeople access to the main content page, but she wasn't sure which permission to give them. Looking through the permissions page (the People | Permissions tab), Lynn didn't see any permission that made sense for who should be able to see this maintenance page. So, she clicked on the Permission link in the center column of the View edit page and changed the Access value from Permission to Role, and when she clicked on the Apply (all displays) button, she could select the role(s) she wanted to be able to see on this page. She selected the Administrator, Real Estate Salesperson, and Office Administrator roles. One way to test access while you develop is to use a second browser and log in as the other kind of user. A common mistake in Drupal is to see content while logged in as an administrator that can't be seen by other users. This can also be done using a second tab opened in "incognito" mode, but I find it easier to use a different browser (for example, Chrome and Firefox). You can even have three browsers open to the same page to test a third kind of user. Continuing down the column, Lynn decided she didn't need a header or footer on this administration page at least for now, but she did want to change the NO RESULTS BEHAVIOR message. Drupal has a text message defined, so she clicked on the Global: Unfiltered text (Global: Unfiltered text) link, changed the Content field to No properties meeting your filter criteria are available., and clicked on the Apply (all displays) button. The final section, PAGER, seemed fine, so Lynn skipped over it and moved to the third column of the view edit page, ADVANCED SETTINGS. As Lynn had changed the setting to always show the advanced settings, Lynn noticed that there was a relationship for author. As she had deleted displaying the author name, there wasn't any reason to keep the relationship because she wasn't using any of the author's details. She clicked on the author link and then on the Remove link at the bottom of the modal. Reviewing the results of the live preview, Lynn was satisfied and clicked on the Save button to save her modified view. There is a maxim in computers, Save Early, Save Often. As you develop or modify your View, when you reach a point where your progress so far is okay, click on the Save button. Then, if you make a terrible mistake in the next change, you can click on the Cancel button and then click on Edit to resume from where you last saved. 
Before saving the View, the result looked similar to the following screen: The resulting Property Maintenance View edit screen with all the changes Debugging – Live Preview is your friend Assuming you enabled Live Preview in your Views settings earlier in this article, as you are building your View, Views will show what will be displayed. Formatting and some JavaScript displays, such as Google mapping, can't be displayed in Live Preview, but to debug, you generally don't need them. Many Views challenges are getting the data that you want to display or getting data to be displayed the way you want. Many Views are created using the fields content display. Often, you will see fields that you don't want displayed when reviewing Live Preview because you didn't check the Exclude from display option in the field configuration. Or, you will select a field from the Add fields list that isn't actually the field you want to display the data you want—for instance, do you want article tags or article tags (field_tags: delta)? Sometimes you have to just try one and see what happens. If it isn't the right option, delete the field and try another. Experience will guide you as you use Views, but even the most experienced site builders wonder what some field or field option does in the context of the View they are building. Remember to save the View before you experiment with this next idea. Then, if it doesn't work out, you can just click on Cancel and not lose all the previous work you put in. If you disabled Live Preview, hopefully, you have decided to go back and enable it; seeing the output and looking at the generated SQL queries is really very useful in trying to figure out what might be going wrong. "Okay, Jackson, I see that a lot of what I knew from the previous versions of Views applies to the version in Drupal 8. Now that I've quickly gone through the edit screen to modify a core View, let's get serious and really learn the ins and outs of this version of Views." Summary In this article, we covered the Views administration page, where you can add, delete, edit, and duplicate views. Then, we reviewed all the general Views module settings. Finally, we modified a core View, quickly going through several configuration options. If you have used Views in older versions of Drupal, you should feel comfortable. If this is your first introduction to Views, don't panic that we glossed over a lot or if you felt lost. Resources for Article: Further resources on this subject: Working with Drupal Audio in Flash (part 2) [article] Modular Programming in ECMAScript 6 [article] Using NoSQL Databases [article]


What is Flux?

Packt
27 Apr 2016
27 min read
In this article by Adam Boduch, author of Flux Architecture covers the basic idea of Flux. Flux is supposed to be this great new way of building complex user interfaces that scale well. At least that's the general messaging around Flux, if you're only skimming the Internet literature. But, how do we define this great new way of building user interfaces? What makes it superior to other more established frontend architectures? The aim of this article is to cut through the sales bullet points and explicitly spell out what Flux is, and what it isn't, by looking at the patterns that Flux provides. And since Flux isn't a software package in the traditional sense, we'll go over the conceptual problems that we're trying to solve with Flux. Finally, we'll close the article by walking through the core components found in any Flux architecture, and we'll install the Flux npm package and write a hello world Flux application right away. Let's get started. (For more resources related to this topic, see here.) Flux is a set of patterns We should probably get the harsh reality out of the way first—Flux is not a software package. It's a set of architectural patterns for us to follow. While this might sound disappointing to some, don't despair—there's good reasons for not implementing yet another framework. Throughout the course of this book, we'll see the value of Flux existing as a set of patterns instead of a de facto implementation. For now, we'll go over some of the high-level architectural patterns put in place by Flux. Data entry points With traditional approaches to building frontend architectures, we don't put much thought into how data enters the system. We might entertain the idea of data entry points, but not in any detail. For example, with MVC (Model View Controller) architectures, the controller is supposed control the flow of data. And for the most part, it does exactly that. On the other hand, the controller is really just about controlling what happens after it already has the data. How does the controller get data in the first place? Consider the following illustration: At first glance, there's nothing wrong with this picture. The data flow, represented by the arrows, is easy to follow. But where does the data originate? For example, the view can create new data and pass it to the controller, in response to a user event. A controller can create new data and pass it to another controller, depending on the composition of our controller hierarchy. What about the controller in question—can it create data itself and then use it? In a diagram such as this one, these questions don't have much virtue. But, if we're trying to scale an architecture to have hundreds of these components, the points at which data enters the system becomes very important. Since Flux is used to build architectures that scale, it considers data entry points an important architectural pattern. Managing state State is one of those realities we need to cope with in frontend development. Unfortunately, we can't compose our entire application of pure functions with no side effects for two reasons. First, our code needs to interact with the DOM interface, in one way or another. This is how the user sees changes in the UI. Second, we don't store all our application data in the DOM (at least we shouldn't do this). As time passes and the user interacts with the application, this data will change. 
There's no cut-and-dry approach to managing state in a web application, but there are several ways to limit the amount of state changes that can happen, and enforce how they happen. For example, pure functions don't change the state of anything, they can only create new data. Here's an example of what this looks like: As you can see, there's no side effects with pure functions because no data changes state as a result of calling them. So why is this a desirable trait, if state changes are inevitable? The idea is to enforce where state changes happen. For example, perhaps we only allow certain types of components to change the state of our application data. This way, we can rule out several sources as the cause of a state change. Flux is big on controlling where state changes happen. Later on in the article, we'll see how Flux stores manage state changes. What's important about how Flux manages state is that it's handled at an architectural layer. Contrast this with an approach that lays out a set of rules that say which component types are allowed to mutate application data—things get confusing. With Flux, there's less room for guessing where state changes take place. Keeping updates synchronous Complimentary to data entry points is the notion of update synchronicity. That is, in addition to managing where the state changes originate from, we have to manage the ordering of these changes relative to other things. If the data entry points are the what of our data, then synchronously applying state changes across all the data in our system is the when. Let's think about why this matters for a moment. In a system where data is updated asynchronously, we have to account for race conditions. Race conditions can be problematic because one piece of data can depend on another, and if they're updated in the wrong order, we see cascading problems, from one component to another. Take a look at this diagram, which illustrates this problem: When something is asynchronous, we have no control over when that something changes state. So, all we can do is wait for the asynchronous updates to happen, and then go through our data and make sure all of our data dependencies are satisfied. Without tools that automatically handle these dependencies for us, we end up writing a lot of state-checking code. Flux addresses this problem by ensuring that the updates that take place across our data stores are synchronous. This means that the scenario illustrated in the preceding diagram isn't possible. Here's a better visualization of how Flux handles the data synchronization issues that are typical of JavaScript applications today: Information architecture It's easy to forget that we work in information technology and that we should be building technology around information. In recent times, however, we seem to have moved in the other direction, where we're forced to think about implementation before we think about information. More often than not, the data exposed by the sources used by our application, don't have what the user needs. It's up to our JavaScript to turn this raw data into something consumable by the user. This is our information architecture. Does this mean that Flux is used to design information architectures as opposed to a software architecture? This isn't the case at all. In fact, Flux components are realized as true software components that perform actual computations. The trick is that Flux patterns enable us to think about information architecture as a first-class design consideration. 
Rather than having to sift through all sorts of components and their implementation concerns, we can make sure that we're getting the right information to the user. Once our information architecture takes shape, the larger architecture of our application follows, as a natural extension to the information we're trying to communicate to our users. Producing information from data is the difficult part. We have to distill many sources of data into not only information, but information that's also of value to the user. Getting this wrong is a huge risk for any project. When we get it right, we can then move on to the specific application components, like the state of a button widget, and so on. Flux architectures keep data transformations confined to their stores. A store is an information factory—raw data goes in and new information comes out. Stores control how data enters the system, the synchronicity of state changes, and they define how the state changes. When we go into more depth on stores as we progress through the book, we'll see how they're the pillars of our information architecture. Flux isn't another framework Now that we've explored some of the high-level patterns of Flux, it's time to revisit the question: what is Flux again? Well, it is just a set of architectural patterns we can apply to our frontend JavaScript applications. Flux scales well because it puts information first. Information is the most difficult aspect of software to scale; Flux tackles information architecture head on. So, why aren't Flux patterns implemented as a Framework? This way, Flux would have a canonical implementation for everyone to use; and like any other large scale open source project, the code would improve over time as the project matures. The main problem is that Flux operates at an architectural level. It's used to address information problems that prevent a given application from scaling to meet user demand. If Facebook decided to release Flux as yet another JavaScript framework, it would likely have the same types of implementation issues that plague other frameworks out there. For example, if some component in a framework isn't implemented in a way that best suits the project we're working on, then it's not so easy to implement a better alternative, without hacking the framework to bits. What's nice about Flux is that Facebook decided to leave the implementation options on the table. They do provide a few Flux component implementations, but these are reference implementations. They're functional, but the idea is that they're a starting point for us to understand the mechanics of how things such as dispatchers are expected to work. We're free to implement the same Flux architectural pattern as we see it. Flux isn't a framework. Does this mean we have to implement everything ourselves? No, we do not. In fact, developers are implementing Flux frameworks and releasing them as open source projects. Some Flux libraries stick more closely to the Flux patterns than others. These implementations are opinionated, and there's nothing wrong with using them if they're a good fit for what we're building. The Flux patterns aim to solve generic conceptual problems with JavaScript development, so you'll learn what they are before diving into Flux implementation discussions. Flux solves conceptual problems If Flux is simply a collection of architectural patterns instead of a software framework, what sort of problems does it solve? 
In this section, we'll look at some of the conceptual problems that Flux addresses from an architectural perspective. These include unidirectional data flow, traceability, consistency, component layering, and loosely coupled components. Each of these conceptual problems pose a degree of risk to our software, in particular, the ability to scale it. Flux helps us get out in front of these issues as we're building the software. Data flow direction We're creating an information architecture to support the feature-rich application that will ultimately sit on top of this architecture. Data flows into the system, and will eventually reach an endpoint, terminating the flow. It's what happens in between the entry point and the termination point that determines the data flow within a Flux architecture. This is illustrated here: Data flow is a useful abstraction, because it's easy to visualize data as it enters the system and moves from one point to another. Eventually, the flow stops. But before it does, several side effects happen along the way. It's that middle block in the preceding diagram that's concerning, because we don't know exactly how the data-flow reached the end. Let's say that our architecture doesn't pose any restrictions on data flow. Any component is allowed to pass data to any other component, regardless of where that component lives. Let's try to visualize this setup: As you can see, our system has clearly defined entry and exit points for our data. This is good because it means that we can confidently say that the data flows through our system. The problem with this picture is with how the data flows between the components of the system. There's no direction, or rather, it's multidirectional. This isn't a good thing. Flux is a unidirectional data flow architecture. This means that the preceding component layout isn't possible. The question is—why does this matter? At times, it might seem convenient to be able to pass data around in any direction, that is, from any component to any other component. This in and of itself isn't the issue—passing data alone doesn't break our architecture. However, when data moves around our system in more than one direction, there's more opportunity for components to fall out of sync with one another. This simply means that if data doesn't always move in the same direction, there's always the possibility of ordering bugs. Flux enforces the direction of data flows, and thus eliminates the possibility of components updating themselves in an order that breaks the system. No matter what data has just entered the system, it'll always flow through the system in the same order as any other data, as illustrated here: Predictable root cause With data entering our system and flowing through our components in one direction, we can more easily trace any effect to it's cause. In contrast, when a component sends data to any other component residing in any architectural layer, it's a lot more difficult to figure how the data reached it's destination. Why does this matter? Debuggers are sophisticated enough that we can easily traverse any level of complexity during runtime. The problem with this notion is that it presumes we only need to trace what's happening in our code for the purposes of debugging. Flux architectures have inherently predictable data flows. This is important for a number of design activities and not just debugging. Programmers working on Flux applications will begin to intuitively sense what's going to happen. 
Anticipation is key, because it let's us avoid design dead-ends before we hit them. When the cause and effect are easy to tease out, we can spend more time focusing on building application features—the things the customers care about. Consistent notifications The direction in which we pass data from component to component in Flux architectures should be consistent. In terms of consistency, we also need to think about the mechanism used to move data around our system. For example, publish/subscribe (pub/sub) is a popular mechanism used for inter-component communication. What's neat about this approach is that our components can communicate with one another, and yet, we're able to maintain a level of decoupling. In fact, this is fairly common in frontend development because component communication is largely driven by user events. These events can be thought of as fire-and-forget. Any other components that want to respond to these events in some way, need to take it upon themselves to subscribe to the particular event. While pub/sub does have some nice properties, it also poses architectural challenges, in particular, scaling complexities. For example, let's say that we've just added several new components for a new feature. Well, in which order do these components receive update messages relative to pre-existing components? Do they get notified after all the pre-existing components? Should they come first? This presents a data dependency scaling issue. The other challenge with pub-sub is that the events that get published are often fine grained to the point where we'll want to subscribe and later unsubscribe from the notifications. This leads to consistency challenges because trying to code lifecycle changes when there's a large number of components in the system is difficult and presents opportunities for missed events. The idea with Flux is to sidestep the issue by maintaining a static inter-component messaging infrastructure that issues notifications to every component. In other words, programmers don't get to pick and choose the events their components will subscribe to. Instead, they have to figure out which of the events that are dispatched to them are relevant, ignoring the rest. Here's a visualization of how Flux dispatches events to components: The Flux dispatcher sends the event to every component; there's no getting around this. Instead of trying to fiddle with the messaging infrastructure, which is difficult to scale, we implement logic within the component to determine whether or not the message is of interest. It's also within the component that we can declare dependencies on other components, which helps influence the ordering of messages. Simple architectural layers Layers can be a great way to organize an architecture of components. For one thing, it's an obvious way to categorize the various components that make up our application. For another thing, layers serve as a means to put constraints around communication paths. This latter point is especially relevant to Flux architectures since it's imperative that data flow in one direction. It's much easier to apply constraints to layers than it is to individual components. Here is an illustration of Flux layers: This diagram isn't intended to capture the entire data flow of a Flux architecture, just how data flows between the main three layers. It also doesn't give any detail about what's in the layers. 
Don't worry, the next section gives introductory explanations of the types of Flux components and the communication that happens between the layers is the focus of this entire book. As you can see, the data flows from one layer to the next, in one direction. Flux only has a few layers, and as our applications scale in terms of component counts, the layer counts remains fixed. This puts a cap on the complexity involved with adding new features to an already large application. In addition to constraining the layer count and the data flow direction, Flux architectures are strict about which layers are actually allowed to communicate with one another. For example, the action layer could communicate with the view layer, and we would still be moving in one direction. We would still have the layers that Flux expects. However, skipping a layer like this is prohibited. By ensuring that layers only communicate with the layer directly beneath it, we can rule out bugs introduced by doing something out-of-order. Loosely coupled rendering One decision made by the Flux designers that stands out is that Flux architectures don't care how UI elements are rendered. That is to say, the view layer is loosely coupled to the rest of the architecture. There are good reasons for this. Flux is an information architecture first, and a software architecture second. We start with the former and graduate toward the latter. The challenge with view technology is that it can exert a negative influence on the rest of the architecture. For example, one view has a particular way of interacting with the DOM. Then, if we've already decided on this technology, we'll end up letting it influence the way our information architecture is structured. This isn't necessarily a bad thing, but it can lead to us making concessions about the information we ultimately display to our users. What we should really be thinking about is the information itself and how this information changes over time. What actions are involved that bring about these changes? How is one piece of data dependent on another piece of data? Flux naturally removes itself from the browser technology constraints of the day so that we can focus on the information first. It's easy to plug views into our information architecture as it evolves into a software product. Flux components In this section, we'll begin our journey into the concepts of Flux. These concepts are the essential ingredients used in formulating a Flux architecture. While there's no detailed specifications for how these components should be implemented, they nevertheless lay the foundation of our implementation. This is a high-level introduction to the components we'll be implementing throughout this book. Action Actions are the verbs of the system. In fact, it's helpful if we derive the name of an action directly from a sentence. These sentences are typically statements of functionality; something we want the application to do. Here are some examples: Fetch the session Navigate to the settings page Filter the user list Toggle the visibility of the details section These are simple capabilities of the application, and when we implement them as part of a Flux architecture, actions are the starting point. These human-readable action statements often require other new components elsewhere in the system, but the first step is always an action. So, what exactly is a Flux action? At it's simplest, an action is nothing more than a string—a name that helps identify the purpose of the action. 
More typically, actions consist of a name and a payload. Don't worry about the payload specifics just yet—as far as actions are concerned, they're just opaque pieces of data being delivered into the system. Put differently, actions are like mail parcels. The entry point into our Flux system doesn't care about the internals of the parcel, only that they get to where they need to go. Here's an illustration of actions entering a Flux system: This diagram might give the impression that actions are external to Flux when in fact, they're an integral part of the system. The reason this perspective is valuable is because it forces us to think about actions as the only means to deliver new data into the system. Golden Flux Rule: If it's not an action, it can't happen. Dispatcher The dispatcher in a Flux architecture is responsible for distributing actions to the store components (we'll talk about stores next). A dispatcher is actually kind of like a broker—if actions want to deliver new data to a store, they have to talk to the broker, so it can figure out the best way to deliver them. Think about a message broker in a system like RabbitMQ. It's the central hub where everything is sent before it's actually delivered. Here is a diagram depicting a Flux dispatcher receiving actions and dispatching them to stores: In a Flux application, there's only one dispatcher. It can be thought of more as a pseudo layer than an explicit layer. We know the dispatcher is there, but it's not essential to this level of abstraction. What we're concerned about at an architectural level, is making sure that when a given action is dispatched, we know that it's going to make it's way to every store in the system. Having said that, the dispatcher's role is critical to how Flux works. It's the place where store callback functions are registered. And it's how data dependencies are handled. Stores tell the dispatcher about other stores that it depends on, and it's up to the dispatcher to make sure these dependencies are properly handled. Golden Flux Rule: The dispatcher is the ultimate arbiter of data dependencies. Store Stores are where state is kept in a Flux application. Typically, this means the application data that's sent to the frontend from the API. However, Flux stores take this a step further and explicitly model the state of the entire application. For now, just know that stores are where state that matters can be found. Other Flux components don't have state—they have implicit state at the code level, but we're not interested in this, from an architectural point of view. Actions are the delivery mechanism for new data entering the system. The term new data doesn't imply that we're simply appending it to some collection in a store. All data entering the system is new in the sense that it hasn't been dispatched as an action yet—it could in fact result in a store changing state. Let's look at a visualization of an action that results in a store changing state: The key aspect of how stores change state is that there's no external logic that determines a state change should happen. It's the store, and only the store, that makes this decision and then carries out the state transformation. This is all tightly encapsulated within the store. This means that when we need to reason about a particular information, we need not look any further than the stores. They're their own boss—they're self-employed. Golden Flux Rule: Stores are where state lives, and only stores themselves can change this state. 
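To make the store concept a little more concrete, here is a minimal sketch of what a store module might look like. This is not the official reference implementation; the store name, the action type, and the listener mechanism are assumptions made purely for illustration, and only the Dispatcher comes from the flux package.

// A bare-bones store sketch. Only the Dispatcher is provided by
// the "flux" package; everything else here is illustrative.
import { Dispatcher } from 'flux';

const dispatcher = new Dispatcher();

// The state lives inside this module and nowhere else.
let items = [];
const listeners = [];

export const itemStore = {
  // Views read state through accessors; they never mutate it directly.
  getItems() {
    return items.slice();
  },

  // Views subscribe here to find out when the state has changed.
  addListener(callback) {
    listeners.push(callback);
  }
};

// The store registers with the dispatcher. This callback is the
// only place where the store's state is allowed to change.
dispatcher.register((action) => {
  if (action.type === 'ADD_ITEM') {
    items = items.concat(action.payload);
    listeners.forEach((callback) => callback());
  }
});

export { dispatcher };

Notice that nothing outside this module can reach in and assign to items; the only way to change the store's state is to dispatch an action, which is exactly the constraint the golden rule describes.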
View The last Flux component we're going to look at in this section is the view, and it technically isn't even a part of Flux. At the same time, views are obviously a critical part of our application. Views are almost universally understood as the part of our architecture that's responsible for displaying data to the user—it's the last stop as data flows through our information architecture. For example, in MVC architectures, views take model data and display it. In this sense, views in a Flux-based application aren't all that different from MVC views. Where they differ markedly is with regard to handling events. Let's take a look at the following diagram: Here we can see the contrasting responsibilities of a Flux view, compared with a view component found in your typical MVC architecture. The two view types have similar types of data flowing into them—application data used to render the component and events (often user input). What's different between the two types of views is what flows out of them. The typical view doesn't really have any constraints in how it's event handler functions communicate with other components. For example, in response to a user clicking a button, the view could directly invoke behavior on a controller, change the state of a model, or it might query the state of another view. On the other hand, the Flux view can only dispatch new actions. This keeps our single entry point into the system intact and consistent with other mechanisms that want to change the state of our store data. In other words, an API response updates state in the exact same way as a user clicking a button does. Given that views should be restricted in terms of how data flows out of them (besides DOM updates) in a Flux architecture, you would think that views should be an actual Flux component. This would make sense insofar as making actions the only possible option for views. However, there's also no reason we can't enforce this now, with the benefit being that Flux remains entirely focused on creating information architectures. Keep in mind, however, that Flux is still in it's infancy. There's no doubt going to be external influences as more people start adopting Flux. Maybe Flux will have something to say about views in the future. Until then, views exist outside of Flux but are constrained by the unidirectional nature of Flux. Golden Flux Rule: The only way data flows out of a view is by dispatching an action. Installing the Flux package We'll get some of our boilerplate code setup tasks out of the way too, since we'll be using a similar setup throughout the book. We'll skip going over Node + NPM installation since it's sufficiently covered in great detail all over the Internet. We'll assume Node is installed and ready to go from this point forward. The first NPM package we'll need installed is Webpack. This is an advanced module bundler that's well suited for modern JavaScript applications, including Flux-based applications. We'll want to install this package globally so that the webpack command gets installed on our system: npm install webpack -g With Webpack in place, we can build each of the code examples that ship with this book. However, our project does require a couple local NPM packages, and these can be installed as follows: npm install flux babel-core babel-loader babel-preset-es2015 --save-dev The --save-dev option adds these development dependencies to our file, if one exists. 
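For reference, the resulting package.json might look something like the following sketch. The project name and the version numbers shown here are illustrative only; the exact versions you get will depend on when you run the install.

{
  "name": "flux-hello-world",
  "version": "1.0.0",
  "devDependencies": {
    "flux": "^2.1.1",
    "babel-core": "^6.7.0",
    "babel-loader": "^6.2.0",
    "babel-preset-es2015": "^6.6.0"
  }
}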
This is just to get started—it isn't necessary to manually install these packages to run the code examples in this book. The examples you've downloaded already come with a package.json, so to install the local dependencies, simply run the following from within the same directory as the package.json file:

npm install

Now the webpack command can be used to build the example. Alternatively, if you plan on playing with the code, which is obviously encouraged, try running webpack --watch. This latter form of the command will monitor for file changes to the files used in the build, and run the build whenever they change. This is indeed a simple hello world to get us off to a running start, in preparation for the remainder of the book. We've taken care of all the boilerplate setup tasks by installing Webpack and its supporting modules. Let's take a look at the code now. We'll start by looking at the markup that's used.

<!doctype html>
<html>
  <head>
    <title>Hello Flux</title>
    <script src="main-bundle.js" defer></script>
  </head>
  <body></body>
</html>

Not a lot to it, is there? There isn't even content within the body tag. The important part is the main-bundle.js script—this is the code that's built for us by Webpack. Let's take a look at this code now:

// Imports the "flux" module.
import * as flux from 'flux';

// Creates a new dispatcher instance. "Dispatcher" is
// the only useful construct found in the "flux" module.
const dispatcher = new flux.Dispatcher();

// Registers a callback function, invoked every time
// an action is dispatched.
dispatcher.register((e) => {
  var p;

  // Determines how to respond to the action. In this case,
  // we're simply creating new content using the "payload"
  // property. The "type" property determines how we create
  // the content.
  switch (e.type) {
    case 'hello':
      p = document.createElement('p');
      p.textContent = e.payload;
      document.body.appendChild(p);
      break;
    case 'world':
      p = document.createElement('p');
      p.textContent = `${e.payload}!`;
      p.style.fontWeight = 'bold';
      document.body.appendChild(p);
      break;
    default:
      break;
  }
});

// Dispatches a "hello" action.
dispatcher.dispatch({
  type: 'hello',
  payload: 'Hello'
});

// Dispatches a "world" action.
dispatcher.dispatch({
  type: 'world',
  payload: 'World'
});

As you can see, there's not much to this hello world Flux application. In fact, the only Flux-specific component this code creates is a dispatcher. It then dispatches a couple of actions, and the handler function that's registered with the dispatcher processes them. Don't worry that there are no stores or views in this example. The idea is that we've got the basic Flux NPM package installed and ready to go.

Summary

This article introduced you to Flux. Specifically, we looked at both what Flux is and what it isn't. Flux is a set of architectural patterns that, when applied to our JavaScript application, help us get the data flow aspect of our architecture right. Flux isn't yet another framework used for solving specific implementation challenges, be it browser quirks or performance gains—there's a multitude of tools already available for these purposes. Perhaps the most important defining aspect of Flux is the conceptual problems it solves—things like unidirectional data flow. This is a major reason that there's no de facto Flux implementation. We wrapped the article up by walking through the setup of our build components used throughout the book.
To test that the packages are all in place, we created a very basic hello world Flux application. Resources for Article: Further resources on this subject: Reactive Programming and the Flux Architecture [article] Advanced React [article] Qlik Sense's Vision [article]  

Features of Sitecore

Packt
25 Apr 2016
17 min read
In this article by Yogesh Patel, the author of the book, Sitecore Cookbook for Developers, we will discuss about the importance of Sitecore and its good features. (For more resources related to this topic, see here.) Why Sitecore? Sitecore Experience Platform (XP) is not only an enterprise-level content management system (CMS), but rather a web framework or web platform, which is the global leader in experience management. It continues to be very popular because of its highly scalable and robust architecture, continuous innovations, and ease of implementations compared to other CMSs available. It also provides an easier integration with many external platforms such as customer relationship management (CRM), e-commerce, and so on. Sitecore architecture is built with the Microsoft .NET framework and provides greater depth of APIs, flexibility, scalability, performance, and power to developers. It has great out-of-the-box capabilities, but one of its great strengths is the ease of extending these capabilities; hence, developers love Sitecore! Sitecore provides many features and functionalities out of the box to help content owners and marketing teams. These features can be extended and highly customized to meet the needs of your unique business rules. Sitecore provides these features with different user-friendly interfaces for content owners that helps them manage content and media easily and quickly. Sitecore user interfaces are supported on almost every modern browser. In addition, fully customized web applications can be layered in and integrated with other modules and tools using Sitecore as the core platform. It helps marketers to optimize the flow of content continuously for better results and more valuable outcomes. It also provides in-depth analytics, personalized experience to end users, and marketing automation tools, which play a significant role for marketing teams. The following are a few of the many features of Sitecore. CMS based on the .NET Framework Sitecore provides building components on ASP.NET Web forms as well as ASP.NET Model-View-Controller (MVC) frameworks, so developers can choose either approach to match the required architecture. Sitecore provides web controls and sublayouts while working with ASP.NET web forms and view rendering, controller rendering, and models and item rendering while working with the ASP.NET MVC framework. Sitecore also provides two frameworks to prepare user interface (UI) applications for Sitecore clients—Sheer UI and SPEAK. Sheer UI applications are prepared using Extensible Application Markup Language (XAML) and most of the Sitecore applications are prepared using Sheer UI. Sitecore Process Enablement and Accelerator Kit (SPEAK) is the latest framework to develop Sitecore applications with a consistent interface quickly and easily. SPEAK gives you a predefined set of page layouts and components: Component-based architecture Sitecore is built on a component-based architecture, which provides us with loosely coupled independent components. The main advantage of these components is their reusability and loosely coupled independent behaviour. It aims to provide reusability of components at the page level, site level, and Sitecore instance level to support multisite or multitenant sites. Components in Sitecore are built with the normal layered approach, where the components are split into layers such as presentation, business logic, data layer, and so on. 
Sitecore provides different presenation components, including layouts, sublayouts, web control renderings, MVC renderings, and placeholders. Sitecore manages different components in logical grouping by their templates, layouts, sublayouts, renderings, devices, media, content items, and so on: Layout engine The Sitecore layout engine extends the ASP.NET web application server to merge content with presentation logic dynamically when web clients request resources. A layout can be a web form page (.aspx) or MVC view (.cshtml) file. A layout can have multiple placeholders to place content on predefined places, where the controls are placed. Controls can be HTML markup controls such as a sublayout (.ascx) file, MVC view (.cshtml) file, or other renderings such as web control, controller rendering, and so on, which can contain business logic. Once the request criteria are resolved by the layout engine, such as item, language, and device, the layout engine creates a platform to render different controls and assemble their output to relevant placeholders on the layout. Layout engine provides both static and dynamic binding. So, with dynamic binding, we can have clean HTML markups and reusability of all the controls or components. Binding of controls, layouts, and devices can be applied on Sitecore content items itself, as shown in the following screenshot: Once the layout engine renders the page, you can see how the controls will be bound to the layout, as shown in the following image: The layout engine in Sitecore is reponsible for layout rendering, device detection, rule engine, and personalization: Multilingual support In Sitecore, content can be maintained in any number of languages. It provides easier integration with external translation providers for seamless translation and also supports the dynamic creation of multilingual web pages. Sitecore also supports the language fallback feature on the field, item, and template level, which makes life easier for content owners and developers. It also supports chained fallback. Multi-device support Devices represent different types of web clients that connect to the Internet and place HTTP requests. Each device represents a different type of web client. Each device can have unique markup requirements. As we saw, the layout engine applies the presentation components specified for the context device to the layout details of the context item. In the same way, developers can use devices to format the context item output using different collections of presentation components for various types of web clients. Dynamically assembled content can be transformed to conform to virtually any output format, such as a mobile, tablet, desktop, print, or RSS. Sitecore also supports the device fallback feature so that any web page not supported for the requesting device can still be served through the fallback device. It also supports chained fallback for devices. Multi-site capabilities There are many ways to manage multisites on a single Sitecore installation. For example, you can host multiple regional domains with different regional languages as the default language for a single site. For example, http://www.sitecorecookbook.com will serve English content, http://www.sitecorecookbook.de will serve German content of the same website, and so on. Another way is to create multiple websites for different subsidiaries or franchise of a company. 
In this approach, you can share some common resources across all the sites such as templates, renderings, user interface elements, and other content or media items, but have unique content and pages so that you can find a separate existence of each website in Sitecore. Sitecore has security capabilities so that each franchise or subsidiary can manage their own website independently without affecting other websites. Developers have full flexibility to re-architect Sitecore's multisite architecture as per business needs. Sitecore also supports multitenant multisite architecture so that each website can work as an individual physical website. Caching Caching plays a very important role in website performance. Sitecore contains multiple levels of caching such as prefetch cache, data cache, item cache, and HTML cache. Apart from this, Sitecore creates different caching such as standard values cache, filtered item cache, registry cache, media cache, user cache, proxy cache, AccessResult cache, and so on. This makes understanding all the Sitecore caches really important. Sitecore caching is a very vast topic to cover; you can read more about it at http://sitecoreblog.patelyogesh.in/2013/06/how-sitecore-caching-work.html. Configuration factory Sitecore is configured using IIS's configuration file, Web.config. Sitecore configuration factory allows you to configure pipelines, events, scheduling agents, commands, settings, properties, and configuration nodes in Web.config files, which can be defined in the /configuration/sitecore path. Configurations inside this path can be spread out between multiple files to make it scalable. This process is often called config patching. Instead of touching the Web.config file, Sitecore provides the Sitecore.config file in the App_ConfigInclude directory, which contains all the important Sitecore configurations. Functionality-specific configurations are split into the number of .config files based, which you can find in its subdirectories. These .config files are merged into a single configuration file at runtime, which you can evaluate using http://<domain>/sitecore/admin/showconfig.aspx. Thus, developers create custom .config files in the App_ConfigInclude directory to introduce, override, or delete settings, properties, configuration nodes, and attributes without touching Sitecore's default .config files. This makes managing .config files very easy from development to deployment. You can learn more about file patching from https://sdn.sitecore.net/upload/sitecore6/60/include_file_patching_facilities_sc6orlater-a4.pdf. Dependency injection in .NET has become very common nowadays. If you want to build a generic and reusable functionality, you will surely go for the inversion of control (IoC) framework. Fortunately, Sitecore provides a solution that will allow you to easily use different IoC frameworks between projects. Using patch files, Sitecore allows you to define objects that will be available at runtime. These nodes are defined under /configuration/sitecore and can be retrieved using the Sitecore API. We can define types, constructors, methods, properties, and their input parameters in logical nodes inside nodes of pipelines, events, scheduling agents, and so on. You can learn more examples of it from http://sitecore-community.github.io/docs/documentation/Sitecore%20Fundamentals/Sitecore%20Configuration%20Factory/. 
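As a rough illustration of config patching, a custom include file dropped into App_Config/Include might resemble the following sketch. The file name, the SMTP value, and the MyProject setting are examples invented for this snippet; only the patch namespace and the general structure reflect how Sitecore's configuration factory merges include files.

<!-- App_Config/Include/zMyProject.Settings.config (example file name) -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- Override the value of an existing Sitecore setting by name. -->
      <setting name="MailServer">
        <patch:attribute name="value">smtp.example.com</patch:attribute>
      </setting>
      <!-- Add a brand new setting for our own code to read. -->
      <setting name="MyProject.ApiTimeout" value="30" />
    </settings>
  </sitecore>
</configuration>

Because include files are merged in alphabetical order, prefixing the file name (the z here) is a common way to make sure your patch is applied after the files it overrides; you can confirm the merged result on the showconfig.aspx page mentioned earlier.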
Pipelines An operation to be performed in multiple steps can be carried out using the pipeline system, where each individual step is defined as a processor. Data processed from one processor is then carried to the next processor in arguments. The flow of the pipeline can be defined in XML format in the .config files. You can find default pipelines in the Sitecore.config file or patch file under the <pipelines> node (which are system processes) and the <processors> node (which are UI processes). The following image visualizes the pipeline and processors concept: Each processor in a pipeline contains a method named Process() that accepts a single argument, Sitecore.Pipelines.PipelineArgs, to get different argument values and returns void. A processor can abort the pipeline, preventing Sitecore from invoking subsequent processors. A page request traverses through different pipelines such as <preProcessRequest>, <httpRequestBegin>, <renderLayout>, <httpRequestEnd>, and so on. The <httpRequestBegin> pipeline is the heart of the Sitecore HTTP request execution process. It defines different processors to resolve the site, device, language, item, layout, and so on sequentially, which you can find in Sitecore.config as follows: <httpRequestBegin>   ...   <processor type="Sitecore.Pipelines.HttpRequest.SiteResolver,     Sitecore.Kernel"/>   <processor type="Sitecore.Pipelines.HttpRequest.UserResolver,     Sitecore.Kernel"/>   <processor type="     Sitecore.Pipelines.HttpRequest.DatabaseResolver,     Sitecore.Kernel"/>   <processor type="     Sitecore.Pipelines.HttpRequest.BeginDiagnostics,     Sitecore.Kernel"/>   <processor type="     Sitecore.Pipelines.HttpRequest.DeviceResolver,     Sitecore.Kernel"/>   <processor type="     Sitecore.Pipelines.HttpRequest.LanguageResolver,     Sitecore.Kernel"/>   ... </httpRequestBegin> There are more than a hundred pipelines, and the list goes on increasing after every new version release. Sitecore also allows us to create our own pipelines and processors. Background jobs When you need to do some long-running operations such as importing data from external services, sending e-mails to subscribers, resetting content item layout details, and so on, we can use Sitecore jobs, which are asynchronous operations in the backend that you can monitor in a foreground thread (Job Viewer) of Sitecore Rocks or by creating a custom Sitecore application. The jobs can be invoked from the user interface by users or can be scheduled. Sitecore provides APIs to invoke jobs with many different options available. You can simply create and start a job using the following code: public void Run() {   JobOptions options = new JobOptions("Job Name", "Job Category",     "Site Name", "current object", "Task Method to Invoke", new     object[] { rootItem })   {     EnableSecurity = true,     ContextUser = Sitecore.Context.User,     Priority = ThreadPriority.AboveNormal   };   JobManager.Start(options); } You can schedule tasks or jobs by creating scheduling agents in the Sitecore.config file. You can also set their execution frequency. 
The following example shows you how Sitecore has configured PublishAgent, which publishes a site every 12 hours and simply executes the Run() method of the Sitecore.Tasks.PublishAgent class: <scheduling>   <agent type="Sitecore.Tasks.PublishAgent" method="Run"     interval="12:00:00">     <param desc="source database">master</param>     <param desc="target database">web</param>     <param desc="mode (full or smart or       incremental)">incremental</param>     <param desc="languages">en, da</param>   </agent> </scheduling> Apart from this, Sitecore also provides you with the facility to define scheduled tasks in the database, which has a great advantage of storing tasks in the database, so that we can handle its start and end date and time. We can use it once or make it recurring as well. Workflow and publishing Workflows are essential to the content author experience. Workflows ensure that items move through a predefined set of states before they become publishable. It is necessary to ensure that content receives the appropriate reviews and approvals before publication to the live website. Apart from workflow, Sitecore provides highly configurable security features, access permissions, and versioning. Sitecore also provides full workflow history like when and by whom the content was edited, reviewed, or approved. It also allows you to restrict publishing as well as identify when it is ready to be published. Publishing is an essential part of working in Sitecore. Every time you edit or create new content, you have to publish it to see it on your live website. When publishing happens, the item is copied from the master database to the web database. So, the content of the web database will be shown on the website. When multiple users are working on different content pages or media items, publishing restrictions and workflows play a vital role to make releases, embargoed, or go-live successful. There are three types of publishing available in Sitecore: Republish: This publishes every item even though items are already published. Smart Publish: Sitecore compares the internal revision identifier of the item in the master and web databases. If both identifiers are different, it means that the item is changed in the master database, hence Sitecore will publish the item or skip the item if identifiers are the same. Incremental Publish: Every modified item is added to the publish queue. Once incremental publishing is done, Sitecore will publish all the items found in the publish queue and clear it. Sitecore also supports the publishing of subitems as well as related items (such as publishing a content item will also publish related media items). Search Sitecore comes with out-of-the-box Lucene support. You can also switch your Sitecore search to Solr, which just needs to install Solr and enable Solr configurations already available. Sitecore by default indexes Sitecore content in Lucene index files. The Sitecore search engine lets you search through millions of items of the content tree quickly with the help of different types of queries with Lucene or Solr indexes. Sitecore provides you with the following functionalities for content search: We can search content items and documents such as PDF, Word, and so on. It allows you to search content items based on preconfigured fields. It provides APIs to create and search composite fields as per business needs. It provides content search APIs to sort, filter, and page search results. We can apply wildcards to search complex results and autosuggest. 
We can apply boosting to influence search results or elevate results by giving more priority. We can create custom dictionaries and index files, using which we can suggest did you mean kind of suggestions to users. We can apply facets to refine search results as we can see on e-commerce sites. W can apply different analyzers to hunt MoreLikeThis results or similar results. We can tag content or media items to categorize them so that we can use features such as a tag cloud. It provides a scalable user interface to search content items and apply filters and operations to selected search results. It provides different indexing strategies to create transparent and diverse models for index maintenance. In short, Sitecore allows us to implement different searching techniques, which are available in Google or other search engines. Content authors always find it difficult while working with a big number of items. You can read more about Sitecore search at https://doc.sitecore.net/sitecore_experience_platform/content_authoring/searching/searching. Security model Sitecore has the reputation of being very easy to set up the security of users, roles, access rights, and so on. Sitecore follows the .NET security model, so we get all the basic information of the .NET membership in Sitecore, which offers several advantages: A variety of plug-and-play features provided directly by Microsoft The option to replace or extend the default configuration with custom providers It is also possible to store the accounts in different storage areas using several providers simultaneously Sitecore provides item-level and field-level rights and an option to create custom rights as well Dynamic user profile structure and role management is possible just through the user interface, which is simpler and easier compared to pure ASP.NET solutions It provides easier implementation for integration with external systems Even after having an extended wrapper on the .NET solution, we get the same performance as a pure ASP.NET solution Experience analytics and personalization Sitecore contains state-of-the-art Analysis, Insights, Decisions, Automation (AIDA) framework, which is the heart for marketing programs. It provides comprehensive analytics data and reports, insights from every website interaction with rules, behavior-based personalization, and marketing automation. Sitecore collects all the visitor interactions in a real-time, big data repository—Experience Database (xDB)—to increase the availability, scalability, and performance of website. Sitecore Marketing Foundation provides the following features: Sitecore uses MongoDB, a big marketing data repository that collects all customer interactions. It provides real-time data to marketers to automate interactions across all channels. It provides a unified 360 degree view of the individual website visitors and in-depth analytics reports. It provides fundamental analytics measurement components such as goals and events to evaluate the effectiveness of online business and marketing campaigns. It provides comprehensive conditions and actions to achieve conditional and behavioral or predictive personalization, which helps show customers what they are looking for instead of forcing them to see what we want to show. Sitecore collects, evaluates, and processes Omnichannel visitor behavioral patterns, which helps better planned effective marketing campaigns and improved user experience. Sitecore provides an engagement plan to control how your website interacts with visitors. 
It helps nurture relationships with your visitors by adapting personalized communication based on which state of the engagement plan they fall into.
Sitecore provides an in-depth geolocation service, helpful in optimizing campaigns through segmentation, personalization, and profiling strategies.
The Sitecore Device Detection service is helpful in personalizing the user experience or promotions based on the device visitors use.
It provides different dimensions and reports to reflect data across the full taxonomy provided in the Marketing Control Panel.
It provides different charting controls to get smart reports.
It gives developers full flexibility to customize or extend all these features.

High performance and scalability

Sitecore supports heavy content management and content delivery usage with a large volume of data. Sitecore is architected for high performance and unlimited scalability. The Sitecore cache engine provides caching of the raw data as well as the rendered output data, which gives a high-performance platform. Sitecore uses the event queue concept for scalability. Theoretically, it makes Sitecore scalable to any number of instances under a load balancer.

Summary

In this article, we discussed the importance of Sitecore and its key features. We also saw that Sitecore XP is not only an enterprise-level CMS, but also a web platform that is the global leader in experience management.

Resources for Article:

Further resources on this subject:

Building a Recommendation Engine with Spark [article]
Configuring a MySQL linked server on SQL Server 2008 [article]
Features and utilities in SQL Developer Data Modeler [article]

Hello World Program

Packt
20 Apr 2016
12 min read
In this article by Manoj Kumar, author of the book Learning Sinatra, we will write an application. Make sure that you have Ruby installed. We will get a basic skeleton app up and running and see how to structure the application.

(For more resources related to this topic, see here.)

In this article, we will discuss the following topics:

A project that will be used to understand Sinatra
The Bundler gem
The file structure of the application
The responsibilities of each file

Before we begin writing our application, let's write the Hello World application.

Getting started

The Hello World program is as follows:

require 'sinatra'

get '/' do
  return 'Hello World!'
end

The following is how to run the code:

ruby helloworld.rb

Executing this from the command line will run the application, and the server will listen on port 4567. If we point our browser to http://localhost:4567/, we will see the Hello World! response.

The application

To understand how to write a Sinatra application, we will take a small project and discuss every part of the program in detail.

The idea

We will make a ToDo app and use Sinatra along with a lot of other libraries. The features of the app will be as follows:

Each user can have multiple to-do lists
Each to-do list will have multiple items
To-do lists can be private, public, or shared with a group
Items in each to-do list can be assigned to a user or group

The modules that we build are as follows:

Users: This will manage the users and groups
List: This will manage the to-do lists
Items: This will manage the items for all the to-do lists

Before we start writing the code, let's see what the file structure will be like, understand why each one of them is required, and learn about some new files.

The file structure

It is always better to keep certain files in certain folders for better readability. We could dump all the files in the home folder; however, that would make it difficult for us to manage the code. The important files and folders are described next.

The app.rb file

This file is the base file that loads all the other files (such as models, libs, and so on) and starts the application. We can configure various settings of Sinatra here according to the various deployment environments.

The config.ru file

The config.ru file is generally used when we need to deploy our application with different application servers, such as Passenger, Unicorn, or Heroku. It also makes it easy to maintain the different deployment environments.

Gemfile

This is one of the interesting things that we can do with Ruby applications. As we know, we can use a variety of gems for different purposes. The gems are just pieces of code and are constantly updated. Therefore, sometimes, we need to use specific versions of gems to maintain the stability of our application. In the Gemfile, we list all the gems that we are going to use for our application, along with their versions. Before we discuss how to use this Gemfile, we will talk about the gem bundler.

Bundler

The gem bundler manages the installation of all the gems and their dependencies. Of course, we would need to install the gem bundler manually:

gem install bundler

This will install the latest stable version of the bundler gem. Once we are done with this, we need to create a new file with the name Gemfile (yes, with a capital G) and add the gems that we will use. It is not necessary to add all the gems to the Gemfile before starting to write the application.
We can add and remove gems as we require; however, after every change, we need to run the following:

bundle install

This will make sure that all the required gems and their dependencies are installed. It will also create a Gemfile.lock file. Make sure that we do not edit this file; it contains all the gems and their dependency information. Therefore, we now know why we should use a Gemfile.

The lib/routes.rb file

This is the routes file, kept in the lib folder. What is a route? A route is the URL path for which the application serves a web page when requested. For example, when we type http://www.example.com/, the URL path is / and when we type http://www.example.com/something/, /something/ is the URL path. Now, we need to explicitly define all the routes for which we will be serving requests so that our application knows what to return. It is not important to have this file in the lib folder or to even have it at all. We can also write the routes in the app.rb file. Consider the following examples:

get '/' do
  # code
end

post '/something' do
  # code
end

Both of the preceding routes are valid. get and post are the HTTP methods. The first code block will be executed when a GET request is made on / and the second one will be executed when a POST request is made on /something. The only reason we are writing the routes in a separate file is to maintain clean code. The responsibilities of the remaining folders are as follows:

models/: This folder contains all the files that define the models of the application. When we write the models for our application, we will save them in this folder.
public/: This folder contains all our CSS, JavaScript, and image files.
views/: This folder will contain all the files that define the views, such as HTML, HAML, and ERB files.

The code

Now, we know what we want to build. You also have a rough idea about what our file structure will be. When we run the application, the rackup file that we load will be config.ru. This file tells the server what environment to use and which file is the main application to load. Before running the server, we need to write a minimum amount of code. It includes writing three files, as follows:

app.rb
config.ru
Gemfile

We can, of course, write these files in any order we want; however, we need to make sure that all three files have sufficient code for the application to work. Let's start with the app.rb file.

The app.rb file

This is the file that config.ru loads when the application is executed. This file, in turn, loads all the other files that help it to understand the available routes and the underlying model:

1 require 'sinatra'
2
3 class Todo < Sinatra::Base
4   set :environment, ENV['RACK_ENV']
5
6   configure do
7   end
8
9   Dir[File.join(File.dirname(__FILE__),'models','*.rb')].each { |model| require model }
10  Dir[File.join(File.dirname(__FILE__),'lib','*.rb')].each { |lib| load lib }
11
12 end

What does this code do? Let's go through it line by line:

1 require 'sinatra'

This loads the sinatra gem into memory.

3 class Todo < Sinatra::Base
4   set :environment, ENV['RACK_ENV']
5
6   configure do
7   end
8
9   Dir[File.join(File.dirname(__FILE__),'models','*.rb')].each { |model| require model }
10  Dir[File.join(File.dirname(__FILE__),'lib','*.rb')].each { |lib| load lib }
11
12 end

This defines our main application's class. This skeleton is enough to start the basic application. We inherit the Base class of the Sinatra module.
Before starting the application, we may want to change some basic configuration settings, such as logging, error display, user sessions, and so on. We handle all these configurations through the configure blocks. Also, we might need different configurations for different environments. For example, in development mode, we might want to see all the errors; however, in production we don't want the end user to see the error dump. Therefore, we can define the configurations for different environments. The first step would be to set the application environment to the concerned one, as follows:

4 set :environment, ENV['RACK_ENV']

We will later see that we can have multiple configure blocks for multiple environments. This line reads the RACK_ENV system environment variable and sets the same environment for the application. When we discuss config.ru, we will see how to set RACK_ENV in the first place:

6 configure do
7 end

The preceding lines show how we define a configure block. Note that here we have not told the application which environment these configurations need to be applied to. In such cases, this becomes the generic configuration for all the environments, and this is generally the last configuration block. All the environment-specific configurations should be written before this block in order to avoid code overriding:

9 Dir[File.join(File.dirname(__FILE__),'models','*.rb')].each { |model| require model }

If we see the file structure discussed earlier, we can see that models/ is a directory that contains the model files. We need to import all these files in the application. We have kept all our model files in the models/ folder:

Dir[File.join(File.dirname(__FILE__),'models','*.rb')]

This would return an array of files having the .rb extension in the models folder. Doing this avoids writing one require line for each file and modifying this file again:

10 Dir[File.join(File.dirname(__FILE__),'lib','*.rb')].each { |lib| load lib }

Similarly, we will import all the files in the lib/ folder. Therefore, in short, app.rb configures our application according to the deployment environment and imports the model files and the other library files before starting the application. Now, let's proceed to write our next file.

The config.ru file

The config.ru file is the rackup file of the application. It loads all the gems and app.rb. We generally pass this file as a parameter to the server, as follows:

1 require 'sinatra'
2 require 'bundler/setup'
3 Bundler.require
4
5 ENV["RACK_ENV"] = "development"
6
7 require File.join(File.dirname(__FILE__), 'app.rb')
8
9 Todo.start!

Working of the code

Let's go through each of the lines, as follows:

1 require 'sinatra'
2 require 'bundler/setup'

The first two lines import the gems. This is exactly what we do in other languages. The require 'sinatra' line will include all the Sinatra classes and help in listening to requests, while the bundler gem will manage all the other gems. As we have discussed earlier, we will always use bundler to manage our gems.

3 Bundler.require

This line of code will check the Gemfile and make sure that all the available gems match their versions and that all the dependencies are met. This does not import all the gems, as all gems may not be needed in memory at all times:

5 ENV["RACK_ENV"] = "development"

This code will set the RACK_ENV system environment variable to development. This will help the server know which configurations it needs to use.
We will later see how to manage a single configuration file with different settings for different environments and use one particular set of configurations for the given environment. If we use version control for our application, config.ru is not version controlled. It has to be customized on whether our environment is development, staging, testing, or production. We may version control a sample config.ru. We will discuss this when we talk about deploying our application. Next, we will require the main application file, as follows: 7 require File.join(File.dirname(__FILE__), 'app.rb') We see here that we have used the File class to include app.rb: File.dirname(__FILE__) It is a convention to keep config.ru and app.rb in the same folder. It is good practice to give the complete file path whenever we require a file in order to avoid breaking the code. Therefore, this part of the code will return the path of the folder containing config.ru. Now, we know that our main application file is in the same folder as config.ru, therefore, we do the following: File.join(File.dirname(__FILE__), 'app.rb') This would return the complete file path of app.rb and the line 7 will load the main application file in the memory. Now, all we need to do is execute app.rb to start the application, as follows: 9 Todo .start! We see that the start! method is not defined by us in the Todo class in app.rb. This is inherited from the Sinatra::Base class. It starts the application and listens to incoming requests. In short, config.ru checks the availability of all the gems and their dependencies, sets the environment variables, and starts the application. The easiest file to write is Gemfile. It has no complex code and logic. It just contains a list of gems and their version details. Gemfile In Gemfile, we need to specify the source from where the gems will be downloaded and the list of the gems. Therefore, let's write a Gemfile with the following lines: 1 source 'https://rubygems.org' 2 gem 'bundler', '1.6.0' 3 gem 'sinatra', '1.4.4' The first line specifies the source. The https://rubygems.org website is a trusted place to download gems. It has a large collection of gems hosted. We can view this page, search for gems that we want to use, read the documentation, and select the exact version for our application. Generally, the latest stable version of bundler is used. Therefore, we search the site for bundler and find out its version. We do the same for the Sinatra gem. Summary In this article, you learned how to build a Hello World program using Sinatra. Resources for Article: Further resources on this subject: Getting Ready for RubyMotion[article] Quick start - your first Sinatra application[article] Building tiny Web-applications in Ruby using Sinatra[article]

Creating Your Own Node Module

Soham Kamani
18 Apr 2016
6 min read
Node.js has a great community and one of the best package managers I have ever seen. One of the reasons npm is so great is because it encourages you to make small composable modules, which usually have just one responsibility. Many of the larger, more complex node modules are built by composing smaller node modules. As of this writing, npm has over 219,897 packages. One of the reasons this community is so vibrant is because it is ridiculously easy to make your own node module. This post will go through the steps to create your own node module, as well as some of the best practices to follow while doing so.

Prerequisites and Installation

node and npm are a given. Additionally, you should also configure your npm author details:

npm set init.author.name "My Name"
npm set init.author.email "[email protected]"
npm set init.author.url "http://your-website.com"
npm adduser

These are the details that would show up on npmjs.org once you publish.

Hello World

The reason that I say creating a node module is ridiculously easy is because you only need two files to create the most basic version of a node module. First up, create a package.json file inside of a new folder by running the npm init command. This will ask you to choose a name. Of course, the name you are thinking of might already exist in the npm registry, so to check for this, run the command npm owner ls module_name, where module_name is replaced by the namespace you want to check. If it exists, you will get information about the authors:

$ npm owner ls forever
indexzero <[email protected]>
bradleymeck <[email protected]>
julianduque <[email protected]>
jeffsu <[email protected]>
jcrugzz <[email protected]>

If your namespace is free, you will get an error message, something similar to this:

$ npm owner ls does_not_exist
npm ERR! owner ls Couldnt get owner data does_not_exist
npm ERR! Darwin 14.5.0
npm ERR! argv "node" "/usr/local/bin/npm" "owner" "ls" "does_not_exist"
npm ERR! node v0.12.4
npm ERR! npm v2.10.1
npm ERR! code E404
npm ERR! 404 Registry returned 404 GET on https://registry.npmjs.org/does_not_exist
npm ERR! 404
npm ERR! 404 'does_not_exist' is not in the npm registry.
npm ERR! 404 You should bug the author to publish it (or use the name yourself!)
npm ERR! 404
npm ERR! 404 Note that you can also install from a
npm ERR! 404 tarball, folder, http url, or git url.
npm ERR! Please include the following file with any support request:
npm ERR! /Users/sohamchetan/Documents/jekyll-blog/npm-debug.log

After setting up package.json, add a JavaScript file:

module.exports = function(){
  return 'Hello World!';
}

And that's it! Now execute npm publish and your node module will be published to npmjs.org. Also, anyone can now install your node module by running npm install --save module_name, where module_name is the "name" property contained in package.json. Now anyone can use your module like this:

var someModule = require('module_name');
console.log(someModule()); // This will output "Hello World!"

Dependencies

As stated before, rarely will you find large-scale node modules that do not depend on other smaller modules. This is because npm encourages modularity and composability. To add dependencies to your own module, simply install them. For example, one of the most depended upon packages is lodash, a utility library. To add this, run the command:

npm install --save lodash

Now you can use lodash everywhere in your module by "requiring" it, and when someone else downloads your module, they get lodash bundled along with it as well.
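To make that concrete, here is a minimal sketch of what such a lodash-backed module could look like. It is not from the original article; the dedupe function and the module name are hypothetical, and it assumes lodash was installed with the command above and that the "main" field in package.json points at index.js.

// index.js - the entry point referenced by the "main" field in package.json
var _ = require('lodash'); // installed earlier with: npm install --save lodash

// Export a single function, just like the Hello World example,
// but this time it removes duplicate values from an array using lodash.
module.exports = function (values) {
  return _.uniq(values);
};

Anyone who installs the package can then use it in the usual way:

var dedupe = require('module_name'); // the "name" property from package.json
console.log(dedupe([1, 1, 2, 3, 3])); // prints [ 1, 2, 3 ]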
Additionally, you may want to have some modules purely for development and not for distribution. These are dev dependencies, and they can be installed with the npm install --save-dev command. Dev dependencies will not be installed when someone else installs your node module.

Configuring package.json

The package.json file is what contains all the metadata for your node module. A few fields are filled out automatically (like dependencies or devDependencies during npm installs). There are a few more fields in package.json that you should consider filling out so that your node module is best fitted to its purpose.

"main": The relative path of the entry point of your module. Whatever is assigned to module.exports in this file is exported when someone "requires" your module. By default, this is the index.js file.
"keywords": It's an array of keywords describing your module. Quite helpful when others from the community are searching for something that your module happens to solve.
"license": I normally publish all my packages with an "MIT" license because of its openness and popularity in the open source community.
"version": This is pretty crucial because you cannot publish a node module with the same version twice. Normally, semver versioning should be followed.

If you want to know more about the different properties you can set in package.json, there's a great interactive guide you can check out.

Using Yeoman Generators

Although it's really simple to make a basic node module, it can be quite a task to make something substantial using just the index.js and package.json files. In these cases, there's a lot more to do, such as:

Writing and running tests.
Setting up a CI tool like Travis.
Measuring code coverage.
Installing standard dev dependencies for testing.

Fortunately, there are many Yeoman generators to help you bootstrap your project. Check out generator-nm for setting up a basic project structure for a simple node module. If writing in ES6 is more your style, you can take a look at generator-nm-es6. These generators set up your project structure, complete with a testing framework and CI integration, so that you don't have to spend all your time writing boilerplate code.

About the Author

Soham Kamani is a full-stack web developer and electronics hobbyist. He is especially interested in JavaScript, Python, and IoT.

Setting up a Build Chain with Grunt

Packt
18 Apr 2016
24 min read
In this article by Bass Jobsen, author of the book Sass and Compass Designer's Cookbook, you will learn about the following topics:

Installing Grunt
Installing Grunt plugins
Utilizing the Gruntfile.js file
Adding a configuration definition for a plugin
Adding the Sass compiler task

(For more resources related to this topic, see here.)

This article introduces you to the Grunt Task Runner and the features it offers to make your development workflow a delight. Grunt is a JavaScript Task Runner that is installed and managed via npm, the Node.js package manager. You will learn how to take advantage of its plugins to set up your own flexible and productive workflow, which will enable you to compile your Sass code. Although there are many applications available for compiling Sass, Grunt is a more flexible, versatile, and cross-platform tool that will allow you to automate many development tasks, including Sass compilation. It can not only automate the Sass compilation tasks, but also wrap any other mundane jobs, such as linting, minifying, and cleaning your code, into tasks and run them automatically for you. By the end of this article, you will be comfortable using Grunt and its plugins to establish a flexible workflow when working with Sass. Using Grunt in your workflow is vital. You will then be shown how to combine Grunt's plugins to establish a workflow for compiling Sass in real time. Grunt becomes a tool you can use to automate integration testing, deployments, builds, and development.

Finally, by understanding the automation process, you will also learn how to use alternative tools, such as Gulp. Gulp is a JavaScript task runner for Node.js and is relatively new in comparison to Grunt, so Grunt has more plugins and wider community support. Currently, the Gulp community is growing fast. The biggest difference between Grunt and Gulp is that Gulp does not save intermediary files, but pipes these files' content in memory to the next stream. A stream enables you to pass some data through a function, which will modify the data and then pass the modified data to the next function. In many situations, Gulp requires fewer configuration settings, so some people find Gulp more intuitive and easier to learn. In this article, Grunt has been chosen to demonstrate how to run a task runner; this choice does not mean that you have to prefer Grunt in your own project. Both task runners can run all the tasks described in this article. Simply choose the task runner that suits you best. This recipe demonstrates shortly how to compile your Sass code with Gulp.

In this article, you should enter your commands in the command prompt. Linux users should open a terminal, while Mac users should run Terminal.app and Windows users should use the cmd command for command line usage.

Installing Grunt

Grunt is essentially a Node.js module; therefore, it requires Node.js to be installed. The goal of this recipe is to show you how to install Grunt on your system and set up your project.

Getting ready

Installing Grunt requires both Node.js and npm. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications, and npm is a package manager for Node.js. You can download the Node.js source code or a prebuilt installer for your platform at https://nodejs.org/en/download/. Notice that npm is bundled with node. Also, read the instructions at https://github.com/npm/npm#super-easy-install.

How to do it...
After installing Node.js and npm, installing Grunt is as simple as running a single command, regardless of the operating system that you are using. Just open the command line or the Terminal and execute the following command: npm install -g grunt-cli That's it! This command will install Grunt globally and make it accessible anywhere on your system. Run the grunt --version command in the command prompt in order to confirm that Grunt has been successfully installed. If the installation is successful, you should see the version of Grunt in the Terminal's output: grunt --version grunt-cli v0.1.11 After installing Grunt, the next step is to set it up for your project: Make a folder on your desktop and call it workflow. Then, navigate to it and run the npm init command to initialize the setup process: mkdir workflow && cd $_ && npm init Press Enter for all the questions and accept the defaults. You can change these settings later. This should create a file called package.json that will contain some information about the project and the project's dependencies. In order to add Grunt as a dependency, install the Grunt package as follows: npm install grunt --save-dev Now, if you look at the package.json file, you should see that Grunt is added to the list of dependencies: ..."devDependencies": {"grunt": "~0.4.5" } In addition, you should see an extra folder created. Called node_modules, it will contain Grunt and other modules that you will install later in this article. How it works... In the preceding section, you installed Grunt (grunt-cli) with the -g option. The -g option installs Grunt globally on your system. Global installation requires superuser or administrator rights on most systems. You need to run only the globally installed packages from the command line. Everything that you will use with the require() function in your programs should be installed locally in the root of your project. Local installation makes it possible to solve your project's specific dependencies. More information about global versus local installation of npm modules can be found at https://www.npmjs.org/doc/faq.html. There's more... Node package managers are available for a wide range of operation systems, including Windows, OSX, Linux, SunOS, and FreeBSD. A complete list of package managers can be found at https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager. Notice that these package managers are not maintained by the Node.js core team. Instead, each package manager has its own maintainer. See also The npm Registry is a public collection of packages of open source code for Node.js, frontend web apps, mobile apps, robots, routers, and countless other needs of the JavaScript community. You can find the npm Registry at https://www.npmjs.org/. Also, notice that you do not have to use Task Runners to create build chains. Keith Cirkel wrote about how to use npm as a build tool at http://blog.keithcirkel.co.uk/how-to-use-npm-as-a-build-tool/. Installing Grunt plugins Grunt plugins are the heart of Grunt. Every plugin serves a specific purpose and can also work together with other plugins. In order to use Grunt to set up your Sass workflow, you need to install several plugins. You can find more information about these plugins in this recipe's How it works... section. Getting ready Before you install the plugins, you should first create some basic files and folders for the project. You should install Grunt and create a package.json file for your project. 
Also, create an index.html file to inspect the results in your browser. Two empty folders should be created too. The scss folder contains your Sass code and the css folder contains the compiled CSS code. Navigate to the root of the project, repeat the steps from the Installing Grunt recipe of this article, and create some additional files and directories that you are going to work with throughout the article. In the end, you should end up with a project folder that contains the package.json file, the index.html file, and the empty scss and css folders.

How to do it...

Grunt plugins are essentially Node.js modules that can be installed and added to the package.json file in the list of dependencies using npm. To do this, follow these steps:

Navigate to the root of the project and run the following command, as described in the Installing Grunt recipe of this article:

npm init

Install the modules using npm, as follows:

npm install \
grunt-contrib-sass \
load-grunt-tasks \
grunt-postcss --save-dev

Notice the single space before the backslash in each line. For example, on the second line, grunt-contrib-sass, there is a space before the backslash at the end of the line. The space characters are necessary because they act as separators. The backslash at the end is used to continue the command on the next line.

The npm install command will download all the plugins and place them in the node_modules folder in addition to including them in the package.json file. The next step is to include these plugins in the Gruntfile.js file.

How it works...

Grunt plugins can be installed and added to the package.json file using the npm install command followed by the names of the plugins separated by a space, and the --save-dev flag:

npm install nameOfPlugin1 nameOfPlugin2 --save-dev

The --save-dev flag adds the plugin names and a tilde version range to the list of dependencies in the package.json file so that the next time you need to install the plugins, all you need to do is run the npm install command. This command looks for the package.json file in the directory from which it was called, and will automatically download all the specified plugins. This makes porting workflows very easy; all it takes is copying the package.json file and running the npm install command. Finally, the package.json file contains a JSON object with metadata.

It is also worth explaining the long command that you have used to install the plugins in this recipe. This command installs the plugins that are continued on to the next line by the backslash. It is essentially equivalent to the following:

npm install grunt-contrib-sass --save-dev
npm install load-grunt-tasks --save-dev
npm install grunt-postcss --save-dev

As you can see, it is very repetitive. However, both yield the same results; it is up to you to choose the one that you feel more comfortable with. The node_modules folder contains all the plugins that you install with npm. Every time you run npm install name-of-plugin, the plugin is downloaded and placed in the folder. If you need to port your workflow, you do not need to copy all the contents of the folder. In addition, if you are using a version control system, such as Git, you should add the node_modules folder to the .gitignore file so that the folder and its subdirectories are ignored.

There's more...

Each Grunt plugin also has its own metadata set in a package.json file, so plugins can have different dependencies.
For instance, the grunt-contrib-sass plugin, as described in the Adding the Sass compiler task recipe, has set its dependencies as follows: "dependencies": {     "async": "^0.9.0",     "chalk": "^0.5.1",     "cross-spawn": "^0.2.3",     "dargs": "^4.0.0",     "which": "^1.0.5"   } Besides the dependencies described previously, this task also requires you to have Ruby and Sass installed. In the following list, you will find the plugins used in this article, followed by a brief description: load-grunt-tasks: This loads all the plugins listed in the package.json file grunt-contrib-sass: This compiles Sass files into CSS code grunt-postcss: This enables you to apply one or more postprocessors to your compiled CSS code CSS postprocessors enable you to change your CSS code after compilation. In addition to installing plugins, you can remove them as well. You can remove a plugin using the npm uninstall name-of-plugin command, where name-of-plugin is the name of the plugin that you wish to remove. For example, if a line in the list of dependencies of your package.json file contains grunt-concurrent": "~0.4.2",, then you can remove it using the following command: npm uninstall grunt-concurrent Then, you just need to make sure to remove the name of the plugin from your package.json file so that it is not loaded by the load-grunt-tasks plugin the next time you run a Grunt task. Running the npm prune command after removing the items from the package.json file will also remove the plugins. The prune command removes extraneous packages that are not listed in the parent package's dependencies list. See also More information on the npm version's syntax can be found at https://www. npmjs.org/doc/misc/semver.html  Also, see http://caniuse.com/ for more information on the Can I Use database Utilizing the Gruntfile.js file The Gruntfile.js file is the main configuration file for Grunt that handles all the tasks and task configurations. All the tasks and plugins are loaded using this file. In this recipe, you will create this file and will learn how to load Grunt plugins using it. Getting ready First, you need to install Node and Grunt, as described in the Installing Grunt recipe of this article. You will also have to install some Grunt plugins, as described in the Installing Grunt plugins recipe of this article. How to do it... Once you have installed Node and Grunt, follow these steps: In your Grunt project directory (the folder that contains the package.json file), create a new file, save it as Gruntfile.js, and add the following lines to it: module.exports = function(grunt) {   grunt.initConfig({     pkg: grunt.file.readJSON('package.json'),       //Add the Tasks configurations here.   }); // Define Tasks here }; This is the simplest form of the Gruntfile.js file that only contains two information variables. The next step is to load the plugins that you installed in the Installing Grunt plugins recipe. Add the following lines at the end of your Gruntfile.js file: grunt.loadNpmTasks('grunt-sass'); In the preceding line of code, grunt-sass is the name of the plugin you want to load. That is all it takes to load all the necessary plugins. The next step is to add the configurations for each task to the Gruntfile.js file. How it works... Any Grunt plugin can be loaded by adding a line of JavaScript to the Gruntfile.js file, as follows: grunt.loadNpmTasks('name-of-module'); This line should be added every time a new plugin is installed so that Grunt can access the plugin's functions. 
However, it is tedious to load every single plugin that you install. In addition, you will soon notice that, as your project grows, the number of configuration lines will increase as well. The Gruntfile.js file should be written in JavaScript or CoffeeScript. Grunt tasks rely on configuration data defined in a JSON object passed to the grunt.initConfig method. JavaScript Object Notation (JSON) is an alternative to XML and is used for data exchange. JSON describes name-value pairs written as "name": "value". All the JSON data is separated by commas, with JSON objects written inside curly brackets and JSON arrays inside square brackets. Each object can hold more than one name/value pair, with each array holding one or more objects.

You can also group tasks into one task. You can alias a group of tasks using the following line of code:

grunt.registerTask('alias',['task1', 'task2']);

There's more...

Instead of loading all the required Grunt plugins one by one, you can load them automatically with the load-grunt-tasks plugin. You can install this by using the following command in the root of your project:

npm install load-grunt-tasks --save-dev

Then, add the following line at the very beginning of your Gruntfile.js file after module.exports:

require('load-grunt-tasks')(grunt);

Now, your Gruntfile.js file should look like this:

module.exports = function(grunt) {
  require('load-grunt-tasks')(grunt);
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    //Add the Tasks configurations here.
  });
  // Define Tasks here
};

The load-grunt-tasks plugin loads all the plugins specified in the package.json file. It simply loads the plugins that begin with the grunt- prefix or any pattern that you specify. This plugin will also read dependencies, devDependencies, and peerDependencies in your package.json file and load the Grunt tasks that match the provided patterns. A pattern to load specifically chosen plugins can be added as a second parameter. You can load, for instance, all the grunt-contrib tasks with the following code in your Gruntfile.js file:

require('load-grunt-tasks')(grunt, {pattern: 'grunt-contrib-*'});

See also

Read more about the load-grunt-tasks module at https://github.com/sindresorhus/load-grunt-tasks

Adding a configuration definition for a plugin

Any Grunt task needs a configuration definition. The configuration definitions are usually added to the Gruntfile.js file itself and are very easy to set up. In addition, it is very convenient to define and work with them because they are all written in the JSON format. This makes it very easy to spot the configurations in the plugin's documentation examples and add them to your Gruntfile.js file. In this recipe, you will learn how to add the configuration for a Grunt task.

Getting ready

For this recipe, you will first need to create a basic Gruntfile.js file and install the plugin you want to configure. If you want to install the grunt-example plugin, you can install it using the following command in the root of your project:

npm install grunt-example --save-dev

How to do it...

Once you have created the basic Gruntfile.js file (also refer to the Utilizing the Gruntfile.js file recipe of this article), follow this step:
If you look closely at the task configuration, you will notice the files field that specifies what files are going to be operated on. The files field is a very standard field that appears in almost all the Grunt plugins simply due to the fact that many tasks require some or many file manipulations. There's more... The Don't Repeat Yourself (DRY) principle can be applied to your Grunt configuration too. First, define the name and the path added to the beginning of the Gruntfile.js file as follows: app {  dev : "app/dev" } Using the templates is a key in order to avoid hard coded values and inflexible configurations. In addition, you should have noticed that the template has been used using the <%= %> delimiter to expand the value of the development directory: "<%= app.dev %>/css/main.css": "<%= app.dev %>/scss/main.scss"   The <%= %> delimiter essentially executes inline JavaScript and replaces values, as you can see in the following code:   "app/dev/css/main.css": "app/dev/scss/main.scss" So, put simply, the value defined in the app object at the top of the Gruntfile.js file is evaluated and replaced. If you decide to change the name of your development directory, for example, all you need to do is change the app's variable that is defined at the top of your Gruntfile.js file. Finally, it is also worth mentioning that the value for the template does not necessarily have to be a string and can be a JavaScript literal. See also You can read more about templates in the Templates section of Grunt's documentation at http://gruntjs.com/configuring- tasks#templates Adding the Sass compiler task The Sass tasks are the core task that you will need for your Sass development. It has several features and options, but at the heart of it is the Sass compiler that can compile your Sass files into CSS. By the end of this recipe, you will have a good understanding of this plugin, how to add it to your Gruntfile.js file, and how to take advantage of it. In this recipe, the grunt-contrib-sass plugin will be used. This plugin compiles your Sass code by using Ruby Sass. You should use the grunt-sass plugin to compile Sass into CSS with node-sass (LibSass). Getting ready The only requirement for this recipe is to have the grunt-contrib-sass plugin installed and loaded in your Gruntfile.js file. If you have not installed this plugin in the Installing Grunt Plugins recipe of this article, you can do this using the following command in the root of your project: npm install grunt-contrib-sass --save-dev You should also install grunt local by running the following command: npm install grunt --save-dev Finally, your project should have the file and directory, as describe in the Installing Grunt plugins recipe of this article. How to do it... An example of the Sass task configuration is shown in the following code. Start by adding it to your Gruntfile.js file wrapped inside the grunt.initConfig({}) code. Now, your Gruntfile.js file should look as follows: module.exports = function(grunt) {   grunt.initConfig({     //Add the Tasks configurations here.     
    sass: {
      dist: {
        options: {
          style: 'expanded'
        },
        files: {
          'stylesheets/main.css': 'sass/main.scss'  // 'destination': 'source'
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-sass');

  // Define Tasks here
  grunt.registerTask('default', ['sass']);
}

Then, run the following command in your console:

grunt sass

The preceding command will create a new stylesheets/main.css file. Also, notice that the stylesheets/main.css.map file has been automatically created too. The Sass compiler task creates CSS sourcemaps to debug your code by default.

How it works...

In addition to setting up the task configuration, you should run the Grunt command to test the Sass task. When you run the grunt sass command, Grunt will look for a configuration called sass in the Gruntfile.js file. Once it finds it, it will run the task with some default options if they are not explicitly defined. Successful tasks will end with the following message:

Done, without errors.

There's more...

There are several other options that you can include in the Sass task. An option can also be set at the global Sass task level, so that the option will be applied in all the subtasks of Sass. In addition to options, Grunt also provides targets for every task to allow you to set different configurations for the same task. In other words, if, for example, you need to have two different versions of the Sass task with different source and destination folders, you could easily use two different targets. Adding and executing targets is very easy. Adding more builds just follows the JSON notation, as shown here:

sass: {                                       // Task
  dev: {                                      // Target
    options: {                                // Target options
      style: 'expanded'
    },
    files: {                                  // Dictionary of files
      'stylesheets/main.css': 'sass/main.scss'  // 'destination': 'source'
    }
  },
  dist: {
    options: {
      style: 'expanded',
      sourcemap: 'none'
    },
    files: {
      'stylesheets/main.min.css': 'sass/main.scss'
    }
  }
}

In the preceding example, two builds are defined. The first one is named dev and the second is called dist. Each of these targets belongs to the Sass task, but they use different options and different folders for the source and the compiled Sass code. Moreover, you can run a particular target using grunt sass:nameOfTarget, where nameOfTarget is the name of the target that you are trying to use. So, for example, if you need to run the dist target, you will have to run the grunt sass:dist command in your console. However, if you need to run both the targets, you could simply run grunt sass and it would run both the targets sequentially.
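To see how the recipes in this article fit together, the following is a minimal sketch, not taken from the book, of a complete Gruntfile.js that combines the load-grunt-tasks loader with the dev and dist targets just described and a default alias task. It assumes that grunt, grunt-contrib-sass, and load-grunt-tasks are listed in your package.json and that your Sass source lives in sass/main.scss; adjust the paths and options to your own project.

module.exports = function (grunt) {
  // Load every plugin with the grunt- prefix listed in package.json.
  require('load-grunt-tasks')(grunt);

  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),

    sass: {
      dev: {
        // Expanded output plus the default sourcemap for easier debugging.
        options: { style: 'expanded' },
        files: { 'stylesheets/main.css': 'sass/main.scss' }
      },
      dist: {
        // The distribution build skips the sourcemap.
        options: { style: 'expanded', sourcemap: 'none' },
        files: { 'stylesheets/main.min.css': 'sass/main.scss' }
      }
    }
  });

  // Running plain "grunt" now compiles both targets sequentially.
  grunt.registerTask('default', ['sass']);
};

Running grunt executes the default alias, while grunt sass:dev or grunt sass:dist compiles a single target, exactly as described above.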
As already mentioned, the grunt-contrib-sass plugin compiles your Sass code by using Ruby Sass, and you should use the grunt-sass plugin to compile Sass to CSS with node-sass (LibSass). To switch to the grunt-sass plugin, you will have to install it locally first by running the following command in your console:

npm install grunt-sass

Then, replace grunt.loadNpmTasks('grunt-contrib-sass'); with grunt.loadNpmTasks('grunt-sass'); in the Gruntfile.js file. The basic options for grunt-contrib-sass and grunt-sass are very similar, but not identical, so you may have to adjust the options for the Sass task when switching to grunt-sass. Finally, notice that grunt-contrib-sass also has an option to turn Compass on.

See also

Please refer to Grunt's documentation for a full list of options, which is available at https://github.com/gruntjs/grunt-contrib-sass#options

Also, read Grunt's documentation for more details about configuring your tasks and targets at http://gruntjs.com/configuring-tasks#task-configuration-and-targets

Summary

In this article, you studied installing Grunt, installing Grunt plugins, utilizing the Gruntfile.js file, adding a configuration definition for a plugin, and adding the Sass compiler task.

Resources for Article:

Further resources on this subject:

Meeting SAP Lumira [article]
Security in Microsoft Azure [article]
Basic Concepts of Machine Learning and Logistic Regression Example in Mahout [article]

Nginx "expires" directive – Emitting Caching Headers

Packt
13 Apr 2016
7 min read
In this article, Alex Kapranoff, the author of the book Nginx Troubleshooting, explains how all browsers (and even many non-browser HTTP clients) support client-side caching. It is a part of the HTTP standard, albeit one of the most complex parts to understand. Web servers do not control client-side caching to the full extent, obviously, but they may issue recommendations about what to cache and how, in the form of special HTTP response headers. This is a topic thoroughly discussed in many great articles and guides, so we will cover it only briefly, with a lean towards the problems you may face and how to troubleshoot them.

(For more resources related to this topic, see here.)

In spite of the fact that browsers have been supporting caching on their side for at least 20 years, configuring cache headers was always a little confusing, mostly due to the fact that there are two sets of headers designed for the same purpose but having different scopes and totally different formats. There is the Expires: header, which was designed as a quick and dirty solution, and also the (relatively) new, almost omnipotent Cache-Control: header, which tries to support all the different ways an HTTP cache could work.

This is an example of a modern HTTP request-response pair containing the caching headers. First are the request headers sent from the browser (here Firefox 41, but it does not matter):

User-Agent:"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:41.0) Gecko/20100101 Firefox/41.0"
Accept:"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
Accept-Encoding:"gzip, deflate"
Connection:"keep-alive"
Cache-Control:"max-age=0"

Then come the response headers:

Cache-Control:"max-age=1800"
Content-Encoding:"gzip"
Content-Type:"text/html; charset=UTF-8"
Date:"Sun, 10 Oct 2015 13:42:34 GMT"
Expires:"Sun, 10 Oct 2015 14:12:34 GMT"

We highlighted the parts that are relevant. Note that some directives may be sent by both sides of the conversation. First, the browser sent the Cache-Control: max-age=0 header because the user pressed the F5 key. This is an indication that the user wants to receive a response that is fresh. Normally, the request will not contain this header and will allow any intermediate cache to respond with a cached but still nonexpired response. In this case, the server we talked to responded with a gzipped HTML page encoded in UTF-8 and indicated that the response is okay to use for half an hour. It used both mechanisms available, the modern Cache-Control:max-age=1800 header and the very old Expires:Sun, 10 Oct 2015 14:12:34 GMT header.

An X-Cache: "EXPIRED" header, which you may sometimes see in such exchanges, is not a standard HTTP header but was probably (there is no way to know for sure from the outside) emitted by Nginx. It may be an indication that there are, indeed, intermediate caching proxies between the client and the server, and one of them added this header for debugging purposes. The header may also show that the backend software uses some internal caching. Another possible source of this header is a debugging technique used to find problems in the Nginx cache configuration. The idea is to use the cache hit or miss status, which is available in one of the handy internal Nginx variables, as a value for an extra header and then to be able to monitor the status from the client side. This is the code that will add such a header:

add_header X-Cache $upstream_cache_status;

Nginx has a special directive that transparently sets up both of the standard cache control headers, and it is named expires.
This is a piece of the nginx.conf file using the expires directive:

location ~* \.(?:css|js)$ {
  expires 1y;
  add_header Cache-Control "public";
}

First, the pattern uses so-called noncapturing parentheses, a feature that first appeared in Perl regular expressions. The effect of this regexp is the same as that of a simpler \.(css|js)$ pattern, but the regular expression engine is specifically instructed not to create a variable containing the actual string from inside the parentheses. This is a simple optimization. Then, the expires directive declares that the content of the css and js files will expire after a year of storage. The actual headers as received by the client will look like this:

Server: nginx/1.9.8 (Ubuntu)
Date: Fri, 11 Mar 2016 22:01:04 GMT
Content-Type: text/css
Last-Modified: Thu, 10 Mar 2016 05:45:39 GMT
Expires: Sat, 11 Mar 2017 22:01:04 GMT
Cache-Control: max-age=31536000

The last two lines contain the same information in wildly different forms. The Expires: header is exactly one year after the date in the Date: header, whereas Cache-Control: specifies the age in seconds so that the client can do the date arithmetic itself.

The last directive in the provided configuration extract adds another Cache-Control: header with a value of public explicitly. What this means is that the content of the HTTP resource is not access-controlled and therefore may be cached not only for one particular user but also anywhere else. A simple and effective strategy that was used in offices to minimize consumed bandwidth is to have an office-wide caching proxy server. When one user requested a resource from a website on the Internet and that resource had a Cache-Control: public designation, the company cache server would store it to serve to other users on the office network. This may not be as popular today due to cheap bandwidth, but because history has a tendency to repeat itself, you need to know how and why Cache-Control: public works.

The Nginx expires directive is surprisingly expressive. It may take a number of different values. See this table of possible values:

off: This value turns off the Nginx cache headers logic. Nothing will be added, and more importantly, existing headers received from upstreams will not be modified.
epoch: This is an artificial value used to purge a stored resource from all caches by setting the Expires header to "1 January, 1970 00:00:01 GMT".
max: This is the opposite of the "epoch" value. The Expires header will be equal to "31 December 2037 23:59:59 GMT", and the Cache-Control max-age will be set to 10 years. This basically means that the HTTP responses are guaranteed to never change, so clients are free to never request the same thing twice and may use their own stored values.
Specific time: An actual specific time value means an expiry deadline from the time of the respective request, for example, expires 10w;. A negative value for this directive will emit a special header Cache-Control: no-cache.
"modified" specific time: If you add the keyword "modified" before the time value, then the expiration moment will be computed relative to the modification time of the file that is served.
"@" specific time: A time with an @ prefix specifies an absolute time-of-day expiry. This should be less than 24 hours. For example, expires @17h;.

Many web applications choose to emit the caching headers themselves, and this is a good thing. They have more information about which resources change often and which never change.
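Whichever side ends up emitting the headers, it helps to verify what the client actually receives. The following is a small Node.js sketch, not part of the original article, that requests a stylesheet and prints the caching-related response headers; the URL is a placeholder and should point at a server where the location block above is active.

// check-caching-headers.js - run with: node check-caching-headers.js
var http = require('http');

// Hypothetical URL; replace it with a resource served by your Nginx instance.
var url = 'http://localhost/stylesheets/main.css';

http.get(url, function (res) {
  console.log('Status:', res.statusCode);
  // Node.js lower-cases header names in res.headers.
  ['date', 'expires', 'cache-control', 'last-modified'].forEach(function (name) {
    if (res.headers[name]) {
      console.log(name + ': ' + res.headers[name]);
    }
  });
  res.resume(); // discard the body; only the headers matter here
}).on('error', function (err) {
  console.error('Request failed:', err.message);
});

Comparing the printed Expires: and Cache-Control: values against the expires setting in nginx.conf is a quick way to confirm that the directive is applied to the location you expect.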
Tampering with the headers that you receive from the upstream may or may not be a thing you want to do. Sometimes, adding headers to a response while proxying it may produce a conflicting set of headers and therefore create an unpredictable behavior. The static files that you serve with Nginx yourself should have the expires directive in place. However, the general advice about upstreams is to always examine the caching headers you get and refrain from overoptimizing by setting up more aggressive caching policy. Resources for Article: Further resources on this subject: Nginx service [article] Fine-tune the NGINX Configuration [article] Nginx Web Services: Configuration and Implementation [article]