Search icon CANCEL
Subscription
0
Cart icon
Your Cart (0 item)
Close icon
You have no products in your basket yet
Save more on your purchases! discount-offer-chevron-icon
Savings automatically calculated. No voucher code required.
Arrow left icon
All Products
Best Sellers
New Releases
Books
Videos
Audiobooks
Learning Hub
Newsletter Hub
Free Learning
Arrow right icon
timer SALE ENDS IN
0 Days
:
00 Hours
:
00 Minutes
:
00 Seconds

How-To Tutorials - Web Development

1797 Articles
article-image-what-is-functional-reactive-programming
Packt
08 Feb 2017
4 min read
Save for later

What is functional reactive programming?

Packt
08 Feb 2017
4 min read
Reactive programming is, quite simply, a programming paradigm where you are working with an asynchronous data flow. There are a lot of books and blog posts that argue about what reactive programming is, exactly, but if you delve too deeply too quickly it's easy to get confused. Then reactive programming isn't useful at all. Functional reactive programming takes the principles of functional programming and uses them to enhance reactive programming. You take functions - like map, filter, and reduce - and use them to better manage streams of data. Read now: What is the Reactive manifesto? How does imperative programming compare to reactive programming? Imperative programming makes you describe the steps a computer must do to execute a task. In comparison, functional reactive programming gives you the constructs to propagate changes. This means so you have to think more about what to do than how to do it. This can be illustrated in a simple sum of two numbers. This could be presented as a = b + c in an imperative programming. A single line of code expresses the sum - that's straightforward, right? However, if we change the value of b or c, the value doesn't change - you wouldn't want it to change if you were using an imperative approach. In reactive programming, by contrast, the changes you make to different figures would react accordingly. Imagine the sum in a Microsoft Excel spreadsheet. Every time you change the value of the column b or c it recalculate the value of a. This is like a very basic form of software propagation. You probably already use an asynchronous data flow. Every time you add a listener to a mouse click or a keystroke in a web page we pass a function to react to that user input. So, a mouse click might be seen as a stream of events which you can observe; you can then execute a function when it happens. But, this is only one way of using event streams. You might want more sophistication and control over your streams. Reactive programming takes this to the next level. When you use it you can react to changes in anything - that could be changes in: user inputs external sources database changes changes to variables and properties This then means you can create a stream of events following on from specific actions. For example, we can see the changing value of stocks stock as an EventStream. If you can do this you can then use it to show a user when to buy or sell those stocks in real time. Facebook and Twitter are another good example of where software reacts to changes in external source streams -reactive programming is an important component in developing really dynamic UI that are characteristic of social media sites. Functional reactive programming Functional reactive programming, then gives you the ability to do a lot with streams of data or events. You can filter, combine, map buffer, for example. Going back to the stock example above, you can 'listen' to different stocks and use a filter function to present ones worth buying to the user in real time: Why do I need functional reactive programming? Functional reactive programming is especially useful for: Graphical user interface Animation Robotics Simulation Computer Vision A few years ago, all a user could do in a web app was fill in a form with bits of data and post it to a server.  Today web and mobile apps are much richer for users. To go into more detail, by using reactive programming, you can abstract the source of data to the business logic of the application. 
What this means in practice is that you can write more concise and decoupled code. In turn, this makes code much more reusable and testable, as you can easily mock streams to test your business logic when testing application. Read more: Introduction to JavaScript Breaking into Microservices Architecture JSON with JSON.Net
Read more
  • 0
  • 0
  • 2574

article-image-introduction-magento-2
Packt
02 Feb 2017
10 min read
Save for later

Introduction to Magento 2

Packt
02 Feb 2017
10 min read
In this article, Gabriel Guarino, the author of the book Magento 2 Beginners Guide discusses, will cover the following topics: Magento as a life style: Magento as a platform and the Magento community Competitors: hosted and self-hosted e-commerce platforms New features in Magento 2 What do you need to get started? (For more resources related to this topic, see here.) Magento as a life style Magento is an open source e-commerce platform. That is the short definition, but I would like to define Magento considering the seven years that I have been part of the Magento ecosystem. In the seven years, Magento has been evolving to the point it is today, a complete solution backed up by people with a passion for e-commerce. If you choose Magento as the platform for your e-commerce website, you will receive updates for the platform on a regular basis. Those updates include new features, improvements, and bug fixes to enhance the overall experience in your website. As a Magento specialist, I can confirm that Magento is a platform that can be customized to fit any requirement. This means that you can add new features, include third-party libraries, and customize the default behavior of Magento. As the saying goes, the only limit is your imagination. Whenever I have to talk about Magento, I always take some time to talk about its community. Sherrie Rohde is the Magento Community Manager and she has shared some really interesting facts about the Magento community in 2016: Delivered over 725 talks on Magento or at Magento-centric events Produced over 100 podcast episodes around Magento Organized and produced conferences and meetup groups in over 34 countries Written over 1000 blog posts about Magento Types of e-commerce solutions There are two types of e-commerce solutions: hosted and self-hosted. We will analyze each e-commerce solution type, and we will cover the general information, pros, and cons of each platform from each category. Self-hosted e-commerce solutions The self-hosted e-commerce solution is a platform that runs on your server, which means that you can download the code, customize it based on your needs, and then implement it in the server that you prefer. Magento is a self-hosted e-commerce solution, which means that you have absolute control on the customization and implementation of your Magento store. WooCommerce WooCommerce is a free shopping cart plugin for WordPress that can be used to create a full-featured e-commerce website. WooCommerce has been created following the same architecture and standards of WordPress, which means that you can customize it with themes and plugins. The plugin currently has more than 18,000,000 downloads, which represents over 39% of all online stores. Pros: It can be downloaded for free Easy setup and configuration A lot of themes available Almost 400 extensions in the marketplace Support through the WooCommerce help desk Cons: WooCommerce cannot be used without WordPress Some essential features are not included out-of-the-box, such us PayPal as a payment method, which means that you need to buy several extensions to add those features Adding custom features to WooCommerce through extensions can be expensive PrestaShop PrestaShop is a free open source e-commerce platform. The platform is currently used by more than 250,000 online stores and is backed by a community of more than 1,000,000 members. The company behind PrestaShop provides a range of paid services, such us technical support, migration, and training to run, manage, and maintain the store. 
Pros: Free and open source 310 integrated features 3,500 modules and templates in the marketplace Downloaded over 4 million times 63 languages Cons: As WooCommerce, many basic features are not included by default and adding those features through extensions is expensive Multiple bugs and complaints from the PrestaShop community OpenCart OpenCart is an open source platform for e-commerce, available under the GNU General Public License. OpenCart is a good choice for a basic e-commerce website. Pros: Free and open source Easy learning curve More than 13,000 extensions available More than 1,500 themes available Cons: Limited features Not ready for SEO No cache management page in admin panel Hard to customize Hosted e-commerce solutions A hosted e-commerce solution is a platform that runs on the server from the company that provides that service, which means that the solution is easier to set up but there are limitations and you don’t have the freedom to customize the solution according to your needs. The monthly or annual fees increase when the store attracts more traffic and has more customers and orders placed. Shopify Shopify is a cloud-based e-commerce platform for small and medium-sized business. The platform currently powers over 325,000 online stores in approximately 150 countries. Pros: No technical skills required to use the platform Tool to import products from another platform during the sign up process More than 1,500 apps and integrations 24/7 support through phone, chat, and e-mail Cons: The source code is not provided Recurring fee to use the platform Hard to migrate from Shopify to another platform BigCommerce BigCommerce is one of the most popular hosted e-commerce platforms, which is powering more than 95,000 stores in 150 countries. Pros: No technical skills required to use the platform More than 300 apps and integrations available More than 75 themes available Cons: The source code is not provided Recurring fee to use the platform Hard to migrate from BigCommerce to another platform New features in Magento 2 Magento 2 is the new generation of the platform, with new features, technologies, and improvements that make Magento one of the most robust and complete e-commerce solutions available at the moment. In this section, we will describe the main differences between Magento 1 and Magento 2. New technologies Composer: This is a dependency manager for PHP. The dependencies can be declared and Composer will manage these dependencies by installing and updating them. In Magento 2, Composer simplifies the process of installing and upgrading extensions and upgrading Magento. Varnish 4: This is an open source HTTP accelerator. Varnish stores pages and other assets in memory to reduce the response time and network bandwidth consumption. Full Page Caching: In Magento 1, Full Page Caching was only included in the Magento Enterprise Edition. In Magento 2, Full Page Caching is included in all the editions, allowing the content from static pages to be cached, increasing the performance and reducing the server load. Elasticsearch: This is a search engine that improves the search quality in Magento and provides background re-indexing and horizontal scaling. RequireJS: It is a library to load Javascript files on-the-fly, reducing the number of HTTP requests and improving the speed of the Magento Store. jQuery: The frontend in Magento 1 was implemented using Prototype as the language for Javascript. In Magento 2, the language for Javascript code is jQuery. 
Knockout.js: This is an open source Javascript library that implements the Model-View-ViewModel (MVVM) pattern, providing a great way of creating interactive frontend components. LESS: This is an open source CSS preprocessor that allows the developer to write styles for the store in a more maintainable and extendable way. Magento UI Library: This is a modular frontend library that uses a set of mix-ins for general elements and allows developers to work more efficiently on frontend tasks. New tools Magento Performance Toolkit: This is a tool that allows merchants and developers to test the performance of the Magento installation and customizations. Magento 2 command-line tool: This is a tool to run a set of commands in the Magento installation to clear the cache, re-index the store, create database backups, enable maintenance mode, and more. Data Migration Tool: This tool allows developers to migrate the existing data from Magento 1.x to Magento 2. The tool includes verification, progress tracking, logging, and testing functions. Code Migration Toolkit: This allows developers to migrate Magento 1.x extensions and customizations to Magento 2. Manual verification and updates are required in order to make the Magento 1.x extensions compatible with Magento 2. Magento 2 Developer Documentation: One of the complaints by the Magento community was that Magento 1 didn’t have enough documentation for developers. In order to resolve this problem, the Magento team created the official Magento 2 Developer Documentation with information for developers, system administrators, designers, and QA specialists. Admin panel changes Better UI: The admin panel has a new look-and-feel, which is more intuitive and easier to use. In addition to that, the admin panel is now responsive and can be viewed from any device in any resolution. Inline editing: The admin panel grids allow inline editing to manage data in a more effective way. Step-by-step product creation: The product add/edit page is one of the most important pages in the admin panel. The Magento team worked hard to create a different experience when it comes to adding/editing products in the Magento admin panel, and the result is that you can manage products with a step-by-step page that includes the fields and import tools separated in different sections. Frontend changes Integrated video in product page: Magento 2 allows uploading a video for the product, introducing a new way of displaying products in the catalog. Simplified checkout: The steps in the checkout page have been reduced to allow customers to place orders in less time, increasing the conversion rate of the Magento store. Register section removed from checkout page: In Magento 1, the customer had the opportunity to register from step 1 of the checkout page. This required the customer to think about his account and the password before completing the order. In order to make the checkout simpler, Magento 2 allows the customer to register from the order success page without delaying the checkout process. What do you need to get started? Magento is a really powerful platform and there is always something new to learn. Just when you think you know everything about Magento, a new version is released with new features to discover. This makes Magento fun, and this makes Magento unique as an e-commerce platform. That being said, this book will be your guide to discover everything you need to know to implement, manage, and maintain your first Magento store. 
In addition to that, I would like to highlight additional resources that will be useful in your journey of mastering Magento: Official Magento Blog (https://magento.com/blog): Get the latest news from the Magento team: best practices, customer stories, information related to events, and general Magento news Magento Resources Library (https://magento.com/resources): Videos, webinars and publications covering useful information organized by categories: order management, marketing and merchandising, international expansion, customer experience, mobile architecture and technology, performance and scalability, security, payments and fraud, retail innovation, and business flexibility Magento Release Information (http://devdocs.magento.com/guides/v2.1/release-notes/bk-release-notes.html): This is the place where you will get all the information about the latest Magento releases, including the highlights of each release, security enhancements, information about known issues, new features, and instructions for upgrade Magento Security Center (https://magento.com/security): Information about each of the Magento security patches as well as best practices and guidelines to keep your Magento store secure Upcoming Events and Webinars (https://magento.com/events): The official list of upcoming Magento events, including live events and webinars Official Magento Forums (https://community.magento.com): Get feedback from the Magento community in the official Magento Forums Summary In this article, we reviewed Magento 2 and the changes that have been introduced in the new version of the platform. We also analyzed the types of e-commerce solutions and the most important platforms available. Resources for Article: Further resources on this subject: Installing Magento [article] Magento : Payment and shipping method [article] Magento 2 – the New E-commerce Era [article]
Read more
  • 0
  • 0
  • 2269

article-image-get-familiar-angular
Packt
23 Jan 2017
26 min read
Save for later

Get Familiar with Angular

Packt
23 Jan 2017
26 min read
This article by Minko Gechev, the author of the book Getting Started with Angular - Second Edition, will help you understand what is required for the development of a new version of Angular from scratch and why its new features make intuitive sense for the modern Web in building high-performance, scalable, single-page applications. Some of the topics that we'll discuss are as follows: Semantic versioning and what chances to expect from Angular 2, 3, 5 and so on. How the evolution of the Web influenced the development of Angular. What lessons we learned from using Angular 1 in the wild for the last a couple of years. What TypeScript is and why it is a better choice for building scalable single-page applications than JavaScript. (For more resources related to this topic, see here.) Angular adopted semantic versioning so before going any further let's make an overview of what this actually means. Angular and semver Angular 1 was rewritten from scratch and replaced with its successor, Angular 2. A lot of us were bothered by this big step, which didn't allow us to have a smooth transition between these two versions of the framework. Right after Angular 2 got stable, Google announced that they want to follow the so called semantic versioning (also known as semver). Semver defines the version of given software project as the triple X.Y.Z, where Z is called patch version, Y is called minor version, and X is called major version. A change in the patch version means that there are no intended breaking changes between two versions of the same project but only bug fixes. The minor version of a project will be incremented when new functionality is introduced, and there are no breaking changes. Finally, the major version will be increased when in the API are introduced incompatible changes. This means that between versions 2.3.1 and 2.10.4, there are no introduced breaking changes but only a few added features and bug fixes. However, if we have version 2.10.4 and we want to change any of the already existing public APIs in a backward-incompatible manner (for instance, change the order of the parameters that a method accepts), we need to increment the major version, and reset the patch and minor versions, so we will get version 3.0.0. The Angular team also follows a strict schedule. According to it, a new patch version needs to be introduced every week; there should be three monthly minor release after each major release, and finally, one major release every six months. This means that by the end of 2018, we will have at least Angular 6. However, this doesn't mean that every six months we'll have to go through the same migration path like we did between Angular 1 and Angular 2. Not every major release will introduce breaking changes that are going to impact our projects . For instance, support for newer version of TypeScript or change of the last optional argument of a method will be considered as a breaking change. We can think of these breaking changes in a way similar to that happened between Angular 1.2 and Angular 1.3. We'll refer to Angular 2 as either Angular 2 or only Angular. If we explicitly mention Angular 2, this doesn't mean that the given paragraph will not be valid for Angular 4 or Angular 5; it most likely will. In case you're interested to know what are the changes between different versions of the framework, you can take a look at the changelog at https://github.com/angular/angular/blob/master/CHANGELOG.md. 
If we're discussing Angular 1, we will be more explicit by mentioning a version number, or the context will make it clear that we're talking about a particular version. Now that we introduced the Angular's semantic versioning and conventions for referring to the different versions of the framework, we can officially start our journey! The evolution of the Web - time for a new framework In the past couple of years, the Web has evolved in big steps. During the implementation of ECMAScript 5, the ECMAScript 6 standard started its development (now known as ECMAScript 2015 or ES2015). ES2015 introduced many changes in JavaScript, such as adding built-in language support for modules, block scope variable definition, and a lot of syntactical sugar, such as classes and destructuring. Meanwhile, Web Components were invented. Web Components allow us to define custom HTML elements and attach behavior to them. Since it is hard to extend the existing set of HTML elements with new ones (such as dialogs, charts, grids, and more), mostly because of the time required for consolidation and standardization of their APIs, a better solution is to allow developers to extend the existing elements the way they want. Web Components provide us with a number of benefits, including better encapsulation, the better semantics of the markup we produce, better modularity, and easier communication between developers and designers. We know that JavaScript is a single-threaded language. Initially, it was developed for simple client-side scripting, but over time, its role has shifted quite a bit. Now, with HTML5, we have different APIs that allow audio and video processing, communication with external services through a two-directional communication channel, transferring and processing big chunks of raw data, and more. All these heavy computations in the main thread may create a poor user experience. They may introduce freezing of the user interface when time-consuming computations are being performed. This led to the development of WebWorkers, which allow the execution of the scripts in the background that communicate with the main thread through message passing. This way, multithreaded programming was brought to the browser. Some of these APIs were introduced after the development of Angular 1 had begun; that's why the framework wasn't built with most of them in mind. Taking advantage of the APIs gives developers many benefits, such as the following: Significant performance improvements Development of software with better quality characteristics Now, let's briefly discuss how each of these technologies has been made part of the new Angular core and why. The evolution of ECMAScript Nowadays, browser vendors are releasing new features in short iterations, and users receive updates quite often. This helps developers take advantage of bleeding-edge Web technologies. ES2015 that is already standardized. The implementation of the latest version of the language has already started in the major browsers. Learning the new syntax and taking advantage of it will not only increase our productivity as developers but also will prepare us for the near future when all the browsers will have full support for it. This makes it essential to start using the latest syntax now. Some projects' requirements may enforce us to support older browsers, which do not support any ES2015 features. In this case, we can directly write ECMAScript 5, which has different syntax but equivalent semantics to ES2015. 
On the other hand, a better approach will be to take advantage of the process of transpilation. Using a transpiler in our build process allows us to take advantage of the new syntax by writing ES2015 and translating it to a target language that is supported by the browsers. Angular has been around since 2009. Back then, the frontend of most websites was powered by ECMAScript 3, the last main release of ECMAScript before ECMAScript 5. This automatically meant that the language used for the framework's implementation was ECMAScript 3. Taking advantage of the new version of the language requires porting of the entirety of Angular 1 to ES2015. From the beginning, Angular 2 took into account the current state of the Web by bringing the latest syntax in the framework. Although new Angular is written with a superset of ES2016 (TypeScript), it allows developers to use a language of their own preference. We can use ES2015, or if we prefer not to have any intermediate preprocessing of our code and simplify the build process, we can even use ECMAScript 5. Note that if we use JavaScript for our Angular applications we cannot use Ahead-of-Time (AoT) compilation. Web Components The first public draft of Web Components was published on May 22, 2012, about three years after the release of Angular 1. As mentioned, the Web Components standard allows us to create custom elements and attach behavior to them. It sounds familiar; we've already used a similar concept in the development of the user interface in Angular 1 applications. Web Components sound like an alternative to Angular directives; however, they have a more intuitive API and built-in browser support. They introduced a few other benefits, such as better encapsulation, which is very important, for example, in handling CSS-style collisions. A possible strategy for adding Web Components support in Angular 1 is to change the directives implementation and introduce primitives of the new standard in the DOM compiler. As Angular developers, we know how powerful but also complex the directives API is. It includes a lot of properties, such as postLink, preLink, compile, restrict, scope, controller, and much more, and of course, our favorite transclude. Approved as standard, Web Components will be implemented on a much lower level in the browsers, which introduces plenty of benefits, such as better performance and native API. During the implementation of Web Components, a lot of web specialists met with the same problems the Angular team did when developing the directives API and came up with similar ideas. Good design decisions behind Web Components include the content element, which deals with the infamous transclusion problem in Angular 1. Since both the directives API and Web Components solve similar problems in different ways, keeping the directives API on top of Web Components would have been redundant and added unnecessary complexity. That's why, the Angular core team decided to start from the beginning by building a framework compatible with Web Components and taking full advantage of the new standard. Web Components involve new features; some of them were not yet implemented by all browsers. In case our application is run in a browser, which does not support any of these features natively, Angular emulates them. An example for this is the content element polyfilled with the ng-content directive. WebWorkers JavaScript is known for its event loop. 
Usually, JavaScript programs are executed in a single thread and different events are scheduled by being pushed in a queue and processed sequentially, in the order of their arrival. However, this computational strategy is not effective when one of the scheduled events requires a lot of computational time. In such cases, the event's handling will block the main thread, and all other events will not be handled until the time-consuming computation is complete and passes the execution to the next one in the queue. A simple example of this is a mouse click that triggers an event, in which callback we do some audio processing using the HTML5 audio API. If the processed audio track is big and the algorithm running over it is heavy, this will affect the user's experience by freezing the UI until the execution is complete. The WebWorker API was introduced in order to prevent such pitfalls. It allows execution of heavy computations inside the context of a different thread, which leaves the main thread of execution free, capable of handling user input and rendering the user interface. How can we take advantage of this in Angular? In order to answer this question, let's think about how things work in Angular 1. What if we have an enterprise application, which processes a huge amount of data that needs to be rendered on the screen using data binding? For each binding, the framework will create a new watcher. Once the digest loop is run, it will loop over all the watchers, execute the expressions associated with them, and compare the returned results with the results gained from the previous iteration. We have a few slowdowns here: The iteration over a large number of watchers The evaluation of the expression in a given context The copy of the returned result The comparison between the current result of the expression's evaluation and the previous one All these steps could be quite slow, depending on the size of the input. If the digest loop involves heavy computations, why not move it to a WebWorker? Why not run the digest loop inside WebWorker, get the changed bindings, and then apply them to the DOM? There were experiments by the community, which aimed for this result. However, their integration into the framework wasn't trivial. One of the main reasons behind the lack of satisfying results was the coupling of the framework with the DOM. Often, inside the watchers' callbacks, Angular 1 directly manipulates the DOM, which makes it impossible to move the watchers inside WebWorkers since the WebWorkers are invoked in an isolated context, without access to the DOM. In Angular 1, we may have implicit or explicit dependencies between the different watchers, which require multiple iterations of the digest loop in order to get stable results. Combining the last two points, it is quite hard to achieve practical results in calculating the changes in threads other than the main thread of execution. Fixing this in Angular 1 introduces a great deal of complexity in the internal implementation. The framework simply was not built with this in mind. Since WebWorkers were introduced before the Angular 2 design process started, the core team took them into mind from the beginning. Lessons learned from Angular 1 in the wild It's important to remember that we're not starting completely from scratch. We're taking what we've learned from Angular 1 with us. In the period since 2009, the Web is not the only thing that evolved. We also started building more and more complex applications. 
Today, single-page applications are not something exotic, but more like a strict requirement for all the web applications solving business problems, which are aiming for high performance and a good user experience. Angular 1 helped us to efficiently build large-scale, single-page applications. However, by applying it in various use cases, we've also discovered some of its pitfalls. Learning from the community's experience, Angular's core team worked on new ideas aiming to answer the new requirements. Controllers Angular 1 follows the Model View Controller (MVC) micro-architectural pattern. Some may argue that it looks more like Model View ViewModel (MVVM) because of the view model attached as properties to the scope or the current context in case of "controller as syntax". It could be approached differently again, if we use the Model View Presenter pattern (MVP). Because of all the different variations of how we can structure the logic in our applications, the core team called Angular 1 a Model View Whatever (MVW) framework. The view in any Angular 1 application is supposed to be a composition of directives. The directives collaborate together in order to deliver fully functional user interfaces. Services are responsible for encapsulating the business logic of the applications. That's the place where we should put the communication with RESTful services through HTTP, real-time communication with WebSockets and even WebRTC. Services are the building block where we should implement the domain models and business rules of our applications. There's one more component, which is mostly responsible for handling user input and delegating the execution to the services--the controller. Although the services and directives have well-defined roles, we can often see the anti-pattern of the Massive View Controller, which is common in iOS applications. Occasionally, developers are tempted to access or even manipulate the DOM directly from their controllers. Initially, this happens while you want to achieve something simple, such as changing the size of an element, or quick and dirty changing elements' styles. Another noticeable anti-pattern is the duplication of the business logic across controllers. Often developers tend to copy and paste logic, which should be encapsulated inside services. The best practices for building Angular 1 applications state is that the controllers should not manipulate the DOM at all, instead, all DOM access and manipulations should be isolated in directives. If we have some repetitive logic between controllers, most likely we want to encapsulate it into a service and inject this service with the dependency injection mechanism of Angular in all the controllers that need that functionality. This is where we're coming from in Angular 1. All this said, it seems that the functionality of controllers could be moved into the directive's controllers. Since directives support the dependency injection API, after receiving the user's input, we can directly delegate the execution to a specific service, already injected. This is the main reason why now Angular uses a different approach by removing the ability to put controllers everywhere by using the ng-controller directive. Scope Data-binding in Angular 1 is achieved using the scope object. We can attach properties to it and explicitly declare in the template that we want to bind to these properties (one- or two-way). 
Although the idea of the scope seems clear, it has two more responsibilities, including event dispatching and the change detection-related behavior. Angular beginners have a hard time understanding what scope really is and how it should be used. Angular 1.2 introduced something called controller as syntax. It allows us to add properties to the current context inside the given controller (this), instead of explicitly injecting the scope object and later adding properties to it. This simplified syntax can be demonstrated through the following snippet: <div ng-controller="MainCtrl as main"> <button ng-click="main.clicked()">Click</button> </div> function MainCtrl() { this.name = 'Foobar'; } MainCtrl.prototype.clicked = function () { alert('You clicked me!'); }; The latest Angular took this even further by removing the scope object. All the expressions are evaluated in the context of the given UI component. Removing the entire scope API introduces higher simplicity; we don't need to explicitly inject it anymore, instead we add properties to the UI components to which we can later bind. This API feels much simpler and more natural. Dependency injection Maybe the first framework on the market that included inversion of control (IoC) through dependency injection (DI) in the JavaScript world was Angular 1. DI provides a number of benefits, such as easier testability, better code organization and modularization, and simplicity. Although the DI in the first version of the framework does an amazing job, Angular 2 took this even further. Since latest Angular is on top of the latest Web standards, it uses the ECMAScript 2016 decorators' syntax for annotating the code for using DI. Decorators are quite similar to the decorators in Python or annotations in Java. They allow us to decorate the behavior of a given object, or add metadata to it, using reflection. Since decorators are not yet standardized and supported by major browsers, their usage requires an intermediate transpilation step; however, if you don't want to take it, you can directly write a little bit more verbose code with ECMAScript 5 syntax and achieve the same semantics. The new DI is much more flexible and feature-rich. It also fixes some of the pitfalls of Angular 1, such as the different APIs; in the first version of the framework, some objects are injected by position (such as the scope, element, attributes, and controller in the directives' link function) and others, by name (using parameters names in controllers, directives, services, and filters). Server-side rendering The bigger the requirements of the Web are, the more complex the web applications become. Building a real-life, single-page application requires writing a huge amount of JavaScript, and including all the required external libraries may increase the size of the scripts on our page to a few megabytes. The initialization of the application may take up to several seconds or even tens of seconds on mobile until all the resources get fetched from the server, the JavaScript is parsed and executed, the page gets rendered, and all the styles are applied. On low-end mobile devices that use a mobile Internet connection, this process may make the users give up on visiting our application. Although there are a few practices that speed up this process, in complex applications, there's no silver bullet. In the process of trying to improve the user experience, developers discovered something called server-side rendering. 
It allows us to render the requested view of a single-page application on the server and directly provide the HTML for the page to the user. Later, once all the resources are processed, the event listeners and bindings can be added by the script files. This sounds like a good way to boost the performance of our application. One of the pioneers in this was React, which allowed prerendering of the user interface on the server side using Node.js DOM implementations. Unfortunately, the architecture of Angular 1 does not allow this. The showstopper is the strong coupling between the framework and the browser APIs, the same issue we had in running the change detection in WebWorkers. Another typical use case for the server-side rendering is for building Search Engine Optimization (SEO)-friendly applications. There were a couple of hacks used in the past for making the Angular 1 applications indexable by the search engines. One such practice, for instance, is the traversal of the application with a headless browser, which executes the scripts on each page and caches the rendered output into HTML files, making it accessible by the search engines. Although this workaround for building SEO-friendly applications works, server-side rendering solves both of the above-mentioned issues, improving the user experience and allowing us to build SEO-friendly applications much more easily and far more elegantly. The decoupling of Angular with the DOM allows us to run our Angular applications outside the context of the browser. Applications that scale MVW has been the default choice for building single-page applications since Backbone.js appeared. It allows separation of concerns by isolating the business logic from the view, allowing us to build well-designed applications. Taking advantage of the observer pattern, MVW allows listening for model changes in the view and updating it when changes are detected. However, there are some explicit and implicit dependencies between these event handlers, which make the data flow in our applications not obvious and hard to reason about. In Angular 1, we are allowed to have dependencies between the different watchers, which requires the digest loop to iterate over all of them a couple of times until the expressions' results get stable. The new Angular makes the data flow one-directional; this has a number of benefits: More explicit data flow. No dependencies between bindings, so no time to live (TTL) of the digest. Better performance of the framework: The digest loop is run only once. We can create apps, which are friendly to immutable or observable models, that allows us to make further optimizations. The change in the data flow introduces one more fundamental change in Angular 1 architecture. We may take another perspective on this problem when we need to maintain a large codebase written in JavaScript. Although JavaScript's duck typing makes the language quite flexible, it also makes its analysis and support by IDEs and text editors harder. Refactoring of large projects gets very hard and error-prone because in most cases, the static analysis and type inference are impossible. The lack of compiler makes typos all too easy, which are hard to notice until we run our test suite or run the application. The Angular core team decided to use TypeScript because of the better tooling possible with it and the compile-time type checking, which help us to be more productive and less error-prone. 
As the following diagram shows, TypeScript is a superset of ECMAScript; it introduces explicit type annotations and a compiler:  Figure 1 The TypeScript language is compiled to plain JavaScript, supported by today's browsers. Since version 1.6, TypeScript implements the ECMAScript 2016 decorators, which makes it the perfect choice for Angular. The usage of TypeScript allows much better IDE and text editors' support with static code analysis and type checking. All this increases our productivity dramatically by reducing the mistakes we make and simplifying the refactoring process. Another important benefit of TypeScript is the performance improvement we implicitly get by the static typing, which allows runtime optimizations by the JavaScript virtual machine. Templates Templates are one of the key features in Angular 1. They are simple HTML and do not require any intermediate translation, unlike most template engines, such as mustache. Templates in Angular combine simplicity with power by allowing us to extend HTML by creating an internal domain-specific language (DSL) inside it, with custom elements and attributes. This is one of the main purposes of Web Components as well. We already mentioned how and why Angular takes advantage of this new technology. Although Angular 1 templates are great, they can still get better! The new Angular templates took the best parts of the ones in the previous release of the framework and enhanced them by fixing some of their confusing parts. For example, let's say we have a directive and we want to allow the user to pass a property to it using an attribute. In Angular 1, we can approach this in the following three different ways: <user name="literal"></user> <user name="expression"></user> <user name="{{interpolate}}"></user> In the user directive, we pass the name property using three different approaches. We can either pass a literal (in this case, the string "literal"), a string, which will be evaluated as an expression (in our case "expression"), or an expression inside, {{ }}. Which syntax should be used completely depends on the directive's implementation, which makes its API tangled and hard to remember. It is a frustrating task to deal with a large amount of components with different design decisions on a daily basis. By introducing a common convention, we can handle such problems. However, in order to have good results and consistent APIs, the entire community needs to agree with it. The new Angular deals with this problem by providing special syntax for attributes, whose values need to be evaluated in the context of the current component, and a different syntax for passing literals. Another thing we're used to, based on our Angular 1 experience, is the microsyntax in template directives, such as ng-if and ng-for. For instance, if we want to iterate over a list of users and display their names in Angular 1, we can use: <div ng-for="user in users">{{user.name}}</div> Although this syntax looks intuitive to us, it allows limited tooling support. However, Angular 2 approached this differently by bringing a little bit more explicit syntax with richer semantics: <template ngFor let-user [ngForOf]="users"> {{user.name}} </template> The preceding snippet explicitly defines the property, which has to be created in the context of the current iteration (user), the one we iterate over (users). 
Since this syntax is too verbose for typing, developers can use the following syntax, which later gets translated to the more verbose one: <li *ngFor="let user of users"> {{user.name}} </li> The improvements in the new templates will also allow better tooling for advanced support by text editors and IDEs. Change detection We already mentioned the opportunity to run the digest loop in the context of a different thread, instantiated as WebWorker. However, the implementation of the digest loop in Angular 1 is not quite memory-efficient and prevents the JavaScript virtual machine from doing further code optimizations, which allows significant performance improvements. One such optimization is the inline caching ( http://mrale.ph/blog/2012/06/03/explaining-js-vms-in-js-inline-caches.html ). The Angular team did a lot of research in order to discover different ways the performance and the efficiency of the change detection could be improved. This led to the development of a brand new change detection mechanism. As a result, Angular performs change detection in code that the framework directly generates from the components' templates. The code is generated by the Angular compiler. There are two built-in code generation (also known as compilation) strategies: Just-in-Time (JiT) compilation: At runtime, Angular generates code that performs change detection on the entire application. The generated code is optimized for the JavaScript virtual machine, which provides a great performance boost. Ahead-of-Time (AoT) compilation: Similar to JiT with the difference that the code is being generated as part of the application's build process. It can be used for speeding the rendering up by not performing the compilation in the browser and also in environments that disallow eval(), such as CSP (Content-Security-Policy) and Chrome extensions. Summary In this article, we considered the main reasons behind the decisions taken by the Angular core team and the lack of backward compatibility between the last two major versions of the framework. We saw that these decisions were fueled by two things--the evolution of the Web and the evolution of the frontend development, with the lessons learned from the development of Angular 1 applications. We learned why we need to use the latest version of the JavaScript language, why to take advantage of Web Components and WebWorkers, and why it's not worth it to integrate all these powerful tools in version 1. We observed the current direction of frontend development and the lessons learned in the last few years. We described why the controller and scope were removed from Angular 2, and why Angular 1's architecture was changed in order to allow server-side rendering for SEO-friendly, high-performance, single-page applications. Another fundamental topic we took a look at was building large-scale applications, and how that motivated single-way data flow in the framework and the choice of the statically typed language, TypeScript. The new Angular reuses some of the naming of the concepts introduced by Angular 1, but generally changes the building blocks of our single-page applications completely. We will take a peek at the new concepts and compare them with the ones in the previous version of the framework. We'll make a quick introduction to modules, directives, components, routers, pipes, and services, and describe how they could be combined for building classy, single-page applications. 
Resources for Article: Further resources on this subject: Angular.js in a Nutshell [article] Angular's component architecture [article] AngularJS Performance [article]
Read more
  • 0
  • 0
  • 2525
Banner background image

article-image-building-scalable-microservices
Packt
18 Jan 2017
33 min read
Save for later

Building Scalable Microservices

Packt
18 Jan 2017
33 min read
In this article by Vikram Murugesan, the author of the book Microservices Deployment Cookbook, we will see a brief introduction to concept of the microservices. (For more resources related to this topic, see here.) Writing microservices with Spring Boot Now that our project is ready, let's look at how to write our microservice. There are several Java-based frameworks that let you create microservices. One of the most popular frameworks from the Spring ecosystem is the Spring Boot framework. In this article, we will look at how to create a simple microservice application using Spring Boot. Getting ready Any application requires an entry point to start the application. For Java-based applications, you can write a class that has the main method and run that class as a Java application. Similarly, Spring Boot requires a simple Java class with the main method to run it as a Spring Boot application (microservice). Before you start writing your Spring Boot microservice, you will also require some Maven dependencies in your pom.xml file. How to do it… Create a Java class called com.packt.microservices.geolocation.GeoLocationApplication.java and give it an empty main method: package com.packt.microservices.geolocation; public class GeoLocationApplication { public static void main(String[] args) { // left empty intentionally } } Now that we have our basic template project, let's make our project a child project of Spring Boot's spring-boot-starter-parent pom module. This module has a lot of prerequisite configurations in its pom.xml file, thereby reducing the amount of boilerplate code in our pom.xml file. At the time of writing this, 1.3.6.RELEASE was the most recent version: <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.3.6.RELEASE</version> </parent> After this step, you might want to run a Maven update on your project as you have added a new parent module. If you see any warnings about the version of the maven-compiler plugin, you can either ignore it or just remove the <version>3.5.1</version> element. If you remove the version element, please perform a Maven update afterward. Spring Boot has the ability to enable or disable Spring modules such as Spring MVC, Spring Data, and Spring Caching. In our use case, we will be creating some REST APIs to consume the geolocation information of the users. So we will need Spring MVC. Add the following dependencies to your pom.xml file: <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> </dependencies> We also need to expose the APIs using web servers such as Tomcat, Jetty, or Undertow. Spring Boot has an in-memory Tomcat server that starts up as soon as you start your Spring Boot application. So we already have an in-memory Tomcat server that we could utilize. Now let's modify the GeoLocationApplication.java class to make it a Spring Boot application: package com.packt.microservices.geolocation; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class GeoLocationApplication { public static void main(String[] args) { SpringApplication.run(GeoLocationApplication.class, args); } } As you can see, we have added an annotation, @SpringBootApplication, to our class. 
The @SpringBootApplication annotation reduces the number of lines of code written by adding the following three annotations implicitly: @Configuration @ComponentScan @EnableAutoConfiguration If you are familiar with Spring, you will already know what the first two annotations do. @EnableAutoConfiguration is the only annotation that is part of Spring Boot. The AutoConfiguration package has an intelligent mechanism that guesses the configuration of your application and automatically configures the beans that you will likely need in your code. You can also see that we have added one more line to the main method, which actually tells Spring Boot the class that will be used to start this application. In our case, it is GeoLocationApplication.class. If you would like to add more initialization logic to your application, such as setting up the database or setting up your cache, feel free to add it here. Now that our Spring Boot application is all set to run, let's see how to run our microservice. Right-click on GeoLocationApplication.java from Package Explorer, select Run As, and then select Spring Boot App. You can also choose Java Application instead of Spring Boot App. Both the options ultimately do the same thing. You should see something like this on your STS console: If you look closely at the console logs, you will notice that Tomcat is being started on port number 8080. In order to make sure our Tomcat server is listening, let's run a simple curl command. cURL is a command-line utility available on most Unix and Mac systems. For Windows, use tools such as Cygwin or even Postman. Postman is a Google Chrome extension that gives you the ability to send and receive HTTP requests. For simplicity, we will use cURL. Execute the following command on your terminal: curl http://localhost:8080 This should give us an output like this: {"timestamp":1467420963000,"status":404,"error":"Not Found","message":"No message available","path":"/"} This error message is being produced by Spring. This verifies that our Spring Boot microservice is ready to start building on with more features. There are more configurations that are needed for Spring Boot, which we will perform later in this article along with Spring MVC. Writing microservices with WildFly Swarm WildFly Swarm is a J2EE application packaging framework from RedHat that utilizes the in-memory Undertow server to deploy microservices. In this article, we will create the same GeoLocation API using WildFly Swarm and JAX-RS. To avoid confusion and dependency conflicts in our project, we will create the WildFly Swarm microservice as its own Maven project. This article is just here to help you get started on WildFly Swarm. When you are building your production-level application, it is your choice to either use Spring Boot, WildFly Swarm, Dropwizard, or SparkJava based on your needs. Getting ready Similar to how we created the Spring Boot Maven project, create a Maven WAR module with the groupId com.packt.microservices and name/artifactId geolocation-wildfly. Feel free to use either your IDE or the command line. Be aware that some IDEs complain about a missing web.xml file. We will see how to fix that in the next section. How to do it… Before we set up the WildFly Swarm project, we have to fix the missing web.xml error. The error message says that Maven expects to see a web.xml file in your project as it is a WAR module, but this file is missing in your project. In order to fix this, we have to add and configure maven-war-plugin. 
Add the following code snippet to your pom.xml file's project section: <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>2.6</version> <configuration> <failOnMissingWebXml>false</failOnMissingWebXml> </configuration> </plugin> </plugins> </build> After adding the snippet, save your pom.xml file and perform a Maven update. Also, if you see that your project is using a Java version other than 1.8. Again, perform a Maven update for the changes to take effect. Now, let's add the dependencies required for this project. As we know that we will be exposing our APIs, we have to add the JAX-RS library. JAX-RS is the standard JSR-compliant API for creating RESTful web services. JBoss has its own version of JAX-RS. So let's  add that dependency to the pom.xml file: <dependencies> <dependency> <groupId>org.jboss.spec.javax.ws.rs</groupId> <artifactId>jboss-jaxrs-api_2.0_spec</artifactId> <version>1.0.0.Final</version> <scope>provided</scope> </dependency> </dependencies> The one thing that you have to note here is the provided scope. The provide scope in general means that this JAR need not be bundled with the final artifact when it is built. Usually, the dependencies with provided scope will be available to your application either via your web server or application server. In this case, when Wildfly Swarm bundles your app and runs it on the in-memory Undertow server, your server will already have this dependency. The next step toward creating the GeoLocation API using Wildfly Swarm is creating the domain object. Use the com.packt.microservices.geolocation.GeoLocation.java file. Now that we have the domain object, there are two classes that you need to create in order to write your first JAX-RS web service. The first of those is the Application class. The Application class in JAX-RS is used to define the various components that you will be using in your application. It can also hold some metadata about your application, such as your basePath (or ApplicationPath) to all resources listed in this Application class. In this case, we are going to use /geolocation as our basePath. Let's see how that looks: package com.packt.microservices.geolocation; import javax.ws.rs.ApplicationPath; import javax.ws.rs.core.Application; @ApplicationPath("/geolocation") public class GeoLocationApplication extends Application { public GeoLocationApplication() {} } There are two things to note in this class; one is the Application class and the other is the @ApplicationPath annotation—both of which we've already talked about. Now let's move on to the resource class, which is responsible for exposing the APIs. If you are familiar with Spring MVC, you can compare Resource classes to Controllers. They are responsible for defining the API for any specific resource. The annotations are slightly different from that of Spring MVC. Let's create a new resource class called com.packt.microservices.geolocation.GeoLocationResource.java that exposes a simple GET API: package com.packt.microservices.geolocation; import java.util.ArrayList; import java.util.List; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; @Path("/") public class GeoLocationResource { @GET @Produces("application/json") public List<GeoLocation> findAll() { return new ArrayList<>(); } } All the three annotations, @GET, @Path, and @Produces, are pretty self explanatory. 
Before we start writing the APIs and the service class, let's test the application from the command line to make sure it works as expected. With the current implementation, any GET request sent to the /geolocation URL should return an empty JSON array.

So far, we have created the RESTful APIs using JAX-RS; it's just another JAX-RS project. In order to make it a microservice using WildFly Swarm, all you have to do is add the wildfly-swarm-plugin to the Maven pom.xml file. This plugin is tied to the package phase of the build, so whenever the package goal is triggered, the plugin will create an uber JAR with all required dependencies. An uber JAR is just a fat JAR that has all its dependencies bundled inside itself. It also deploys our application in an in-memory Undertow server. Add the following snippet to the plugins section of the pom.xml file:

<plugin>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>wildfly-swarm-plugin</artifactId>
  <version>1.0.0.Final</version>
  <executions>
    <execution>
      <id>package</id>
      <goals>
        <goal>package</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Now execute the mvn clean package command from the project's root directory, and wait for the Maven build to be successful. If you look at the logs, you can see that wildfly-swarm-plugin creates the uber JAR, which has all its dependencies. You should see something like this in your console logs:

After the build is successful, you will find two artifacts in the target directory of your project. The geolocation-wildfly-0.0.1-SNAPSHOT.war file is the final WAR created by the maven-war-plugin. The geolocation-wildfly-0.0.1-SNAPSHOT-swarm.jar file is the uber JAR created by the wildfly-swarm-plugin. Execute the following command in the same terminal to start your microservice:

java -jar target/geolocation-wildfly-0.0.1-SNAPSHOT-swarm.jar

After executing this command, you will see that Undertow has started on port number 8080, exposing the geolocation resource we created. You will see something like this:

Execute the following cURL command in a separate terminal window to make sure our API is exposed. The response of the command should be [], indicating there are no geolocations:

curl http://localhost:8080/geolocation

Now let's build the service class and finish the APIs that we started. For simplicity, we are going to store the geolocations in a collection in the service class itself. In a real-world scenario, you would write repository classes or DAOs that talk to the database that holds your geolocations. Get the com.packt.microservices.geolocation.GeoLocationService.java interface; we'll use the same interface here. Create a new class called com.packt.microservices.geolocation.GeoLocationServiceImpl.java that implements the GeoLocationService interface:

package com.packt.microservices.geolocation;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class GeoLocationServiceImpl implements GeoLocationService {

  private static List<GeoLocation> geolocations = new ArrayList<>();

  @Override
  public GeoLocation create(GeoLocation geolocation) {
    geolocations.add(geolocation);
    return geolocation;
  }

  @Override
  public List<GeoLocation> findAll() {
    return Collections.unmodifiableList(geolocations);
  }
}

Now that our service classes are implemented, let's finish building the APIs. We already have a very basic stubbed-out GET API. Let's just introduce the service class to the resource class and call the findAll method.
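For reference, the GeoLocationService interface we are implementing needs only the two operations used in this article; a minimal sketch consistent with the implementation above would be:

package com.packt.microservices.geolocation;

import java.util.List;

public interface GeoLocationService {

  // Stores a new geolocation and returns the stored instance
  GeoLocation create(GeoLocation geolocation);

  // Returns all geolocations stored so far
  List<GeoLocation> findAll();
}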
Similarly, let's use the service's create method for POST API calls. Add the following snippet to GeoLocationResource.java: private GeoLocationService service = new GeoLocationServiceImpl(); @GET @Produces("application/json") public List<GeoLocation> findAll() { return service.findAll(); } @POST @Produces("application/json") @Consumes("application/json") public GeoLocation create(GeoLocation geolocation) { return service.create(geolocation); } We are now ready to test our application. Go ahead and build your application. After the build is successful, run your microservice: let's try to create two geolocations using the POST API and later try to retrieve them using the GET method. Execute the following cURL commands in your terminal one by one: curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 41.803488, "longitude": -88.144040}' http://localhost:8080/geolocation This should give you something like the following output (pretty-printed for readability): { "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 9.568012, "longitude": 77.962444}' http://localhost:8080/geolocation This command should give you an output similar to the following (pretty-printed for readability): { "latitude": 9.568012, "longitude": 77.962444, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } To verify whether your entities were stored correctly, execute the following cURL command: curl http://localhost:8080/geolocation This should give you an output like this (pretty-printed for readability): [ { "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 }, { "latitude": 9.568012, "longitude": 77.962444, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } ] Whatever we have seen so far will give you a head start in building microservices with WildFly Swarm. Of course, there are tons of features that WildFly Swarm offers. Feel free to try them out based on your application needs. I strongly recommend going through the WildFly Swarm documentation for any advanced usages. Writing microservices with Dropwizard Dropwizard is a collection of libraries that help you build powerful applications quickly and easily. The libraries vary from Jackson, Jersey, Jetty, and so on. You can take a look at the full list of libraries on their website. This ecosystem of libraries that help you build powerful applications could be utilized to create microservices as well. As we saw earlier, it utilizes Jetty to expose its services. In this article, we will create the same GeoLocation API using Dropwizard and Jersey. To avoid confusion and dependency conflicts in our project, we will create the Dropwizard microservice as its own Maven project. This article is just here to help you get started with Dropwizard. When you are building your production-level application, it is your choice to either use Spring Boot, WildFly Swarm, Dropwizard, or SparkJava based on your needs. Getting ready Similar to how we created other Maven projects,  create a Maven JAR module with the groupId com.packt.microservices and name/artifactId geolocation-dropwizard. Feel free to use either your IDE or the command line. 
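If you prefer the command line, one way to generate the skeleton is with Maven's standard quickstart archetype and then adjust the generated pom.xml; the exact archetype is not prescribed by the book, so treat this as one possible approach:

mvn archetype:generate \
  -DgroupId=com.packt.microservices \
  -DartifactId=geolocation-dropwizard \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DinteractiveMode=false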
After the project is created, if you see that your project is using a Java version other than 1.8, perform a Maven update for the change to take effect.

How to do it…

The first thing that you will need is the dropwizard-core Maven dependency. Add the following snippet to your project's pom.xml file:

<dependencies>
  <dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-core</artifactId>
    <version>0.9.3</version>
  </dependency>
</dependencies>

Guess what? This is the only dependency you will need to spin up a simple Jersey-based Dropwizard microservice. Before we start configuring Dropwizard, we have to create the domain object, service class, and resource class:

com.packt.microservices.geolocation.GeoLocation.java
com.packt.microservices.geolocation.GeoLocationService.java
com.packt.microservices.geolocation.GeoLocationServiceImpl.java
com.packt.microservices.geolocation.GeoLocationResource.java

Let's see what each of these classes does. The GeoLocation.java class is our domain object that holds the geolocation information. The GeoLocationService.java class defines our interface, which is then implemented by the GeoLocationServiceImpl.java class. If you take a look at the GeoLocationServiceImpl.java class, we are using a simple collection to store the GeoLocation domain objects. In a real-world scenario, you would persist these objects in a database, but to keep it simple, we will not go that far.

To be consistent with the previous implementations, let's change the path of GeoLocationResource to /geolocation. To do so, replace @Path("/") with @Path("/geolocation") on line number 11 of the GeoLocationResource.java class. We have now created the service classes, domain object, and resource class. Let's configure Dropwizard. In order to make your project a microservice, you have to do two things. The first is to create a Dropwizard configuration class. This is used to store any meta-information or resource information that your application will need during runtime, such as DB connection, Jetty server, logging, and metrics configurations. These configurations are ideally stored in a YAML file, which will then be mapped to your Configuration class using Jackson. In this application, we are not going to use the YAML configuration as it is out of scope for this article. If you would like to know more about configuring Dropwizard, refer to their Getting Started documentation page at http://www.dropwizard.io/0.7.1/docs/getting-started.html. Let's create an empty Configuration class called GeoLocationConfiguration.java:

package com.packt.microservices.geolocation;

import io.dropwizard.Configuration;

public class GeoLocationConfiguration extends Configuration {
}

The YAML configuration file has a lot to offer. Take a look at a sample YAML file from Dropwizard's Getting Started documentation page to learn more. The name of the YAML file is usually derived from the name of your microservice. The microservice name is usually identified by the return value of the overridden method public String getName() in your Application class.
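For illustration only, such an override inside the application class (which we create next) might look like the following; we will not actually add it in this article, since we are not using a YAML file here, and the exact name you return is up to you:

@Override
public String getName() {
    // By the convention described above, this name is also what the YAML file is usually named after
    return "geolocation";
}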
The second thing is to create the GeoLocationApplication.java application class:

package com.packt.microservices.geolocation;

import io.dropwizard.Application;
import io.dropwizard.setup.Environment;

public class GeoLocationApplication extends Application<GeoLocationConfiguration> {

  public static void main(String[] args) throws Exception {
    new GeoLocationApplication().run(args);
  }

  @Override
  public void run(GeoLocationConfiguration config, Environment env) throws Exception {
    env.jersey().register(new GeoLocationResource());
  }
}

There are a lot of things going on here, so let's look at them one by one. Firstly, this class extends Application with the GeoLocationConfiguration generic. This makes an instance of your GeoLocationConfiguration.java class available, so you have access to all the properties you defined in your YAML file, mapped onto the Configuration class. The next one is the run method. The run method takes two arguments: your configuration and the environment. The Environment instance is a wrapper around other library-specific objects such as MetricsRegistry, HealthCheckRegistry, and JerseyEnvironment. For example, we can register our Jersey resources using the JerseyEnvironment instance; the env.jersey().register(new GeoLocationResource()) line does exactly that. The main method is pretty straightforward: all it does is call the run method.

Before we can start the microservice, we have to configure this project to create a runnable uber JAR. Uber JARs are just fat JARs that bundle their dependencies in themselves. For this purpose, we will be using the maven-shade-plugin. Add the following snippet to the build section of the pom.xml file. If this is your first plugin, you might want to wrap it in a <plugins> element under <build>:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <configuration>
    <createDependencyReducedPom>true</createDependencyReducedPom>
    <filters>
      <filter>
        <artifact>*:*</artifact>
        <excludes>
          <exclude>META-INF/*.SF</exclude>
          <exclude>META-INF/*.DSA</exclude>
          <exclude>META-INF/*.RSA</exclude>
        </excludes>
      </filter>
    </filters>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer" />
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.packt.microservices.geolocation.GeoLocationApplication</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>

The previous snippet does the following:

It creates a runnable uber JAR with a reduced pom.xml file that does not include the dependencies that are added to the uber JAR. To learn more about this property, take a look at the documentation of maven-shade-plugin.
It uses com.packt.microservices.geolocation.GeoLocationApplication as the class whose main method will be invoked when this JAR is executed. This is done by updating the MANIFEST file.
It excludes all signatures from signed JARs. This is required to avoid security errors.

Now that our project is properly configured, let's try to build and run it from the command line. To build the project, execute mvn clean package from the project's root directory in your terminal. This will create your final JAR in the target directory.
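If you want to double-check that the shade plugin set the Main-Class entry correctly, one quick way (assuming the unzip utility is available on your machine) is to print the manifest straight out of the JAR; you should see a Main-Class line pointing at GeoLocationApplication:

unzip -p target/geolocation-dropwizard-0.0.1-SNAPSHOT.jar META-INF/MANIFEST.MF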
Execute the following command to start your microservice: java -jar target/geolocation-dropwizard-0.0.1-SNAPSHOT.jar server The server argument instructs Dropwizard to start the Jetty server. After you issue the command, you should be able to see that Dropwizard has started the in-memory Jetty server on port 8080. If you see any warnings about health checks, ignore them. Your console logs should look something like this: We are now ready to test our application. Let's try to create two geolocations using the POST API and later try to retrieve them using the GET method. Execute the following cURL commands in your terminal one by one: curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 41.803488, "longitude": -88.144040}' http://localhost:8080/geolocation This should give you an output similar to the following (pretty-printed for readability): { "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 9.568012, "longitude": 77.962444}' http://localhost:8080/geolocation This should give you an output like this (pretty-printed for readability): { "latitude": 9.568012, "longitude": 77.962444, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } To verify whether your entities were stored correctly, execute the following cURL command: curl http://localhost:8080/geolocation It should give you an output similar to the following (pretty-printed for readability): [ { "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 }, { "latitude": 9.568012, "longitude": 77.962444, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } ] Excellent! You have created your first microservice with Dropwizard. Dropwizard offers more than what we have seen so far. Some of it is out of scope for this article. I believe the metrics API that Dropwizard uses could be used in any type of application. Writing your Dockerfile So far in this article, we have seen how to package our application and how to install Docker. Now that we have our JAR artifact and Docker set up, let's see how to Dockerize our microservice application using Docker. Getting ready In order to Dockerize our application, we will have to tell Docker how our image is going to look. This is exactly the purpose of a Dockerfile. A Dockerfile has its own syntax (or Dockerfile instructions) and will be used by Docker to create images. Throughout this article, we will try to understand some of the most commonly used Dockerfile instructions as we write our Dockerfile for the geolocation tracker microservice. How to do it… First, open your STS IDE and create a new file called Dockerfile in the geolocation project. The first line of the Dockerfile is always the FROM instruction followed by the base image that you would like to create your image from. There are thousands of images on Docker Hub to choose from. In our case, we would need something that already has Java installed on it. There are some images that are official, meaning they are well documented and maintained. Docker Official Repositories are very well documented, and they follow best practices and standards. Docker has its own team to maintain these repositories. 
This is essential in order to keep the set of repositories trustworthy, which helps the user make the right choice of repository. To read more about Docker Official Repositories, take a look at https://docs.docker.com/docker-hub/official_repos/

We will be using the Java official repository. To find the official repository, go to hub.docker.com and search for java. You have to choose the one that says official. At the time of writing this, the Java image documentation says it will soon be deprecated in favor of the openjdk image. So the first line of our Dockerfile will look like this:

FROM openjdk:8

As you can see, we have used version (or tag) 8 for our image. If you are wondering what type of operating system this image uses, take a look at the Dockerfile of this image, which you can get from the Docker Hub page. Docker images are usually tagged with the version of the software they are written for. That way, it is easy for users to pick from.

The next step is creating a directory for our project where we will store our JAR artifact. Add this as your next line:

RUN mkdir -p /opt/packt/geolocation

This is a simple Unix command that creates the /opt/packt/geolocation directory. The -p flag instructs it to create the intermediate directories if they don't exist. Now let's create an instruction that will add the JAR file that was created on your local machine into the container at /opt/packt/geolocation:

ADD target/geolocation-0.0.1-SNAPSHOT.jar /opt/packt/geolocation/

As you can see, we are picking up the uber JAR from the target directory and dropping it into the /opt/packt/geolocation directory of the container. Take a look at the / at the end of the target path. That says that the JAR has to be copied into the directory.

Before we can start the application, there is one thing we have to do, that is, expose the ports that we would like to be mapped to the Docker host ports. In our case, the in-memory Tomcat instance is running on port 8080. In order to be able to map port 8080 of our container to any port on our Docker host, we have to expose it first. For that, we will use the EXPOSE instruction. Add the following line to your Dockerfile:

EXPOSE 8080

Now that we are ready to start the app, let's go ahead and tell Docker how to start a container for this image. For that, we will use the CMD instruction:

CMD ["java", "-jar", "/opt/packt/geolocation/geolocation-0.0.1-SNAPSHOT.jar"]

There are two things we have to note here. One is the way we are starting the application, and the other is how the command is broken down into comma-separated Strings. First, let's talk about how we start the application. You might be wondering why we haven't used the mvn spring-boot:run command to start the application. Keep in mind that this command will be executed inside the container, and our container does not have Maven installed, only OpenJDK 8. If you would like to use the maven command, take that as an exercise, and try to install Maven on your container and use the mvn command to start the application. Now that we know we have Java installed, we are issuing a very simple java -jar command to run the JAR. In fact, the Spring Boot Maven plugin internally issues the same command. The next thing is how the command has been broken down into comma-separated Strings. This is a standard that the CMD instruction follows. To keep it simple, keep in mind that whatever command you would like to run when the container starts, just break it down into comma-separated Strings wherever there is whitespace.
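To make that convention concrete, here is the same start command written in both of the forms Docker accepts; this is a generic illustration rather than part of the book's final file, which follows next:

# Exec form (what we use): Docker runs the binary directly, without wrapping it in a shell
CMD ["java", "-jar", "/opt/packt/geolocation/geolocation-0.0.1-SNAPSHOT.jar"]

# Shell form: Docker wraps the command in /bin/sh -c, so shell features work,
# but signals are delivered to the shell rather than directly to the Java process
# CMD java -jar /opt/packt/geolocation/geolocation-0.0.1-SNAPSHOT.jar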
Your final Dockerfile should look something like this:

FROM openjdk:8
RUN mkdir -p /opt/packt/geolocation
ADD target/geolocation-0.0.1-SNAPSHOT.jar /opt/packt/geolocation/
EXPOSE 8080
CMD ["java", "-jar", "/opt/packt/geolocation/geolocation-0.0.1-SNAPSHOT.jar"]

This Dockerfile is one of the simplest implementations. Dockerfiles can sometimes get bigger because you need a lot of customizations to your image. In such cases, it is a good idea to break it down into multiple images that can be reused and maintained separately. There are some best practices to follow whenever you create your own Dockerfile and image. Though we haven't covered them here as they are out of the scope of this article, you should still take a look at them and follow them. To learn more about the various Dockerfile instructions, go to https://docs.docker.com/engine/reference/builder/.

Building your Docker image

We created the Dockerfile, which will be used in this article to create an image for our microservice. If you are wondering why we would need an image, it is the only way we can ship our software to any system. Once you have your image created and uploaded to a common repository, it will be easier to pull your image from any location.

Getting ready

Before you jump right into it, it might be a good idea to get yourself familiar with some of the most commonly used Docker commands. In this article, we will use the build command. Take a look at this URL to understand the other commands: https://docs.docker.com/engine/reference/commandline/#/image-commands. After familiarizing yourself with the commands, open up a new terminal, and change your directory to the root of the geolocation project. Make sure your docker-machine instance is running. If it is not running, use the docker-machine start command to run your docker-machine instance:

docker-machine start default

If you have to configure your shell for the default Docker machine, go ahead and execute the following command:

eval $(docker-machine env default)

How to do it…

From the terminal, issue the following docker build command:

docker build -t packt/geolocation .

We'll try to understand the command later. For now, let's see what happens after you issue the preceding command. You should see Docker downloading the openjdk image from Docker Hub. Once the image has been downloaded, you will see that Docker processes each and every instruction provided in the Dockerfile. When the last instruction has been processed, you will see a message saying Successfully built. This means that your image has been successfully built. Now let's try to understand the command. There are three things to note here:

The first thing is the docker build command itself. The docker build command is used to build a Docker image from a Dockerfile. It needs at least one input, which is usually the location of the Dockerfile. Dockerfiles can be renamed to something other than Dockerfile and can be referred to using the -f option of the docker build command. An instance of this being used is when teams have different Dockerfiles for different build environments, for example, using DockerfileDev for the dev environment, DockerfileStaging for the staging environment, and DockerfileProd for the production environment. It is still encouraged as a best practice to use other Docker options in order to keep the same Dockerfile for all environments.

The second thing is the -t option. The -t option takes the name of the repo and a tag.
In our case, we have not mentioned the tag, so by default, it will pick up latest as the tag. If you look at the repo name, it is different from the official openjdk image name. It has two parts: packt and geolocation. It is always a good practice to put the Docker Hub account name followed by the actual image name as the name of your repo. For now, we will use packt as our account name; later, you can create your own Docker Hub account and use that account name here.

The third thing is the dot at the end. The dot tells Docker to use the current directory, or the present working directory to be more precise, as the build context; this is also where it looks for the Dockerfile by default.

Let's go ahead and verify whether our image was created. In order to do that, issue the following command on your terminal:

docker images

The docker images command is used to list all the images available in your Docker host. After issuing the command, you should see something like this:

As you can see, the newly built image is listed as packt/geolocation in your Docker host. The tag for this image is latest as we did not specify any. The image ID uniquely identifies your image. Note the size of the image. It is a few megabytes bigger than the openjdk:8 image. That is most probably because of the size of our executable uber JAR inside the container. Now that we know how to build an image using an existing Dockerfile, we are at the end of this section. This is just a very quick intro to the docker build command. There are more options that you can provide to the command, such as CPUs and memory. To learn more about the docker build command, take a look at this page: https://docs.docker.com/engine/reference/commandline/build/

Running your microservice as a Docker container

We successfully created our Docker image in the Docker host. Keep in mind that if you are using Windows or Mac, your Docker host is the VirtualBox VM and not your local computer. In this article, we will look at how to spin up a container for the newly created image.

Getting ready

To spin up a new container for our packt/geolocation image, we will use the docker run command. This command is used to run any command inside your container, given the image. Open your terminal and go to the root of the geolocation project. If you have to start your Docker machine instance, do so using the docker-machine start command, and set the environment using the docker-machine env command.

How to do it…

Go ahead and issue the following command on your terminal:

docker run packt/geolocation

Right after you run the command, you should see something like this:

Yay! We can see that our microservice is running as a Docker container. But wait, there is more to it. Let's see how we can access our microservice's in-memory Tomcat instance. Try to run a curl command to see if our app is up and running: open a new terminal instance and execute the following cURL command in that shell:

curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 41.803488, "longitude": -88.144040}' http://localhost:8080/geolocation

Did you get an error message like this?

curl: (7) Failed to connect to localhost port 8080: Connection refused

Let's try to understand what happened here. Why would we get a connection refused error when our microservice logs clearly say that it is running on port 8080?
Yes, you guessed it right: the microservice is not running on your local computer; it is actually running inside the container, which in turn is running inside your Docker host. Here, your Docker host is the VirtualBox VM called default. So we have to replace localhost with the IP of the container. But getting the IP of the container is not straightforward. That is the reason we are going to map port 8080 of the container to the same port on the VM. This mapping will make sure that any request made to port 8080 on the VM is forwarded to port 8080 of the container.

Now go to the shell that is currently running your container, and stop your container. Usually, Ctrl + C will do the job. After your container is stopped, issue the following command:

docker run -p 8080:8080 packt/geolocation

The -p option does the port mapping from the Docker host to the container. The port number to the left of the colon indicates the port number of the Docker host, and the port number to the right of the colon indicates that of the container. In our case, both of them are the same. After you execute the previous command, you should see the same logs that you saw before.

We are not done yet. We still have to find the IP that we have to use to hit our RESTful endpoint. The IP that we have to use is the IP of our Docker Machine VM. To find the IP of the docker-machine instance, execute the following command in a new terminal instance:

docker-machine ip default

This should give you the IP of the VM. Let's say the IP that you received was 192.168.99.100. Now, replace localhost in your cURL command with this IP, and execute the cURL command again:

curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 41.803488, "longitude": -88.144040}' http://192.168.99.100:8080/geolocation

This should give you an output similar to the following (pretty-printed for readability):

{ "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 }

This confirms that you are able to access your microservice from the outside. Take a moment to understand how the port mapping is done. The following figure shows how your machine, VM, and container are orchestrated:

Summary

We looked at an example of a geolocation tracker application to see how it can be broken down into smaller, manageable services. Next, we saw how to create the GeoLocationTracker service using the Spring Boot framework.

Resources for Article:

Further resources on this subject:

Domain-Driven Design [article]
Breaking into Microservices Architecture [article]
A capability model for microservices [article]

A Professional Environment for React Native, Part 2

Pierre Monge
12 Jan 2017
4 min read
In Part 1 of this series, I covered the full environment and everything you need to start creating your own React Native applications. Now here in Part 2, we are going to dig in and go over the tools that you can take advantage of for maintaining those React Native apps.

Maintaining the application

Maintaining a React Native application, just like any software, is very complex and requires a lot of organization. In addition to having strict code (a good syntax with eslint or a good understanding of the code with flow), you must have intelligent code, and you must organize your files, filenames, and variables. It is necessary to have solutions for the long-term maintenance of the application, as well as tools that provide feedback. Here are some tools that we use, which should be in place early in the cycle of your React Native development.

GitHub

GitHub is a fantastic tool, but you need to know how to control it. In my company, we have our own Git flow with a Dev branch, a master branch, release branches, bug-fix branches, and other useful branches. It's up to you to make your own flow for Git! One of the most important things is the Pull Request, or the PR! And if there are many people on your project, it is important for your group to agree on the organization of the code.

BugTracker & Tooling

We use many tools in my team, but here is our must-have list to maintain the application:

CircleCI: This is a continuous integration tool that we integrate with GitHub. It allows us to run our recurring tests on each new commit.
BugSnag: This is a bug tracking tool that can be used in a React Native integration, which makes it possible to report the bugs users run into to a web dashboard without the user noticing anything.
CodePush: This is useful for deploying code to versions that are already in production. And yes, you can change business code while the application is already in production. It is easy to overlook, yet managing the states of the application (Debug, Beta, and Production) is a big part of the work, because having that workflow in place is what enables quality work and a long application life.

We also have quality assurance in our company, which allows us to validate a product before it is released, and which gives us a regular process for putting a React Native app into production.

As you can see, there are many tools that will help you maintain a React Native mobile application. Despite the youthfulness of the product, the community grows quickly and developers are excited about creating apps. There are more and more large companies using React Native, such as Airbnb, Wix, Microsoft, and many others. And with the technology growing and improving, there are more and more new tools and integrations coming to React Native. I hope this series has helped you create and maintain your own React Native applications. Here is a summary of the tools covered:

Atom is a text editor that's modern, approachable, yet hackable to the core: a tool that you can customize to do anything, but also use productively without ever touching a config file.
GitHub is a web-based Git repository hosting service.
CircleCI is a modern continuous integration and delivery platform that software teams love to use.
BugSnag monitors application errors to improve customer experiences and code quality.
react-native-code-push is a plugin that provides client-side integration, allowing you to easily add a dynamic update experience to your React Native app.

About the author

Pierre Monge ([email protected]) is a 21-year-old student.
He is a developer in C, JavaScript, and all things web development, and he has recently been creating mobile applications. He is currently working as an intern at a company named Azendoo, where he is developing a 100% React Native application.

Installing and Using Vue.js

Packt
10 Jan 2017
14 min read
In this article by Olga Filipova, the author of the book Learning Vue.js 2, we explore the key concepts of the Vue.js framework to understand what goes on behind the scenes. We will analyze all the possible ways of installing Vue.js, and we will also learn the ways of debugging and testing our applications.

(For more resources related to this topic, see here.)

So, in this article we are going to learn:

What the MVVM architecture paradigm is and how it applies to Vue.js
How to install, start, run, and debug a Vue application

MVVM architectural pattern

Do you remember how to create the Vue instance? We were instantiating it by calling new Vue({…}). You also remember that in the options we were passing the element on the page where this Vue instance should be bound, and the data object that contained the properties we wanted to bind to our view. The data object is our model, and the DOM element where the Vue instance is bound is our view.

Classic View-Model representation where Vue instance binds one to another

Meanwhile, our Vue instance is what helps to bind our model to the view and vice versa. Our application thus follows the Model-View-ViewModel (MVVM) pattern, where the Vue instance is a ViewModel.

The simplified diagram of Model View ViewModel pattern

Our Model contains data and some business logic, and our View is responsible for its representation. The ViewModel handles data binding, ensuring that data changed in the Model immediately affects the View layer and vice versa. Our Views thus become completely data-driven. The ViewModel becomes responsible for the control of data flow, making data binding fully declarative for us.

Installing, using, and debugging a Vue.js application

In this section, we will analyze all the possible ways of installing Vue.js. We will also create a skeleton for our application, and we will learn the ways of debugging and testing our applications.

Installing Vue.js

There are a number of ways to install Vue.js, ranging from the classic approach of including the downloaded script in HTML within <script> tags, to using tools such as bower, npm, or Vue's command-line interface (vue-cli) to bootstrap the whole application. Let's have a look at all these methods and choose our favorite. In all these examples we will just show a header on a page saying Learning Vue.js.

Standalone

Download the vue.js file. There are two versions, minified and developer version. The development version is here: https://vuejs.org/js/vue.js. The minified version is here: https://vuejs.org/js/vue.min.js. If you are developing, make sure you use the development, non-minified version of Vue. You will love the nice tips and warnings on the console. Then just include vue.js in the script tags:

<script src="vue.js"></script>

Vue is registered in the global variable. You are ready to use it. Our example will then look as simple as the following:

<div id="app">
  <h1>{{ message }}</h1>
</div>
<script src="vue.js"></script>
<script>
  var data = {
    message: "Learning Vue.js"
  };

  new Vue({
    el: "#app",
    data: data
  });
</script>

CDN

Vue.js is available on the following CDNs:

jsdelivr: https://cdn.jsdelivr.net/vue/1.0.25/vue.min.js
cdnjs: https://cdnjs.cloudflare.com/ajax/libs/vue/1.0.25/vue.min.js
npmcdn: https://npmcdn.com/[email protected]/dist/vue.min.js

Just put the URL in the src attribute of the script tag and you are ready to use Vue!

<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/1.0.25/vue.min.js"></script>

Beware, though, that the CDN version might not be synchronized with the latest available version of Vue.
Thus, the example will look like exactly the same as in the standalone version, but instead of using downloaded file in the <script> tags, we are using a CDN URL. Bower If you are already managing your application with bower and don't want to use other tools, there's also a bower distribution of Vue. Just call bower install: # latest stable release bower install vue Our example will look exactly like the two previous examples, but it will include the file from the bower folder: <script src=“bower_components/vue/dist/vue.js”></script> CSP-compliant CSP (content security policy) is a security standard that provides a set of rules that must be obeyed by the application in order to prevent security attacks. If you are developing applications for browsers, more likely you know pretty well about this policy! For the environments that require CSP-compliant scripts, there’s a special version of Vue.js here: https://github.com/vuejs/vue/tree/csp/dist Let’s do our example as a Chrome application to see the CSP compliant vue.js in action! Start from creating a folder for our application example. The most important thing in a Chrome application is the manifest.json file which describes your application. Let’s create it. It should look like the following: { "manifest_version": 2, "name": "Learning Vue.js", "version": "1.0", "minimum_chrome_version": "23", "icons": { "16": "icon_16.png", "128": "icon_128.png" }, "app": { "background": { "scripts": ["main.js"] } } } The next step is to create our main.js file which will be the entry point for the Chrome application. The script should listen for the application launching and open a new window with given sizes. Let’s create a window of 500x300 size and open it with index.html: chrome.app.runtime.onLaunched.addListener(function() { // Center the window on the screen. var screenWidth = screen.availWidth; var screenHeight = screen.availHeight; var width = 500; var height = 300; chrome.app.window.create("index.html", { id: "learningVueID", outerBounds: { width: width, height: height, left: Math.round((screenWidth-width)/2), top: Math.round((screenHeight-height)/2) } }); }); At this point the Chrome specific application magic is over and now we shall just create our index.html file that will do the same thing as in the previous examples. It will include the vue.js file and our script where we will initialize our Vue application: <html lang="en"> <head> <meta charset="UTF-8"> <title>Vue.js - CSP-compliant</title> </head> <body> <div id="app"> <h1>{{ message }}</h1> </div> <script src="assets/vue.js"></script> <script src="assets/app.js"></script> </body> </html> Download the CSP-compliant version of vue.js and add it to the assets folder. Now let’s create the app.js file and add the code that we already wrote added several times: var data = { message: "Learning Vue.js" }; new Vue({ el: "#app", data: data }); Add it to the assets folder. Do not forget to create two icons of 16 and 128 pixels and call them icon_16.png and icon_128.png. Your code and structure in the end should look more or less like the following: Structure and code for the sample Chrome application using vue.js And now the most important thing. Let’s check if it works! It is very very simple. Go to chrome://extensions/ url in your Chrome browser. Check Developer mode checkbox. Click on Load unpacked extension... and check the folder that we’ve just created. Your app will appear in the list! Now just open a new tab, click on apps, and check that your app is there. Click on it! 
Sample Chrome application using vue.js in the list of chrome apps

Congratulations! You have just created a Chrome application!

NPM

The NPM installation method is recommended for large-scale applications. Just run npm install vue:

# latest stable release
npm install vue
# latest stable CSP-compliant release
npm install vue@csp

And then require it:

var Vue = require("vue");

Or, for ES2015 lovers:

import Vue from "vue";

The HTML in our example will look exactly like in the previous examples:

<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Vue.js - NPM Installation</title>
</head>
<body>
  <div id="app">
    <h1>{{ message }}</h1>
  </div>
  <script src="main.js"></script>
</body>
</html>

Now let's create a script.js file that will look almost exactly the same as in the standalone or CDN version, with the only difference being that it will require vue.js:

var Vue = require("vue");

var data = {
  message: "Learning Vue.js"
};

new Vue({
  el: "#app",
  data: data
});

Let's install vue and browserify in order to be able to compile our script.js into the main.js file:

npm install vue --save-dev
npm install browserify --save-dev

In the package.json file, also add a build script that will execute browserify on script.js, transpiling it into main.js. So our package.json file will look like this:

{
  "name": "learningVue",
  "scripts": {
    "build": "browserify script.js -o main.js"
  },
  "version": "0.0.1",
  "devDependencies": {
    "browserify": "^13.0.1",
    "vue": "^1.0.25"
  }
}

Now run:

npm install
npm run build

And open index.html in the browser. I have a friend that at this point would say something like: really? So many steps, installations, commands, explanations… Just to output some header? I'm out! If you are also thinking this, wait. Yes, this is true, we've just done something really simple in a rather complex way, but if you stay with me a bit longer, you will see how complex things become easy to implement if we use the proper tools. Also, do not forget to check your Pomodoro timer, maybe it's time to take a rest!

Vue-cli

Vue provides its own command-line interface that allows bootstrapping single-page applications using whatever workflow you want. It immediately provides hot reloading and a structure for a test-driven environment. After installing vue-cli, just run vue init <desired boilerplate> <project-name> and then just install and run!

# install vue-cli
$ npm install -g vue-cli
# create a new project
$ vue init webpack learn-vue
# install and run
$ cd learn-vue
$ npm install
$ npm run dev

Now open your browser on localhost:8080. You just used vue-cli to scaffold your application. Let's adapt it to our example. Open the src folder; there you will find an App.vue file. Do you remember we talked about Vue components being like bricks from which you build your application? Do you remember that we were creating them and registering them inside our main script file, and that I mentioned we would learn to build components in a more elegant way? Congratulations, you are looking at a component built in a fancy way!

Find the line that says import Hello from './components/Hello'. This is exactly how components are reused inside other components. Have a look at the template at the top of the component file. At some point it contains the tag <hello></hello>. This is exactly where the Hello component will appear in our HTML. Have a look at this component; it is in the src/components folder. As you can see, it contains a template with {{ msg }} and a script that exports data with a defined msg.
This is exactly the same as what we were doing in our previous examples without using components. Let's slightly modify the code to make it the same as in the previous examples. In the Hello.vue file, change the msg in the data object:

<script>
export default {
  data () {
    return {
      msg: "Learning Vue.js"
    }
  }
}
</script>

In the App.vue component, remove everything from the template except the hello tag, so the template looks like this:

<template>
  <div id="app">
    <hello></hello>
  </div>
</template>

Now if you rerun the application you will see our example with beautiful styles we didn't touch:

vue application bootstrapped using vue-cli

Besides the webpack boilerplate template, you can use the following configurations with your vue-cli:

webpack-simple: A simple Webpack + vue-loader setup for quick prototyping.
browserify: A full-featured Browserify + vueify setup with hot-reload, linting and unit testing.
browserify-simple: A simple Browserify + vueify setup for quick prototyping.
simple: The simplest possible Vue setup in a single HTML file

Dev build

My dear reader, I can see your shining eyes and I can read your mind. Now that you know how to install and use Vue.js and how it works, you definitely want to put your hands deeply into the core code and contribute! I understand you. For this you need to use the development version of Vue.js, which you have to download from GitHub and compile yourself.

Let's build our example with this development version of Vue. Create a new folder, for example, dev-build, and copy all the files from the npm example to this folder. Do not forget to copy the node_modules folder. You should cd into it and download the files from GitHub to it, then run npm install and npm run build:

cd <APP-PATH>/node_modules
git clone https://github.com/vuejs/vue.git
cd vue
npm install
npm run build

Now build our example application:

cd <APP-PATH>
npm install
npm run build

Open index.html in the browser and you will see the usual Learning Vue.js header. Let's now try to change something in the vue.js source! Go to the node_modules/vue/src folder and open the config.js file. The second line defines the delimiters:

let delimiters = ['{{', '}}']

This defines the delimiters used in the HTML templates. The things inside these delimiters are recognized as Vue data or as JavaScript code. Let's change them! Let's replace "{{" and "}}" with double percentage signs! Go on and edit the file:

let delimiters = ['%%', '%%']

Now rebuild both the Vue source and our application and refresh the browser. What do you see? After changing the Vue source and replacing the delimiters, the {{}} delimiters do not work anymore! The message inside {{}} is no longer recognized as data that we passed to Vue. In fact, it is being rendered as part of the HTML. Now go to the index.html file and replace our curly-bracket delimiters with double percentage signs:

<div id="app">
  <h1>%% message %%</h1>
</div>

Rebuild our application and refresh the browser! What about now? You see how easy it is to change the framework's code and to try out your changes. I'm sure you have plenty of ideas about how to improve or add some functionality to Vue.js. So change it, rebuild, test, deploy! Happy pull requests!

Debug Vue application

You can debug your Vue application the same way you debug any other web application: use your developer tools, breakpoints, debugger statements, and so on. Vue also provides vue-devtools, which makes it easier to debug Vue applications.
You can download and install it from the Chrome web store: https://chrome.google.com/webstore/detail/vuejs-devtools/nhdogjmejiglipccpnnnanhbledajbpd After installing it, open, for example, our shopping list application. Open developer tools. You will see the Vue tab has automatically appeared: Vue devtools In our case we only have one component—Root. As you can imagine, once we start working with components and having lots of them, they will all appear in the left part of the Vue devtools palette. Click on the Root component and inspect it. You’ll see all the data attached to this component. If you try to change something, for example, add a shopping list item, check or uncheck a checkbox, change the title, and so on, all these changes will be immediately propagated to the data in the Vue devtools. You will immediately see the changes on the right side of it. Let’s try, for example, to add shopping list item. Once you start typing, you see on the right how newItem changes accordingly: The changes in the models are immediately propagated to the Vue devtools data When we start adding more components and introduce complexity to our Vue applications, the debugging will certainly become more fun! Summary In this article we have analyzed the behind the scenes of Vue.js. We learned how to install Vue.js. We also learned how to debug Vue application. Resources for Article: Further resources on this subject: API with MongoDB and Node.js [Article] Tips & Tricks for Ext JS 3.x [Article] Working with Forms using REST API [Article]

A Professional Environment for React Native, Part 1

Pierre Monge
09 Jan 2017
5 min read
React Native, a new framework, allows you to build mobile apps using JavaScript. It uses the same design as React.js, letting you compose a rich mobile UI from declarative components. Although many developers are talking about this technology, React Native is not yet approved by most professionals for several reasons: React Native isn’t fully stable yet. At the time of writing this, we are at version 0.40 It can be scary to use a web technology in a mobile application It’s hard to find good React Native developers because knowing the React.js stack is not enough to maintain a mobile React Native app from A to Z! To confront all these prejudices, this series will act as a guide, detailing how we see things in my development team. We will cover the entire React Native environment as well as discuss how to maintain a React Native application. This series may be of interest to companies who want to implement a React Native solution and also of interest to anyone who is looking for the perfect tools to maintain a mobile application in React Native. Let’s start here in part 1 by exploring the React Native environment. The environment The React Native environment is pretty consistent. To manage all of the parts of such an application, you will need a native stack, a JavaScript stack, and specific components from React Native. Let's examine all the aspects of the React Native environment: The Native part consists of two important pieces of software: Android Studio (Android) and Xcode (iOS). Both the pieces of software are provided with their emulators, so there is no need for a physical device! The negative point of Android Studio, however, is that you need to download the SDK, and you will have to find the good versions and download them all. In addition, these two programs take up a lot of room on your hard disk! The JavaScript part naturally consists of Node.js, but we must add Watchman to that to check the changes in a file in real time. The React Native CLI will automate the linking of all software. You only have to run react-native init helloworld to create a project and react-native run-ios --scheme 'Dev' to launch the project on an iOS simulator in debug mode. The supplied react-native controls will load almost everything! You have, no doubt, come to our first conclusion. React Native has a lot of prerequisites, and although it makes sense to have as much dependency as possible, you will have to master them all, which can take some time. And also a lot of space on your hard drive! Try this as your starting point if you want more information on getting started with React Native. Atom, linter, and flow A developer never starts coding without his text editor and his little tricks, just as a woodcutter never goes out into the forest without his ax! More than 80% of people around me use Atom as a text editor. And they are not wrong! React Native is 100% OpenSource, and Atom is also open source. And it is full of plug-ins and themes of all kinds. I personally use a lot of plug-ins, such as color-picker, file-icons, indent-guide-improved, minimap, etc., but there are some plug-ins that should be essential for every JavaScript coder, especially for your React Native application. linter-eslint To work alone or in a group, you must have a common syntax for all your files. To do this, we use linter-eslint with the fbjs configurations. This plugin provides the following: Good indentation Good syntax on how to define variables, objects, classes, etc. 
Indications of non-existent and unused variables and functions
And many other great benefits

Flow

One issue with using JavaScript has always been that it's a language with no static typing. In fact, there are types, such as String, Number, Boolean, and Function, but that's just not enough: there is no static type checking. To deal with this, we use Flow, which allows you to perform type checking before runtime. This is, of course, useful for catching bugs early! There is even a plug-in version for Atom: linter-flow.

Conclusion

At this point, you should have everything you need to create your first React Native mobile applications. Here are some great examples of apps that are out there already. Check out part 2 in this series, where I cover the tools that can help you maintain your React Native apps.

About the author

Pierre Monge ([email protected]) is a 21-year-old student. He is a developer in C, JavaScript, and all things related to web development, and he has recently been creating mobile applications. He is currently working as an intern at a company named Azendoo, where he is developing a 100% React Native application.

Writing a Reddit Reader with RxPHP

Packt
09 Jan 2017
9 min read
In this article by Martin Sikora, author of the book PHP Reactive Programming, we will cover writing a CLI Reddit reader app using RxPHP, and we will see how Disposables are used in the default classes that come with RxPHP, and how these are going to be useful for unsubscribing from Observables in our app.

(For more resources related to this topic, see here.)

Examining RxPHP's internals

As we know, Disposables serve as a means for releasing resources used by Observers, Observables, Subjects, and so on. In practice, a Disposable is returned, for example, when subscribing to an Observable. Consider the following code from the default Rx\Observable::subscribe() method:

function subscribe(ObserverI $observer, $scheduler = null) {
    $this->observers[] = $observer;
    $this->started = true;

    return new CallbackDisposable(function () use ($observer) {
        $this->removeObserver($observer);
    });
}

This method first adds the Observer to the array of all subscribed Observers. It then marks this Observable as started and, at the end, it returns a new instance of the CallbackDisposable class, which takes a Closure as an argument and invokes it when it's disposed. This is probably the most common use case for Disposables. This Disposable just removes the Observer from the array of subscribers, so it receives no more events emitted from this Observable.

A closer look at subscribing to Observables

It should be obvious that Observables need to work in such a way that they iterate over all subscribed Observers. Unsubscribing via a Disposable then needs to remove one particular Observer from the array of all subscribed Observers. However, if we have a look at how most of the default Observables work, we find out that they always override the Observable::subscribe() method and usually completely omit the part where it should hold an array of subscribers. Instead, they just emit all available values to the subscribed Observer and finish with the onCompleted() signal immediately after that. For example, we can have a look at the actual source code of the subscribe() method of the Rx\ReturnObservable class:

function subscribe(ObserverI $observer, SchedulerI $scheduler = null) {
    $value = $this->value;
    $scheduler = $scheduler ?: new ImmediateScheduler();

    $disposable = new CompositeDisposable();

    $disposable->add($scheduler->schedule(function () use ($observer, $value) {
        $observer->onNext($value);
    }));

    $disposable->add($scheduler->schedule(function () use ($observer) {
        $observer->onCompleted();
    }));

    return $disposable;
}

The ReturnObservable class takes a single value in its constructor and emits this value to every Observer as they subscribe. The following is a nice example of how the lifecycle of an Observable might look.

When an Observer subscribes, it checks whether a Scheduler was also passed as an argument. Usually, it's not, so it creates an instance of ImmediateScheduler. Then, an instance of CompositeDisposable is created, which is going to keep an array of all Disposables used by this method. When calling CompositeDisposable::dispose(), it iterates over all the Disposables it contains and calls their respective dispose() methods. Right after that, we start populating our CompositeDisposable with the following:

$disposable->add($scheduler->schedule(function() { ... }));

This is something we'll see very often. SchedulerInterface::schedule() returns a DisposableInterface, which is responsible for unsubscribing and releasing resources.
In this case, when we're using ImmediateScheduler, which has no other logic, it just evaluates the Closure immediately:

function () use ($observer, $value) {
    $observer->onNext($value);
}

Since ImmediateScheduler::schedule() doesn't need to release any resources (it didn't use any), it just returns an instance of Rx\Disposable\EmptyDisposable that does literally nothing. Then the Disposable is returned, and could be used to unsubscribe from this Observable. However, as we saw in the preceding source code, this Observable doesn't let you unsubscribe, and if we think about it, that doesn't even make sense, because the ReturnObservable class's value is emitted immediately on subscription. The same applies to other similar Observables, such as IteratorObservable, RangeObservable or ArrayObservable. These just contain recursive calls with Schedulers, but the principle is the same.

A good question is: why on Earth is this so complicated? Everything the preceding code does could be stripped down to the following three lines (assuming we're not interested in using Schedulers):

function subscribe(ObserverI $observer) {
    $observer->onNext($this->value);
    $observer->onCompleted();
}

Well, for ReturnObservable this might be true, but in real applications we very rarely use any of these primitive Observables. It's also true that we usually don't even need to deal with Schedulers. However, the ability to unsubscribe from Observables, or to clean up any resources when unsubscribing, is very important, and we'll use it in a few moments.

A closer look at Operator chains

Before we start writing our Reddit reader, we should talk briefly about an interesting situation that might occur, so it doesn't catch us unprepared later. We're also going to introduce a new type of Observable, called ConnectableObservable. Consider this simple Operator chain with two subscribers:

// rxphp_filters_observables.php
use Rx\Observable\RangeObservable;
use Rx\Observable\ConnectableObservable;

$connObs = new ConnectableObservable(new RangeObservable(0, 6));

$filteredObs = $connObs
    ->map(function($val) {
        return $val ** 2;
    })
    ->filter(function($val) {
        return $val % 2;
    });

$disposable1 = $filteredObs->subscribeCallback(function($val) {
    echo "S1: ${val}\n";
});
$disposable2 = $filteredObs->subscribeCallback(function($val) {
    echo "S2: ${val}\n";
});

$connObs->connect();

The ConnectableObservable class is a special type of Observable that behaves similarly to a Subject (in fact, internally, it really uses an instance of the Subject class). Any other Observable emits all available values right after you subscribe to it. However, ConnectableObservable takes another Observable as an argument and lets you subscribe Observers to it without emitting anything. When you call ConnectableObservable::connect(), it connects the Observers with the source Observable, and all values go one by one to all subscribers. Internally, it contains an instance of the Subject class: when we called subscribe(), it just subscribed the Observer to its internal Subject, and when we called the connect() method, it subscribed the internal Subject to the source Observable.

In the $filteredObs variable we keep a reference to the last Observable returned from the filter() call, which is an instance of AnonymousObservable; on the next few lines, we subscribe both Observers to it. Now, let's see what this Operator chain prints:

$ php rxphp_filters_observables.php
S1: 1
S2: 1
S1: 9
S2: 9
S1: 25
S2: 25

As we can see, each value went through both Observers in the order they were emitted.
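To isolate what connect() adds, here is a small, self-contained sketch in plain PHP. TinyConnectable is an illustrative stand-in, not the real ConnectableObservable: it only records subscribers until connect() is called, and then multicasts every value to all of them, which is why S1 and S2 receive the values interleaved:

```php
<?php
// Illustrative stand-in for ConnectableObservable - not the RxPHP class.

class TinyConnectable
{
    private $observers = [];
    private $source;

    public function __construct(callable $source)
    {
        // $source pushes its values into whatever observer it is given.
        $this->source = $source;
    }

    public function subscribe(callable $observer)
    {
        // Nothing is emitted yet - the observer is only remembered.
        $this->observers[] = $observer;
    }

    public function connect()
    {
        // The source runs exactly once; every value is multicast.
        call_user_func($this->source, function ($value) {
            foreach ($this->observers as $observer) {
                $observer($value);
            }
        });
    }
}

$connectable = new TinyConnectable(function (callable $observer) {
    foreach (range(0, 5) as $value) {
        $observer($value ** 2); // the map() step from the chain above
    }
});

// The filter() step is folded into the subscribers to keep the sketch short.
$connectable->subscribe(function ($v) { if ($v % 2) { echo "S1: {$v}\n"; } });
$connectable->subscribe(function ($v) { if ($v % 2) { echo "S2: {$v}\n"; } });

$connectable->connect(); // S1: 1, S2: 1, S1: 9, S2: 9, S1: 25, S2: 25
```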
Just out of curiosity, we can also have a look at what would happen if we didn't use ConnectableObservable, and used just the RangeObservable instead: $ php rxphp_filters_observables.php S1: 1 S1: 9 S1: 25 S2: 1 S2: 9 S2: 25 This time, RangeObservable emitted all values to the first Observer and then, again, all values to the second Observer. Right now, we can tell that the Observable had to generate all the values twice, which is inefficient, and with a large dataset, this might cause a performance bottleneck. Let's go back to the first example with ConnectableObservable, and modify the filter() call so it prints all the values that go through: $filteredObservable = $connObservable ->map(function($val) { return $val ** 2; }) ->filter(function($val) { echo "Filter: $valn"; return $val % 2; }); Now we run the code again and see what happens: $ php rxphp_filters_observables.php Filter: 0 Filter: 0 Filter: 1 S1: 1 Filter: 1 S2: 1 Filter: 4 Filter: 4 Filter: 9 S1: 9 Filter: 9 S2: 9 Filter: 16 Filter: 16 Filter: 25 S1: 25 Filter: 25 S2: 25 Well, this is unexpected! Each value is printed twice. This doesn't mean that the Observable had to generate all the values twice, however. It's not obvious at first sight what happened, but the problem is that we subscribed to the Observable at the end of the Operator chain. As stated previously, $filteredObservable is an instance of AnnonymousObservable that holds many nested Closures. By calling its subscribe() method, it runs a Closure that's created by its predecessor, and so on. This leads to the fact that every call to subscribe() has to invoke the entire chain. While this might not be an issue in many use cases, there are situations where we might want to do some special operation inside one of the filters. Also, note that calls to the subscribe() method might be out of our control, performed by another developer who wanted to use an Observable we created for them. It's good to know that such a situation might occur and could lead to unwanted behavior. It's sometimes hard to see what's going on inside Observables. It's very easy to get lost, especially when we have to deal with multiple Closures. Schedulers are prime examples. Feel free to experiment with the examples shown here and use debugger to examine step-by-step what code gets executed and in what order. So, let's figure out how to fix this. We don't want to subscribe at the end of the chain multiple times, so we can create an instance of Subject class, where we'll subscribe both Observers, and the Subject class itself will subscribe to the AnnonymousObservable as discussed a moment ago: // ... use RxSubjectSubject; $subject = new Subject(); $connObs = new ConnectableObservable(new RangeObservable(0, 6)); $filteredObservable = $connObs ->map(function($val) { return $val ** 2; }) ->filter(function($val) { echo "Filter: $valn"; return $val % 2; }) ->subscribe($subject); $disposable1 = $subject->subscribeCallback(function($val) { echo "S1: ${val}n"; }); $disposable2 = $subject->subscribeCallback(function($val) { echo "S2: ${val}n"; }); $connObservable->connect(); Now we can run the script again and see that it does what we wanted it to do: $ php rxphp_filters_observables.php Filter: 0 Filter: 1 S1: 1 S2: 1 Filter: 4 Filter: 9 S1: 9 S2: 9 Filter: 16 Filter: 25 S1: 25 S2: 25 This might look like an edge case, but soon we'll see that this issue, left unhandled, could lead to some very unpredictable behavior. 
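If the reason for the doubled Filter lines is still hard to picture, the following plain-PHP sketch (again just an illustration, not RxPHP code) shows how a chain of nested Closures behaves: each call to the final subscribe function re-runs every Closure in the chain, all the way back to the source.

```php
<?php
// Each "operator" wraps the previous subscribe function in a new Closure.

function rangeSource(int $from, int $count): callable
{
    return function (callable $observer) use ($from, $count) {
        for ($i = $from; $i < $from + $count; $i++) {
            $observer($i);
        }
    };
}

function mapOp(callable $subscribe, callable $fn): callable
{
    return function (callable $observer) use ($subscribe, $fn) {
        $subscribe(function ($value) use ($observer, $fn) {
            $observer($fn($value));
        });
    };
}

function filterOp(callable $subscribe, callable $predicate): callable
{
    return function (callable $observer) use ($subscribe, $predicate) {
        $subscribe(function ($value) use ($observer, $predicate) {
            echo "Filter: {$value}\n";
            if ($predicate($value)) {
                $observer($value);
            }
        });
    };
}

$chain = filterOp(
    mapOp(rangeSource(0, 6), function ($v) { return $v ** 2; }),
    function ($v) { return $v % 2; }
);

// Two subscriptions: the source and every operator Closure run twice.
$chain(function ($v) { echo "S1: {$v}\n"; });
$chain(function ($v) { echo "S2: {$v}\n"; });
```

Subscribing a single Subject at the end of the chain, as in the fixed example above, is what collapses those two passes into one.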
We'll bring out both these issues (proper usage of Disposables and Operator chains) when we start writing our Reddit reader. Summary In this article, we looked in more depth at how to use Disposables and Operators, how these work internally, and what it means for us. We also looked at a couple of new classes from RxPHP, such as ConnectableObservable, and CompositeDisposable. Resources for Article: Further resources on this subject: Working with JSON in PHP jQuery [article] Working with Simple Associations using CakePHP [article] Searching Data using phpMyAdmin and MySQL [article]


Bug Tracking

Packt
04 Jan 2017
11 min read
In this article by Eduardo Freitas, the author of the book Building Bots with Node.js, we will learn about Internet Relay Chat (IRC). It enables us to communicate in real time in the form of text. This chat runs on a TCP protocol in a client server model. IRC supports group messaging which is called as channels and also supports private message. (For more resources related to this topic, see here.) IRC is organized into many networks with different audiences. IRC being a client server, users need IRC clients to connect to IRC servers. IRC Client software comes as a packaged software as well as web based clients. Some browsers are also providing IRC clients as add-ons. Users can either install on their systems and then can be used to connect to IRC servers or networks. While connecting these IRC Servers, users will have to provide unique nick or nickname and choose existing channel for communication or users can start a new channel while connecting to these servers. In this article, we are going to develop one of such IRC bots for bug tracking purpose. This bug tracking bot will provide information about bugs as well as details about a particular bug. All this will be done seamlessly within IRC channels itself. It's going to be one window operations for a team when it comes to knowing about their bugs or defects. Great!! IRC Client and server As mentioned in introduction, to initiate an IRC communication, we need an IRC Client and Server or a Network to which our client will be connected. We will be using freenode network for our client to connect to. Freenode is the largest free and open source software focused IRC network. IRC Web-based Client I will be using IRC web based client using URL(https://webchat.freenode.net/). After opening the URL, you will see the following screen, As mentioned earlier, while connecting, we need to provide Nickname: and Channels:. I have provided Nickname: as Madan and at Channels: as #BugsChannel. In IRC, channels are always identified with #, so I provided # for my bugs channel. This is the new channel that we will be starting for communication. All the developers or team members can similarly provide their nicknames and this channel name to join for communication. Now let's ensure Humanity: by selecting I'm not a robot and click button Connect. Once connected, you will see the following screen. With this, our IRC client is connected to freenode network. You can also see username on right hand side as @Madan within this #BugsChannel. Whoever is joining this channel using this channel name and a network, will be shown on right hand side. In the next article, we will ask our bot to join this channel and the same network and will see how it appears within the channel. IRC bots IRC bot is a program which connects to IRC as one of the clients and appears as one of the users in IRC channels. These IRC bots are used for providing IRC Services or to host chat based custom implementations which will help teams for efficient collaboration. Creating our first IRC bot using IRC and NodeJS Let's start by creating a folder in our local drive in order to store our bot program from the command prompt. mkdir ircbot cd ircbot Assuming we have Node.js and NPM installed and let's create and initialize our package.json, which will store our bot's dependencies and definitions. npm init Once you go through the npm init options (which are very easy to follow), you'll see something similar to this. On your project folder you'll see the result which is your package.json file. 
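For reference, the generated package.json will look roughly like the following; the exact values depend on the answers you give to the npm init prompts, so treat this as a sketch rather than the file you must end up with:

```json
{
  "name": "ircbot",
  "version": "1.0.0",
  "description": "An IRC bot that tracks bugs or defects for the team",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
```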
Let's install irc package from NPM. This can be located at https://www.npmjs.com/package/irc. In order to install it, run this npm command. npm install –-save irc You should then see something similar to this. Having done this, the next thing to do is to update your package.json in order to include the "engines" attribute. Open with a text editor the package.json file and update it as follows. "engines": { "node": ">=5.6.0" } Your package.json should then look like this. Let's create our app.js file which will be the entry point to our bot as mentioned while setting up our node package. Our app.js should like this. var irc = require('irc'); var client = new irc.Client('irc.freenode.net', 'BugTrackerIRCBot', { autoConnect: false }); client.connect(5, function(serverReply) { console.log("Connected!n", serverReply); client.join('#BugsChannel', function(input) { console.log("Joined #BugsChannel"); client.say('#BugsChannel', "Hi, there. I am an IRC Bot which track bugs or defects for your team.n I can help you using following commands.n BUGREPORT n BUG # <BUG. NO>"); }); }); Now let's run our Node.js program and at first see how our console looks. If everything works well, our console should show our bot as connected to the required network and also joined a channel. Console can be seen as the following, Now if you look at our channel #BugsChannel in our web client, you should see our bot has joined and also sent a welcome message as well. Refer the following screen: If you look at the the preceding screen, our bot program got has executed successfully. Our bot BugTrackerIRCBot has joined the channel #BugsChannel and also bot sent an introduction message to all whoever is on channel. If you look at the right side of the screen under usernames, we are seeing BugTrackerIRCBot below @Madan Code understanding of our basic bot After seeing how our bot looks in IRC client, let's look at basic code implementation from app.js. We used irc library with the following lines, var irc = require('irc'); Using irc library, we instantiated client to connect one of the IRC networks using the following code snippet, var client = new irc.Client('irc.freenode.net', 'BugTrackerIRCBot', { autoConnect: false }); Here we connected to network irc.freenode.net and provided a nickname as BugTrackerIRCBot. This name has been given as I would like my bot to track and report the bugs in future. Now we ask client to connect and join a specific channel using the following code snippet, client.connect(5, function(serverReply) { console.log("Connected!n", serverReply); client.join('#BugsChannel', function(input) { console.log("Joined #BugsChannel"); client.say('#BugsChannel', "Hi, there. I am an IRC Bot which track bugs or defects for your team.n I can help you using following commands.n BUGREPORT n BUG # <BUG. NO>"); }); }); In preceeding code snippet, once client is connected, we get reply from server. This reply we are showing on a console. Once successfully connected, we ask bot to join a channel using the following code lines: client.join('#BugsChannel', function(input) { Remember, #BugsChannel is where we have joined from web client at the start. Now using client.join(), I am asking my bot to join the same channel. Once bot is joined, bot is saying a welcome message in the same channel using function client.say(). Hope this has given some basic understanding of our bot and it's code implementations. 
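As a small preview of where we are heading, the bot could also listen for channel messages and react to the two commands it advertises. The handler below is only a hedged sketch built on the irc package's message event — the parsing and the reply texts are placeholders, not the final implementation:

```js
// Listen for messages in any channel the bot has joined.
client.addListener('message', function (from, to, text) {
    if (text.indexOf('BUGREPORT') === 0) {
        // Placeholder reply - later this will come from our bug storage.
        client.say(to, from + ': there are currently no bugs logged for your team.');
    } else if (text.indexOf('BUG #') === 0) {
        var bugNumber = text.replace('BUG #', '').trim();
        // Placeholder reply - later we will look up details and a URL.
        client.say(to, from + ': looking up details for bug #' + bugNumber + '...');
    }
});
```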
In the next article, we will enhance our bot so that our teams can have effective communication experience while chatting itself. Enhancing our BugTrackerIRCBot Having built a very basic IRC bot, let's enhance our BugTrackerIRCBot. As developers, we always would like to know how our programs or a system is functioning. To do this typically our testing teams carry out testing of a system or a program and log their bugs or defects into a bug tracking software or a system. We developers later can take a look at those bugs and address them as a part of our development life cycle. During this journey, developers will collaborate and communicate over messaging platforms like IRC. We would like to provide unique experience during their development by leveraging IRC bots. So here is what exactly we are doing. We are creating a channel for communication all the team members will be joined and our bot will also be there. In this channel, bugs will be reported and communicated based on developers' request. Also if developers need some additional information about a bug, chat bot can also help them by providing a URL from the bug tracking system. Awesome!! But before going in to details, let me summarize using the following steps about how we are going to do this, Enhance our basic bot program for more conversational experience Bug tracking system or bug storage where bugs will be stored and tracked for developers Here we mentioned about bug storage system. In this article, I would like to explain DocumentDB which is a NoSQL JSON based cloud storage system. What is DocumentDB? I have already explained NoSQLs. DocumentDB is also one of such NoSQLs where data is stored in JSON documents and offered by Microsoft Azure platform. Details of DocumentDB can be referred from (https://azure.microsoft.com/en-in/services/documentdb/) Setting up a DocumentDB for our BugTrackerIRCBot Assuming you already have a Microsoft Azure subscription follow these steps to configure DocumentDB for your bot. Create account ID for DocumentDB Let's create a new account called botdb using the following screenshot from Azure portal. Select NoSQL API as of DocumentDB. Select appropriate subscription and resources. I am using existing resources for this account. You can also create a new dedicated resource for this account. Once you enter all the required information, hit Create button at the bottom to create new account for DocumentDB. Newly created account botdb can be seen as the following, Create collection and database Select a botdb account from account lists shown precedingly. This will show various menu options like Properties, Settings, Collections etc. Under this account we need to create a collection to store bugs data. To create a new collection, click on Add Collection option as shown in the following screenshot, On click of Add Collection option, following screen will be shown on right side of the screen. Please enter the details as shown in the following screenshot: In the preceding screen, we are creating a new database along with our new collection Bugs. This new database will be named as BugDB. Once this database is created, we can add other bugs related collections in future in the same database. This can be done in future using option Use existing from the preceding screen. Once you enter all the relevant data, click OK to create database as well as collection. Refer the following screenshot: From the preceding screen, COLLECTION ID and DATABASE shown will be used during enhancing our bot. 
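Looking ahead to wiring this storage into the bot, a query against the Bugs collection could look roughly like the sketch below. It assumes the documentdb npm package is installed, and the endpoint URI, master key, and the 'Open' status value are placeholders you would replace with the real values from the botdb account:

```js
var DocumentClient = require('documentdb').DocumentClient;

// Placeholders - copy the real URI and key from the botdb account's Keys blade.
var client = new DocumentClient('https://botdb.documents.azure.com:443/', {
    masterKey: '<your-primary-key>'
});

// BugDB and Bugs are the database and collection we created above.
var collectionLink = 'dbs/BugDB/colls/Bugs';

client.queryDocuments(
    collectionLink,
    "SELECT * FROM Bugs b WHERE b.status = 'Open'"
).toArray(function (err, bugs) {
    if (err) {
        return console.error(err);
    }
    console.log('Open bugs found:', bugs.length);
});
```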
Create data for our BugTrackerIRCBot Now we have BugsDB with Bugs collection which will hold all the data for bugs. Let's add some data into our collection. To add a data let's use menu option Document Explorer shown in the following screenshot: This will open up a screen showing list of Databases and Collections created so far. Select our database as BugDB and collection as Bugs from the available list. Refer the following screenshot: To create a JSON document for our Bugs collection, click on Create option. This will open up a New Document screen to enter JSON based data. Please enter a data as per the following screenshot: We will be storing id, status, title, description, priority,assignedto, url attributes for our single bug document which will get stored in Bugs collection. To save JOSN document in our collection click Save button. Refer the following screenshot: This way we can create sample records in bugs collection which will be later wired up in NodeJS program. Sample list of bugs can be seen in the following screenshot: Summary Every development team needs bug tracking and reporting tools. There are typical needs of bug reporting and bug assignment. In case of critical projects these needs become also very critical for project timelines. This article showed us how we can provide a seamless experience to developers while they are communicating with peers within a channel. To summarize so far, we understood how to use DocumentDB from Microsoft Azure. Using DocumentDB, we created a new collection along with new database to store bugs data. We also added some sample JSON documents in Bugs collection. In today's world of collaboration, development teams who would be using such integrations and automations would be efficient and effective while delivering their quality products. Resources for Article: Further resources on this subject: Talking to Bot using Browser [article] Asynchronous Control Flow Patterns with ES2015 and beyond [article] Basic Website using Node.js and MySQL database [article]


Web Development with React and Bootstrap

Packt
04 Jan 2017
18 min read
In this article by Harmeet Singh and Mehul Bhat, the authors of the book Learning Web Development with React and Bootstrap, we are going to see how we can build responsive web applications with the help of Bootstrap and ReactJS. (For more resources related to this topic, see here.) There are many different ways to build modern web applications with JavaScript and CSS, including a lot of different tool choices, and a lot of new theory to learn. This book introduces you to ReactJS and Bootstrap which you will likely come across as you learn about modern web app development. They both are used for building fast and scalable user interfaces. React is famously known as a view (V) in MVC when we talk about defining M and C we need to look somewhere else or we can use other frameworks like Redux and Flux to handle remote data. The best way to learn code is to write code, so we're going to jump right in. To show you just how easy it is to get up and running with Bootstrap and ReactJS, we're going to cover theory and will see how we can make super simple applications as well as integrate with other applications. ReactJS React (sometimes styled React.js or ReactJS) is an open-source JavaScript library which provides a view for data rendered as HTML. Components have been used typically to render React views which contain additional components specified as custom HTML tags. React gives you a trivial virtual DOM, powerful views without templates, unidirectional data flow and explicit mutation. It is very methodical in updating the HTML document when data changes; and a clean separation between components on a modern single-page application. As your app comes into existence and develops, it's advantageous to ensure that your components are used in the right manner and the React app consists of reusable components, which makes code reuse, testing, and separation of concerns easy. React is not only V in MVC, it has stateful components, it handles mapping from input to state changes, and it renders components. In this sense, it does everything that an MVC would do. Let's look at React's component life cycle and it's different levels: Bootstrap Bootstrap is an open source frontend framework maintained by Twitter for developing responsive websites and web applications. It includes HTML, CSS, and JavaScript code to build user interface components. It's a faster and an easier way to develop powerful mobile-first user interface. Bootstrap grid system allows us to create responsive 12 column grids, layout, and components. It includes predefined classes for easy layout options (fixed width and full width). Bootstrap have pre-styled dozen reusable components and custom jQuery plugins like button, alerts, dropdown, modal, tooltip tab, pagination, carousal, badges, icons and many more. Bootstrap package includes the compiled and minified version of CSS and JS for our app we just need CSS bootstrap.min.css and fonts folder. This style sheet will provide you the look and feel of all components, responsive layout structure for our application. In the previous version Bootstrap included icons as image but in version 3 they have replaced icons as fonts. We can also customize the Bootstrap CSS stylesheet as per the component featured in our application. React-Bootstrap The React-Bootstrap JavaScript framework is similar to Bootstrap rebuilt for React. It's a complete reimplementation of the Bootstrap frontend reusable components in React. 
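As a quick taste of what this looks like in practice, a Bootstrap-styled button can be rendered as a React component roughly as follows. This assumes an ES6/JSX build setup such as Babel, and uses the bsStyle prop from the React-Bootstrap releases current at the time of writing, so check the library's documentation for your version:

```js
import React from 'react';
import ReactDOM from 'react-dom';
import { Button } from 'react-bootstrap';

// Same look and feel as <button class="btn btn-primary">,
// but expressed as a reusable React component.
ReactDOM.render(
  <Button bsStyle="primary" onClick={() => alert('Saved!')}>
    Save
  </Button>,
  document.getElementById('example')
);
```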
React-Bootstrap has no dependency on any other framework, such as Bootstrap.js or jQuery. It means that if you are using React-Bootstrap then we don't need to include the jQuery in your project as a dependency. Using React-Bootstrap, we can be sure that, there won't be external JavaScript calls to render the component which might be incompatible with the React DOM render. However, you can still achieve the same functionality and look and feel as Twitter Bootstrap, but with much cleaner code. Benefits of React-Bootstrap Compare to Twitter Bootstrap, we can import required code/component. It saves a bit of typing and bugs by compressing the Bootstrap. It reduces typing efforts and, more importantly, conflicts by compressing the Bootstrap. We don't need to think about the different approaches taken by Bootstrap versus. React It is easy to use It encapsulates in elements It uses JSX syntax It avoids React rendering of the virtual DOM It is easy to detect DOM changes and update the DOM without any conflict It doesn't have any dependency on other libraries, such as jQuery Bootstrap grid system Bootstrap is based on a 12-column grid system which includes a powerful responsive structure and a mobile-first fluid grid system that allows us to scaffold our web app with very few elements. In Bootstrap, we have a predefined series of classes to compose of rows and columns, so before we start we need to include the <div>tag with the container class to wrap our rows and columns. Otherwise, the framework won't respond as expected because Bootstrap has written CSS which is dependent on it. The preceding snippet has HTML structure of container class <div> tag: <div class="container"><div> This will make your web app the centre of the page as well as control the rows and columns to work as expected in responsive. There are four class prefixes which help to define the behaviour of the columns. All the classes are related to different device screen sizes and react in familiar ways. The following table from http://getbootstrap.com defines the variations between all four classes:   Extra small devices Phones (<768px) Small devices Tablets (≥768px) Medium devices Desktops (≥992px) Large devices Desktops (≥1200px) Grid behavior Horizontal at all times Collapsed to start, horizontal above breakpoints Container width None (auto) 750px 970px 1170px Class prefix .col-xs- .col-sm- .col-md- .col-lg- # of columns 12 Column width Auto ~62px ~81px ~97px Gutter width 30px (15px on each side of a column) Nestable Yes Offsets Yes Column ordering Yes React components React is basically based on a modular build, with encapsulated components that manage their own state so it will efficiently update and render your components when data changes. In React, components logic is written in JavaScript instead of templates so you can easily pass rich data through your app and manage the state out of the DOM. Using the render() method, we are rendering a component in React that takes input data and returns what you want to display. It can either take HTML tags (strings) or React components (classes). 
Let's take a quick look at examples of both: var myReactElement = <div className="hello" />; ReactDOM.render(myReactElement, document.getElementById('example')); In this example, we are passing HTML as a string into the render method which we have used before creating the <Navbar>: var ReactComponent = React.createClass({/*...*/}); var myReactElement = <ReactComponent someProperty={true} />; ReactDOM.render(myReactElement, document.getElementById('example')); In the preceding example, we are rendering the component, just to create a local variable that starts with an uppercase convention. Using the upper versus lowercase convention in React's JSX will distinguish between local component classes and HTML tags. So, we can create our React elements or components in two ways: either we can use Plain JavaScript with React.createElement or React's JSX. React.createElement() Using JSX in React is completely optional for creating the react app. As we know, we can create elements with React.createElement which take three arguments: a tag name or component, a properties object, and a variable number of child elements, which is optional: var profile = React.createElement('li',{className:'list-group-item'}, 'Profile'); var profileImageLink = React.createElement('a',{className:'center-block text-center',href:'#'},'Image'); var profileImageWrapper = React.createElement('li',{className:'list-group-item'}, profileImageLink); var sidebar = React.createElement('ul', { className: 'list-group' }, profile, profileImageWrapper); ReactDOM.render(sidebar, document.getElementById('sidebar')); In the preceding example, we have used React.createElement to generate a ulli structure. React already has built-in factories for common DOM HTML tags. JSX in React JSX is extension of JavaScript syntax and if you observe the syntax or structure of JSX, you will find it similar to XML coding. JSX is doing preprocessor footstep which adds XML syntax to JavaScript. Though, you can certainly use React without JSX but JSX makes react a lot more neat and elegant. Similar like XML, JSX tags are having tag name, attributes, and children and in that if an attribute value is enclosed in quotes that value becomes a string. The way XML is working with balanced opening and closing tags, JSX works similarly and it also helps to understand and read huge amount of structures easily than JavaScript functions and objects. Advantages of using JSX in React Take a look at the following points: JSX is very simple to understand and think about than JavaScript functions Mark-up of JSX would be more familiar to non-programmers Using JSX, your markup becomes more semantic, organized, and significant JSX – acquaintance or understanding In the development region, user interface developer, user experience designer, and quality assurance people are not much familiar with any programming language but JSX makes their life easy by providing easy syntax structure which is visually similar to HTML structure. JSX shows a path to indicate and see through your mind's eye, the structure in a solid and concise way. JSX – semantics/structured syntax Till now, we have seen how JSX syntax is easy to understand and visualize, behind this there is big reason of having semantic syntax structure. JSX with pleasure converts your JavaScript code into more standard way, which gives clarity to set your semantic syntax and significance component. 
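To see how the two approaches line up, here is the same sidebar structure from the React.createElement() example above, rewritten as a sketch in JSX with the class names used earlier:

```js
var Sidebar = React.createClass({
  render: function () {
    return (
      <ul className="list-group">
        <li className="list-group-item">Profile</li>
        <li className="list-group-item">
          <a className="center-block text-center" href="#">Image</a>
        </li>
      </ul>
    );
  }
});

ReactDOM.render(<Sidebar />, document.getElementById('sidebar'));
```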
With the help of JSX syntax you can declare structure of your custom component with information the way you do in HTML syntax and that will do all magic to transform your syntax to JavaScript functions. ReactDOM namespace helps us to use all HTML elements with the help of ReactJS, isn't this an amazing feature! It is. Moreover, the good part is, you can write your own named components with help of ReactDOM namespace. Please check out below HTML simple mark-up and how JSX component helps you to have semantic markup. <div className="divider"> <h2>Questions</h2><hr /> </div> As you can see in the preceding example, we have wrapped <h2>Questions</h2><hr /> with <div> tag which has classNamedivider so, in React composite component, you can create similar structure and it is as easy as you do your HTML coding with semantic syntax: <Divider> Questions </Divider> Composite component As we know that, you can create your custom component with JSX markup and JSX syntax will transform your component to JavaScript syntax component. Namespace components It's another feature request which is available in React JSX. We know that JSX is just an extension of JavaScript syntax and it also provides ability to use namespace so, React is also using JSX namespace pattern rather than XML namespacing. By using, standard JavaScript syntax approach which is object property access, this feature is useful for assigning component directly as <Namespace.Component/> instead of assigning variables to access components which are stored in an object. JSXTransformer JSXTransformer is another tool to compile JSX in the browser. While reading a code, browser will read attribute type="text/jsx" in your mentioned <script> tag and it will only transform those scripts which has mentioned type attribute and then it will execute your script or written function in that file. The code will be executed in same manner the way React-tools executes on the server. JSXTransformer is deprecating in current version of React, but you can find the current version on any provided CDNs and Bower. As per my opinion, it would be great to use Babel REPL tool to compile JavaScript. It has already adopted by React and broader JavaScript community. Attribute expressions If you can see above example of show/Hide we have used attribute expression for show the message panel and hide it. In react, there is a bit change in writing an attribute value, in JavaScript expression we write attribute in quotes ("") but we have to provide pair of curly braces ({}). var showhideToggle = this.state.collapse ? (<MessagePanel>):null/>; Boolean attributes As in Boolean attribute, there are two values, either it can be true or false and if we neglect its value in JSX while declaring attribute, it by default takes value as true. If we want to have attribute value false then we have to use an attribute expression. This scenario can come regularly when we use HTML form elements, for example disabled attribute, required attribute, checked attribute, and readOnly attribute. In Bootstrap example: aria-haspopup="true"aria-expanded="true" // example of writing disabled attribute in JSX <input type="button" disabled />; <input type="button" disabled={true} />; JavaScript expressions As seen in the preceding example, you can embed JavaScript expressions in JSX using syntax that will be accustomed to any handlebars user, for example style = { displayStyle } allocates the value of the JavaScript variable displayStyle to the element's style attribute. 
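A minimal implementation of that Divider composite component might look like the following sketch; the markup simply mirrors the HTML shown above, and everything else about the component is an assumption for illustration:

```js
var Divider = React.createClass({
  render: function () {
    return (
      <div className="divider">
        <h2>{this.props.children}</h2><hr />
      </div>
    );
  }
});

// Usage - the semantic markup discussed above:
ReactDOM.render(<Divider>Questions</Divider>, document.getElementById('example'));
```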
Styles Same as the expression, you can set styles by assigning an ordinary JavaScript object to the style attribute. How interesting, if someone tells you, not to write CSS syntax but you can write JavaScript code to achieve the same, no extra efforts. Isn't it superb stuff! Yes, it is. Events There is a set of event handlers that you can bind in a way that should look much acquainted to anybody who knows HTML. Generally, as per our practice we set properties on to the object which is anti-pattern in JSX attribute standard. var component = <Component />; component.props.foo = x; // bad component.props.bar = y; // also bad As shown in the preceding example, you can see the anti-pattern and it's not the best practice. If you don't know about properties of JSX attributes then propTypes won't be set and it will throw errors which would be difficult for you to trace. Props is very sensitive part of attribute so, you should not change it, as each props is having predefined method and you should use it as it is meant for, like we use other JavaScript methods or HTML tags. This doesn't mean that it is impossible to change Props, it is possible but it is against standard defined by React. Even in React, it will throw error. Spread attributes Let's check out JSX feature—spread attributes: var props = {}; props.foo = x; props.bar = y; var component = <Component {...props} />; As you see in above example, your properties which you have declared have become part of your component's props as well. Reusability of attributes is also possible here and you can also map it with other attributes. But you have to be very careful in ordering your attributes while you declare it, as it will override the previous declared attribute with lastly declared one. Props and state React components translate your raw data into Rich HTML, the props and state together build with that raw data to keep your UI consistent. Ok, let's identify what exactly it is: Props and state are both plain JS objects. It triggers with a render update. React manage the component state by calling setState (data, callback). This method will merge data into this.state, and re-renders the component to keep our UI up to date. For example, the state of the drop-down menu (visible or hidden). React component props - short for "properties" that don't change over time. For example, drop-down menu items. Sometimes components only take some data with this .props method and render it, which makes your component stateless. Using props and state together helps you to make an interactive app. Component life cycle methods In React each component has its own specific callback function. These callback's functions play an important role when we are thinking about DOM manipulation or integrating other plugins in React (jQuery). Let's look at some commonly used methods in the lifecycle of a component: getInitialState(): This method will help you to get the initial state of a component. componentDidMount: This method is called automatically when a component is rendered or mounted for the first time in DOM. Integrate JavaScript frameworks, we'll use this method to perform operations like setTimeout or setInterval, or send AJAX requests. componentWillReceiveProps: This method will be used to receive a new props. componentWillUnmount: This method is invoked before component is unmounted from DOM. Cleanup the DOM memory elements which are mounted in componentDidMount method. componentWillUpdate: This method invoked before updating a new props and state. 
componentDidUpdate: This is invoked immediately when the component has been updated in DOM What is Redux? As we know, in single page applications (SPAs) when we have to contract with state and time, it would be difficult to handgrip state over time. Here, Redux helps a lot, how? Because, in JavaScript application, Redux is handling two states: one is Data state and another is UI state and it's standard option for SPAs (single page applications). Moreover, bearing in mind, Redux can be used with AngularJS or jQuery or React JavaScript libraries or frameworks. Now we know, what does Redux mean? In short, Redux is a helping hand to play with states while developing JavaScript applications. We have seen in our previous examples like, the data flows in one direction only from parent level to child level and it is known as "unidirectional data flow". React has same flow direction from data to components so in this case it would be very difficult for proper communication of two components in React. Redux's architecture benefits: Compare to other frameworks, it has more benefits: It might not have any other way effects As we know, binning is not needed because components cant not interact directly States are managed globally so less possibility of mismanagement Sometimes, for middleware it would be difficult to manage other way effects React Top Level API When we are talking about React API, it's the starting step to get into React library. Different usage of React will provide different output like using React script tag will make top-level APIs available on the React global, using ES6 with npm will allow us to write import React from 'react' and using ES5 with npm will allow us to write var React = require('react'), so there are multiple ways to intialize the React with different features. Mount/Unmount component Always, it's recommended to have custom wrapper API in your API, suppose we have single root or more than one root and it will be deleted at some period, so you will not lose it. Facebook is also having the similar set up which automatically calls unmountComponentAtNode. I also suggest not to call ReactDOM.render() every time but ideal way is to write or use it through library so, by that way Application will have mounting and unmounting to manage it. Creating custom wrapper will help you to manage configuration at one place like internationalization, routers, user data and it would be very painful to set up all configuration every time at different places. React integration with other APIs React integration is nothing but converting Web component to React component by using JSX, Redux, and other methods of React. I would like to share here, some of the best practices to be followed to have 100% quality output. Things to remember while creating application with React Take a look at the following points to remember: Before you start working on React, always remember that it is just a View library, not the MVC framework. It is advisable to have small length of component to deal with classes and modules as well as it makes life easy while code understanding, unit testing and long run maintenance of component. React has introduced functions of props in its 0.14 version which is recommended to use, it is also known as functional component which helps to split your component. To avoid painful journey while dealing with React-based app, please don't use much states. 
As I said earlier that React is only view library so, to deal with rendering part, I recommend to use Redux rather than other frameworks of Flux. If you want to have more type safety then always use propTypes which also helps to catch bug early and acts as a document. I recommend use of shallow rendering method to test React component which allows rendering single component without touching their child components. While dealing with large React applications, always use Webpack, NPM, ES6, JSX, and Babel to complete your application. If you want to have deep dive into React's application and its elements, you can use Reduxdev tools. Summary To begin with, we saw just how easy it is to get ReactJS and Bootstrap installed with the inclusion of JavaScript files and a style sheet. With Bootstrap, we work towards having a responsive grid system for different mobile devices and applied the fundamental styles of HTML elements with the inclusion of a few classes and divs. We also saw the framework's new mobile-first responsive design in action without cluttering up our markup with unnecessary classes or elements. Resources for Article: Further resources on this subject: Getting Started with React and Bootstrap [article] Getting Started with ASP.NET Core and Bootstrap 4 [article] Frontend development with Bootstrap 4 [article]

Setting Up the Environment

Packt
04 Jan 2017
14 min read
In this article by Sohail Salehi, the author of the book Angular 2 Services, you can ask the two fundamental questions, when a new development tool is announced or launched are: how different is the new tool from other competitor tools and how enhanced it is when compared to its own the previous versions? If we are going to invest our time in learning a new framework, common sense says we need to make sure we get a good return on our investment. There are so many good articles out there about cons and pros of each framework. To me, choosing Angular 2 boils down to three aspects: The foundation: Angular is introduced and supported by Google and targeted at “ever green” modern browsers. This means we, as developers, don't need to lookout for hacky solutions in each browser upgrade, anymore. The browser will always be updated to the latest version available, letting Angular worry about the new changes and leaving us out of it. This way we can focus more on our development tasks. The community: Think about the community as an asset. The bigger the community, the wider range of solutions to a particular problem. Looking at the statistics, Angular community still way ahead of others and the good news is this community is leaning towards being more involved and more contributing on all levels. The solution: If you look at the previous JS frameworks, you will see most of them focusing on solving a problem for a browser first, and then for mobile devices. The argument for that could be simple: JS wasn't meant to be a language for mobile development. But things have changed to a great extent over the recent years and people now use mobile devices more than before. I personally believe a complex native mobile application – which is implemented in Java or C – is more performant, as compared to its equivalent implemented in JS. But the thing here is that not every mobile application needs to be complex. So business owners have started asking questions like: Why do I need a machine-gun to kill a fly? (For more resources related to this topic, see here.) With that question in mind, Angular 2 chose a different approach. It solves the performance challenges faced by mobile devices first. In other words, if your Angular 2 application is fast enough on mobile environments, then it is lightning fast inside the “ever green” browsers. So that is what we are going to do in this article. First we are going to learn about Angular 2 and the main problem it is going to solve. Then we talk a little bit about the JavaScript history and the differences between Angular 2 and AngularJS 1. Introducing “The Sherlock Project” is next and finally we install the tools and libraries we need to implement this project. Introducing Angular 2 The previous JS frameworks we've used already have a fluid and easy workflow towards building web applications rapidly. But as developers what we are struggling with is the technical debt. In simple words, we could quickly build a web application with an impressive UI. But as the product kept growing and the change requests started kicking in, we had to deal with all maintenance nightmares which forces a long list of maintenance tasks and heavy costs to the businesses. Basically the framework that used to be an amazing asset, turned into a hairy liability (or technical debt if you like). One of the major "revamps" in Angular 2 is the removal of a lot of modules resulting in a lighter and faster core. 
For example, if you are coming from an Angular 1.x background and don't see $scope or $log in the new version, don't panic, they are still available to you via other means, But there is no need to add overhead to the loading time if we are not going to use all modules. So taking the modules out of the core results in a better performance. So to answer the question, one of the main issues Angular 2 addresses is the performance issues. This is done through a lot of structural changes. There is no backward compatibility We don't have backward compatibility. If you have some Angular projects implemented with the previous version (v1.x), depending on the complexity of the project, I wouldn't recommend migrating to the new version. Most likely, you will end up hammering a lot of changes into your migrated Angular 2 project and at the end you will realize it was more cost effective if you would just create a new project based on Angular 2 from scratch. Please keep in mind, the previous versions of AngularJS and Angular 2 share just a name, but they have huge differences in their nature and that is the price we pay for a better performance. Previous knowledge of AngularJS 1.x is not necessary You might wondering if you need to know AngularJS 1.x before diving into Angular 2. Absolutely not. To be more specific, it might be even better if you didn't have any experience using Angular at all. This is because your mind wouldn't be preoccupied with obsolete rules and syntaxes. For example, we will see a lot of annotations in Angular 2 which might make you feel uncomfortable if you come from a Angular 1 background. Also, there are different types of dependency injections and child injectors which are totally new concepts that have been introduced in Angular 2. Moreover there are new features for templating and data-binding which help to improve loading time by asynchronous processing. The relationship between ECMAScript, AtScript and TypeScript The current edition of ECMAScript (ES5) is the one which is widely accepted among all well known browsers. You can think of it as the traditional JavaScript. Whatever code is written in ES5 can be executed directly in the browsers. The problem is most of modern JavaScript frameworks contain features which require more than the traditional JavaScript capabilities. That is why ES6 was introduced. With this edition – and any future ECMAScript editions – we will be able to empower JavaScript with the features we need. Now, the challenge is running the new code in the current browsers. Most browsers, nowadays recognize standard JavaScript codes only. So we need a mediator to transform ES6 to ES5. That mediator is called a transpiler and the technical term for transformations is transpiling. There are many good transpilers out there and you are free to choose whatever you feel comfortable with. Apart from TypeScript, you might want to consider Babel (babeljs.io) as your main transpiler. Google originally planned to use AtScript to implement Angular 2, but later they joined forces with Microsoft and introduced TypeScript as the official transpiler for Angular 2. The following figure summarizes the relationship between various editions of ECMAScript, AtScript and TypeScript. For more details about JavaScript, ECMAScript and how they evolved during the past decade visit the following link: https://en.wikipedia.org/wiki/ECMAScript Setting up tools and getting started! It is important to get the foundation right before installing anything else. 
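Before we install anything, here is a tiny sketch to make the idea of transpiling from the previous section concrete: the first line is the ES6/TypeScript we write, and the comment underneath shows the kind of ES5 a transpiler such as TypeScript or Babel emits (the exact output varies by tool and settings):

```ts
// What we write (ES6/TypeScript):
const greet = (name: string) => `Hello, ${name}`;

// Roughly what the transpiler emits so that current browsers can run it (ES5):
// var greet = function (name) { return "Hello, " + name; };
```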
Depending on your operating system, install Node.js and its package manager- npm. . You can find a detailed installation manual on Node.js official website. https://nodejs.org/en/ Make sure both Node.js and npm are installed globally (they are accessible system wide) and have the right permissions. At the time of writing npm comes with Node.js out of the box. But in case their policy changes in the future, you can always download the npm and follow the installation process from the following link. https://npmjs.com The next stop would be the IDE. Feel free to choose anything that you are comfortable with. Even a simple text editor will do. I am going to use WebStorm because of its embedded TypeScript syntax support and Angular 2 features which speeds up development process. Moreover it is light weight enough to handle the project we are about to develop. You can download it from here: https://jetbrains.com/webstorm/download We are going to use simple objects and arrays as a place holder for the data. But at some stage we will need to persist the data in our application. That means we need a database. We will use the Google's Firebase Realtime database for our needs. It is fast, it doesn't need to download or setup anything locally and more over it responds instantly to your requests and aligns perfectly with Angular's two-way data-binding. For now just leave the database as it is. You don't need to create any connections or database objects. Setting up the seed project The final requirement to get started would be an Angular 2 seed project to shape the initial structure of our project. If you look at the public source code repositories you can find several versions of these seeds. But I prefer the official one for two reasons: Custom made seeds usually come with a personal twist depending on the developers taste. Although sometimes it might be a great help, but since we are going to build everything from scratch and learn the fundamental concepts, they are not favorable to our project. The official seeds are usually minimal. They are very slim and don't contain overwhelming amount of 3rd party packages and environmental configurations. Speaking about packages, you might be wondering what happened to the other JavaScript packages we needed for this application. We didn't install anything else other than Node and NPM. The next section will answer this question. Setting up an Angular 2 project in WebStorm Assuming you have installed WebStorm, fire the IDE and and checkout a new project from a git repository. Now set the repository URL to: https://github.com/angular/angular2-seed.git and save the project in a folder called “the sherlock project” as shown in the figure below: Hit the Clone button and open the project in WebStorm. Next, click on the package.json file and observe the dependencies. As you see, this is a very lean seed with minimal configurations. It contains the Angular 2 plus required modules to get the project up and running. The first thing we need to do is install all required dependencies defined inside the package.json file. Right click on the file and select the “run npm install” option. Installing the packages for the first time will take a while. In the mean time, explore the devDependencies section of package.json in the editor. 
As you see, we have all the required bells and whistles to run the project including TypeScript, web server and so on to start the development: "devDependencies": { "@types/core-js": "^0.9.32", "@types/node": "^6.0.38", "angular2-template-loader": "^0.4.0", "awesome-typescript-loader": "^1.1.1", "css-loader": "^0.23.1", "raw-loader": "^0.5.1", "to-string-loader": "^1.1.4", "typescript": "^2.0.2", "webpack": "^1.12.9", "webpack-dev-server": "^1.14.0", "webpack-merge": "^0.8.4" }, We also have some nifty scripts defined inside package.json that automate useful processes. For example, to start a webserver and see the seed in action we can simply execute following command in a terminal: $ npm run server or we can right click on package.json file and select “Show npm Scripts”. This will open another side tab in WebStorm and shows all available scripts inside the current file. Basically all npm related commands (which you can run from command-line) are located inside the package.json file under the scripts key. That means if you have a special need, you can create your own script and add it there. You can also modify the current ones. Double click on start script and it will run the web server and loads the seed application on port 3000. That means if you visit http://localhost:3000 you will see the seed application in your browser: If you are wondering where the port number comes from, look into package.json file and examine the server key under the scripts section:    "server": "webpack-dev-server --inline --colors --progress --display-error-details --display-cached --port 3000 --content-base src", There is one more thing before we move on to the next topic. If you open any .ts file, WebStorm will ask you if you want it to transpile the code to JavaScript. If you say No once and it will never show up again. We don't need WebStorm to transpile for us because the start script is already contains a transpiler which takes care of all transformations for us. Front-end developers versus back-end developers Recently, I had an interesting conversation with a couple of colleagues of mine which worth sharing here in this article. One of them is an avid front-end developer and the other is a seasoned back-end developer. You guessed what I'm going to talk about: The debate between back-end/front-end developers and who is the better half. We have seen these kind of debates between back-end and front-end people in development communities long enough. But the interesting thing which – in my opinion – will show up more often in the next few months (years) is a fading border between the ends (front-end/back-end). It feels like the reason that some traditional front-end developers are holding up their guard against new changes in Angular 2, is not just because the syntax has changed thus causing a steep learning curve, but mostly because they now have to deal with concepts which have existed natively in back-end development for many years. Hence, the reason that back-end developers are becoming more open to the changes introduced in Angular 2 is mostly because these changes seem natural to them. Annotations or child dependency injections for example is not a big deal to back-enders, as much as it bothers the front-enders. I won't be surprised to see a noticeable shift in both camps in the years to come. 
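Going back to the scripts section for a moment, here is an example of adding our own script next to the predefined ones. The greet entry below is purely illustrative, and the rest of the section stays exactly as it came with the seed:

```json
"scripts": {
  "server": "webpack-dev-server --inline --colors --progress --display-error-details --display-cached --port 3000 --content-base src",
  "greet": "echo Hello from The Sherlock Project"
}
```

It can then be run with npm run greet, exactly like npm run server.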
Probably we will see more back-enders who are willing to use Angular as a good candidate for some – if not all – of their back-end projects and probably we will see more front-enders taking Object-Oriented concepts and best practices more seriously. Given that JavaScript was originally a functional scripting language they probably will try to catch-up with the other camp as fast as they can. There is no comparison here and I am not saying which camp has advantage over the other one. My point is, before modern front-end frameworks, JavaScript was open to be used in a quick and dirty inline scripts to solve problems quickly. While this is a very convenient approach it causes serious problems when you want to scale a web application. Imagine the time and effort you might have to make finding all of those dependent codes and re-factor them to reflect the required changes. When it comes to scalability, we need a full separation between layers and that requires developers to move away from traditional JavaScript and embrace more OOP best practices in their day to day development tasks. That what has been practiced in all modern front-end frameworks and Angular 2 takes it to the next level by completely restructuring the model-view-* concept and opening doors to the future features which eventually will be native part of any web browser. Introducing “The Sherlock Project” During the course of this journey we are going to dive into all new Angular 2 concepts by implementing a semi-AI project called: “The Sherlock Project”. This project will basically be about collecting facts, evaluating and ranking them, and making a decision about how truthful they are. To achieve this goal, we will implement a couple of services and inject them to the project, wherever they are needed. We will discuss one aspect of the project and will focus on one related service. At the end, all services will come together to act as one big project. Summary This article covered a brief introduction to Angular 2. We saw where Angular2 comes from, what problems is it going to solve and how we can benefit from it. We talked about the project that we will be creating with Angular 2. We also saw whic other tools are needed for our project and how to install and configure them. Finally, we cloned an Angular 2 seed project, installed all its dependencies and started a web server to examine the application output in a browser Resources for Article: Further resources on this subject: Introduction to JavaScript [article] Third Party Libraries [article] API with MongoDB and Node.js [article]

article-image-testing-and-quality-control
Packt
04 Jan 2017
19 min read
Save for later

Testing and Quality Control

Packt
04 Jan 2017
19 min read
In this article by Pablo Solar Vilariño and Carlos Pérez Sánchez, the authors of the book PHP Microservices, we will see the following topics:

(For more resources related to this topic, see here.)

Test-driven development

Behavior-driven development

Acceptance test-driven development

Tools

Test-driven development

Test-Driven Development (TDD) is part of the Agile philosophy, and it appears to solve a common developer's problem: as an application evolves and grows, the code base degrades; the developers fix problems to keep it running, but every single line added can introduce a new bug or even break other functions.

Test-driven development is a learning technique that helps developers learn about the domain problem of the application they are going to build, doing it in an iterative, incremental, and constructivist way:

Iterative, because the technique always repeats the same process to get the value

Incremental, because for each iteration we have more unit tests to be used

Constructivist, because it is possible to test everything we are developing straight away during the process, so we can get immediate feedback

Also, once we finish developing each unit test or iteration, we can forget about it, because it will be kept from then on throughout the entire development process, helping us remember the domain problem through the unit tests; this is a good approach for forgetful developers.

It is very important to understand that TDD includes four things: analysis, design, development, and testing. In other words, doing TDD means understanding the domain problem, analyzing it correctly, designing the application well, developing well, and testing it. It needs to be clear: TDD is not just about implementing unit tests; it is the whole process of software development.

TDD matches projects based on microservices perfectly, because using microservices in a large project means dividing it into little microservices or functionalities, which is like a group of little projects connected by a communication channel. The project size is independent of using TDD, because with this technique you divide each functionality into little examples, and for that it does not matter whether the project is big or small – even less so when the project is divided into microservices. Also, microservices are still better than a monolithic project in this respect, because the functionalities for the unit tests are organized into microservices, which helps developers know where they can begin using TDD.

How to do TDD?

Doing TDD is not difficult; we just need to follow some steps and repeat them, improving our code and checking that we did not break anything. TDD involves the following steps (a minimal code sketch follows this list):

Write the unit test: It needs to be the simplest and clearest test possible, and once it is done, it has to fail; this is mandatory. If it does not fail, there is something we are not doing properly.

Run the tests: If the test has errors (it fails), this is the moment to develop the minimum code to pass the test – just what is necessary; do not code additional things. Once you have developed the minimum code to pass the test, run the test again (step two); if it passes, go to the next step; if not, fix it and run the test again.

Improve the test: If you think it is possible to improve the code you wrote, do it and run the tests again (step two). If you think it is perfect, then write a new unit test (step one).
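As a rough illustration of this red/green loop, here is a minimal sketch using PHPUnit, the test framework introduced later in the Tools section. The Calculator class and its add() method are hypothetical examples, not code from the book's project; in a real project the test and the production class would live in separate files.

```php
<?php
// CalculatorTest.php - a minimal red/green sketch.
// Red: this test is written first and fails because Calculator does not exist yet.
use PHPUnit\Framework\TestCase;

class CalculatorTest extends TestCase
{
    public function testAddReturnsTheSumOfTwoNumbers()
    {
        $calculator = new Calculator();

        $this->assertSame(5, $calculator->add(2, 3));
    }
}

// Green: the minimum code needed to make the test pass, and nothing more.
// Refactoring comes afterwards, with the test as a safety net.
class Calculator
{
    public function add($a, $b)
    {
        return $a + $b;
    }
}
```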
To do TDD, it is necessary to write the tests before implementing the function; if the tests are written after the implementation has started, it is not TDD, it is just testing. If we start implementing the application without testing and finish it, or if we start creating unit tests during the process, we are doing classic testing and not getting the benefits of TDD. Developing functions without testing first, the abstract idea of the domain problem we have in mind can be wrong, or it may be clear at the start but change during the development process, or concepts may get mixed up. If we write the tests after that, we are only checking whether all the ideas in our head were correct once the implementation is finished, so we will probably have to change some methods or even whole functionalities after having spent time coding. Obviously, testing is always better than not testing, but doing TDD is still better than just classic testing.

Why should I use TDD?

TDD is the answer to questions such as: Where shall I begin? How can I do it? How can I write code that can be modified without breaking anything? How can I know what I have to implement?

The goal is not to write many unit tests without sense, but to design the application properly following the requirements. In TDD, we do not think about implementing functions; we think about good examples of functions related to the domain problem, in order to remove the ambiguity created by the domain problem. In other words, by doing TDD we should reproduce a specific function or use case in X examples, until we have enough examples to describe the function or task without ambiguity or misinterpretation.

TDD can be the best way to document your application.

Using other software development methodologies, we start by thinking about how the architecture is going to be, what pattern is going to be used, how the communication between microservices is going to be, and so on, but what happens if, once we have all this planned, we realize that it is not necessary? How much time will pass until we realize that? How much effort and money will we have spent?

TDD defines the architecture of our application by creating little examples over many iterations until we realize what the architecture is; the examples slowly show us the steps to follow in order to define the best structures, patterns, or tools to use, avoiding expenditure of resources during the first stages of our application. This does not mean that we work without an architecture; obviously, we have to know whether our application is going to be a website or a mobile app, and use a proper framework. What is the interoperability in the application going to be? In our case, it will be an application based on microservices, so this will give us support to start creating the first unit tests. The architectures that we remove are the architectures on top of the architecture; in other words, the usual guidelines for developing an application. TDD will produce an architecture without ambiguity from unit testing.

TDD is not a cure-all; in other words, it does not give the same results to a senior developer as to a junior developer, but it is useful for the entire team.
Let's look at some advantages of using TDD:

Code reuse: Every functionality is created with only the necessary code to pass the tests in the second stage (Green). This lets you see whether other functions use the same code structure or parts of a specific function, helping you reuse the code you wrote previously.

Teamwork is easier: It allows you to be confident in your team colleagues. Some architects or senior developers do not trust developers with little experience, and they need to check their code before committing the changes, creating a bottleneck at that point; TDD helps build trust in less experienced developers.

Increases communication between team colleagues: Communication is more fluent, and the team shares its knowledge about the project as reflected in the unit tests.

Avoids overdesigning the application in the first stages: As we said before, doing TDD gives you an overview of the application little by little, avoiding the creation of useless structures or patterns in your project which you might throw away in later stages.

Unit tests are the best documentation: The best way to get a good view of a specific functionality is to read its unit tests. They help you understand how it works better than human words.

Allows discovering more use cases in the design stage: With every test you have to create, you understand better how the functionality should work and all the possible states the functionality can have.

Increases the feeling of a job well done: With every commit of your code, you will have the feeling that it was done properly, because the rest of the unit tests pass without errors, so you will not be worried about breaking other functionalities.

Increases the software quality: During the refactoring step, we spend our effort on making the code more efficient and maintainable, checking that the whole project still works properly after the changes.

TDD algorithm

The technical concepts and steps to follow in the TDD algorithm are easy and clear, and the proper way of making it happen improves with practice. There are only three steps, called red, green, and refactor:

Red – Writing the unit tests

It is possible to write a test even when the code is not written yet; you just need to think about whether it is possible to write a specification before implementing it. So, in this first step you should consider that the unit test you start writing is not exactly a unit test; it is more like an example or specification of the functionality. In TDD, this first example or specification is not immovable; in other words, the unit test can be modified in the future.

Before starting to write the first unit test, it is necessary to think about how the Software Under Test (SUT) is going to be. We need to think about how the SUT code will look and how we would check that it works the way we want it to. The way TDD works drives us to first design what is most comfortable and clear, as long as it fits the requirements.

Green – Make the code work

Once the example is written, we have to code the minimum needed to make it pass the test; in other words, to set the unit test to green. It does not matter if the code is ugly and not optimized; that will be our task in the next step and in later iterations. In this step, the important thing is to write only the code necessary for the requirements, without unnecessary things. This does not mean writing without thinking about the functionality, but thinking about it so as to be efficient.
It looks easy, but you will realize that you write extra code the first few times. If you concentrate on this step, new questions will appear about the SUT's behavior with different inputs, but you should be strong and avoid writing extra code for other functionalities related to the current one. Instead of coding them, take notes so you can convert them into functionalities in the next iterations.

Refactor – Eliminate redundancy

Refactoring is not the same as rewriting code. You should be able to change the design without changing the behavior. In this step, you should remove duplication in your code and check whether the code matches the principles of good practice, thinking about the efficiency, clarity, and future maintainability of the code. This part depends on the experience of each developer.

The key to good refactoring is taking small steps. To refactor a functionality, the best way is to change a little part and then execute all the available tests; if they pass, continue with another little part, until you are happy with the result.

Behavior-driven development

Behavior-Driven Development (BDD) is a process that broadens the TDD technique and mixes it with other design ideas and business analysis provided to the developers, in order to improve software development.

In BDD, we test scenarios and the behavior of the classes needed to meet those scenarios, which can be composed of many classes. It is very useful to use a DSL in order to have a common language that can be used by the customer, project owner, business analyst, and developers. The goal is to have a ubiquitous language.

What is BDD?

As we said before, BDD is an Agile technique based on TDD and ATDD, promoting collaboration between the entire project team. The goal of BDD is that the entire team understands what the customer wants, and that the customer knows what the rest of the team understood from their specifications. Most of the time, when a project starts, the developers don't have the same point of view as the customer, and during the development process the customer realizes that maybe they did not explain the requirements well, or that the developers did not understand them properly, which adds time spent changing the code to meet the customer's needs. So, BDD means writing test cases in human language, using rules, or in a ubiquitous language, so that the customer and developers can understand them. It also defines a DSL for the tests.

How does it work?

It is necessary to define the features as user stories (we will explain what these are in the ATDD section of this article) and their acceptance criteria. Once the user story is defined, we have to focus on the possible scenarios, which describe the project behavior for a concrete user or situation, using the DSL. The steps are of the form: Given [context], When [event occurs], Then [outcome]. To sum up, the scenarios defined for a user story provide the acceptance criteria used to check whether the feature is done (a hypothetical example of such a scenario follows below).

Acceptance Test-Driven Development

Perhaps the most important methodology in a project is Acceptance Test-Driven Development (ATDD), also called Story Test-Driven Development (STDD); it is TDD, but at a different level.

The acceptance (or customer) tests are the written criteria for the project meeting the business requirements that the customer demands. They are examples (like the examples in TDD) written by the project owner. They are the start of development for each iteration, the bridge between Scrum and agile development.
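Here is that hypothetical scenario, written in Gherkin, the DSL used by BDD tools such as Behat in the PHP ecosystem. The feature and its steps are invented for illustration only; they are not part of the book's project:

```gherkin
Feature: User login
  In order to access my account
  As a registered user
  I want to log in with my credentials

  Scenario: Logging in with a valid password
    Given a registered user "user@example.com" with password "secret"
    When the user submits the login form with those credentials
    Then the user should be redirected to their dashboard
```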
In ATDD, we start the implementation of our project in a way different from the traditional methodologies. The business requirements written in human language are replaced by executable specifications agreed upon by the team members and the customer. This is not about replacing the whole documentation, but only a part of the requirements.

The advantages of using ATDD are the following:

Real examples and a common language for the entire team to understand the domain

It allows identifying the domain rules properly

It is possible to know whether a user story is finished in each iteration

The workflow works from the first steps

Development does not start until the tests have been defined and accepted by the team

ATDD algorithm

The algorithm of ATDD is like that of TDD, but it reaches more people than just the developers; in other words, when doing ATDD, the tests for each story are written in a meeting that includes the project owners, developers, and QA technicians, because the entire team must understand what needs to be done and why, so that they can see whether it is what the code should do.

The ATDD cycle is depicted in the following diagram:

Discuss

The starting point of the ATDD algorithm is the discussion. In this first step, the business holds a meeting with the customer to clarify how the application should work, and the analyst should create the user stories from that conversation. The participants should also be able to explain the conditions of satisfaction of every user story, so that they can be translated into examples. By the end of the meeting, the examples should be clear and concise, so that we get a list of examples for the user stories covering all the customer's needs, reviewed and understood by them. The entire team will also have a project overview, in order to understand the business value of each user story, and if a user story is too big, it can be divided into smaller user stories, the first of which is taken for the first iteration of this process.

Distill

High-level acceptance tests are written by the customer and the development team. In this step, the writing of the test cases obtained from the examples in the discussion step begins, and the entire team can take part in the discussion and help clarify the information or specify the real needs. The tests should cover all the examples discovered in the discussion step, and extra tests can be added bit by bit during this process as we understand the functionality better. At the end of this step, we obtain the necessary tests written in human language, so that the entire team (including the customer) can understand what will be done in the next step. These tests can also be used as documentation.

Develop

In this step, the development of the acceptance test cases is begun by the development team and the project owner. The methodology to follow in this step is the same as in TDD: the developers should create a test and watch it fail (Red), and then develop the minimum amount of code to make it pass (Green). Once the acceptance tests are green, the work should be verified and tested so that it is ready to be delivered. During this process, the developers may find new scenarios that need to be added to the tests, or even, if it needs a large amount of work, something that should be pushed back to the user story. At the end of this step, we will have software that passes the acceptance tests, and maybe more comprehensive tests as well.
Demo

The created functionality is shown by running the acceptance test cases and manually exploring the features of the new functionality. After the demonstration, the team discusses whether the user story was done properly and meets the product owner's needs, and decides whether it can continue with the next story.

Tools

After knowing more about TDD and BDD, it is time to explain a few tools you can use in your development workflow. There are a lot of tools available, but we will only explain the most commonly used ones.

Composer

Composer is a PHP tool used to manage software dependencies. You only need to declare the libraries needed by your project, and Composer will manage them, installing and updating them when necessary. This tool has only a few requirements: if you have PHP 5.3.2+, you are ready to go. In the case of a missing requirement, Composer will warn you.

You could install this dependency manager on your development machine, but since we are using Docker, we are going to install it directly in our PHP-FPM containers. The installation of Composer in Docker is very easy; you only need to add the following rule to the Dockerfile:

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin/ --filename=composer

PHPUnit

Another tool we need for our project is PHPUnit, a unit test framework. As before, we will be adding this tool to our PHP-FPM containers to keep our development machine clean. If you are wondering why we are not installing anything on our development machine except for Docker, the reason is clear: having everything in the containers will help you avoid any conflict with other projects and gives you the flexibility to change versions without worrying too much.

Add the following RUN command to your PHP-FPM Dockerfile, and you will have the latest PHPUnit version installed and ready to use:

RUN curl -sSL https://phar.phpunit.de/phpunit.phar -o /usr/bin/phpunit && chmod +x /usr/bin/phpunit

Now that we have all our requirements, it is time to install our PHP framework and start doing some TDD work. Later, we will continue updating our Docker environment with new tools.

We chose Lumen for our example. Please feel free to adapt all the examples to your favorite framework. Our source code will live inside our containers, but at this point of development we do not want immutable containers. We want every change we make to our code to be available instantaneously in our containers, so we will be using a container as a storage volume.

To create a container with our source and use it as a storage volume, we only need to edit our docker-compose.yml and create one source container for each microservice, as follows:

source_battle:
  image: nginx:stable
  volumes:
    - ../source/battle:/var/www/html
  command: "true"

The preceding piece of code creates a container named source_battle, and it stores our battle source (located at ../source/battle relative to the docker-compose.yml path). Once we have our source container available, we can edit each of our services and assign it a volume. For instance, we can add the following lines to our microservice_battle_fpm and microservice_battle_nginx container descriptions:

volumes_from:
  - source_battle

Our battle source will be available inside our source container at the path /var/www/html, and the remaining step to install Lumen is a simple Composer execution.
First, you need to be sure that your infrastructure is up, with a simple command:

$ docker-compose up

The preceding command spins up our containers and outputs the log to the standard IO. Now that we are sure that everything is up and running, we need to enter our PHP-FPM containers and install Lumen.

If you need to know the names assigned to each of your containers, you can run $ docker ps and copy the container name. As an example, we are going to enter the battle PHP-FPM container with the following command:

$ docker exec -it docker_microservice_battle_fpm_1 /bin/bash

The preceding command opens an interactive shell in your container, so you can do anything you want; let's install Lumen with a single command:

# cd /var/www/html && composer create-project --prefer-dist laravel/lumen .

Repeat the preceding commands for each of your microservices. Now you have everything ready to start writing unit tests and coding your application.

Summary

In this article, you learned about test-driven development, behavior-driven development, acceptance test-driven development, and PHPUnit.

Resources for Article:

Further resources on this subject:

Running Simpletest and PHPUnit [Article]

Understanding PHP basics [Article]

The Multi-Table Query Generator using phpMyAdmin and MySQL [Article]

article-image-building-extensible-chat-bot-using-javascript-yaml
Andrea Falzetti
03 Jan 2017
6 min read
Save for later

Building an extensible Chat Bot using JavaScript & YAML

Andrea Falzetti
03 Jan 2017
6 min read
In this post, I will share with you my experience in designing and coding an extensible chat bot using YAML and JavaScript. I will show you that it's not always necessary to use AI, ML, or NLP to make a great bot. This won't be a step-by-step guide. My goal is to give you an overview of a project like this and inspire you to build your own.

Getting Started

The concept I will offer you consists of creating a collection of YAML scripts that represent all the possible conversations the bot can handle. When a new conversation starts, the YAML code gets converted to a JavaScript object that we can easily manage in Node.js. I have used WebSockets, implemented with socket.io, to transmit the messages back and forth.

First, we need to agree on a set of commands that can be used to create the YAML scripts; let's start with some essentials:

messages: Send an array of messages from the bot to the user

input: Ask the user for some data

next: The action we want to perform next within the script

script: The next script that will follow in the conversation with the user

when, empty, not-empty: Conditional statements

YAML script example

A sample script will look like the following:

Link to the gist

Then we need to implement those commands in our bot engine.

The Bot engine

Using the WebSocket connection, the client sends the bot the name of the script it wants to play; the bot engine loads it and converts it to a JavaScript object. If the client doesn't know which script to play first, it can call a default script called "hello" that introduces the bot. In each script, the first action to run is the one with index 0. As per the example above, the first thing the bot does is send two messages to the user:

With the next command, we jump to the next block, index = 1. Again, we send another message, immediately followed by an input request:

At this point, the client receives the input request and allows the user to type an answer. Once the value is submitted, the client sends the data back to the bot, which appends the information to a key-value store, where all the data received from the user lives and is accessible via a key (for example, user_name).

Using the when statement, we define conditional branches of the conversation. This is often the case when dealing with data validation. In the example, we want to make sure we get a valid name from the user, so if the value received is empty, we jump back in the script and ask for it again, continuing with the following step only when the name is valid. Finally, the bot sends a message, this time containing a variable previously received and stored in the key-value store:

In the next script, we will see how to handle multiple choices and buttons, to allow the user to make a decision:

Link to the gist

The conversation starts with a message from the bot, followed by an input request with the input type set to buttons_single_select, which the client translates to "display multiple buttons", using an array of options received with the input request:

When the user clicks on one of the options, the UI sends the user's choice back to the bot, which matches it against one of the existing branches. Once the correct branch is found, the bot engine looks for the next command to run – in this case another input request, this time expecting to receive an image from the client:

Once the file has been sent, the conversation ends, just after the bot sends a last message confirming that the file was uploaded successfully.
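Since the scripts themselves are only linked as gists above, here is a minimal, hypothetical script showing how the commands described earlier (messages, input, next, and when with empty/not-empty) might fit together. The exact structure and key names are illustrative assumptions, not the content of the linked gists:

```yaml
# A minimal, hypothetical conversation script (illustrative only).
hello:
  0:
    messages:
      - "Hi there!"
      - "I'm a demo bot."
    next: 1
  1:
    messages:
      - "What's your name?"
    input:
      key: user_name
    next: 2
  2:
    when:
      user_name:
        empty:
          next: 1        # invalid: ask for the name again
        not-empty:
          next: 3        # valid: move on
  3:
    messages:
      - "Nice to meet you, {user_name}!"
    next: end
```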
Using YAML gives you the flexibility to build many different conversations, and it also allows you to easily implement A/B testing of your conversation.

Implementing the bot engine with JavaScript / Node.js

To build something able to play the YAML scripts above, you need to iterate over the script commands until you find an explicit end command. It's very important to keep the index of the current command in memory, so that you can move on as soon as the current task completes. When you meet a new command, you should pass it to a resolver that knows what each command does and is able to run the corresponding portion of code or function. Additionally, you will need a function that listens to the input received from the clients, validating it and saving it into the key-value store.

Extending the command set

This approach allows you to create a large set of commands that do different things, including queries, Ajax requests, API calls to external services, and so on. You can combine your command with a when statement so that a callback or promise can continue down its specific branch depending on the result you got.

Conclusion

If you are wondering where the demo images come from, they are screenshots of a React view built with the help of devices.css, a CSS package that provides the flat shapes of iPhone, Android, and Windows phones in different colors using pure CSS only. I built this view to test the bot, using socket.io-client for the WebSockets and React for the UI. This is not just a proof of concept; I am working on a project where we have currently implemented this logic. I invite you to review it, think about it, and leave feedback. Thanks!

About the author

Andrea Falzetti is an enthusiastic full-stack developer based in London. He has been designing and developing web applications for over 5 years. He is currently focused on Node.js, React, microservices architecture, serverless, conversational UI, chat bots, and machine learning. He is currently working at Activate Media, where his role is to estimate, design, and lead the development of web and mobile platforms.
article-image-getting-started-spring-boot
Packt
03 Jan 2017
17 min read
Save for later

Getting Started with Spring Boot

Packt
03 Jan 2017
17 min read
In this article by, Greg Turnquist, author of the book, Learning Spring Boot – Second Edition, we will cover the following topics: Introduction Creating a bare project using http://start.spring.io Seeing how to run our app straight inside our IDE with no stand alone containers (For more resources related to this topic, see here.) Perhaps you've heard about Spring Boot? It's only cultivated the most popular explosion in software development in years. Clocking millions of downloads per month, the community has exploded since it's debut in 2013. I hope you're ready for some fun, because we are going to take things to the next level as we use Spring Boot to build a social media platform. We'll explore its many valuable features all the way from tools designed to speed up development efforts to production-ready support as well as cloud native features. Despite some rapid fire demos you might have caught on YouTube, Spring Boot isn't just for quick demos. Built atop the de facto standard toolkit for Java, the Spring Framework, Spring Boot will help us build this social media platform with lightning speed AND stability. In this article, we'll get a quick kick off with Spring Boot using Java the programming language. Maybe that makes you chuckle? People have been dinging Java for years as being slow, bulky, and not the means for agile shops. Together, we'll see how that is not the case. At any time, if you're interested in a more visual medium, feel free to checkout my Learning Spring Boot [Video] at https://www.packtpub.com/application-development/learning-spring-boot-video. What is step #1 when we get underway with a project? We visit Stack Overflow and look for an example project to build a project! Seriously, the amount of time spent adapting another project's build file, picking dependencies, and filling in other details about our project adds up. At the Spring Initializr (http://start.spring.io), we can enter minimal details about our app, pick our favorite build system, the version of Spring Boot we wish to use, and then choose our dependencies off a menu. Click the Download button, and we have a free standing, ready-to-run application. In this article, let's take a quick test drive and build small web app. We can start by picking Gradle from the dropdown. Then, select 1.4.1.BUILD-SNAPSHOT as the version of Spring Boot we wish to use. Next, we need to pick our application's coordinates: Group - com.greglturnquist.learningspringboot Artifact - learning-spring-boot Now comes the fun part. We get to pick the ingredients for our application like picking off a delicious menu. If we start typing, for example, "Web", into the Dependencies box, we'll see several options appear. To see all the available options, click on the Switch to the full version link toward the bottom. There are lots of overrides, such as switching from JAR to WAR, or using an older version of Java. You can also pick Kotlin or Groovy as the primary language for your application. For starters, in this day and age, there is no reason to use anything older than Java 8. And JAR files are the way to go. WAR files are only needed when applying Spring Boot to an old container. To build our social media platform, we need a few ingredients as shown: Web (embedded Tomcat + Spring MVC) WebSocket JPA (Spring Data JPA) H2 (embedded SQL data store) Thymeleaf template engine Lombok (to simplify writing POJOs) The following diagram shows an overview of these ingredients: With these items selected, click on Generate Project. 
There are LOTS of other tools that leverage this site. For example, IntelliJ IDEA lets you create a new project inside the IDE, giving you the same options shown here. It invokes the web site's REST API, and imports your new project. You can also interact with the site via cURL or any other REST-based tool. Now let's unpack that ZIP file and see what we've got: a build.gradle build file a Gradle wrapper, so there's no need to install Gradle a LearningSpringBootApplication.java application class an application.properties file a LearningSpringBootApplicationTests.java test class We built an empty Spring Boot project. Now what? Before we sink our teeth into writing code, let's take a peek at the build file. It's quite terse, but carries some key bits. Starting from the top: buildscript { ext { springBootVersion = '1.4.1.BUILD-SNAPSHOT' } repositories { mavenCentral() maven { url "https://repo.spring.io/snapshot" } maven { url "https://repo.spring.io/milestone" } } dependencies { classpath("org.springframework.boot:spring-boot-gradle- plugin:${springBootVersion}") } } This contains the basis for our project: springBootVersion shows us we are using Spring Boot 1.4.1.BUILD-SNAPSHOT The Maven repositories it will pull from are listed next Finally, we see the spring-boot-gradle-plugin, a critical tool for any Spring Boot project The first piece, the version of Spring Boot, is important. That's because Spring Boot comes with a curated list of 140 third party library versions extending well beyond the Spring portfolio and into some of the most commonly used libraries in the Java ecosystem. By simply changing the version of Spring Boot, we can upgrade all these libraries to newer versions known to work together. There is an extra project, the Spring IO Platform (https://spring.io/platform), which includes an additional 134 curated versions, bringing the total to 274. The repositories aren't as critical, but it's important to add milestones and snapshots if fetching a library not released to Maven central or hosted on some vendor's local repository. Thankfully, the Spring Initializr does this for us based on the version of Spring Boot selected on the site. Finally, we have the spring-boot-gradle-plugin (and there is a corresponding spring-boot-maven-plugin for Maven users). This plugin is responsible for linking Spring Boot's curated list of versions with the libraries we select in the build file. That way, we don't have to specify the version number. Additionally, this plugin hooks into the build phase and bundle our application into a runnable über JAR, also known as a shaded or fat JAR. Java doesn't provide a standardized way to load nested JAR files into the classpath. Spring Boot provides the means to bundle up third-party JARs inside an enclosing JAR file and properly load them at runtime. Read more at http://docs.spring.io/spring-boot/docs/1.4.1.BUILD-SNAPSHOT/reference/htmlsingle/#executable-jar. With an über JAR in hand, we only need put it on a thumb drive, and carry it to another machine, to a hundred virtual machines in the cloud or your data center, or anywhere else, and it simply runs where we can find a JVM. 
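If you want to try that out, a build followed by a plain java -jar is all it takes. The commands below are a sketch; the exact JAR name depends on your artifact ID and version (assuming the default 0.0.1-SNAPSHOT version generated by the Spring Initializr):

```
$ ./gradlew build
$ java -jar build/libs/learning-spring-boot-0.0.1-SNAPSHOT.jar
```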
Peeking a little further down in build.gradle, we can see the plugins that are enabled by default: apply plugin: 'java' apply plugin: 'eclipse' apply plugin: 'spring-boot' The java plugin indicates the various tasks expected for a Java project The eclipse plugin helps generate project metadata for Eclipse users The spring-boot plugin is where the actual spring-boot-gradle-plugin is activated An up-to-date copy of IntelliJ IDEA can ready a plain old Gradle build file fine without extra plugins. Which brings us to the final ingredient used to build our application: dependencies. Spring Boot starters No application is complete without specifying dependencies. A valuable facet of Spring Boot are its virtual packages. These are published packages that don't contain any code, but instead simply list other dependencies. The following list shows all the dependencies we selected on the Spring Initializr site: dependencies { compile('org.springframework.boot:spring-boot-starter-data-jpa') compile('org.springframework.boot:spring-boot-starter- thymeleaf') compile('org.springframework.boot:spring-boot-starter-web') compile('org.springframework.boot:spring-boot-starter- websocket') compile('org.projectlombok:lombok') runtime('com.h2database:h2') testCompile('org.springframework.boot:spring-boot-starter-test') } If you'll notice, most of these packages are Spring Boot starters: spring-boot-starter-data-jpa pulls in Spring Data JPA, Spring JDBC, Spring ORM, and Hibernate spring-boot-starter-thymeleaf pulls in Thymeleaf template engine along with Spring Web and the Spring dialect of Thymeleaf spring-boot-starter-web pulls in Spring MVC, Jackson JSON support, embedded Tomcat, and Hibernate's JSR-303 validators spring-boot-starter-websocket pulls in Spring WebSocket and Spring Messaging These starter packages allow us to quickly grab the bits we need to get up and running. Spring Boot starters have gotten so popular that lots of other third party library developers are crafting their own. In addition to starters, we have three extra libraries: Project Lombok makes it dead simple to define POJOs without getting bogged down in getters, setters, and others details. H2 is an embedded database allowing us to write tests, noodle out solutions, and get things moving before getting involved with an external database. spring-boot-starter-test pulls in Spring Boot Test, JSON Path, JUnit, AssertJ, Mockito, Hamcrest, JSON Assert, and Spring Test, all within test scope. The value of this last starter, spring-boot-starter-test, cannot be overstated. With a single line, the most powerful test utilities are at our fingertips, allowing us to write unit tests, slice tests, and full blown our-app-inside-embedded-Tomcat tests. It's why this starter is included in all projects without checking a box on the Spring Initializr site. Now to get things off the ground, we need to shift focus to the tiny bit of code written for us by the Spring Initializr. Running a Spring Boot application The fabulous http://start.spring.io website created a tiny class, LearningSpringBootApplication as shown in the following code: package com.greglturnquist.learningspringboot; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class LearningSpringBootApplication { public static void main(String[] args) { SpringApplication.run( LearningSpringBootApplication.class, args); } } This tiny class is actually a fully operational web application! 
The @SpringBootApplication annotation tells Spring Boot, when launched, to scan recursively for Spring components inside this package and register them. It also tells Spring Boot to enable autoconfiguration, a process where beans are automatically created based on classpath settings, property settings, and other factors. Finally, it indicates that this class itself can be a source for Spring bean definitions. It holds a public static void main(), a simple method to run the application. There is no need to drop this code into an application server or servlet container. We can just run it straight up, inside our IDE. The amount of time saved by this feature, over the long haul, adds up fast. SpringApplication.run() points Spring Boot at the leap off point. In this case, this very class. But it's possible to run other classes. This little class is runnable. Right now! In fact, let's give it a shot. . ____ _ __ _ _ /\ / ___'_ __ _ _(_)_ __ __ _ ( ( )___ | '_ | '_| | '_ / _` | \/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |___, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v1.4.1.BUILD-SNAPSHOT) 2016-09-18 19:52:44.214: Starting LearningSpringBootApplication on ret... 2016-09-18 19:52:44.217: No active profile set, falling back to defaul... 2016-09-18 19:52:44.513: Refreshing org.springframework.boot.context.e... 2016-09-18 19:52:45.785: Bean 'org.springframework.transaction.annotat... 2016-09-18 19:52:46.188: Tomcat initialized with port(s): 8080 (http) 2016-09-18 19:52:46.201: Starting service Tomcat 2016-09-18 19:52:46.202: Starting Servlet Engine: Apache Tomcat/8.5.5 2016-09-18 19:52:46.323: Initializing Spring embedded WebApplicationCo... 2016-09-18 19:52:46.324: Root WebApplicationContext: initialization co... 2016-09-18 19:52:46.466: Mapping servlet: 'dispatcherServlet' to [/] 2016-09-18 19:52:46.469: Mapping filter: 'characterEncodingFilter' to:... 2016-09-18 19:52:46.470: Mapping filter: 'hiddenHttpMethodFilter' to: ... 2016-09-18 19:52:46.470: Mapping filter: 'httpPutFormContentFilter' to... 2016-09-18 19:52:46.470: Mapping filter: 'requestContextFilter' to: [/*] 2016-09-18 19:52:46.794: Building JPA container EntityManagerFactory f... 2016-09-18 19:52:46.809: HHH000204: Processing PersistenceUnitInfo [ name: default ...] 2016-09-18 19:52:46.882: HHH000412: Hibernate Core {5.0.9.Final} 2016-09-18 19:52:46.883: HHH000206: hibernate.properties not found 2016-09-18 19:52:46.884: javassist 2016-09-18 19:52:46.976: HCANN000001: Hibernate Commons Annotations {5... 2016-09-18 19:52:47.169: Using dialect: org.hibernate.dialect.H2Dialect 2016-09-18 19:52:47.358: HHH000227: Running hbm2ddl schema export 2016-09-18 19:52:47.359: HHH000230: Schema export complete 2016-09-18 19:52:47.390: Initialized JPA EntityManagerFactory for pers... 2016-09-18 19:52:47.628: Looking for @ControllerAdvice: org.springfram... 2016-09-18 19:52:47.702: Mapped "{[/error]}" onto public org.springfra... 2016-09-18 19:52:47.703: Mapped "{[/error],produces=[text/html]}" onto... 2016-09-18 19:52:47.724: Mapped URL path [/webjars/**] onto handler of... 2016-09-18 19:52:47.724: Mapped URL path [/**] onto handler of type [c... 2016-09-18 19:52:47.752: Mapped URL path [/**/favicon.ico] onto handle... 2016-09-18 19:52:47.778: Cannot find template location: classpath:/tem... 2016-09-18 19:52:48.229: Registering beans for JMX exposure on startup 2016-09-18 19:52:48.278: Tomcat started on port(s): 8080 (http) 2016-09-18 19:52:48.282: Started LearningSpringBootApplication in 4.57... 
Scrolling through the output, we can see several things: The banner at the top gives us a readout of the version of Spring Boot. (BTW, you can create your own ASCII art banner by creating either banner.txt or banner.png into src/main/resources/) Embedded Tomcat is initialized on port 8080, indicating it's ready for web requests Hibernate is online with the H2 dialect enabled A few Spring MVC routes are registered, such as /error, /webjars, a favicon.ico, and a static resource handler And the wonderful Started LearningSpringBootApplication in 4.571 seconds message Spring Boot uses embedded Tomcat, so there's no need to install a container on our target machine. Non-web apps don't even require Apache Tomcat. The JAR itself is the new container that allows us to stop thinking in terms of old fashioned servlet containers. Instead, we can think in terms of apps. All these factors add up to maximum flexibility in application deployment. How does Spring Boot use embedded Tomcat among other things? As mentioned earlier, it has autoconfiguration meaning it has Spring beans that are created based on different conditions. When Spring Boot sees Apache Tomcat on the classpath, it creates an embedded Tomcat instance along with several beans to support that. When it spots Spring MVC on the classpath, it creates view resolution engines, handler mappers, and a whole host of other beans needed to support that, letting us focus on coding custom routes. With H2 on the classpath, it spins up an in-memory, embedded SQL data store. Spring Data JPA will cause Spring Boot to craft an EntityManager along with everything else needed to start speaking JPA, letting us focus on defining repositories. At this stage, we have a running web application, albeit an empty one. There are no custom routes and no means to handle data. But we can add some real fast. Let's draft a simple REST controller: package com.greglturnquist.learningspringboot; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RequestParam; import org.springframework.web.bind.annotation.RestController; @RestController public class HomeController { @GetMapping public String greeting(@RequestParam(required = false, defaultValue = "") String name) { return name.equals("") ? "Hey!" : "Hey, " + name + "!"; } } Let's examine this tiny REST controller in detail: The @RestController annotation indicates that we don't want to render views, but instead write the results straight into the response body. @GetMapping is Spring's shorthand annotation for @RequestMapping(method = RequestMethod.GET, …[]). In this case, it defaults the route to "/". Our greeting() method has one argument: @RequestParam(required = false, defaultValue = "") String name. It indicates that this value can be requested via an HTTP query (?name=Greg), the query isn't required, and in case it's missing, supply an empty string. Finally, we are returning one of two messages depending on whether or not name is empty using Java's classic ternary operator. If we re-launch the LearningSpringBootApplication in our IDE, we'll see a new entry in the console. 2016-09-18 20:13:08.149: Mapped "{[],methods=[GET]}" onto public java.... We can then ping our new route in the browser at http://localhost:8080 and http://localhost:8080?name=Greg. Try it out! That's nice, but since we picked Spring Data JPA, how hard would it be to load some sample data and retrieve it from another route? (Spoiler alert: not hard at all.) 
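Before we do, here is a quick way to exercise the new route from the command line instead of the browser. This is a hypothetical curl session; the replies follow directly from the greeting() method shown above:

```
$ curl http://localhost:8080
Hey!
$ curl "http://localhost:8080?name=Greg"
Hey, Greg!
```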
We can start out by defining a simple Chapter entity to capture book details as shown in the following code: package com.greglturnquist.learningspringboot; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.Id; import lombok.Data; @Data @Entity public class Chapter { @Id @GeneratedValue private Long id; private String name; private Chapter() { // No one but JPA uses this. } public Chapter(String name) { this.name = name; } } This little POJO let's us the details about the chapter of a book as follows: The @Data annotation from Lombok will generate getters, setters, a toString() method, a constructor for all required fields (those marked final), an equals() method, and a hashCode() method. The @Entity annotation flags this class as suitable for storing in a JPA data store. The id field is marked with JPA's @Id and @GeneratedValue annotations, indicating this is the primary key, and that writing new rows into the corresponding table will create a PK automatically. Spring Data JPA will by default create a table named CHAPTER with two columns, ID, and NAME. The key field is name, which is populated by the publicly visible constructor. JPA requires a no-arg constructor, so we have included one, but marked it private so no one but JPA may access it. To interact with this entity and it's corresponding table in H2, we could dig in and start using the autoconfigured EntityManager supplied by Spring Boot. By why do that, when we can declare a repository-based solution? To do so, we'll create an interface defining the operations we need. Check out this simple interface: package com.greglturnquist.learningspringboot; import org.springframework.data.repository.CrudRepository; public interface ChapterRepository extends CrudRepository<Chapter, Long> { } This declarative interface creates a Spring Data repository as follows: CrudRepository extends Repository, a Spring Data Commons marker interface that signals Spring Data to create a concrete implementation while also capturing domain information. CrudRepository, also from Spring Data Commons, has some pre-defined CRUD operations (save, delete, deleteAll, findOne, findAll). It specifies the entity type (Chapter) and the type of the primary key (Long). Spring Data JPA will automatically wire up a concrete implementation of our interface. Spring Data doesn't engage in code generation. Code generation has a sordid history of being out of date at some of the worst times. Instead, Spring Data uses proxies and other mechanisms to support all these operations. Never forget - the code you don't write has no bugs. With Chapter and ChapterRepository defined, we can now pre-load the database, as shown in the following code: package com.greglturnquist.learningspringboot; import org.springframework.boot.CommandLineRunner; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @Configuration public class LoadDatabase { @Bean CommandLineRunner init(ChapterRepository repository) { return args -> { repository.save( new Chapter("Quick start with Java")); repository.save( new Chapter("Reactive Web with Spring Boot")); repository.save( new Chapter("...and more!")); }; } } This class will be automatically scanned in by Spring Boot and run in the following way: @Configuration marks this class as a source of beans. @Bean indicates that the return value of init() is a Spring Bean. In this case, a CommandLineRunner. 
Spring Boot runs all CommandLineRunner beans after the entire application is up and running. This bean definition is requesting a copy of the ChapterRepository. Using Java 8's ability to coerce the args → {} lambda function into a CommandLineRunner, we are able to write several save() operations, pre-loading our data. With this in place, all that's left is write a REST controller to serve up the data! package com.greglturnquist.learningspringboot; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RestController; @RestController public class ChapterController { private final ChapterRepository repository; public ChapterController(ChapterRepository repository) { this.repository = repository; } @GetMapping("/chapters") public Iterable<Chapter> listing() { return repository.findAll(); } } This controller is able to serve up our data as follows: @RestController indicates this is another REST controller. Constructor injection is used to automatically load it with a copy of the ChapterRepository. With Spring, if there is a only one constructor call, there is no need to include an @Autowired annotation. @GetMapping tells Spring that this is the place to route /chapters calls. In this case, it returns the results of the findAll() call found in CrudRepository. If we re-launch our application and visit http://localhost:8080/chapters, we can see our pre-loaded data served up as a nicely formatted JSON document: It's not very elaborate, but this small collection of classes has helped us quickly define a slice of functionality. And if you'll notice, we spent zero effort configuring JSON converters, route handlers, embedded settings, or any other infrastructure. Spring Boot is designed to let us focus on functional needs, not low level plumbing. Summary So in this article we introduced the Spring Boot concept in brief and we rapidly crafted a Spring MVC application using the Spring stack on top of Apache Tomcat with little configuration from our end. Resources for Article: Further resources on this subject: Writing Custom Spring Boot Starters [article] Modernizing our Spring Boot app [article] Introduction to Spring Framework [article]

article-image-getting-started-aurelia
Packt
03 Jan 2017
28 min read
Save for later

Getting Started with Aurelia

Packt
03 Jan 2017
28 min read
In this article by Manuel Guilbault, the author of the book Learning Aurelia, we will see how Aurelia is such a modern framework. The brainchild of Rob Eisenberg, father of Durandal, it is based on cutting-edge Web standards and built on modern software architecture concepts and ideas, offering a powerful toolset and an awesome developer experience.

(For more resources related to this topic, see here.)

This article will teach you how Aurelia works, and how you can use it to build real-world applications from A to Z. In fact, while reading the article and following the examples, that's exactly what you will do. You will start by setting up your development environment and creating the project, then I will walk you through concepts such as routing, templating, data-binding, automated testing, internationalization, and bundling. We will discuss application design, communication between components, and integration of third parties. We will cover every topic most modern, real-world single-page applications require.

In this first article, we will start by defining some terms that will be used throughout the article. We will quickly cover some core Aurelia concepts. Then we will take a look at the core Aurelia libraries and see how they interact with each other to form a complete, full-featured framework. We will also see what tools are needed to develop an Aurelia application and how to install them. Finally, we will start creating our application and explore its global structure.

Terminology

As this article is about a JavaScript framework, JavaScript plays a central role in it. If you are not completely up to date with the terminology, which has changed a lot in the last few years, let me clear things up.

JavaScript (or JS) is a dialect, or implementation, of the ECMAScript (ES) standard. It is not the only implementation, but it definitely is the most popular. In this article, I will use the JS acronym to talk about actual JavaScript code or code files, and the ES acronym when talking about an actual version of the ECMAScript standard.

Like everything in computer programming, the ECMAScript standard evolves over time. At the moment of writing, the latest version is ES2016, published in June 2016. It was originally called ES7, but TC39, the committee drafting the specification, decided to change their approval and naming model, hence the new name.

The previous version, named ES2015 (ES6) before the naming model changed, was published in June 2015 and was a big step forward compared to the version before it. This older version, named ES5, was published in 2009 and was the most recent version for 6 years, so it is now widely supported by all modern browsers. If you have been writing JavaScript in the last five years, you should be familiar with ES5.

When they decided to change the ES naming model, the TC39 committee also chose to change the specification's approval model. This decision was made in an effort to publish new versions of the language at a quicker pace. As such, new features are being drafted and discussed by the community, and must pass through an approval process. Each year, a new version of the specification will be released, comprising the features that were approved during the year.

Those upcoming features are often referred to as ESNext. This term encompasses language features that are approved, or at least pretty close to approval, but not yet published. It is reasonable to expect that most or at least some of those features will be published in the next language version.
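To give a flavour of what this newer syntax looks like, here is a small, hypothetical example combining ES2015 classes and template literals with an ESNext decorator. The decorator signature (target, name, descriptor) follows the proposal implemented by transpilers such as Babel at the time of writing; the Greeter class is invented for illustration:

```js
// A hypothetical ESNext decorator that logs calls to the decorated method.
function log(target, name, descriptor) {
  const original = descriptor.value;
  descriptor.value = function(...args) {
    console.log(`Calling ${name} with`, args);
    return original.apply(this, args);
  };
  return descriptor;
}

class Greeter {
  @log
  greet(name) {
    return `Hello, ${name}!`; // ES2015 template literal
  }
}
```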
As ES2015 and ES2016 are still recent things, they are not fully supported by most browsers. Moreover, ESNext features have typically no browser support at all. Those multiple names can be pretty confusing. To make things simpler, I will stick with the official names ES5 for the previous version, ES2016 for the current version and ESNext for the next version. Before going any further, you should make yourself familiar with the features introduced by ES2016 and with ESNext decorators, if you are not already. We will use these features throughout the article. If you don’t know where to start with ES2015 and ES2016, you can find a great overview of the new features on Babel’s website: https://babeljs.io/docs/learn-es2015/ As for ESNext decorators, Addy Osmani, a Google engineer, explained them pretty well: https://medium.com/google-developers/exploring-es7-decorators-76ecb65fb841 For further reading, you can take a look at the feature proposals (decorators, class property declarations, async functions, and so on) for future ES versions: https://github.com/tc39/proposals Core concepts Before we start getting our hands dirty, there are a couple of core concepts that need to be explained. Conventions First, Aurelia relies a lot on conventions. Most of those conventions are configurable, and can be changed if they don’t suit your needs. Each time we’ll encounter a convention throughout the article, we will see how to change it whenever possible. Components Components are a first class citizen of Aurelia. What is an Aurelia component? It is a pair made of an HTML template, called the view, and a JavaScript class, called the view-model. The view is responsible for displaying the component, while the view-model controls its data and behavior. Typically, the view sits in an .html file and the view-model in a .js file. By convention, those two files are bound through a naming rule, they must be in the same directory and have the same name (except for their extension, of course). Here’s an example of an empty component with no data, no behavior, and a static template: component.js export class MyComponent {} component.html <template> <p>My component</p> </template> A component must comply with two constraints, a view’s root HTML element must be the template element, and the view-model class must be exported from the .js file. As a rule of thumb, the only function that should be exported by a component’s JS file should be the view-model class. If multiple classes or functions are exported, Aurelia will iterate on the file’s exported functions and classes and will use the first it finds as the view-model. However, since the enumeration order of an object’s keys is not deterministic as per the ES specification, nothing guarantees that the exports will be iterated in the same order they were declared, so Aurelia may pick the wrong class as the component’s view-model. The only exception to that rule is some view resources In addition to its view-model class, a component’s JS file can export things like value converters, binding behaviors, and custom attributes basically any view resource that can’t have a view, which excludes custom elements. Components are the main building blocks of an Aurelia application. Components can use other components; they can be composed to form bigger or more complex components. Thanks to the slot mechanism, you can design a component’s template so parts of it can be replaced or customized. Architecture Aurelia is not your average monolithic framework. 
Architecture

Aurelia is not your average monolithic framework. It is a set of loosely coupled libraries with well-defined abstractions. Each of its core libraries solves a specific and well-defined problem common to single-page applications. Aurelia leverages dependency injection and a plugin architecture, so you can discard parts of the framework and replace them with third-party or even your own implementations. Or you can just throw away features you don't need, so your application is lighter and faster to load. The core Aurelia libraries can be divided into multiple categories. Let's have a quick glance.

Core features

The following libraries are mostly independent and can be used by themselves if needed. They each provide a focused set of features and are at the core of Aurelia:

aurelia-dependency-injection: A lightweight yet powerful dependency injection container. It supports multiple lifetime management strategies and child containers.
aurelia-logging: A simple logger, supporting log levels and pluggable consumers.
aurelia-event-aggregator: A lightweight message bus, used for decoupled communication.
aurelia-router: A client-side router, supporting static, parameterized, or wildcard routes, as well as child routers.
aurelia-binding: An adaptive and pluggable data-binding library.
aurelia-templating: An extensible HTML templating engine.

Abstraction layers

The following libraries mostly define interfaces and abstractions in order to decouple concerns and enable extensibility and pluggable behaviors. This does not mean that the libraries in the previous section do not expose their own abstractions besides their features; some of them do. But the libraries described here have almost no other purpose than defining abstractions:

aurelia-loader: An abstraction defining an interface for loading JS modules, views, and other resources.
aurelia-history: An abstraction defining an interface for history management, used by routing.
aurelia-pal: An abstraction for platform-specific capabilities. It is used to abstract away the platform on which the code is running, such as a browser or Node.js. Indeed, this means that some Aurelia libraries can be used on the server side.

Default implementations

The following libraries are the default implementations of abstractions exposed by libraries from the two previous sections:

aurelia-loader-default: An implementation of the aurelia-loader abstraction for SystemJS and require-based loaders.
aurelia-history-browser: An implementation of the aurelia-history abstraction based on standard browser hash change and push state mechanisms.
aurelia-pal-browser: An implementation of the aurelia-pal abstraction for the browser.
aurelia-logging-console: An implementation of the aurelia-logging abstraction for the browser console.

Integration layers

The following libraries' purpose is to integrate some of the core libraries together. They provide interface implementations and adapters, along with default configuration or behaviors:

aurelia-templating-router: An integration layer between the aurelia-router and the aurelia-templating libraries.
aurelia-templating-binding: An integration layer between the aurelia-templating and the aurelia-binding libraries.
aurelia-framework: An integration layer that brings together all of the core Aurelia libraries into a full-featured framework.
aurelia-bootstrapper: An integration layer that brings default configuration for aurelia-framework and handles application startup.
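To give a feel for how these libraries are used together, here is a minimal sketch combining aurelia-dependency-injection and aurelia-event-aggregator. The OrderService class, the createOrder method, and the 'order:created' channel are hypothetical examples; inject and EventAggregator are the actual exports of the two libraries:

import {inject} from 'aurelia-dependency-injection';
import {EventAggregator} from 'aurelia-event-aggregator';

// The @inject decorator tells the container which dependencies to pass
// to the constructor when it instantiates this class.
@inject(EventAggregator)
export class OrderService {
  constructor(eventAggregator) {
    this.eventAggregator = eventAggregator;
  }

  createOrder(order) {
    // Publish a message on a channel; any decoupled subscriber can react to it,
    // for example: this.eventAggregator.subscribe('order:created', order => { ... });
    this.eventAggregator.publish('order:created', order);
  }
}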
Additional tools and plugins

If you take a look at Aurelia's organization page on GitHub at https://github.com/aurelia, you will see many more repositories. The libraries listed in the previous sections are just the core of Aurelia, the tip of the iceberg, if I may. Many other libraries exposing additional features or integrating third-party libraries are available on GitHub, some of them developed and maintained by the Aurelia team, many others by the community. I strongly suggest that you explore the Aurelia ecosystem by yourself after reading this article, as it is rapidly growing, and the Aurelia community is doing some very exciting things.

Tooling

In the following section, we will go over the tools needed to develop our Aurelia application.

Node.js and NPM

Aurelia being a JavaScript framework, it just makes sense that its development tools are also written in JavaScript. This means that the first thing you need to do when getting started with Aurelia is to install Node.js and NPM on your development environment.

Node.js is a server-side runtime environment based on Google's V8 JavaScript engine. It can be used to build complete websites or web APIs, but it is also used by a lot of front-end projects to perform development and build tasks, such as transpiling, linting, and minimizing.

NPM is the de facto package manager for Node.js. It uses http://www.npmjs.com as its main repository, where all available packages are stored. It is bundled with Node.js, so if you install Node.js on your computer, NPM will also be installed.

To install Node.js and NPM on your development environment, you simply need to go to https://nodejs.org/ and download the proper installer for your environment.

If Node.js and NPM are already installed, I strongly recommend that you make sure you use at least version 3 of NPM, as older versions may have issues collaborating with some of the other tools we'll use. If you are not sure which version you have, you can check it by running the following command in a console:

> npm -v

If Node.js and NPM are already installed but you need to upgrade NPM, you can do so by running the following command:

> npm install npm -g

The Aurelia CLI

Even though an Aurelia application can be built using any package manager, build system, or bundler you want, the preferred tool to manage an Aurelia project is the command line interface, a.k.a. the CLI.

At the moment of writing, the CLI only supports NPM as its package manager and requirejs as its module loader and bundler, probably because they are the most mature and stable options. It also uses Gulp 4 behind the scenes as its build system.

CLI-based applications are always bundled when running, even in development environments. This means that the performance of an application during development will be very close to what it should be like in production. This also means that bundling is a recurring concern, as new external libraries must be added to some bundle in order to be available at runtime.

In this article, we'll stick with the preferred solution and use the CLI. There are, however, two appendices at the end of the article covering alternatives: the first for Webpack, and the second for SystemJS with JSPM.

Installing the CLI

The CLI being a command line tool, it should be installed globally, by opening a console and executing the following command:

> npm install -g aurelia-cli

You may have to run this command with administrator privileges, depending on your environment.
If you already have it installed, make sure you have the latest version by running the following command:

> au -v

You can then compare the version this command outputs with the latest version number tagged on GitHub, at https://github.com/aurelia/cli/releases/latest. If you don't have the latest version, you can simply update it by running the following command:

> npm install -g aurelia-cli

If for some reason the command to update the CLI fails, simply uninstall then reinstall it:

> npm uninstall aurelia-cli -g
> npm install aurelia-cli -g

This should reinstall the latest version.

The project skeletons

As an alternative to the CLI, project skeletons are available at https://github.com/aurelia/skeleton-navigation. This repository contains multiple sample projects based on different technologies, such as SystemJS with JSPM, Webpack, ASP.NET Core, or TypeScript.

Prepping up a skeleton is easy. You simply need to download and unzip the archive from GitHub, or clone the repository locally. Each directory contains a distinct skeleton. Depending on which one you choose, you'll need to install different tools and run setup commands. Generally, the instructions in the skeleton's README.md file are pretty clear.

Our application

Creating an Aurelia application using the CLI is extremely simple. You just need to open a console in the directory where you want to create your project and run the following command:

> au new

The CLI's project creation process will start. The first thing the CLI will ask for is the name you want to give to your project. This name will be used both to create the directory in which the project will live and to set some values, such as the name property in the package.json file it will create. Let's name our application learning-aurelia.

Next, the CLI asks what technologies we want to use to develop our application. Here, you can select a custom transpiler such as TypeScript and a CSS preprocessor such as LESS or SASS.

Transpiler: the little cousin of the compiler, it translates one programming language into another. In our case, it will be used to transform ESNext code, which may not be supported by all browsers, into ES5, which is understood by all modern browsers.

The default choice is to use ESNext and plain CSS, and this is what we will choose.

The following steps simply recap the choices we made and ask for confirmation to create the project, then ask whether we want to install our project's dependencies, which it does by default. At this point, the CLI will create the project and run an npm install behind the scenes. Once it completes, our application is ready to roll.

At this point, the directory you ran au new in will contain a new directory named learning-aurelia. This sub-directory will contain the Aurelia project. We'll explore it a bit in the following section.
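To recap the whole creation flow as console commands (assuming you accept the default choices and name the project learning-aurelia; the au run task, which builds and serves the application, is covered later in this article):

> au new
> cd learning-aurelia
> au run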
The CLI is likely to change and offer more options in the future, as there are plans to support additional tools and technologies. Don't be surprised if you see different or new options when you run it.

The path we followed to create our project uses Visual Studio Code as the default code editor. If you want to use another editor, such as Atom, Sublime, or WebStorm, which are the other supported options at the moment of writing, you simply need to select option #3, custom transpilers, CSS pre-processors and more, at the beginning of the creation process, then select the default answer for each question until asked to select your default code editor. The rest of the creation process should stay pretty much the same. Note that if you select a different code editor, your own experience may differ from the examples and screenshots you'll find in this article, as Visual Studio Code is the editor that was used during writing.

If you are a TypeScript developer, you may want to create a TypeScript project. I however recommend that you stick with plain ESNext, as every example and code sample in this article has been written in JS. Trying to follow along with TypeScript may prove cumbersome, although you can try if you like the challenge.

The structure of a CLI-based project

If you open the newly created project in a code editor, you should see the following file structure:

node_modules: The standard NPM directory containing the project's dependencies
src: The directory containing the application's source code
test: The directory containing the application's automated test suites
.babelrc: The configuration file for Babel, which is used by the CLI to transpile our application's ESNext code into ES5 so most browsers can run it
index.html: The HTML page that loads and launches the application
karma.conf.js: The configuration file for Karma, which is used by the CLI to run unit tests
package.json: The standard Node.js project file

The directory contains other files, such as .editorconfig, .eslintrc.json, and .gitignore, that are of little interest for learning Aurelia, so we won't cover them.

In addition to all of this, you should see a directory named aurelia_project. This directory contains things related to the building and bundling of the application using the CLI. Let's see what it's made of.

The aurelia.json file

The first thing of importance in this directory is a file named aurelia.json. This file contains the configuration used by the CLI to test, build, and bundle the application. This file can change drastically depending on the choices you make during the project creation process.

There are very few scenarios where this file needs to be modified by hand; adding an external library to the application is one of them. Apart from such scenarios, this file should almost never be updated manually.

The first interesting section in this file is the platform:

"platform": {
  "id": "web",
  "displayName": "Web",
  "output": "scripts",
  "index": "index.html"
},

This section tells the CLI that the output directory where the bundles are written is named scripts. It also tells it that the HTML index page, which will load and launch the application, is the index.html file.

The next interesting part is the transpiler section:

"transpiler": {
  "id": "babel",
  "displayName": "Babel",
  "fileExtension": ".js",
  "options": {
    "plugins": [
      "transform-es2015-modules-amd"
    ]
  },
  "source": "src/**/*.js"
},

This section tells the CLI to transpile the application's source code using Babel. It also defines additional plugins on top of those already configured in .babelrc to be used when transpiling the source code. In this case, it adds a plugin that will output transpiled files as AMD-compliant modules, for requirejs compatibility.
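As mentioned above, adding an external library to the application is one of the rare cases where aurelia.json is edited by hand. As a rough sketch (the exact shape of the bundles configuration under the build section depends on the CLI version, and lodash here is just a hypothetical example library installed with NPM), a dependency entry added to the vendor bundle might look like this:

"bundles": [
  {
    "name": "vendor-bundle.js",
    "dependencies": [
      "aurelia-binding",
      "aurelia-bootstrapper",
      {
        "name": "lodash",
        "path": "../node_modules/lodash",
        "main": "lodash"
      }
    ]
  }
]

After such a change, the library becomes part of the vendor bundle the next time the build task runs.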
Tasks

The aurelia_project directory contains a subdirectory named tasks. This subdirectory contains various Gulp tasks to build, run, and test the application. These tasks can be executed using the CLI.

The first thing you can try is to run au without any argument:

> au

This will list all available commands, along with their available arguments. This list includes built-in commands, such as new, which we used already, or generate, which we'll see in the next section, along with the Gulp tasks declared in the tasks directory.

To run one of those tasks, simply execute au with the name of the task as its first argument:

> au build

This command will run the build task, which is defined in aurelia_project/tasks/build.js. This task transpiles the application code using Babel, executes the CSS and markup preprocessors if any, and bundles the code in the scripts directory.

After running it, you should see two new files in scripts: app-bundle.js and vendor-bundle.js. Those are the actual files that will be loaded by index.html when the application is launched. The former contains all application code, both JS files and templates, while the latter contains all external libraries used by the application, including the Aurelia libraries.

You may have noticed a command named run in the list of available commands. This task is defined in aurelia_project/tasks/run.js, and executes the build task internally before spawning a local HTTP server to serve the application:

> au run

By default, the HTTP server will listen for requests on port 9000, so you can open your favorite browser and go to http://localhost:9000/ to see the default, demo application in action.

If you ever need to change the port number on which the development HTTP server runs, you just need to open aurelia_project/tasks/run.js and locate the call to the browserSync function. The object passed to this function contains a property named port; you can change its value accordingly.

The run task can accept a --watch switch:

> au run --watch

If this switch is present, the task will keep monitoring the source code and, when any code file changes, will rebuild the application and automatically refresh the browser. This can be pretty useful during development.

Generators

The CLI also offers a way to generate code, using classes defined in the aurelia_project/generators directory. At the moment of writing, there are generators to create custom attributes, custom elements, binding behaviors, value converters, and even tasks and generators (yes, there is a generator to generate generators).

If you are not familiar with Aurelia at all, most of those concepts (value converters, binding behaviors, and custom attributes and elements) probably mean nothing to you. Don't worry; we will come back to them later.

A generator can be executed using the built-in generate command:

> au generate attribute

This command will run the custom attribute generator. It will ask for the name of the attribute to generate, then create it in the src/resources/attributes directory.

If you take a look at this generator, which is found in aurelia_project/generators/attribute.js, you'll see that the file exports a single class named AttributeGenerator. This class uses the @inject decorator to declare various classes from the aurelia-cli library as dependencies and have instances of them injected in its constructor. It also defines an execute method, which is called by the CLI when running the generator. This method leverages the services provided by aurelia-cli to interact with the user and generate code files.

The exact generator names available by default are attribute, element, binding-behavior, value-converter, task, and generator.
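As an illustration of the kind of file that ends up in src/resources/attributes, here is a minimal hand-written sketch of a custom attribute (not the exact output of the generator): a hypothetical highlight attribute that sets the background color of its host element to whatever value is bound to it.

src/resources/attributes/highlight.js
import {inject} from 'aurelia-framework';

// By convention, a class named HighlightCustomAttribute is usable in views
// as a "highlight" attribute.
@inject(Element)
export class HighlightCustomAttribute {
  constructor(element) {
    // The host DOM element the attribute is placed on is injected.
    this.element = element;
  }

  // Called by the templating engine whenever the bound value changes.
  valueChanged(newValue) {
    this.element.style.backgroundColor = newValue || '';
  }
}

Once registered as a resource, it could be used in a view as <p highlight="yellow">Some text</p>.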
Environments

CLI-based applications support environment-specific configuration values. By default, the CLI supports three environments: development, staging, and production. The configuration object for each of these environments can be found in the dev.js, stage.js, and prod.js files located in the aurelia_project/environments directory. A typical environment file looks like this:

aurelia_project/environments/dev.js
export default {
  debug: true,
  testing: true
};

By default, the environment files are used to enable debug logging and test-only templating features in the Aurelia framework, depending on the environment; we'll see this in a later section. The environment objects can, however, be enhanced with whatever properties you may need. Typically, this could be used to configure different URLs for a backend, depending on the environment.

Adding a new environment is simply a matter of adding a file for it in the aurelia_project/environments directory. For example, you can add a local environment by creating a local.js file in the directory.

Many tasks, basically build and all other tasks using it, such as run and test, expect an environment to be specified using the env argument:

> au build --env prod

Here, the application will be built using the prod.js environment file. If no env argument is provided, dev will be used by default.

When executed, the build task simply copies the proper environment file to src/environment.js before running the transpiler and bundling the output. This means that src/environment.js should never be modified by hand, as it will be automatically overwritten by the build task.
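As a quick sketch of the backend URL scenario mentioned above (the apiUrl property and the URLs are hypothetical examples, not something the CLI generates, and the exact default contents of prod.js may differ):

aurelia_project/environments/dev.js
export default {
  debug: true,
  testing: true,
  // Custom property: the backend to talk to during development.
  apiUrl: 'http://localhost:8080/api'
};

aurelia_project/environments/prod.js
export default {
  debug: false,
  testing: false,
  // Custom property: the production backend.
  apiUrl: 'https://example.com/api'
};

Since the build task copies the selected file to src/environment.js, application code can simply import it to read the value:

import environment from './environment';

console.log(environment.apiUrl);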
The structure of an Aurelia application

The previous section described the files and folders that are specific to a CLI-based project. However, some parts of the project are pretty much the same whatever the build system and package manager are. These are the more global topics we will see in this section.

The hosting page

The first entry point of an Aurelia application is the HTML page that loads and hosts it. By default, this page is named index.html and is located at the root of the project. The default hosting page looks like this:

index.html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Aurelia</title>
  </head>
  <body aurelia-app="main">
    <script src="scripts/vendor-bundle.js" data-main="aurelia-bootstrapper"></script>
  </body>
</html>

When this page loads, the script element inside the body element loads the scripts/vendor-bundle.js file, which contains requirejs itself along with definitions for all external libraries and references to app-bundle.js. When loading, requirejs checks the data-main attribute and uses its value as the entry point module. Here, aurelia-bootstrapper kicks in.

The bootstrapper first looks in the DOM for elements with the aurelia-app attribute; we can find such an attribute on the body element in the default index.html file. This attribute identifies elements acting as application viewports. The bootstrapper uses the attribute's value as the name of the application's main module, locates the module, loads it, and renders the resulting DOM inside the element, overwriting any previous content. The application is now running.

Even though the default application doesn't illustrate this scenario, it is possible for a single HTML file to host multiple Aurelia applications. It just needs to contain multiple elements with an aurelia-app attribute, each element referring to its own main module.
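For instance, a hosting page with two independent applications might look like this sketch (admin-main is a hypothetical second main module; each application is rendered inside its own element):

<body>
  <div aurelia-app="main"></div>
  <div aurelia-app="admin-main"></div>
  <script src="scripts/vendor-bundle.js" data-main="aurelia-bootstrapper"></script>
</body>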
The main module

By convention, the main module referred to by the aurelia-app attribute is named main, and as such is located at src/main.js. This file is expected to export a configure function, which will be called by the Aurelia bootstrapping process and passed a configuration object used to configure and boot the framework. By default, the main configure function looks like this:

src/main.js
import environment from './environment';

export function configure(aurelia) {
  aurelia.use
    .standardConfiguration()
    .feature('resources');

  if (environment.debug) {
    aurelia.use.developmentLogging();
  }

  if (environment.testing) {
    aurelia.use.plugin('aurelia-testing');
  }

  aurelia.start().then(() => aurelia.setRoot());
}

The configure function starts by telling Aurelia to use its default configuration and to load the resources feature. It also conditionally loads the development logging plugin based on the environment's debug property, and the testing plugin based on the environment's testing property. This means that, by default, both plugins will be loaded in development, while neither will be loaded in production. Lastly, the function starts the framework, then attaches the root component to the DOM.

The start method returns a Promise, whose resolution triggers the call to setRoot. If you are not familiar with Promises in JavaScript, I strongly suggest that you look them up before going any further, as they are a core concept in Aurelia.

The root component

At the root of any Aurelia application is a single component, which contains everything within the application. By convention, this root component is named app. It is composed of two files: app.html, which contains the template used to render the component, and app.js, which contains its view-model class.

In the default application, the template is extremely simple:

src/app.html
<template>
  <h1>${message}</h1>
</template>

This template is made of a single h1 element, which will contain the value of the view-model's message property as text, thanks to string interpolation.

The app view-model looks like this:

src/app.js
export class App {
  constructor() {
    this.message = 'Hello World!';
  }
}

This file simply exports a class with a message property containing the string "Hello World!". This component will be rendered when the application starts. If you run the application and navigate to it in your favorite browser, you'll see an h1 element containing "Hello World!".

You may notice that there is no reference to Aurelia in this component's code. In fact, the view-model is just plain ESNext, and it can be used by Aurelia as is. Of course, we're going to leverage many Aurelia features in many of our view-models later on, so most of our view-models will in fact have dependencies on Aurelia libraries. The key point here is that you don't have to use any Aurelia library in your view-models if you don't want to, because Aurelia is designed to be as unintrusive as possible.

Conventional bootstrapping

It is possible to leave the aurelia-app attribute empty in the hosting page:

<body aurelia-app>

In such a case, the bootstrapping process is much simpler. Instead of loading a main module containing a configure function, the bootstrapper will simply use the framework's default configuration and load the app component as the application root. This can be a simpler way to get started for a very simple application, as it removes the need for the src/main.js file; you can simply delete it. However, it means that you are stuck with the default framework configuration. You cannot load features or plugins. For most real-life applications, you'll need to keep the main module, which means specifying it as the aurelia-app attribute's value.

Customizing Aurelia configuration

The configure function of the main module receives a configuration object, which is used to configure the framework:

src/main.js
//Omitted snippet…
aurelia.use
  .standardConfiguration()
  .feature('resources');

if (environment.debug) {
  aurelia.use.developmentLogging();
}

if (environment.testing) {
  aurelia.use.plugin('aurelia-testing');
}
//Omitted snippet…

Here, the standardConfiguration() method is a simple helper that encapsulates the following:

aurelia.use
  .defaultBindingLanguage()
  .defaultResources()
  .history()
  .router()
  .eventAggregator();

This is the default Aurelia configuration. It loads the default binding language, the default templating resources, the browser history plugin, the router plugin, and the event aggregator. This is the default set of features that a typical Aurelia application uses. All those plugins will be covered at one point or another throughout this article. All of them are optional, except the binding language, which is needed by the templating engine. If you don't need one, just don't load it.

In addition to the standard configuration, some plugins are loaded depending on the environment's settings. When the environment's debug property is true, Aurelia's console logger is loaded using the developmentLogging() method, so traces and errors can be seen in the browser console. When the environment's testing property is true, the aurelia-testing plugin is loaded using the plugin method. This plugin registers some resources that are useful when debugging components.

The last line in the configure function starts the application and displays its root component, which is named app by convention. You may, however, bypass the convention and pass the name of your root component as the first argument to setRoot, if you named it otherwise:

aurelia.start().then(() => aurelia.setRoot('root'));

Here, the root component is expected to sit in the src/root.html and src/root.js files.
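Since every piece of the standard configuration except the binding language is optional, a stripped-down configure function is possible. Here is a minimal sketch, assuming a hypothetical application that needs neither routing nor history (the choice of which plugins to drop is purely illustrative):

src/main.js
export function configure(aurelia) {
  aurelia.use
    .defaultBindingLanguage()   // required by the templating engine
    .defaultResources()         // default templating resources
    .eventAggregator();         // keep the message bus, skip history and router

  aurelia.start().then(() => aurelia.setRoot());
}

The application will load a little less code, at the cost of losing the features provided by the plugins that were left out.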
Summary

Getting started with Aurelia is very easy, thanks to the CLI. Installing the tooling and creating an empty project is simply a matter of running a couple of commands, and it typically takes more time to wait for the initial npm install to complete than to do the actual setup. From here, we'll go over dependency injection and logging, and we'll start building our application by adding components and configuring routes to navigate between them.