
How-To Tutorials - Web Development

1797 Articles

Creating an Administration Interface in Django

Packt
16 Oct 2009
5 min read
Activating the Administration Interface

The administration interface comes as a Django application. To activate it, we will follow a simple procedure that is similar to how we enabled the user authentication system. The administration application is located in the django.contrib.admin package, so the first step is adding the path of this package to the INSTALLED_APPS variable. Open settings.py, locate INSTALLED_APPS, and edit it as follows:

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.admin',
    'django.contrib.comments',
    'django_bookmarks.bookmarks',
)

Next, run the following command to create the necessary tables for the administration application:

$ python manage.py syncdb

Now we need to make the administration interface accessible from within our site by adding URL entries for it. The administration application defines many views, so manually adding a separate entry for each view would be tedious. Therefore, Django provides a shortcut: the administration interface defines all of its URL entries in a module located at django.contrib.admin.urls, and we can include these entries in our project under a particular path by using a function called include(). Open urls.py and add the following URL entry to it:

urlpatterns = (
    # Admin interface
    (r'^admin/', include('django.contrib.admin.urls')),
)

This looks different from how we usually define URL entries. We are basically telling Django to retrieve all of the URL entries in the django.contrib.admin.urls module and include them in our application under the path ^admin/. This makes the views of the administration interface accessible from within our project.

One last thing remains before we see the administration page in action: we need to tell Django which models can be managed in the administration interface. This is done by defining a class called Admin inside each model. Open bookmarks/models.py and add the highlighted section to the Link model:

class Link(models.Model):
    url = models.URLField(unique=True)
    def __str__(self):
        return self.url
    class Admin:
        pass

The Admin class defined inside the model tells Django to enable the Link model in the administration interface. The keyword pass means that the class is empty; later, we will use this class to customize the administration page, so it won't remain empty. Do the same for the Bookmark, Tag, and SharedBookmark models: append an empty class called Admin to each of them. The User model is provided by Django, so we don't have control over it. Fortunately, however, it already contains an Admin class, so it is available in the administration interface by default.

Next, launch the development server and point your browser to http://127.0.0.1:8000/admin/. You will be greeted by a login page. Remember that we created a superuser account after writing the database model; this is the account you have to use in order to log in. Next, you will see a list of the models that are available to the administration interface. As discussed earlier, only models with a class named Admin inside them will appear on this page. If you click on a model name, you will get a list of the objects that are stored in the database under this model. You can use this page to view or edit a particular object, or to add a new one. The figure below shows the listing page for the Link model. The edit form is generated according to the fields that exist in the model.
The Link form, for example, contains a single text field called Url. You can use this form to view and change the URL of a Link object. In addition, the form performs proper validation of fields before saving the object, so if you try to save a Link object with an invalid URL, you will receive an error message asking you to correct the field. The figure below shows a validation error when trying to save an invalid link.

Fields are mapped to form widgets according to their type. Date fields are edited using a calendar widget, for example, whereas foreign key fields are edited using a list widget, and so on. The figure below shows a calendar widget from the user edit page; Django uses it for date and time fields.

As you may have noticed, the administration interface represents models by using the string returned by the __str__ method. It was indeed a good idea to replace the generic strings returned by the default __str__ method with more helpful ones. This greatly helps when working with the administration page, as well as with debugging.

Experiment with the administration pages; try to create, edit, and delete objects. Notice how changes made in the administration interface are immediately reflected on the live site. The administration interface also keeps track of the actions that you make and lets you review the history of changes for each object.

This section has covered most of what you need to know in order to use the administration interface provided by Django. This feature is actually one of the main advantages of using Django: you get a fully featured administration interface from writing only a few lines of code! Next, we will see how to tweak and customize the administration pages. As a bonus, we will learn more about the permissions system offered by Django.


Web Scraping with Electron

Dylan Frankland
23 Dec 2016
12 min read
Web scraping is a necessary evil that takes unordered information from the Web and transforms it into something that can be used programmatically. The first big use of this was indexing websites for use in search engines like Google. Whether you like it or not, there will likely be a time in your life, career, academia, or personal projects where you will need to pull information that has no API or easy way to digest programmatically. The simplest way to accomplish this is to do something along the lines of a curl request and then use RegEx to pull out the necessary information. That was the standard procedure for much of the history of web scraping, until the single-page application came along. With the advent of Ajax, JavaScript became the mainstay of the Web and prevented much of it from being scraped with traditional methods such as curl, which could only get static, server-rendered content. This is where Electron truly comes into play, because it is a full-fledged browser that can be run programmatically.

Electron's Advantages

Electron goes beyond the abilities of other programmatic browser solutions: PhantomJS, SlimerJS, or Selenium. This is because Electron is more than just a headless browser or automation framework. PhantomJS (and by extension SlimerJS) runs only headlessly, which prevents most debugging, and because these tools have their own JavaScript engines, they are not 100% compatible with Node and npm. Selenium, used for automating other browsers, can be a bit of a pain to set up because it requires running a central server and separately installing a browser you would like to scrape with. Electron, on the other hand, can do all of the above and more, which makes it much easier to set up, run, and debug as you start to create your scraper.

Applications of Electron

Considering the flexibility of Electron and all the features it provides, it is poised to be used in a variety of development, testing, and data collection situations. As far as development goes, Electron provides all the features that Chrome does with its DevTools, making it easy to inspect, run arbitrary JavaScript, and place debugger statements. Testing can easily be done on websites that may have issues with bugs in browsers but work perfectly in a different environment. These tests can easily be hooked up to a continuous integration server too. Data collection, the real reason we are all here, can be done on any type of website, including those with logins, using a familiar API with DevTools for inspection, and optionally run headlessly for automation. What more could one want?

How to Scrape with Electron

Because of Electron's ability to integrate with Node, there are many different npm modules at your disposal. This takes most of the leg work out of automating the web scraping process and makes development a breeze. My personal favorite, nightmare, has never let me down. I have used it for tons of different projects, from collecting info and automating OAuth processes to grabbing console errors, and also taking screenshots for visual inspection of entire websites and services.

Packt's blog actually gives a perfect situation to use a web scraper. On the blog page, you will find a small single-page application that dynamically loads different blog posts using JavaScript. Using Electron, let's collect the latest blog posts so that we can use that information elsewhere. Full disclosure: Packt's blog server-side renders the first page and could possibly be scraped using traditional methods; this won't be the case for all single-page applications, though.
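Before building the real scraper below, here is a minimal warm-up sketch of what the nightmare API feels like. This is not from the original article; the URL and selector are placeholders.

const Nightmare = require('nightmare');

// Placeholder target: any page with an <h1> will do.
const nightmare = new Nightmare({ show: false });

nightmare
  .goto('https://example.com')
  .wait('h1') // wait for the element before touching it
  .evaluate(() => document.querySelector('h1').textContent)
  .end() // shut the underlying Electron instance down
  .then(text => console.log('First heading:', text))
  .catch(err => console.error('Scrape failed:', err));

The same goto/wait/evaluate rhythm is what the full example later in this article builds on.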
Prerequisites and Assumptions

Following this guide, I assume that you already have Node (version 6+) and npm (version 3+) installed and that you are using some form of a Unix shell. I also assume that you are already inside a directory to do some coding in.

Setup

First of all, you will need to install some npm modules, so create a package.json like this:

{
  "scripts": {
    "start": "DEBUG=nightmare babel-node ./main.js"
  },
  "devDependencies": {
    "babel-cli": "6.18.0",
    "babel-preset-modern-node": "3.2.0",
    "babel-preset-stage-0": "6.16.0"
  },
  "dependencies": {
    "nightmare": "2.8.1"
  },
  "babel": {
    "presets": [
      [ "modern-node", { "version": "6.0" } ],
      "stage-0"
    ]
  }
}

Using babel we can make use of some of the newer JavaScript utilities like async/await, making our code much more readable. nightmare has electron as a dependency, so there is no need to list that in our package.json (so easy). After this file is made, run npm install as usual.

Investigating the Content to be Scraped

First, we will need to identify the information we want to collect and how to get it:

We go to the blog page and see a list of different blog posts, but we are not sure of the order they are in. We only want the latest and greatest, so let's change the sort order to "Date of Publication (New to Older)". Now we can see that we have only the blog posts that we want.

Changing the sort order

Next, to make it easier to collect more blog posts, change the number of blog posts shown from 12 to 48.

Changing post view

Last, but not least, are the blog posts themselves.

Packt blog posts

Next, we will figure out the selectors that we will be using with nightmare to change the page and collect data. Inspecting the sort order selector, we find something like the following HTML:

<div class="solr-pager-sort-selector">
  <span>Sort By: &nbsp;</span>
  <select class="solr-page-sort-selector-select selectBox" style="display: none;">
    <option value="" selected="">Relevance</option>
    <option value="ds_blog_release_date asc">Date of Publication (Older - Newer)</option>
    <option value="ds_blog_release_date desc">Date of Publication (Newer - Older)</option>
    <option value="sort_title asc">Title (A - Z)</option>
    <option value="sort_title desc">Title (Z - A)</option>
  </select>
  <a class="selectBox solr-page-sort-selector-select selectBox-dropdown" title="" tabindex="0" style="width: 243px; display: inline-block;">
    <span class="selectBox-label" style="width: 220.2px;">Relevance</span>
    <span class="selectBox-arrow">▼</span>
  </a>
</div>

Checking out the "View" options, we see the following HTML:

<div class="solr-pager-rows-selector">
  <span>View: &nbsp;</span>
  <strong>12</strong> &nbsp;
  <a class="solr-page-rows-selector-option" data-rows="24">24</a> &nbsp;
  <a class="solr-page-rows-selector-option" data-rows="48">48</a> &nbsp;
</div>

Finally, the info we need from the post should look similar to this:

<div class="packt-article-line-view float-left">
  <div class="packt-article-line-view-image">
    <a href="/books/content/mapt-v030-release-notes">
      <noscript>
        &lt;img src="//d1ldz4te4covpm.cloudfront.net/sites/default/files/imagecache/ppv4_article_thumb/Mapt FB PPC3_0.png" alt="" title="" class="bookimage imagecache imagecache-ppv4_article_thumb"/&gt;
      </noscript>
      <img src="//d1ldz4te4covpm.cloudfront.net/sites/default/files/imagecache/ppv4_article_thumb/Mapt FB PPC3_0.png" alt="" title="" data-original="//d1ldz4te4covpm.cloudfront.net/sites/default/files/imagecache/ppv4_article_thumb/Mapt FB PPC3_0.png" class="bookimage imagecache imagecache-ppv4_article_thumb" style="opacity: 1;">
      <div class="packt-article-line-view-bg"></div>
      <div class="packt-article-line-view-title">Mapt v.0.3.0 Release Notes</div>
    </a>
  </div>
  <a href="/books/content/mapt-v030-release-notes">
    <div class="packt-article-line-view-text ellipsis">
      <div>
        <p>New features and fixes for the Mapt platform</p>
      </div>
    </div>
  </a>
</div>
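Before wiring these selectors into nightmare, it can be worth sanity-checking them by hand in the DevTools console on the blog page. This is a quick aside rather than a step from the article, and the selectors may need adjusting if the markup has changed.

// Run in the browser's DevTools console on the blog listing page.
document.querySelector('.selectBox.solr-page-sort-selector-select.selectBox-dropdown'); // the sort dropdown
document.querySelectorAll('.solr-page-rows-selector-option').length;                    // the 24/48 view options
document.querySelectorAll('.packt-article-line-view').length;                           // how many post blocks are rendered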
class="bookimage imagecache imagecache-ppv4_article_thumb" style="opacity: 1;"> <div class="packt-article-line-view-bg"></div> <div class="packt-article-line-view-title">Mapt v.0.3.0 Release Notes</div> </a> </div> <a href="/books/content/mapt-v030-release-notes"> <div class="packt-article-line-view-text ellipsis"> <div> <p>New features and fixes for the Mapt platform</p> </div> </div> </a> </div> Great! Now we have all the information necessary to start collecting data from the page! Creating the Scraper First things first, create a main.js file in the same directory as the package.json. Now let's put some initial code to get the ball rolling on the first step: import Nightmare from 'nightmare'; const URL_BLOG = 'https://www.packtpub.com/books/content/blogs'; const SELECTOR_SORT = '.selectBox.solr-page-sort-selector-select.selectBox-dropdown'; // Viewport must have a width at least 1040 for the desktop version of Packt's blog const nightmare = new Nightmare({ show: true }).viewport(1041, 800); (async() => { await nightmare .goto(URL_BLOG) .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element .click(SELECTOR_SORT); })(); If you try to run this code, you will notice a window pop up with Packt's blog, then resize the window, and then nothing. This is because the sort selector only listens for mousedown events, so we will need to modify our code a little, by changing click to mousedown: await nightmare .goto(BLOG_URL) .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element .mousedown(SELECTOR_SORT); Running the code again, after the mousedown, we get a small HTML dropdown, which we can inspect and see the following: <ul class="selectBox-dropdown-menu selectBox-options solr-page-sort-selector-select-selectBox-dropdown-menu selectBox-options-bottom" style="width: 274px; top: 354.109px; left: 746.188px; display: block;"> <li class="selectBox-selected"><a rel="">Relevance</a></li> <li class=""><a rel="ds_blog_release_date asc">Date of Publication (Older - Newer)</a></li> <li class=""><a rel="ds_blog_release_date desc">Date of Publication (Newer - Older)</a></li> <li class=""><a rel="sort_title asc">Title (A - Z)</a></li> <li class=""><a rel="sort_title desc">Title (Z - A)</a></li> </ul> So, to select the option to sort the blog posts by date, we will need to alter our code a little further (again, it does not work with click, so use mousedown plus mouseup): import Nightmare from 'nightmare'; const URL_BLOG = 'https://www.packtpub.com/books/content/blogs'; const SELECTOR_SORT = '.selectBox.solr-page-sort-selector-select.selectBox-dropdown'; const SELECTOR_SORT_OPTION = '.solr-page-sort-selector-select-selectBox-dropdown-menu > li > a[rel="ds_blog_release_date desc"]'; // Viewport must have a width at least 1040 for the desktop version of Packt's blog const nightmare = new Nightmare({ show: true }).viewport(1041, 800); (async() => { await nightmare .goto(URL_BLOG) .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element .mousedown(SELECTOR_SORT) .wait(SELECTOR_SORT_OPTION) .mousedown(SELECTOR_SORT_OPTION) // Drop down menu doesn't listen for `click` events like normal... .mouseup(SELECTOR_SORT_OPTION); })(); Awesome! We can clearly see the page reload, with posts from newest to oldest. 
After we have sorted everything, we need to change the number of blog posts shown (finally we can use click; lol!):

import Nightmare from 'nightmare';

const URL_BLOG = 'https://www.packtpub.com/books/content/blogs';
const SELECTOR_SORT = '.selectBox.solr-page-sort-selector-select.selectBox-dropdown';
const SELECTOR_SORT_OPTION = '.solr-page-sort-selector-select-selectBox-dropdown-menu > li > a[rel="ds_blog_release_date desc"]';
const SELECTOR_VIEW = '.solr-page-rows-selector-option[data-rows="48"]';

// Viewport must have a width at least 1040 for the desktop version of Packt's blog
const nightmare = new Nightmare({ show: true }).viewport(1041, 800);

(async() => {
  await nightmare
    .goto(URL_BLOG)
    .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element
    .mousedown(SELECTOR_SORT)
    .wait(SELECTOR_SORT_OPTION)
    .mousedown(SELECTOR_SORT_OPTION) // Drop down menu doesn't listen for `click` events like normal...
    .mouseup(SELECTOR_SORT_OPTION)
    .wait(SELECTOR_VIEW)
    .click(SELECTOR_VIEW);
})();

Collecting the data is next; all we need to do is evaluate parts of the HTML and return the formatted information. The way the evaluate function works is by stringifying the function provided to it and injecting it into Electron's event loop. Because the function gets stringified, only a function that does not rely on outside variables or functions will execute properly. The value that is returned by this function must also be able to be stringified, so only JSON, strings, numbers, and booleans are applicable.

import Nightmare from 'nightmare';

const URL_BLOG = 'https://www.packtpub.com/books/content/blogs';
const SELECTOR_SORT = '.selectBox.solr-page-sort-selector-select.selectBox-dropdown';
const SELECTOR_SORT_OPTION = '.solr-page-sort-selector-select-selectBox-dropdown-menu > li > a[rel="ds_blog_release_date desc"]';
const SELECTOR_VIEW = '.solr-page-rows-selector-option[data-rows="48"]';

// Viewport must have a width at least 1040 for the desktop version of Packt's blog
const nightmare = new Nightmare({ show: true }).viewport(1041, 800);

(async() => {
  await nightmare
    .goto(URL_BLOG)
    .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element
    .mousedown(SELECTOR_SORT)
    .wait(SELECTOR_SORT_OPTION)
    .mousedown(SELECTOR_SORT_OPTION) // Drop down menu doesn't listen for `click` events like normal...
    .mouseup(SELECTOR_SORT_OPTION)
    .wait(SELECTOR_VIEW)
    .click(SELECTOR_VIEW)
    .evaluate(() => {
      let posts = [];
      document.querySelectorAll('.packt-article-line-view').forEach(post => {
        posts = posts.concat({
          image: post.querySelector('img').src,
          title: post.querySelector('.packt-article-line-view-title').textContent.trim(),
          link: post.querySelector('a').href,
          description: post.querySelector('p').textContent.trim(),
        });
      });
      return posts;
    })
    .end()
    .then(result => console.log(result));
})();

Once you run your function again, you should get a very long array of objects containing information about each post. There you have it! You have successfully scraped a page using Electron! There are plenty more possibilities and tricks with this, but hopefully this small example gives you a good idea of what can be done.

About the author

Dylan Frankland is a frontend engineer at Narvar. He is an agile web developer, with over 4 years of experience developing and designing for start-ups and medium-sized businesses to create functional, fast, and beautiful web experiences.


Testing RESTful Web Services with Postman

Vijin Boricha
10 Apr 2018
3 min read
In today's tutorial, we are going to leverage the Postman framework to successfully test RESTful web services. We will also discuss a simple JUnit test case, which calls the getAllUsers method in userService. We can check the following code:

@RunWith(SpringRunner.class)
@SpringBootTest
public class UserTests {
  @Autowired
  UserService userService;

  @Test
  public void testAllUsers(){
    List<User> users = userService.getAllUsers();
    assertEquals(3, users.size());
  }
}

In the preceding code, we have called getAllUsers and verified the total count. Let's test the single-user method in another test case:

// other methods
@Test
public void testSingleUser(){
  User user = userService.getUser(100);
  assertTrue(user.getUsername().contains("David"));
}

In the preceding code snippets, we just tested our service layer and verified the business logic. However, we can directly test the controller by using mocking methods.

Postman

First, we shall start with a simple API for getting all the users:

http://localhost:8080/user

This call will get all the users. The Postman screenshot for getting all the users is as follows. In the screenshot, we can see that we get all the users that we added before. We have used the GET method to call this API.

Adding a user – Postman

Let's try to use the POST method on user to add a new user:

http://localhost:8080/user

Add the user, as shown in the following screenshot. In the result, we can see the JSON output:

{ "result" : "added" }

Generating a JWT – Postman

Let's try generating the token (JWT) by calling the generate token API in Postman using the following URL:

http://localhost:8080/security/generate/token

We can clearly see that we use subject in the Body to generate the token. Once we call the API, we will get the token. We can check the token in the following screenshot.

Getting the subject from the token

By using the existing token that we created before, we will get the subject by calling the get subject API:

http://localhost:8080/security/get/subject

The result will be as shown in the following screenshot. In the preceding API call, we sent the token in the API to get the subject. We can see the subject in the resulting JSON.

You read an excerpt from Building RESTful Web Services with Spring 5 - Second Edition written by Raja CSP Raman. From this book, you will learn to build resilient software in Java with the help of the Spring 5.0 framework. Check out the other tutorials from this book: How to develop RESTful web services in Spring; Applying Spring Security using JSON Web Token (JWT). More Spring 5 tutorials: Introduction to Spring Framework; Preparing the Spring Web Development Environment.
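As a quick sanity check outside Postman (not part of the excerpt above), the first GET call can also be scripted from Node; this assumes the Spring service from the excerpt is running locally, and the response shape is not guaranteed.

// Hypothetical cross-check from Node 18+ (built-in fetch).
// The endpoint path comes from the example above.
const BASE = "http://localhost:8080";

(async () => {
  const res = await fetch(`${BASE}/user`); // same call as the first Postman request
  const users = await res.json();
  console.log(`GET /user returned ${users.length} users`);
})();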


Installing and Using Vue.js

Packt
10 Jan 2017
14 min read
In this article by Olga Filipova, the author of the book Learning Vue.js 2, we explore the key concepts of the Vue.js framework to understand what happens behind the scenes. We will also analyze all possible ways of installing Vue.js and learn the ways of debugging and testing our applications. (For more resources related to this topic, see here.)

So, in this article we are going to learn:

What the MVVM architecture paradigm is and how it applies to Vue.js
How to install, start, run, and debug a Vue application

MVVM architectural pattern

Do you remember how to create the Vue instance? We were instantiating it by calling new Vue({…}). You also remember that in the options we were passing the element on the page where this Vue instance should be bound, and the data object that contained the properties we wanted to bind to our view. The data object is our model, and the DOM element where the Vue instance is bound is the view.

Classic View-Model representation where the Vue instance binds one to another

In the meantime, our Vue instance is something that helps to bind our model to the view and vice versa. Our application thus follows the Model-View-ViewModel (MVVM) pattern, where the Vue instance is a ViewModel.

The simplified diagram of the Model View ViewModel pattern

Our Model contains data and some business logic, and our View is responsible for its representation. The ViewModel handles data binding, ensuring that data changed in the Model immediately affects the View layer and vice versa. Our Views thus become completely data-driven. The ViewModel becomes responsible for the control of data flow, making data binding fully declarative for us.

Installing, using, and debugging a Vue.js application

In this section, we will analyze all possible ways of installing Vue.js. We will also create a skeleton for our application, and we will learn the ways of debugging and testing our applications.

Installing Vue.js

There are a number of ways to install Vue.js, starting from the classic approach of including the downloaded script into HTML within <script> tags, to using tools like bower, npm, or Vue's command-line interface (vue-cli) to bootstrap the whole application. Let's have a look at all these methods and choose our favorite. In all these examples we will just show a header on a page saying Learning Vue.js.

Standalone

Download the vue.js file. There are two versions, minified and developer version. The development version is here: https://vuejs.org/js/vue.js. The minified version is here: https://vuejs.org/js/vue.min.js. If you are developing, make sure you use the development, non-minified version of Vue. You will love the nice tips and warnings on the console. Then just include vue.js in the script tags:

<script src="vue.js"></script>

Vue is registered in the global variable. You are ready to use it. Our example will then look as simple as the following:

<div id="app">
  <h1>{{ message }}</h1>
</div>
<script src="vue.js"></script>
<script>
  var data = {
    message: "Learning Vue.js"
  };
  new Vue({
    el: "#app",
    data: data
  });
</script>

CDN

Vue.js is available on the following CDNs:

jsdelivr: https://cdn.jsdelivr.net/vue/1.0.25/vue.min.js
cdnjs: https://cdnjs.cloudflare.com/ajax/libs/vue/1.0.25/vue.min.js
npmcdn: https://npmcdn.com/[email protected]/dist/vue.min.js

Just put the URL in the source of the script tag and you are ready to use Vue!

<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/1.0.25/vue.min.js"></script>

Beware, though: the CDN version might not be synchronized with the latest available version of Vue.
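To see the ViewModel half of MVVM at work with the standalone example above, you can keep a reference to the instance and mutate the model from the DevTools console; the view re-renders on its own. This is a minimal sketch, not from the article, and the vm variable name is arbitrary.

// Same standalone example as above, but keeping a reference to the instance.
var vm = new Vue({
  el: "#app",
  data: { message: "Learning Vue.js" }
});

// Later, e.g. typed into the DevTools console: data properties are proxied onto
// the instance, so this one assignment re-renders the <h1> automatically.
vm.message = "Hello from the ViewModel";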
Thus, the example will look like exactly the same as in the standalone version, but instead of using downloaded file in the <script> tags, we are using a CDN URL. Bower If you are already managing your application with bower and don't want to use other tools, there's also a bower distribution of Vue. Just call bower install: # latest stable release bower install vue Our example will look exactly like the two previous examples, but it will include the file from the bower folder: <script src=“bower_components/vue/dist/vue.js”></script> CSP-compliant CSP (content security policy) is a security standard that provides a set of rules that must be obeyed by the application in order to prevent security attacks. If you are developing applications for browsers, more likely you know pretty well about this policy! For the environments that require CSP-compliant scripts, there’s a special version of Vue.js here: https://github.com/vuejs/vue/tree/csp/dist Let’s do our example as a Chrome application to see the CSP compliant vue.js in action! Start from creating a folder for our application example. The most important thing in a Chrome application is the manifest.json file which describes your application. Let’s create it. It should look like the following: { "manifest_version": 2, "name": "Learning Vue.js", "version": "1.0", "minimum_chrome_version": "23", "icons": { "16": "icon_16.png", "128": "icon_128.png" }, "app": { "background": { "scripts": ["main.js"] } } } The next step is to create our main.js file which will be the entry point for the Chrome application. The script should listen for the application launching and open a new window with given sizes. Let’s create a window of 500x300 size and open it with index.html: chrome.app.runtime.onLaunched.addListener(function() { // Center the window on the screen. var screenWidth = screen.availWidth; var screenHeight = screen.availHeight; var width = 500; var height = 300; chrome.app.window.create("index.html", { id: "learningVueID", outerBounds: { width: width, height: height, left: Math.round((screenWidth-width)/2), top: Math.round((screenHeight-height)/2) } }); }); At this point the Chrome specific application magic is over and now we shall just create our index.html file that will do the same thing as in the previous examples. It will include the vue.js file and our script where we will initialize our Vue application: <html lang="en"> <head> <meta charset="UTF-8"> <title>Vue.js - CSP-compliant</title> </head> <body> <div id="app"> <h1>{{ message }}</h1> </div> <script src="assets/vue.js"></script> <script src="assets/app.js"></script> </body> </html> Download the CSP-compliant version of vue.js and add it to the assets folder. Now let’s create the app.js file and add the code that we already wrote added several times: var data = { message: "Learning Vue.js" }; new Vue({ el: "#app", data: data }); Add it to the assets folder. Do not forget to create two icons of 16 and 128 pixels and call them icon_16.png and icon_128.png. Your code and structure in the end should look more or less like the following: Structure and code for the sample Chrome application using vue.js And now the most important thing. Let’s check if it works! It is very very simple. Go to chrome://extensions/ url in your Chrome browser. Check Developer mode checkbox. Click on Load unpacked extension... and check the folder that we’ve just created. Your app will appear in the list! Now just open a new tab, click on apps, and check that your app is there. Click on it! 
Sample Chrome application using vue.js in the list of chrome apps Congratulations! You have just created a Chrome application! NPM NPM installation method is recommended for large-scale applications. Just run npm install vue: # latest stable release npm install vue # latest stable CSP-compliant release npm install vue@csp And then require it: var Vue = require(“vue”); Or, for ES2015 lovers: import Vue from “vue”; Our HTML in our example will look exactly like in the previous examples: <html lang="en"> <head> <meta charset="UTF-8"> <title>Vue.js - NPM Installation</title> </head> <body> <div id="app"> <h1>{{ message }}</h1> </div> <script src="main.js"></script> </body> </html> Now let’s create a script.js file that will look almost exactly the same as in standalone or CDN version with only difference that it will require vue.js: var Vue = require("vue"); var data = { message: "Learning Vue.js" }; new Vue({ el: "#app", data: data }); Let’s install vue and browserify in order to be able to compile our script.js into the main.js file: npm install vue –-save-dev npm install browserify –-save-dev In the package.json file add also a script for build that will execute browserify on script.js transpiling it into main.js. So our package.json file will look like this: { "name": "learningVue", "scripts": { "build": "browserify script.js -o main.js" }, "version": "0.0.1", "devDependencies": { "browserify": "^13.0.1", "vue": "^1.0.25" } } Now run: npm install npm run build And open index.html in the browser. I have a friend that at this point would say something like: really? So many steps, installations, commands, explanations… Just to output some header? I’m out! If you are also thinking this, wait. Yes, this is true, now we’ve done something really simple in a rather complex way, but if you stay with me a bit longer, you will see how complex things become easy to implement if we use the proper tools. Also, do not forget to check your Pomodoro timer, maybe it’s time to take a rest! Vue-cli Vue provides its own command line interface that allows bootstrapping single page applications using whatever workflows you want. It immediately provides hot reloading and structure for test driven environment. After installing vue-cli just run vue init <desired boilerplate> <project-name> and then just install and run! # install vue-cli $ npm install -g vue-cli # create a new project $ vue init webpack learn-vue # install and run $ cd learn-vue $ npm install $ npm run dev Now open your browser on localhost:8080. You just used vue-cli to scaffold your application. Let’s adapt it to our example. Open a source folder. In the src folder you will find an App.vue file. Do you remember we talked about Vue components that are like bricks from which you build your application? Do you remember that we were creating them and registering inside our main script file and I mentioned that we will learn to build components in more elegant way? Congratulations, you are looking at the component built in a fancy way! Find the line that says import Hello from './components/Hello'. This is exactly how the components are being reused inside other components. Have a look at the template at the top of the component file. At some point it contains the tag <hello></hello>. This is exactly where in our HTML file will appear the Hello component. Have a look at this component, it is in the src/components folder. As you can see, it contains a template with {{ msg }} and a script that exports data with defined msg. 
This is exactly what we were doing in our previous examples without using components. Let's slightly modify the code to make it the same as in the previous examples. In the Hello.vue file, change the msg in the data object:

<script>
export default {
  data () {
    return {
      msg: "Learning Vue.js"
    }
  }
}
</script>

In the App.vue component, remove everything from the template except the hello tag, so the template looks like this:

<template>
  <div id="app">
    <hello></hello>
  </div>
</template>

Now if you rerun the application you will see our example with beautiful styles we didn't touch:

vue application bootstrapped using vue-cli

Besides the webpack boilerplate template, you can use the following configurations with your vue-cli:

webpack-simple: A simple Webpack + vue-loader setup for quick prototyping.
browserify: A full-featured Browserify + vueify setup with hot-reload, linting and unit testing.
browserify-simple: A simple Browserify + vueify setup for quick prototyping.
simple: The simplest possible Vue setup in a single HTML file.

Dev build

My dear reader, I can see your shining eyes and I can read your mind. Now that you know how to install and use Vue.js and how it works, you definitely want to put your hands deeply into the core code and contribute! I understand you. For this you need to use the development version of Vue.js, which you have to download from GitHub and compile yourself. Let's build our example with this development version of Vue. Create a new folder, for example, dev-build, and copy all the files from the npm example to this folder. Do not forget to copy the node_modules folder. You should cd into it and download the files from GitHub to it, then run npm install and npm run build:

cd <APP-PATH>/node_modules
git clone https://github.com/vuejs/vue.git
cd vue
npm install
npm run build

Now build our example application:

cd <APP-PATH>
npm install
npm run build

Open index.html in the browser; you will see the usual Learning Vue.js header. Let's now try to change something in the vue.js source! Go to the node_modules/vue/src folder and open the config.js file. The second line defines the delimiters:

let delimiters = ['{{', '}}']

This defines the delimiters used in the HTML templates. The things inside these delimiters are recognized as Vue data or as JavaScript code. Let's change them! Let's replace "{{" and "}}" with double percentage signs! Go on and edit the file:

let delimiters = ['%%', '%%']

Now rebuild both the Vue source and our application and refresh the browser. What do you see? After changing the Vue source and replacing the delimiters, the {{}} delimiters do not work anymore! The message inside {{}} is no longer recognized as data that we passed to Vue. In fact, it is being rendered as part of the HTML. Now go to the index.html file and replace our curly bracket delimiters with double percentage signs:

<div id="app">
  <h1>%% message %%</h1>
</div>

Rebuild our application and refresh the browser! What about now? You see how easy it is to change the framework's code and to try out your changes. I'm sure you have plenty of ideas about how to improve or add some functionality to Vue.js. So change it, rebuild, test, deploy! Happy pull requests!

Debug Vue application

You can debug your Vue application the same way you debug any other web application. Use your developer tools, breakpoints, debugger statements, and so on. Vue also provides vue devtools, which makes it easier to debug Vue applications.
You can download and install it from the Chrome web store: https://chrome.google.com/webstore/detail/vuejs-devtools/nhdogjmejiglipccpnnnanhbledajbpd After installing it, open, for example, our shopping list application. Open developer tools. You will see the Vue tab has automatically appeared: Vue devtools In our case we only have one component—Root. As you can imagine, once we start working with components and having lots of them, they will all appear in the left part of the Vue devtools palette. Click on the Root component and inspect it. You’ll see all the data attached to this component. If you try to change something, for example, add a shopping list item, check or uncheck a checkbox, change the title, and so on, all these changes will be immediately propagated to the data in the Vue devtools. You will immediately see the changes on the right side of it. Let’s try, for example, to add shopping list item. Once you start typing, you see on the right how newItem changes accordingly: The changes in the models are immediately propagated to the Vue devtools data When we start adding more components and introduce complexity to our Vue applications, the debugging will certainly become more fun! Summary In this article we have analyzed the behind the scenes of Vue.js. We learned how to install Vue.js. We also learned how to debug Vue application. Resources for Article: Further resources on this subject: API with MongoDB and Node.js [Article] Tips & Tricks for Ext JS 3.x [Article] Working with Forms using REST API [Article]


Jailbreaking the iPad - in Ubuntu

Packt
20 Jul 2010
3 min read
(For more resources on Ubuntu, see here.)

What is jailbreaking?

Jailbreaking an iPhone or iPad allows you to run unsigned code by unlocking the root account on the device. Simply put, this allows you to install any software you like, without the restriction of having to be in the main Apple app store. Remember, jailbreaking is not SIM unlocking. Jailbreaking voids the Apple-supplied warranty.

What does this mean for developers?

The mass availability of jailbreaking for these devices allows developers to write apps without having to shell out Apple's developer fees. Previously a one-off payment of $300 US, an "official" developer must now pay $100 US each year to keep the right to develop applications.

What jailbreaks are available?

Arguably the most advanced jailbreak available now is called Spirit. Unlike a few others, which can now hack iOS 4.0, Spirit differs in a few key features. Not only is Spirit the first to be able to jailbreak the iPad, but this jailbreak is also "untethered" - you won't have to plug the device into a computer on every boot to "keep" it jailbroken. Support for jailbreaking iOS 4.0 is coming soon for Spirit. There are tutorials on jailbreaking using Spirit, like this one, but they generally skip over the fact that there's a Linux version, and only talk about Windows and/or OS X.

Jailbreaking the iPad

Jailbreaking the iPad is a very simple process, and you can now do it very quickly thanks to excellent device support and drivers in Ubuntu. Please note that from now on, you should only plug the device into iTunes 9 before 9.2, or better still, just use Rhythmbox or gtkpod to manage your library.

Install git if you haven't already got it:

sudo apt-get install git

Clone the Spirit repository:

git clone http://github.com/posixninja/spirit-linux.git

Install the dev package for libimobiledevice:

sudo apt-get install libimobiledevice-dev

Enter the Spirit directory and build the program:

cd spirit-linux
make

I've noticed that though Ubuntu has excellent Apple device support, and you can mount these devices just fine, the jailbreak program won't detect the device without iFuse. Install this first:

sudo apt-get install ifuse

Now for the fun! Plug in your iPad (you'll see it under the Places menu) and run the jailbreak:

./spirit

You'll see output similar to this:

INFO: Retriving device list
INFO: Opening device
INFO: Creating lockdownd client
INFO: Starting AFC service
INFO: Sending files via AFC.
INFO: Found version iPad1,1_3.2
INFO: Read igor/map.plist
INFO: Sending "install"
INFO: Sending "one.dylib"
INFO: Sending "freeze.tar.xz"
INFO: Sending "bg.jpg"
INFO: Sending files complete
INFO: Creating lockdownd client
INFO: Starting MobileBackup service
INFO: Beginning restore process
INFO: Read resources/overrides.plist
DEBUG: add_file
DEBUG: Data size 922:
DEBUG: add_file
DEBUG: Data size 0:
DEBUG: start_restore
DEBUG: Sending file
DEBUG: Sending file
INFO: Completed restore
INFO: Completed successfully

The device will reboot, and if all went well, you'll see a new app called Cydia on the home screen. This is the app that allows you to install other apps. Open Cydia. Cydia will ask you to choose what kind of user you are. There's no harm in choosing Developer; you'll just see more information. Also, if you choose the bottom level (User), console packages like OpenSSH will be hidden from you. You'll also receive some updates; install them. Interestingly, Cydia uses the deb package format, just like Ubuntu.

That's it! Wasn't that quick?


How to implement Internationalization and localization in your Node.js app

Packt Editorial Staff
07 May 2018
15 min read
Internationalization, often abbreviated as i18n, implies a particular software design capable of adapting to the requirements of target local markets. In other words, if we want to distribute our application to markets other than the USA, we need to take care of translations, formatting of datetime, numbers, addresses, and such. In today's post we will cover the concept of implementing internationalization and localization in a Node.js app and look at the context menu and system clipboard in detail.

Date format by country

Internationalization is a cross-cutting concern. When you are changing the locale it usually affects multiple modules, so I suggest going with the observer pattern that we already examined while working on DirService. The ./js/Service/I18n.js file contains the following code:

const EventEmitter = require("events");

class I18nService extends EventEmitter {
  constructor() {
    super();
    this.locale = "en-US";
  }
  notify() {
    this.emit("update");
  }
}

exports.I18nService = I18nService;

As you see, we can change the locale by setting a new value to the locale property. As soon as we call the notify method, all the subscribed modules immediately respond. But locale is a public property, and therefore we have no control over its access and mutation. We can fix that with a getter and a setter. The ./js/Service/I18n.js file contains the following code:

// ...
constructor() {
  super();
  this._locale = "en-US";
}
get locale() {
  return this._locale;
}
set locale(locale) {
  // validate locale...
  this._locale = locale;
}
// ...

Now if we access the locale property of an I18n instance, it gets delivered by the getter (get locale). When setting it a value, it goes through the setter (set locale). Thus we can add extra functionality, such as validation and logging, on property access and mutation.

Remember, we have a combobox in the HTML for selecting the language. Why not give it a view? The ./js/View/LangSelector.js file contains the following code:

class LangSelectorView {
  constructor(boundingEl, i18n) {
    boundingEl.addEventListener("change", this.onChanged.bind(this), false);
    this.i18n = i18n;
  }
  onChanged(e) {
    const selectEl = e.target;
    this.i18n.locale = selectEl.value;
    this.i18n.notify();
  }
}

exports.LangSelectorView = LangSelectorView;

In the preceding code, we listen for change events on the combobox. When the event occurs, we change the locale property of the passed-in I18n instance and call notify to inform the subscribers. The ./js/app.js file contains the following code:

const i18nService = new I18nService(),
  { LangSelectorView } = require("./js/View/LangSelector");

new LangSelectorView(document.querySelector("[data-bind=langSelector]"), i18nService);
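As a quick illustration of the observer flow described above, any module can subscribe to the service and react when the locale changes. This is a minimal sketch, not part of the original article; the subscriber here is just a console.log.

// Minimal usage sketch of the I18nService observer flow.
const { I18nService } = require("./js/Service/I18n");

const i18n = new I18nService();

// A consuming module subscribes to locale updates...
i18n.on("update", () => {
  console.log(`Locale changed to ${i18n.locale}`);
});

// ...and the selector view (or any other caller) switches the locale and notifies.
i18n.locale = "de-DE";
i18n.notify(); // -> "Locale changed to de-DE"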
Well, we can change the locale and trigger the event. What about the consuming modules? In the FileList view we have a static method formatTime that formats the passed-in timeString for printing. We can make it formatted in accordance with the currently chosen locale. The ./js/View/FileList.js file contains the following code:

constructor(boundingEl, dirService, i18nService) {
  // ...
  this.i18n = i18nService;
  // Subscribe on i18nService updates
  i18nService.on("update", () => this.update(dirService.getFileList()));
}
static formatTime(timeString, locale) {
  const date = new Date(Date.parse(timeString)),
    options = {
      year: "numeric", month: "numeric", day: "numeric",
      hour: "numeric", minute: "numeric", second: "numeric",
      hour12: false
    };
  return date.toLocaleString(locale, options);
}
update(collection) {
  // ...
  this.el.insertAdjacentHTML("beforeend", `<li class="file-listli" data-file="${fInfo.fileName}">
      <span class="file-listliname">${fInfo.fileName}</span>
      <span class="file-listlisize">${filesize(fInfo.stats.size)}</span>
      <span class="file-listlitime">${FileListView.formatTime(fInfo.stats.mtime, this.i18n.locale)}</span>
    </li>`);
  // ...
}
// ...

In the constructor, we subscribe for the I18n update event and update the file list every time the locale changes. The static method formatTime converts the passed-in string into a Date object and uses the Date.prototype.toLocaleString() method to format the datetime according to a given locale. This method belongs to the so-called ECMAScript Internationalization API (http://norbertlindenberg.com/2012/12/ecmascript-internationalization-api/index.html). The API describes methods of the built-in objects String, Date and Number designed to format and compare localized data. What it really does here is format a Date instance with toLocaleString for the English (United States) locale ("en-US"), and it returns the date as follows:

3/17/2017, 13:42:23

However, if we feed the method the German locale ("de-DE") we get quite a different result:

17.3.2017, 13:42:23

To put it into action we set an identifier on the combobox. The ./index.html file contains the following code:

..
<select class="footerselect" data-bind="langSelector">
..

And of course, we have to create an instance of the I18n service and pass it into LangSelectorView and FileListView. The ./js/app.js file contains the following code:

// ...
const { I18nService } = require("./js/Service/I18n"),
  { LangSelectorView } = require("./js/View/LangSelector"),
  i18nService = new I18nService();

new LangSelectorView(document.querySelector("[data-bind=langSelector]"), i18nService);
// ...
new FileListView(document.querySelector("[data-bind=fileList]"), dirService, i18nService);

Now we start the application. Yeah! As we change the language in the combobox, the file modification dates adjust accordingly.

Multilingual support

Localization of dates and numbers is a good thing, but it would be more exciting to provide translation to multiple languages. We have a number of terms across the application, namely the column titles of the file list and tooltips (via the title attribute) on the windowing action buttons. What we need is a dictionary. Normally it implies sets of token-translation pairs mapped to language codes or locales. Thus when you request a term from the translation service, it can correlate it to a matching translation according to the currently used language/locale. Here I suggest making the dictionary a static module that can be loaded with the require function. The ./js/Data/dictionary.js file contains the following code:

exports.dictionary = {
  "en-US": {
    NAME: "Name",
    SIZE: "Size",
    MODIFIED: "Modified",
    MINIMIZE_WIN: "Minimize window",
    RESTORE_WIN: "Restore window",
    MAXIMIZE_WIN: "Maximize window",
    CLOSE_WIN: "Close window"
  },
  "de-DE": {
    NAME: "Dateiname",
    SIZE: "Grösse",
    MODIFIED: "Geändert am",
    MINIMIZE_WIN: "Fenster minimieren",
    RESTORE_WIN: "Fenster wiederherstellen",
    MAXIMIZE_WIN: "Fenster maximieren",
    CLOSE_WIN: "Fenster schliessen"
  }
};

So we have two locales with translations per term. We are going to inject the dictionary as a dependency into our I18n service. The ./js/Service/I18n.js file contains the following code:

// ...
constructor(dictionary) {
  super();
  this.dictionary = dictionary;
  this._locale = "en-US";
}
translate(token, defaultValue) {
  const dictionary = this.dictionary[this._locale];
  return dictionary[token] || defaultValue;
}
// ...

We also added a new method, translate, that accepts two parameters: token and default translation.
The first parameter can be one of the keys from the dictionary like NAME. The second one is guarding value for the case when requested token does not yet exist in the dictionary. Thus we still get a meaningful text at least in English. Let's see how we can use this new method. The ./js/View/FileList.jsfile contains the following code: //... update(collection){ this.el.innerHTML=`<liclass="file-listlifile-listhead"> <spanclass="file-listliname">${this.i18n.translate("NAME","Name")}</span> <spanclass="file-listlisize">${this.i18n.translate("SIZE", "Size")}</span> <spanclass="file-listlitime">${this.i18n.translate("MODIFIED","Modified")}</span> </li>`; //... We change in FileListview hardcoded column titles with calls for translatemethod of I18ninstance, meaning that every time view updates it receives the actual translations. We shall not forget about TitleBarActionsview where we have windowing action buttons. The ./js/View/TitleBarActions.jsfile contains the following code: constructor(boundingEl,i18nService){ this.i18n=i18nService; //... //Subscribeoni18nServiceupdates i18nService.on("update",()=>this.translate()); } translate(){ this.unmaximizeEl.title=this.i18n.translate("RESTORE_WIN","Restorewindow"); this.maximizeEl.title=this.i18n.translate("MAXIMIZE_WIN","Maximizewindow"); this.minimizeEl.title=this.i18n.translate("MINIMIZE_WIN","Minimizewindow"); this.closeEl.title=this.i18n.translate("CLOSE_WIN","Closewindow"); } Here we add method translate, which updates button title attributes with actual translations. We subscribe for i18nupdate event to call the method every time user changes locale: Context menu Well, with our application we can already navigate through the file system and open files. Yet, one might expect more of a File Explorer. We can add some file related actions like delete, copy/paste. Usually these tasks are available via the context menu, what gives us a good opportunity to examine how to make it with NW.js. With the environment integration API we can create an instance of system menu (http://docs.nwjs.io/en/latest/References/Menu/). Then we compose objects representing menu items and attach them to the menu instance (http://docs.nwjs.io/en/latest/References/MenuItem/). This menucan be shown in an arbitrary position: constmenu=newnw.Menu(),menutItem=newnw.MenuItem({ label:"Sayhello", click:()=>console.log("hello!") }); menu.append(menu); menu.popup(10,10); Yet our task is more specific. We have to display the menu on the right mouse click in the position of the cursor. That is, we achieve by subscribing a handler to contextmenuDOM event: document.addEventListener("contextmenu",(e)=>{ console.log(`Showmenuinposition${e.x},${e.y}`); }); Now whenever we right-click within the application window the menu shows up. It's not exactly what we want, isn't it? We need it only when the cursor resides within a particular region. For an instance, when it hovers a file name. That means we have to test if the target element matches our conditions: document.addEventListener("contextmenu",(e)=>{ constel=e.target; if(elinstanceofHTMLElement&&el.parentNode.dataset.file){ console.log(`Showmenuinposition${e.x},${e.y}`); } }); Here we ignore the event until the cursor hovers any cell of file table row, given every row is a list item generated by FileListview and therefore provided with a value for data file attribute. This passage explains pretty much how to build a system menu and how to attach it to the file list. 
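Putting the two snippets above together - the nw.Menu construction and the contextmenu listener - a minimal sketch of a file-aware context menu could look like the following. The single "Open" item and its handler are placeholders; the article builds the real menu in the next steps.

// Minimal sketch: build and show a menu only when the cursor is over a file row.
document.addEventListener("contextmenu", (e) => {
  const el = e.target;
  if (!(el instanceof HTMLElement) || !el.parentNode.dataset.file) {
    return; // ignore right-clicks outside the file rows
  }
  e.preventDefault();
  const menu = new nw.Menu();
  menu.append(new nw.MenuItem({
    label: "Open",
    click: () => console.log(`Open ${el.parentNode.dataset.file}`)
  }));
  menu.popup(e.x, e.y);
});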
But before starting on a module capable of creating menu, we need a service to handle file operations. The ./js/Service/File.jsfile contains the following code: constfs=require("fs"), path=require("path"), //Copyfilehelper cp=(from,toDir,done)=>{ constbasename=path.basename(from),to=path.join(toDir,basename),write=fs.createWriteStream(to); fs.createReadStream(from) .pipe(write); write .on("finish",done); }; classFileService{ constructor(dirService){this.dir=dirService;this.copiedFile=null; } remove(file){ fs.unlinkSync(this.dir.getFile(file)); this.dir.notify(); } paste(){ constfile=this.copiedFile; if(fs.lstatSync(file).isFile()){ cp(file,this.dir.getDir(),()=>this.dir.notify()); } } copy(file){ this.copiedFile=this.dir.getFile(file); } open(file){ nw.Shell.openItem(this.dir.getFile(file)); } showInFolder(file){ nw.Shell.showItemInFolder(this.dir.getFile(file)); } }; exports.FileService=FileService; What's going on here? FileServicereceives an instance of DirServiceas a constructor argument. It uses the instance to obtain the full path to a file by name ( this.dir.getFile(file)). It also exploits notifymethod of the instance to request all the views subscribed to DirServiceto update. Method showInFoldercalls the corresponding method of nw.Shellto show the file in the parent folder with the system file manager. As you can recon method removedeletes the file. As for copy/paste we do the following trick. When user clicks copy we store the target file path in property copiedFile. So when user next time clicks paste we can use it to copy that file to the supposedly changed current location. Method openevidently opens file with the default associated program. That is what we do in FileListview directly. Actually this action belongs to FileService. So we rather refactor the view to use the service. The ./js/View/FileList.jsfile contains the following code: constructor(boundingEl,dirService,i18nService,fileService){ this.file=fileService; //... } bindUi(){ //... this.file.open(el.dataset.file); //... } Now we have a module to handle context menu for a selected file. The module will subscribe for contextmenuDOM event and build a menu when user right clicks on a file. This menu will contain items Show Item in the Folder, Copy, Paste, and Delete. Whereas copy and paste are separated from other items with delimiters. Besides, Paste will be disabled until we store a file with copy. Further goes the source code. 
The ./js/View/ContextMenu.jsfile contains the following code: classConextMenuView{ constructor(fileService,i18nService){ this.file=fileService;this.i18n=i18nService;this.attach(); } getItems(fileName){ constfile=this.file, isCopied=Boolean(file.copiedFile); return[ { label:this.i18n.translate("SHOW_FILE_IN_FOLDER","ShowItemintheFolder"), enabled:Boolean(fileName), click:()=>file.showInFolder(fileName) }, { type:"separator" }, { label:this.i18n.translate("COPY","Copy"),enabled:Boolean(fileName), click:()=>file.copy(fileName) }, { label:this.i18n.translate("PASTE","Paste"),enabled:isCopied, click:()=>file.paste() }, { type:"separator" }, { label:this.i18n.translate("DELETE","Delete"),enabled:Boolean(fileName), click:()=>file.remove(fileName) } ]; } render(fileName){ constmenu=newnw.Menu(); this.getItems(fileName).forEach((item)=>menu.append(newnw.MenuItem(item))); returnmenu; } attach(){ document.addEventListener("contextmenu",(e)=>{ constel=e.target; if(!(elinstanceofHTMLElement)){ return; } if(el.classList.contains("file-list")){ e.preventDefault(); this.render() .popup(e.x,e.y); } //Ifachildofanelementmatching[data-file] if(el.parentNode.dataset.file){ e.preventDefault(); this.render(el.parentNode.dataset.file) .popup(e.x,e.y); } }); } } exports.ConextMenuView=ConextMenuView; So in ConextMenuViewconstructor, we receive instances of FileServiceand I18nService. During the construction we also call attach method that subscribes for contextmenuDOM event, creates the menu and shows it in the position of the mouse cursor. The event gets ignored unless the cursor hovers a file or resides in empty area of the file list component. When user right clicks the file list, the menu still appears, but with all items disable except paste (in case a file was copied before). Method render create an instance of menu and populates it with nw.MenuItemscreated by getItemsmethod. The method creates an array representing menu items. Elements of the array are object literals. Property labelaccepts translation for item caption. Property enableddefines the state of item depending on our cases (whether we have copied file or not, whether the cursor on a file or not). Finally property clickexpects the handler for click event. Now we need to enable our new components in the main module. The ./js/app.jsfile contains the following code: const{FileService}=require("./js/Service/File"), {ConextMenuView}=require("./js/View/ConextMenu"),fileService=newFileService(dirService); newFileListView(document.querySelector("[data-bind=fileList]"),dirService,i18nService,fileService); newConextMenuView(fileService,i18nService); Let's now run the application, right-click on a file and voilà! We have the context menu and new file actions. System clipboard Usually Copy/Paste functionality involves system clipboard. NW.jsprovides an API to control it (http://docs.nwjs.io/en/latest/References/Clipboard/). Unfortunately it's quite limited, we cannot transfer an arbitrary file between applications, what you may expect of a file manager. Yet some things we are still available to us. Transferring text In order to examine text transferring with the clipboard we modify the method copy of FileService: copy(file){ this.copiedFile=this.dir.getFile(file);constclipboard=nw.Clipboard.get();clipboard.set(this.copiedFile,"text"); } What does it do? As soon as we obtained file full path, we create an instance of nw.Clipboardand save the file path there as a text. 
So now, after copying a file within the File Explorer, we can switch to an external program (for example, a text editor) and paste the copied path from the clipboard.

Transferring graphics

It doesn't look very handy, does it? It would be more interesting if we could copy/paste a file itself. Unfortunately, NW.js doesn't give us many options when it comes to file exchange. Yet we can transfer PNG and JPEG images between an NW.js application and external programs. The ./js/Service/File.js file contains the following code:

//...
copyImage(file, type) {
    const clip = nw.Clipboard.get(),
        // load file content as Base64
        data = fs.readFileSync(file).toString("base64"),
        // image as HTML
        html = `<img src="file:///${encodeURI(data.replace(/^\//, ""))}">`;
    // write both options (raw image and HTML) to the clipboard
    clip.set([
        { type, data: data, raw: true },
        { type: "html", data: html }
    ]);
}

copy(file) {
    this.copiedFile = this.dir.getFile(file);
    const ext = path.parse(this.copiedFile).ext.substr(1);
    switch (ext) {
        case "jpg":
        case "jpeg":
            return this.copyImage(this.copiedFile, "jpeg");
        case "png":
            return this.copyImage(this.copiedFile, "png");
    }
}
//...

We extended our FileService with the private method copyImage. It reads a given file, converts its contents to Base64, and passes the resulting code to a clipboard instance. In addition, it creates HTML with an image tag holding the Base64-encoded image in a data Uniform Resource Identifier (URI). Now, after copying an image (PNG or JPEG) in the File Explorer, we can paste it into an external program such as a graphical editor or a word processor.

Receiving text and graphics

We've learned how to pass text and graphics from our NW.js application to external programs. But how can we receive data from outside? As you can guess, it is accessible through the get method of nw.Clipboard. Text can be retrieved as simply as this:

const clip = nw.Clipboard.get();
console.log(clip.get("text"));

When a graphic is put in the clipboard, we can get it with NW.js only as Base64-encoded content or as HTML. To see it in practice, we add a few methods to FileService. The ./js/Service/File.js file contains the following code:

//...
hasImageInClipboard() {
    const clip = nw.Clipboard.get();
    return clip.readAvailableTypes().indexOf("png") !== -1;
}

pasteFromClipboard() {
    const clip = nw.Clipboard.get();
    if (this.hasImageInClipboard()) {
        const base64 = clip.get("png", true),
            binary = Buffer.from(base64, "base64"),
            filename = Date.now() + "--img.png";
        fs.writeFileSync(this.dir.getFile(filename), binary);
        this.dir.notify();
    }
}
//...

Method hasImageInClipboard checks whether the clipboard holds any graphics. Method pasteFromClipboard takes the graphical content from the clipboard as Base64-encoded PNG. It converts the content into binary code, writes it into a file, and requests the DirService subscribers to update. To make use of these methods, we need to edit the ContextMenu view. The ./js/View/ContextMenu.js file contains the following code:

getItems(fileName) {
    const file = this.file,
        isCopied = Boolean(file.copiedFile);
    return [
        //...
        {
            label: this.i18n.translate("PASTE_FROM_CLIPBOARD", "Paste image from clipboard"),
            enabled: file.hasImageInClipboard(),
            click: () => file.pasteFromClipboard()
        },
        //...
    ];
}

We add a new item to the menu, Paste image from clipboard, which is enabled only when there is a graphic in the clipboard.

[box type="shadow" align="" class="" width=""]This article is an excerpt from the book Cross Platform Desktop Application Development: Electron, Node, NW.js and React written by Dmitry Sheiko.
This book will help you build powerful cross-platform desktop applications with web technologies such as Node, NW.JS, Electron, and React.[/box] Read More: How to deploy a Node.js application to the web using Heroku How is Node.js Changing Web Development? With Node.js, it’s easy to get things done  
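The FileService and ContextMenuView snippets above lean on a DirService instance that this excerpt does not show. The following is a minimal, hypothetical stub, not the book's implementation, provided only so the snippets can be exercised in isolation; the method names getFile, getDir, and notify are the ones the excerpt assumes, and the on method is our own addition for subscribing views.

const path = require("path");

// Hypothetical stand-in for the book's DirService: it resolves file
// names against a current directory and lets views subscribe to
// change notifications.
class DirServiceStub {
    constructor(dir) {
        this.dir = dir;
        this.listeners = [];
    }
    getDir() {
        return this.dir;
    }
    getFile(name) {
        return path.join(this.dir, name);
    }
    on(listener) {
        this.listeners.push(listener);
    }
    notify() {
        this.listeners.forEach((listener) => listener());
    }
}

// Usage sketch: wire FileService to the stub and copy/paste a file.
// const { FileService } = require("./js/Service/File");
// const dirService = new DirServiceStub(process.cwd());
// const fileService = new FileService(dirService);
// fileService.copy("example.txt");
// fileService.paste();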

IBM WebSphere MQ commands

Packt
25 Aug 2010
10 min read
(For more resources on IBM, see here.) After reading this article, you will not be a WebSphere MQ expert, but you will have brought your knowledge of MQ to a level where you can have a sensible conversation with your site's MQ administrator about what the Q replication requirements are.

An introduction to MQ

In a nutshell, WebSphere MQ is an assured delivery mechanism, which consists of queues managed by Queue Managers. We can put messages onto, and retrieve messages from, queues, and the movement of messages between queues is facilitated by components called Channels and Transmission Queues. There are a number of fundamental points that we need to know about WebSphere MQ:

All objects in WebSphere MQ are case sensitive
We cannot read messages from a Remote Queue (only from a Local Queue)
We can only put a message onto a Local Queue (not a Remote Queue)

It does not matter at this stage if you do not understand the above points; all will become clear in the following sections. There are some standards regarding WebSphere MQ object names:

Queue names, processes, and Queue Manager names can have a maximum length of 48 characters
Channel names can have a maximum length of 20 characters
The following characters are allowed in names: A-Z, a-z, 0-9, and the . / _ % symbols
There is no implied structure in a name; dots are there for readability

Now let's move on to look at MQ queues in a little more detail.

MQ queues

MQ queues can be thought of as conduits to transport messages between Queue Managers. There are four different types of MQ queues and one related object. The four different types of queues are: Local Queue (QL), Remote Queue (QR), Transmission Queue (TQ), and Dead Letter Queue, and the related object is a Channel (CH). One of the fundamental processes of WebSphere MQ is the ability to move messages between Queue Managers. Let's take a high-level look at how messages are moved, as shown in the following diagram:

When the application Appl1 wants to send a message to application Appl2, it opens a queue - the local Queue Manager (QM1) determines whether it is a Local Queue or a Remote Queue. When Appl1 issues an MQPUT command to put a message onto the queue, then if the queue is local, the Queue Manager puts the message directly onto that queue. If the queue is a Remote Queue, then the Queue Manager puts the message onto a Transmission Queue. The Transmission Queue sends the message using the Sender Channel on QM1 to the Receiver Channel on the remote Queue Manager (QM2). The Receiver Channel puts the message onto a Local Queue on QM2. Appl2 issues an MQGET command to retrieve the message from this queue. Now let's move on to look at the queues used by Q replication and in particular, unidirectional replication, as shown in the following diagram. What we want to show here is the relationship between Remote Queues, Transmission Queues, Channels, and Local Queues. As an example, let's look at the path a message will take from Q Capture on QMA to Q Apply on QMB.

Note that in this diagram the Listeners are not shown. Q Capture puts the message onto a remotely-defined queue on QMA (the local Queue Manager for Q Capture). This Remote Queue (CAPA.TO.APPB.SENDQ.REMOTE) is effectively a "place holder" and points to a Local Queue (CAPA.TO.APPB.RECVQ) on QMB and specifies the Transmission Queue (QMB.XMITQ) it should use to get there. The Transmission Queue has, as part of its definition, the Channel (QMA.TO.QMB) to use.
The Channel QMA.TO.QMB has, as part of its definition, the IP address and Listening port number of the remote Queue Manager (note that we do not name the remote Queue Manager in this definition; it is specified in the definition for the Remote Queue). The definition for the unidirectional Replication Queue Map (the circled queue names) is:

SENDQ: CAPA.TO.APPB.SENDQ.REMOTE on the source
RECVQ: CAPA.TO.APPB.RECVQ on the target
ADMINQ: CAPA.ADMINQ.REMOTE on the target

Let's look at the Remote Queue definition for CAPA.TO.APPB.SENDQ.REMOTE, shown next. On the left-hand side are the definitions on QMA, which comprise the Remote Queue, the Transmission Queue, and the Channel definition. The definitions on QMB are on the right-hand side and comprise the Local Queue and the Receiver Channel. Let's break down these definitions to the core values to show the relationship between the different parameters. We define a Remote Queue by matching up the corresponding values in the definitions in the two Queue Managers (for definitions on QMA, QMA is the local system and QMB is the remote system; for definitions on QMB, QMB is the local system and QMA is the remote system). The values that have to match up are:

Remote Queue Manager name
Name of the queue on the remote system
Transmission Queue name
Port number that the remote system is listening on
The IP address of the Remote Queue Manager
Local Queue Manager name
Channel name

Queue names:
QMB: Decide on the Local Queue name on QMB: CAPA.TO.APPB.RECVQ.
QMA: Decide on the Remote Queue name pointing to QMB: CAPA.TO.APPB.SENDQ.REMOTE.

Channels:
QMB: Define a Receiver Channel on QMB, QMA.TO.QMB; make sure the channel type (CHLTYPE) is RCVR. The Channel names on QMA and QMB have to match: QMA.TO.QMB.
QMA: Define a Sender Channel, which takes the messages from the Transmission Queue QMB.XMITQ and which points to the IP address and Listening port number of QMB. The Sender Channel name must be QMA.TO.QMB.

(An illustrative MQSC sketch of these definitions appears at the end of this article.)

Let's move on from unidirectional replication to bidirectional replication. The bidirectional queue diagram is shown next, which is a cut-down version of the full diagram of the WebSphere MQ layer and just shows the queue names and types without the details. The principles in bidirectional replication are the same as for unidirectional replication. There are two Replication Queue Maps: one going from QMA to QMB (as in unidirectional replication) and one going from QMB to QMA.

MQ queue naming standards

The naming of the WebSphere MQ queues is an important part of Q replication setup. It may be that your site already has a naming standard for MQ queues, but if it does not, then here are some thoughts on the subject (WebSphere MQ naming standards were discussed in the An introduction to MQ section):

Queues are related to the Q Capture and Q Apply programs, so it would be useful to have that fact reflected in the name of the queues. A Q Capture needs a local Restart Queue and we use the name CAPA.RESTARTQ.
Each Queue Manager can have a Dead Letter Queue. We use the prefix DEAD.LETTER.QUEUE with a suffix of the Queue Manager name, giving DEAD.LETTER.QUEUE.QMA.
Receive Queues are related to Send Queues. For every Send Queue, we need a Receive Queue. Our Send Queue names are made up of where they are coming from, Q Capture on QMA (CAPA), and where they are going to, Q Apply on QMB (APPB), and we also want to record that it is a Send Queue and that it is a remote definition, so we end up with CAPA.TO.APPB.SENDQ.REMOTE. The corresponding Receive Queue will be called CAPA.TO.APPB.RECVQ.
Transmission Queues should reflect the name of the "to" Queue Manager. Our Transmission Queue on QMA is called QMB.XMITQ, reflecting the Queue Manager that it is going to and the fact that it is a Transmission Queue. Using this naming convention on QMB, the Transmission Queue is called QMA.XMITQ.
Channels should reflect the names of the "from" and "to" Queue Managers. Our Sender Channel definition on QMA is QMA.TO.QMB, reflecting that it is a channel from QMA to QMB, and the Receiver Channel on QMB is also called QMA.TO.QMB. The Receiver Channel on QMA is called QMB.TO.QMA, for a Sender Channel of the same name on QMB.
A Replication Queue Map definition requires a local Send Queue, a remote Receive Queue, and a remote Administration Queue. The Send Queue is the queue that Q Capture writes to, the Receive Queue is the queue that Q Apply reads from, and the Administration Queue is the queue that Q Apply uses to write messages back to Q Capture.

MQ queues required for different scenarios

This section lists the number of Local and Remote Queues and Channels that are needed for each type of replication scenario. The queues and channels required for unidirectional replication (including replicating to a Stored Procedure) and Event Publishing are shown in the following lists. Note that the queues and channels required for Event Publishing are a subset of those required for unidirectional replication, but creating extra queues and not using them is not a problem.

The queues and channels required for unidirectional replication (including replicating to a Stored Procedure) are:

QMA (7):
3 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA
1 Remote Queue: CAPA.TO.APPB.SENDQ.REMOTE
1 Transmission Queue: QMB.XMITQ
1 Sender Channel: QMA.TO.QMB
1 Receiver Channel: QMB.TO.QMA

QMB (7):
2 Local Queues: CAPA.TO.APPB.RECVQ, DEAD.LETTER.QUEUE.QMB
1 Remote Queue: CAPA.ADMINQ.REMOTE
1 Transmission Queue: QMA.XMITQ
1 Sender Channel: QMB.TO.QMA
1 Receiver Channel: QMA.TO.QMB
1 Model Queue: IBMQREP.SPILL.MODELQ

The queues and channels required for Event Publishing are:

QMA (7):
3 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA
1 Remote Queue: CAPA.TO.APPB.SENDQ.REMOTE
1 Transmission Queue: QMB.XMITQ
1 Sender Channel: QMA.TO.QMB
1 Receiver Channel: QMB.TO.QMA

QMB (3):
2 Local Queues: CAPA.TO.APPB.RECVQ, DEAD.LETTER.QUEUE.QMB
1 Receiver Channel: QMA.TO.QMB

The queues and channels required for bidirectional/P2P two-way replication are:

QMA (10):
4 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA, CAPB.TO.APPA.RECVQ
2 Remote Queues: CAPA.TO.APPB.SENDQ.REMOTE, CAPB.ADMINQ.REMOTE
1 Transmission Queue: QMB.XMITQ
1 Sender Channel: QMA.TO.QMB
1 Receiver Channel: QMB.TO.QMA
1 Model Queue: IBMQREP.SPILL.MODELQ

QMB (10):
4 Local Queues: CAPB.ADMINQ, CAPB.RESTARTQ, DEAD.LETTER.QUEUE.QMB, CAPA.TO.APPB.RECVQ
2 Remote Queues: CAPB.TO.APPA.SENDQ.REMOTE, CAPA.ADMINQ.REMOTE
1 Transmission Queue: QMA.XMITQ
1 Sender Channel: QMB.TO.QMA
1 Receiver Channel: QMA.TO.QMB
1 Model Queue: IBMQREP.SPILL.MODELQ

The queues and channels required for P2P three-way replication are:

QMA (16):
5 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA, CAPB.TO.APPA.RECVQ, CAPC.TO.APPA.RECVQ
4 Remote Queues: CAPA.TO.APPB.SENDQ.REMOTE, CAPB.ADMINQ.REMOTE, CAPA.TO.APPC.SENDQ.REMOTE, CAPC.ADMINQ.REMOTE
2 Transmission Queues: QMB.XMITQ, QMC.XMITQ
2 Sender Channels: QMA.TO.QMC, QMA.TO.QMB
2 Receiver Channels: QMC.TO.QMA, QMB.TO.QMA
1 Model Queue: IBMQREP.SPILL.MODELQ

QMB (16):
5 Local Queues: CAPB.ADMINQ, CAPB.RESTARTQ, DEAD.LETTER.QUEUE.QMB, CAPA.TO.APPB.RECVQ, CAPC.TO.APPB.RECVQ
4 Remote Queues: CAPB.TO.APPA.SENDQ.REMOTE, CAPA.ADMINQ.REMOTE, CAPB.TO.APPC.SENDQ.REMOTE, CAPC.ADMINQ.REMOTE
2 Transmission Queues: QMA.XMITQ, QMC.XMITQ
2 Sender Channels: QMB.TO.QMA, QMB.TO.QMC
2 Receiver Channels: QMA.TO.QMB, QMC.TO.QMB
1 Model Queue: IBMQREP.SPILL.MODELQ

QMC (16):
5 Local Queues: CAPC.ADMINQ, CAPC.RESTARTQ, DEAD.LETTER.QUEUE.QMC, CAPA.TO.APPC.RECVQ, CAPB.TO.APPC.RECVQ
4 Remote Queues: CAPC.TO.APPA.SENDQ.REMOTE, CAPA.ADMINQ.REMOTE, CAPC.TO.APPB.SENDQ.REMOTE, CAPB.ADMINQ.REMOTE
2 Transmission Queues: QMA.XMITQ, QMB.XMITQ
2 Sender Channels: QMC.TO.QMA, QMC.TO.QMB
2 Receiver Channels: QMA.TO.QMC, QMB.TO.QMC
1 Model Queue: IBMQREP.SPILL.MODELQ
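To make the earlier definitions concrete, here is an illustrative MQSC sketch (run with runmqsc) of how the core unidirectional objects on QMA and QMB might be defined. The IP addresses and listener port shown are placeholders of our own, not values from this article, and the Dead Letter Queue assignment, Model Queue, and Listener definitions are omitted for brevity; only the queue and channel names follow the naming convention described above.

* On QMA (runmqsc QMA) - placeholder CONNAME, substitute the address and port of QMB
DEFINE QLOCAL('CAPA.ADMINQ')
DEFINE QLOCAL('CAPA.RESTARTQ')
DEFINE QLOCAL('DEAD.LETTER.QUEUE.QMA')
DEFINE QLOCAL('QMB.XMITQ') USAGE(XMITQ)
DEFINE QREMOTE('CAPA.TO.APPB.SENDQ.REMOTE') RNAME('CAPA.TO.APPB.RECVQ') RQMNAME('QMB') XMITQ('QMB.XMITQ')
DEFINE CHANNEL('QMA.TO.QMB') CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('10.1.1.2(1414)') XMITQ('QMB.XMITQ')
DEFINE CHANNEL('QMB.TO.QMA') CHLTYPE(RCVR) TRPTYPE(TCP)

* On QMB (runmqsc QMB) - placeholder CONNAME, substitute the address and port of QMA
DEFINE QLOCAL('CAPA.TO.APPB.RECVQ')
DEFINE QLOCAL('DEAD.LETTER.QUEUE.QMB')
DEFINE QLOCAL('QMA.XMITQ') USAGE(XMITQ)
DEFINE QREMOTE('CAPA.ADMINQ.REMOTE') RNAME('CAPA.ADMINQ') RQMNAME('QMA') XMITQ('QMA.XMITQ')
DEFINE CHANNEL('QMB.TO.QMA') CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('10.1.1.1(1414)') XMITQ('QMA.XMITQ')
DEFINE CHANNEL('QMA.TO.QMB') CHLTYPE(RCVR) TRPTYPE(TCP)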

Exploring the Strategy Behavioral Design Pattern in Node.js

Expert Network
02 Jun 2021
10 min read
A design pattern is a reusable solution to a recurring problem. The term is really broad in its definition and can span multiple domains of an application. However, the term is often associated with a well-known set of object-oriented patterns that were popularized in the 90s by the book, Design Patterns: Elements of Reusable Object-Oriented Software, Pearson Education, by the almost legendary Gang of Four (GoF): Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. This article is an excerpt from the book Node.js Design Patterns, Third Edition by Mario Casciaro and Luciano Mammino – a comprehensive guide for learning proven patterns, techniques, and tricks to take full advantage of the Node.js platform. In this article, we'll look at the behavior of components in software design. We'll learn how to combine objects and how to define the way they communicate so that the behavior of the resulting structure becomes extensible, modular, reusable, and adaptable. After introducing all the behavioral design patterns, we will dive deep into the details of the strategy pattern. Now, it's time to roll up your sleeves and get your hands dirty with some behavioral design patterns.

Types of Behavioral Design Patterns

The Strategy pattern allows us to extract the common parts of a family of closely related components into a component called the context and allows us to define strategy objects that the context can use to implement specific behaviors. The State pattern is a variation of the Strategy pattern where the strategies are used to model the behavior of a component when under different states. The Template pattern, instead, can be considered the "static" version of the Strategy pattern, where the different specific behaviors are implemented as subclasses of the template class, which models the common parts of the algorithm. The Iterator pattern provides us with a common interface to iterate over a collection. It has now become a core pattern in Node.js. JavaScript offers native support for the pattern (with the iterator and iterable protocols). Iterators can be used as an alternative to complex async iteration patterns and even to Node.js streams. The Middleware pattern allows us to define a modular chain of processing steps. This is a very distinctive pattern born from within the Node.js ecosystem. It can be used to preprocess and postprocess data and requests. The Command pattern materializes the information required to execute a routine, allowing such information to be easily transferred, stored, and processed.

The Strategy Pattern

The Strategy pattern enables an object, called the context, to support variations in its logic by extracting the variable parts into separate, interchangeable objects called strategies. The context implements the common logic of a family of algorithms, while a strategy implements the mutable parts, allowing the context to adapt its behavior depending on different factors, such as an input value, a system configuration, or user preferences. Strategies are usually part of a family of solutions and all of them implement the same interface expected by the context. The following figure shows the situation we just described:

Figure 1: General structure of the Strategy pattern

Figure 1 shows you how the context object can plug different strategies into its structure as if they were replaceable parts of a piece of machinery. Imagine a car; its tires can be considered its strategy for adapting to different road conditions.
We can fit winter tires to go on snowy roads thanks to their studs, while we can decide to fit high-performance tires for traveling mainly on motorways for a long trip. On the one hand, we don't want to change the entire car for this to be possible, and on the other, we don't want a car with eight wheels so that it can go on every possible road. The Strategy pattern is particularly useful in all those situations where supporting variations in the behavior of a component requires complex conditional logic (lots of if...else or switch statements) or mixing different components of the same family. Imagine an object called Order that represents an online order on an e-commerce website. The object has a method called pay() that, as it says, finalizes the order and transfers the funds from the user to the online store. To support different payment systems, we have a couple of options:

Use an if...else statement in the pay() method to complete the operation based on the chosen payment option
Delegate the logic of the payment to a strategy object that implements the logic for the specific payment gateway selected by the user

In the first solution, our Order object cannot support other payment methods unless its code is modified. Also, this can become quite complex when the number of payment options grows. Instead, using the Strategy pattern enables the Order object to support a virtually unlimited number of payment methods and keeps its scope limited to only managing the details of the user, the purchased items, and the relative price while delegating the job of completing the payment to another object. Let's now demonstrate this pattern with a simple, realistic example.

Multi-format configuration objects

Let's consider an object called Config that holds a set of configuration parameters used by an application, such as the database URL, the listening port of the server, and so on. The Config object should be able to provide a simple interface to access these parameters, but also a way to import and export the configuration using persistent storage, such as a file. We want to be able to support different formats to store the configuration, for example, JSON, INI, or YAML. By applying what we learned about the Strategy pattern, we can immediately identify the variable part of the Config object, which is the functionality that allows us to serialize and deserialize the configuration. This is going to be our strategy.

Creating a new module

Let's create a new module called config.js, and let's define the generic part of our configuration manager:

import { promises as fs } from 'fs'
import objectPath from 'object-path'

export class Config {
  constructor (formatStrategy) {                           // (1)
    this.data = {}
    this.formatStrategy = formatStrategy
  }

  get (configPath) {                                       // (2)
    return objectPath.get(this.data, configPath)
  }

  set (configPath, value) {                                // (2)
    return objectPath.set(this.data, configPath, value)
  }

  async load (filePath) {                                  // (3)
    console.log(`Deserializing from ${filePath}`)
    this.data = this.formatStrategy.deserialize(
      await fs.readFile(filePath, 'utf-8')
    )
  }

  async save (filePath) {                                  // (3)
    console.log(`Serializing to ${filePath}`)
    await fs.writeFile(filePath,
      this.formatStrategy.serialize(this.data))
  }
}

This is what's happening in the preceding code:

In the constructor, we create an instance variable called data to hold the configuration data.
Then we also store formatStrategy, which represents the component that we will use to parse and serialize the data.
We provide two methods, set() and get(), to access the configuration properties using a dotted path notation (for example, property.subProperty) by leveraging a library called object-path (nodejsdp.link/object-path).
The load() and save() methods are where we delegate, respectively, the deserialization and serialization of the data to our strategy. This is where the logic of the Config class is altered based on the formatStrategy passed as an input in the constructor.

As we can see, this very simple and neat design allows the Config object to seamlessly support different file formats when loading and saving its data. The best part is that the logic to support those various formats is not hardcoded anywhere, so the Config class can adapt without any modification to virtually any file format, given the right strategy.

Creating format Strategies

To demonstrate this characteristic, let's now create a couple of format strategies in a file called strategies.js. Let's start with a strategy for parsing and serializing data using the INI file format, which is a widely used configuration format (more info about it here: nodejsdp.link/ini-format). For the task, we will use an npm package called ini (nodejsdp.link/ini):

import ini from 'ini'

export const iniStrategy = {
  deserialize: data => ini.parse(data),
  serialize: data => ini.stringify(data)
}

Nothing really complicated! Our strategy simply implements the agreed interface, so that it can be used by the Config object. Similarly, the next strategy that we are going to create allows us to support the JSON file format, widely used in JavaScript and in the web development ecosystem in general:

export const jsonStrategy = {
  deserialize: data => JSON.parse(data),
  serialize: data => JSON.stringify(data, null, '  ')
}

Now, to show you how everything comes together, let's create a file named index.js, and let's try to load and save a sample configuration using different formats:

import { Config } from './config.js'
import { jsonStrategy, iniStrategy } from './strategies.js'

async function main () {
  const iniConfig = new Config(iniStrategy)
  await iniConfig.load('samples/conf.ini')
  iniConfig.set('book.nodejs', 'design patterns')
  await iniConfig.save('samples/conf_mod.ini')

  const jsonConfig = new Config(jsonStrategy)
  await jsonConfig.load('samples/conf.json')
  jsonConfig.set('book.nodejs', 'design patterns')
  await jsonConfig.save('samples/conf_mod.json')
}

main()

Our test module reveals the core properties of the Strategy pattern. We defined only one Config class, which implements the common parts of our configuration manager, then, by using different strategies for serializing and deserializing data, we created different Config class instances supporting different file formats. The example we've just seen shows us only one of the possible alternatives that we had for selecting a strategy. Other valid approaches might have been the following:

Creating two different strategy families: One for the deserialization and the other for the serialization. This would have allowed reading from a format and saving to another.
Dynamically selecting the strategy: Depending on the extension of the file provided, the Config object could have maintained a map extension → strategy and used it to select the right algorithm for the given extension (a small sketch of this idea follows).
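As a rough illustration of that second alternative, the following sketch maps file extensions to the strategies defined above; the strategiesByExtension map and the createConfig helper are our own additions for illustration, not part of the book's example code.

import path from 'path'
import { Config } from './config.js'
import { jsonStrategy, iniStrategy } from './strategies.js'

// Map each supported file extension to the strategy that handles it.
const strategiesByExtension = {
  '.json': jsonStrategy,
  '.ini': iniStrategy
}

// Pick the right strategy based on the file extension and build a Config.
function createConfig (filePath) {
  const strategy = strategiesByExtension[path.extname(filePath)]
  if (!strategy) {
    throw new Error(`Unsupported configuration format: ${filePath}`)
  }
  return new Config(strategy)
}

// Usage sketch:
// const config = createConfig('samples/conf.json')
// await config.load('samples/conf.json')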
As we can see, we have several options for selecting the strategy to use, and the right one only depends on your requirements and the tradeoff in terms of features and the simplicity you want to obtain. Furthermore, the implementation of the pattern itself can vary a lot as well. For example, in its simplest form, the context and the strategy can both be simple functions: function context(strategy) {...} Even though this may seem insignificant, it should not be underestimated in a programming language such as JavaScript, where functions are first-class citizens and used as much as fully-fledged objects. Between all these variations, though, what does not change is the idea behind the pattern; as always, the implementation can slightly change but the core concepts that drive the pattern are always the same. Summary In this article, we dive deep into the details of the strategy pattern, one of the Behavioral Design Patterns in Node.js. Learn more in the book, Node.js Design Patterns, Third Edition by Mario Casciaro and Luciano Mammino. About the Authors Mario Casciaro is a software engineer and entrepreneur. Mario worked at IBM for a number of years, first in Rome, then in Dublin Software Lab. He currently splits his time between Var7 Technologies-his own software company-and his role as lead engineer at D4H Technologies where he creates software for emergency response teams. Luciano Mammino wrote his first line of code at the age of 12 on his father's old i386. Since then he has never stopped coding. He is currently working at FabFitFun as principal software engineer where he builds microservices to serve millions of users every day.

Configuring Manage Out to DirectAccess Clients

Packt
14 Oct 2013
14 min read
In this article by Jordan Krause, the author of the book Microsoft DirectAccess Best Practices and Troubleshooting, we will have a look at how Manage Out is configured to DirectAccess clients. DirectAccess is obviously a wonderful technology from the user's perspective. There is literally nothing that they have to do to connect to company resources; it just happens automatically whenever they have Internet access. What isn't talked about nearly as often is the fact that DirectAccess is possibly of even greater benefit to the IT department. Because DirectAccess is so seamless and automatic, your Group Policy settings, patches, scripts, and everything that you want to use to manage and manipulate those client machines is always able to run. You no longer have to wait for the user to launch a VPN or come into the office for their computer to be secured with the latest policies. You no longer have to worry about laptops being off the network for weeks at a time, and coming back into the network after having been connected to dozens of public hotspots while someone was on a vacation with it. While many of these management functions work right out of the box with a standard DirectAccess configuration, there are some functions that will need a couple of extra steps to get them working properly. That is our topic of discussion for this article. We are going to cover the following topics: Pulls versus pushes What does Manage Out have to do with IPv6 Creating a selective ISATAP environment Setting up client-side firewall rules RDP to a DirectAccess client No ISATAP with multisite DirectAccess (For more resources related to this topic, see here.) Pulls versus pushes Often when thinking about management functions, we think of them as the software or settings that are being pushed out to the client computers. This is actually not true in many cases. A lot of management tools are initiated on the client side, and so their method of distributing these settings and patches are actually client pulls. A pull is a request that has been initiated by the client, and in this case, the server is simply responding to that request. In the DirectAccess world, this kind of request is handled very differently than an actual push, which would be any case where the internal server or resource is creating the initial outbound communication with the client, a true outbound initiation of packets. Pulls typically work just fine over DirectAccess right away. For example, Group Policy processing is initiated by the client. When a laptop decides that it's time for a Group Policy refresh, it reaches out to Active Directory and says "Hey AD, give me my latest stuff". The Domain Controllers then simply reply to that request, and the settings are pulled down successfully. This works all day, every day over DirectAccess. Pushes, on the other hand, require some special considerations. This scenario is what we commonly refer to as DirectAccess Manage Out, and this does not work by default in a stock DirectAccess implementation. What does Manage Out have to do with IPv6? IPv6 is essentially the reason why Manage Out does not work until we make some additions to your network. "But DirectAccess in Server 2012 handles all of my IPv6 to IPv4 translations so that my internal network can be all IPv4, right?" The answer to that question is yes, but those translations only work in one direction. 
For example, in our previous Group Policy processing scenario, the client computer calls out for Active Directory, and those packets traverse the DirectAccess tunnels using IPv6. When the packets get to the DirectAccess server, it sees that the Domain Controller is IPv4, so it translates those IPv6 packets into IPv4 and sends them on their merry way. Domain Controller receives the said IPv4 packets, and responds in kind back to the DirectAccess server. Since there is now an active thread and translation running on the DirectAccess server for that communication, it knows that it has to take those IPv4 packets coming back (as a response) from the Domain Controller and spin them back into IPv6 before sending across the IPsec tunnels back to the client. This all works wonderfully and there is absolutely no configuration that you need to do to accomplish this behavior. However, what if you wanted to send packets out to a DirectAccess client computer from inside the network? One of the best examples of this is a Helpdesk computer trying to RDP into a DirectAccess-connected computer. To accomplish this, the Helpdesk computer needs to have IPv6 routability to the DirectAccess client computer. Let's walk through the flow of packets to see why this is necessary. First of all, if you have been using DirectAccess for any duration of time, you might have realized by now that the client computers register themselves in DNS with their DirectAccess IP addresses when they connect. This is a normal behavior, but what may not look "normal" to you is that those records they are registering are AAAA (IPv6) records. Remember, all DirectAccess traffic across the internet is IPv6, using one of these three transition technologies to carry the packets: 6to4, Teredo, or IP-HTTPS. Therefore, when the clients connect, whatever transition tunnel is established has an IPv6 address on the adapter (you can see it inside ipconfig /all on the client), and those addresses will register themselves in DNS, assuming your DNS is configured to allow it. When the Helpdesk personnel types CLIENT1 in their RDP client software and clicks on connect, it is going to reach out to DNS and ask, "What is the IP address of CLIENT1?" One of two things is going to happen. If that Helpdesk computer is connected to an IPv4-only network, it is obviously only capable of transmitting IPv4 packets, and DNS will hand them the DirectAccess client computer's A (IPv4) record, from the last time the client was connected inside the office. Routing will fail, of course, because CLIENT1 is no longer sitting on the physical network. The following screenshot is an example of pinging a DirectAccess connected client computer from an IPv4 network: How to resolve this behavior? We need to give that Helpdesk computer some form of IPv6 connectivity on the network. If you have a real, native IPv6 network running already, you can simply tap into it. Each of your internal machines that need this outbound routing, as well as the DirectAccess server or servers, all need to be connected to this network for it to work. However, I find out in the field that almost nobody is running any kind of IPv6 on their internal networks, and they really aren't interested in starting now. This is where the Intra-Site Automatic Tunnel Addressing Protocol, more commonly referred to as ISATAP, comes into play. You can think of ISATAP as a virtual IPv6 cloud that runs on top of your existing IPv4 network. 
It enables computers inside your network, like that Helpdesk machine, to be able to establish an IPv6 connection with an ISATAP router. When this happens, that Helpdesk machine will get a new network adapter, visible via ipconfig / all, named ISATAP, and it will have an IPv6 address. Yes, this does mean that the Helpdesk computer, or any machine that needs outbound communications to the DirectAccess clients, has to be capable of talking IPv6, so this typically means that those machines must be Windows 7, Windows 8, Server 2008, or Server 2012. What if your switches and routers are not capable of IPv6? No problem. Similar to the way that 6to4, Teredo, and IP-HTTPS take IPv6 packets and wrap them inside IPv4 so they can make their way across the IPv4 internet, ISATAP also takes IPv6 packets and encapsulates them inside IPv4 before sending them across the physical network. This means that you can establish this ISATAP IPv6 network on top of your IPv4 network, without needing to make any infrastructure changes at all. So, now I need to go purchase an ISATAP router to make this happen? No, this is the best part. Your DirectAccess server is already an ISATAP router; you simply need to point those internal machines at it. Creating a selective ISATAP environment All of the Windows operating systems over the past few years have ISATAP client functionality built right in. This has been the case since Vista, I believe, but I have yet to encounter anyone using Vista in a corporate environment, so for the sake of our discussion, we are generally talking about Windows 7, Windows 8, Server 2008, and Server 2012. For any of these operating systems, out of the box all you have to do is give it somewhere to resolve the name ISATAP, and it will go ahead and set itself up with a connection to that ISATAP router. So, if you wanted to immediately enable all of your internal machines that were ISATAP capable to suddenly be ISATAP connected, all you would have to do is create a single host record in DNS named ISATAP and point it at the internal IP address of your DirectAccess server. To get that to work properly, you would also have to tweak DNS so that ISATAP was no longer part of the global query block list, but I'm not even going to detail that process because my emphasis here is that you should not set up your environment this way. Unfortunately, some of the step-by-step guides that are available on the web for setting up DirectAccess include this step. Even more unfortunately, if you have ever worked with UAG DirectAccess, you'll remember on the IP address configuration screen that the GUI actually told you to go ahead and set ISATAP up this way. Please do not create a DNS host record named ISATAP! If you have already done so, please consider this article to be a guide on your way out of danger. The primary reason why you should stay away from doing this is because Windows prefers IPv6 over IPv4. Once a computer is setup with connection to an ISATAP router, it receives an IPv6 address which registers itself in DNS, and from that point onward whenever two ISATAP machines communicate with each other, they are using IPv6 over the ISATAP tunnel. This is potentially problematic for a couple of reasons. First, all ISATAP traffic default routes through the ISATAP router for which it is configured, so your DirectAccess server is now essentially the default gateway for all of these internal computers. This can cause performance problems and even network flooding. 
The second reason is that because these packets are now IPv6, even though they are wrapped up inside IPv4, the tools you have inside the network that you might be using to monitor internal traffic are not going to be able to see this traffic, at least not in the same capacity as it would do for normal IPv4 traffic. It is in your best interests that you do not follow this global approach for implementing ISATAP, and instead take the slightly longer road and create what I call a "Selective ISATAP environment", where you have complete control over which machines are connected to the ISATAP network, and which ones are not. Many DirectAccess installs don't require ISATAP at all. Remember, this is only used for those instances where you need true outbound reach to the DirectAccess clients. I recommend installing DirectAccess without ISATAP first, and test all of your management tools. If they work without ISATAP, great! If they don't, then you can create your selective ISATAP environment. To set ourselves up for success, we need to create a simple Active Directory security group, and a GPO. The combination of these things is going to allow us to decisively grant or deny access to the ISATAP routing features on the DirectAccess server. Creating a security group and DNS record First, let's create a new security group in Active Directory. This is just a normal group like any other, typically a global or universal, whichever you prefer. This group is going to contain the computer accounts of the internal computers to which we want to give that outbound reaching capability. So, typically the computer accounts that we will eventually add into this group are Helpdesk computers, SCCM servers, maybe a management "jump box" terminal server, that kind of thing. To keep things intuitive, let's name the group DirectAccess – ISATAP computers or something similar. We will also need a DNS host record created. For obvious reasons we don't want to call this ISATAP, but perhaps something such as Contoso_ISATAP, swapping your name in, of course. This is just a regular DNS A record, and you want to point it at the internal IPv4 address of the DirectAccess server. If you are running a clustered array of DirectAccess servers that are configured for load balancing, then you will need multiple DNS records. All of the records have the same name,Contoso_ISATAP, and you point them at each internal IP address being used by the cluster. So, one gets pointed at the internal Virtual IP (VIP), and one gets pointed at each of the internal Dedicated IPs (DIP). In a two-node cluster, you will have three DNS records for Contoso_ISATAP. Creating the GPO Now go ahead and follow these steps to create a new GPO that is going to contain the ISATAP connection settings: Create a new GPO, name it something such as DirectAccess – ISATAP settings. Place the GPO wherever it makes sense to you, keeping in mind that these settings should only apply to the ISATAP group that we created. One way to manage the distribution of these settings is a strategically placed link. Probably the better way to handle distribution of these settings is to reflect the same approach that the DirectAccess GPOs themselves take. This would mean linking this new GPO at a high level, like the top of the domain, and then using security filtering inside the GPO settings so that it only applies to the group that we just created. This way you ensure that in the end the only computers to receive these settings are the ones that you add to your ISATAP group. 
I typically take the Security Filtering approach, because it closely reflects what DirectAccess itself does with GPO filtering. So, create and link the GPO at a high level, and then inside the GPO properties, go ahead and add the group (and only the group, remove everything else) to the Security Filtering section, like what is shown in the following screenshot: Then move over to the Details tab and set the GPO Status to User configuration settings disabled. Configuring the GPO Now that we have a GPO which is being applied only to our special ISATAP group that we created, let's give it some settings to apply. What we are doing with this GPO is configuring those computers which we want to be ISATAP-connected with the ISATAP server address with which they need to communicate , which is the DNS name that we created for ISATAP. First, edit the GPO and set the ISATAP Router Name by configuring the following setting: Computer Configuration | Policies | Administrative Templates | Network | TCPIP Settings | IPv6 Transition Technologies | ISATAP Router Name = Enabled (and populate your DNS record). Second, in the same location within the GPO, we want to enable your ISATAP state with the following configuration: Computer Configuration | Policies | Administrative Templates | Network | TCPIP Settings | IPv6 Transition Technologies | ISATAP State = Enabled State. Adding machines to the group All of our settings are now squared away, time to test! Start by taking a single computer account of a computer inside your network from which you want to be able to reach out to DirectAccess client computers, and add the computer account to the group that we created. Perhaps pause for a few minutes to ensure Active Directory has a chance to replicate, and then simply restart that computer. The next time it boots, it'll grab those settings from the GPO, and reach out to the DirectAccess server acting as the ISATAP router, and have an ISATAP IPv6 connection. If you generate an ipconfig /all on that internal machine, you will see the ISATAP adapter and address now listed as shown in the following screenshot: And now if you try to ping the name of a client computer that is connected via DirectAccess, you will see that it now resolves to the IPv6 record. Perfect! But wait, why are my pings timing out? Let's explore that next.

WebSockets in Wildfly

Packt
30 Dec 2014
22 min read
In this article by the authors, Michał Ćmil and Michał Matłoka, of Java EE 7 Development with WildFly, we will cover WebSockets and how they are one of the biggest additions in Java EE 7. In this article, we will explore the new possibilities that they provide to a developer. In our ticket booking applications, we already used a wide variety of approaches to inform the clients about events occurring on the server side. These include the following: JSF polling Java Messaging Service (JMS) messages REST requests Remote EJB requests All of them, besides JMS, were based on the assumption that the client will be responsible for asking the server about the state of the application. In some cases, such as checking if someone else has not booked a ticket during our interaction with the application, this is a wasteful strategy; the server is in the position to inform clients when it is needed. What's more, it feels like the developer must hack the HTTP protocol to get a notification from a server to the client. This is a requirement that has to be implemented in most nontrivial web applications, and therefore, deserves a standardized solution that can be applied by the developers in multiple projects without much effort. WebSockets are changing the game for developers. They replace the request-response paradigm in which the client always initiates the communication with a two-point bidirectional messaging system. After the initial connection, both sides can send independent messages to each other as long as the session is alive. This means that we can easily create web applications that will automatically refresh their state with up-to-date data from the server. You probably have already seen this kind of behavior in Google Docs or live broadcasts on news sites. Now we can achieve the same effect in a simpler and more efficient way than in earlier versions of Java Enterprise Edition. In this article, we will try to leverage these new, exciting features that come with WebSockets in Java EE 7 thanks to JSR 356 (https://jcp.org/en/jsr/detail?id=356) and HTML5. In this article, you will learn the following topics: How WebSockets work How to create a WebSocket endpoint in Java EE 7 How to create an HTML5/AngularJS client that will accept push notifications from an application deployed on WildFly (For more resources related to this topic, see here.) An overview of WebSockets A WebSocket session between the client and server is built upon a standard TCP connection. Although the WebSocket protocol has its own control frames (mainly to create and sustain the connection) coded by the Internet Engineering Task Force in the RFC 6455 (http://tools.ietf.org/html/rfc6455), whose peers are not obliged to use any specific format to exchange application data. You may use plaintext, XML, JSON, or anything else to transmit your data. As you probably remember, this is quite different from SOAP-based WebServices, which had bloated specifications of the exchange protocol. The same goes for RESTful architectures; we no longer have the predefined verb methods from HTTP (GET, PUT, POST, and DELETE), status codes, and the whole semantics of an HTTP request. This liberty means that WebSockets are pretty low level compared to the technologies that we used up to this point, but thanks to this, the communication overhead is minimal. The protocol is less verbose than SOAP or RESTful HTTP, which allows us to achieve higher performance. This, however, comes with a price. 
We usually like to use the features of higher-level protocols (such as horizontal scaling and rich URL semantics), and with WebSockets, we would need to write them by hand. For standard CRUD-like operations, it would be easier to use a REST endpoint than create everything from scratch. What do we get from WebSockets compared to the standard HTTP communication? First of all, a direct connection between two peers. Normally, when you connect to a web server (which can, for instance, handle a REST endpoint), every subsequent call is a new TCP connection, and your machine is treated like it is a different one every time you make a request. You can, of course, simulate a stateful behavior (so that the server would recognize your machine between different requests) using cookies and increase the performance by reusing the same connection in a short period of time for a specific client, but basically, it is a workaround to overcome the limitations of the HTTP protocol. Once you establish a WebSocket connection between a server and client, you can use the same session (and underlying TCP connection) during the whole communication. Both sides are aware of it, and can send data independently in a full-duplex manner (both sides can send and receive data simultaneously). Using plain HTTP, there is no way for the server to spontaneously start sending data to the client without any request from its side. What's more, the server is aware of all of its WebSocket clients connected, and can even send data between them! The current solution that includes trying to simulate real-time data delivery using HTTP protocol can put a lot of stress on the web server. Polling (asking the server about updates), long polling (delaying the completion of a request to the moment when an update is ready), and streaming (a Comet-based solution with a constantly open HTTP response) are all ways to hack the protocol to do things that it wasn't designed for and have their own limitations. Thanks to the elimination of unnecessary checks, WebSockets can heavily reduce the number of HTTP requests that have to be handled by the web server. The updates are delivered to the user with a smaller latency because we only need one round-trip through the network to get the desired information (it is pushed by the server immediately). All of these features make WebSockets a great addition to the Java EE platform, which fills the gaps needed to easily finish specific tasks, such as sending updates, notifications, and orchestrating multiple client interactions. Despite these advantages, WebSockets are not intended to replace REST or SOAP WebServices. They do not scale so well horizontally (they are hard to distribute because of their stateful nature), and they lack most of the features that are utilized in web applications. URL semantics, complex security, compression, and many other features are still better realized using other technologies. How does WebSockets work To initiate a WebSocket session, the client must send an HTTP request with an upgraded, WebSocket header field. This informs the server that the peer client has asked the server to switch to the WebSocket protocol. You may notice that the same happens in WildFly for Remote EJBs; the initial connection is made using an HTTP request, and is later switched to the remote protocol thanks to the Upgrade mechanism. The standard Upgrade header field can be used to handle any protocol, other than HTTP, which is accepted by both sides (the client and server). 
In WildFly, this allows to reuse the HTTP port (80/8080) for other protocols and therefore, minimise the number of required ports that should be configured. If the server can understand the WebSocket protocol, the client and server then proceed with the handshaking phase. They negotiate the version of the protocol, exchange security keys, and if everything goes well, the peers can go to the data transfer phase. From now on, the communication is only done using the WebSocket protocol. It is not possible to exchange any HTTP frames using the current connection. The whole life cycle of a connection can be summarized in the following diagram: A sample HTTP request from a JavaScript application to a WildFly server would look similar to this: GET /ticket-agency-websockets/tickets HTTP/1.1 Upgrade: websocket Connection: Upgrade Host: localhost:8080 Origin: http://localhost:8080 Pragma: no-cache Cache-Control: no-cache Sec-WebSocket-Key: TrjgyVjzLK4Lt5s8GzlFhA== Sec-WebSocket-Version: 13 Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits, x-webkit-deflate-frame User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36 Cookie: [45 bytes were stripped] We can see that the client requests an upgrade connection with WebSocket as the target protocol on the URL /ticket-agency-websockets/tickets. It additionally passes information about the requested version and key. If the server supports the request protocol and all the required data is passed by the client, then it would respond with the following frame: HTTP/1.1 101 Switching Protocols X-Powered-By: Undertow 1 Server: Wildfly 8 Origin: http://localhost:8080 Upgrade: WebSocket Sec-WebSocket-Accept: ZEAab1TcSQCmv8RsLHg4RL/TpHw= Date: Sun, 13 Apr 2014 17:04:00 GMT Connection: Upgrade Sec-WebSocket-Location: ws://localhost:8080/ticket-agency-websockets/tickets Content-Length: 0 The status code of the response is 101 (switching protocols) and we can see that the server is now going to start using the WebSocket protocol. The TCP connection initially used for the HTTP request is now the base of the WebSocket session and can be used for transmissions. If the client tries to access a URL, which is only handled by another protocol, then the server can ask the client to do an upgrade request. The server uses the 426 (upgrade required) status code in such cases. The initial connection creation has some overhead (because of the HTTP frames that are exchanged between the peers), but after it is completed, new messages have only 2 bytes of additional headers. This means that when we have a large number of small messages, WebSocket will be an order of magnitude faster than REST protocols simply because there is less data to transmit! If you are wondering about the browser support of WebSockets, you can look it up at http://caniuse.com/websockets. All new versions of major browsers currently support WebSockets; the total coverage is estimated (at the time of writing) at 74 percent. You can see it in the following screenshot: After this theoretical introduction, we are ready to jump into action. We can now create our first WebSocket endpoint! 
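Before writing any server-side code, note that you can observe the handshake described above directly from a browser's developer console. The following plain JavaScript sketch targets the /hello endpoint that we create in the next section (which replies with "Hi!"); open the Network tab while running it to see the Upgrade request and the WebSocket frames.

// Open a WebSocket to the endpoint built in the next section;
// the browser performs the HTTP Upgrade handshake for us.
var socket = new WebSocket("ws://localhost:8080/ticket-agency-websockets/hello");

socket.onopen = function () {
    console.log("Connection established");
};

socket.onmessage = function (event) {
    console.log("Received: " + event.data); // expect "Hi!" from HelloEndpoint
};

socket.onclose = function (event) {
    console.log("Connection closed, code: " + event.code);
};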
Creating our first endpoint Let's start with a simple example: package com.packtpub.wflydevelopment.chapter8.boundary; import javax.websocket.EndpointConfig; import javax.websocket.OnOpen; import javax.websocket.Session; import javax.websocket.server.ServerEndpoint; import java.io.IOException; @ServerEndpoint("/hello") public class HelloEndpoint {    @OnOpen    public void open(Session session, EndpointConfig conf) throws IOException {        session.getBasicRemote().sendText("Hi!");    } } Java EE 7 specification has taken into account developer friendliness, which can be clearly seen in the given example. In order to define your WebSocket endpoint, you just need a few annotations on a Plain Old Java Object (POJO). The first POJO @ServerEndpoint("/hello") defines a path to your endpoint. It's a good time to discuss the endpoint's full address. We placed this sample in the application named ticket-agency-websockets. During the deployment of application, you can spot information in the WildFly log about endpoints creation, as shown in the following command line: 02:21:35,182 INFO [io.undertow.websockets.jsr] (MSC service thread 1-7)UT026003: Adding annotated server endpoint class com.packtpub.wflydevelopment.chapter8.boundary.FirstEndpoint for path /hello 02:21:35,401 INFO [org.jboss.resteasy.spi.ResteasyDeployment](MSC service thread 1-7) Deploying javax.ws.rs.core.Application: classcom.packtpub.wflydevelopment.chapter8.webservice.JaxRsActivator$Proxy$_$$_WeldClientProxy 02:21:35,437 INFO [org.wildfly.extension.undertow](MSC service thread 1-7) JBAS017534: Registered web context:/ticket-agency-websockets The full URL of the endpoint is ws://localhost:8080/ticket-agency-websockets/hello, which is just a concatenation of the server and application address with an endpoint path on an appropriate protocol. The second used annotation @OnOpen defines the endpoint behavior when the connection from the client is opened. It's not the only behavior-related annotation of the WebSocket endpoint. Let's look to the following table: Annotation Description @OnOpen Connection is open. With this annotation, we can use the Session and EndpointConfig parameters. The first parameter represents the connection to the user and allows further communication. The second one provides some client-related information. @OnMessage This annotation is executed when a message from the client is being received. In such a method, you can just have Session and for example, the String parameter, where the String parameter represents the received message. @OnError There are bad times when some errors occur. With this annotation, you can retrieve a Throwable object apart from standard Session. @OnClose When the connection is closed, it is possible to get some data concerning this event in the form of the CloseReason type object. There is one more interesting line in our HelloEndpoint. Using the Session object, it is possible to communicate with the client. This clearly shows that in WebSockets, two-directional communication is easily possible. In this example, we decided to respond to a connected user synchronously (getBasicRemote()) with just a text message Hi! (sendText (String)). Of course, it's also possible to communicate asynchronously and send, for example, sending binary messages using your own binary bandwidth saving protocol. We will present some of these processes in the next example. Expanding our client application It's time to show how you can leverage the WebSocket features in real life. 
Expanding our client application

It's time to show how you can leverage the WebSocket features in real life. We created the ticket booking application based on the REST API and the AngularJS framework. It was clearly missing one important feature: the application did not show information concerning ticket purchases made by other users. This is a perfect use case for WebSockets! Since we're just adding a feature to our previous app, we will describe the changes we will introduce to it.

In this example, we would like to be able to inform all current users about other purchases. This means that we have to store information about active sessions. Let's start with the registry type object, which will serve this purpose. We can use a Singleton session bean for this task, as shown in the following code:

@Singleton
public class SessionRegistry {

    private final Set<Session> sessions = new HashSet<>();

    @Lock(LockType.READ)
    public Set<Session> getAll() {
        return Collections.unmodifiableSet(sessions);
    }

    @Lock(LockType.WRITE)
    public void add(Session session) {
        sessions.add(session);
    }

    @Lock(LockType.WRITE)
    public void remove(Session session) {
        sessions.remove(session);
    }
}

We could use Collections.synchronizedSet from the standard Java libraries, but it's a great chance to remember what we described earlier about container-based concurrency. In SessionRegistry, we defined some basic methods to add, get, and remove sessions. For the sake of collection thread safety during retrieval, we return an unmodifiable view.

We defined the registry, so now we can move to the endpoint definition. We will need a POJO, which will use our newly defined registry, as shown:

@ServerEndpoint("/tickets")
public class TicketEndpoint {

    @Inject
    private SessionRegistry sessionRegistry;

    @OnOpen
    public void open(Session session, EndpointConfig conf) {
        sessionRegistry.add(session);
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        sessionRegistry.remove(session);
    }

    public void send(@Observes Seat seat) {
        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendText(toJson(seat)));
    }

    private String toJson(Seat seat) {
        final JsonObject jsonObject = Json.createObjectBuilder()
                .add("id", seat.getId())
                .add("booked", seat.isBooked())
                .build();
        return jsonObject.toString();
    }
}

Our endpoint is defined at the /tickets address. We injected the SessionRegistry into our endpoint. During @OnOpen, we add sessions to the registry, and during @OnClose, we just remove them. Message sending is performed on a CDI event (the @Observes annotation), which is already fired in our code during TheatreBox.buyTicket(int). In our send method, we retrieve all sessions from the SessionRegistry, and for each of them, we asynchronously send information about the booked seat. We don't really need information about all the Seat fields to realize this feature. That's the reason why we don't use automatic JSON serialization here. Instead, we decided to use a minimalistic JSON object, which provides only the required data. To do this, we used the new Java API for JSON Processing (JSR-353). Using a fluent-like API, we're able to create a JSON object and add two fields to it. Then, we just convert the JSON object to a String, which is sent in a text message.

Because in our example we send messages in response to a CDI event, we don't have (in the event handler) an out-of-the-box reference to any of the sessions. We have to use our sessionRegistry object to access the active ones. However, if we would like to do the same thing but, for example, in the @OnMessage method, then it is possible to get all active sessions just by executing the session.getOpenSessions() method.
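The send method above reacts to a Seat event that, as mentioned, is already fired inside TheatreBox.buyTicket(int). For readers who do not have that class at hand, the firing side boils down to something like the following sketch; the in-memory seat map and the setBooked call are illustrative assumptions rather than the book's exact implementation:

import java.util.HashMap;
import java.util.Map;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;
import javax.enterprise.event.Event;
import javax.inject.Inject;

@Singleton
public class TheatreBox {

    private final Map<Integer, Seat> seats = new HashMap<>(); // assumption: seats kept in memory

    @Inject
    private Event<Seat> seatEvent; // CDI event source for Seat payloads

    @Lock(LockType.WRITE)
    public void buyTicket(int seatId) {
        Seat seat = seats.get(seatId);
        seat.setBooked(true); // assumption: Seat exposes a setter for the booked flag
        // notifies every @Observes Seat method, including TicketEndpoint.send
        seatEvent.fire(seat);
    }
}

Because the observer in TicketEndpoint is a regular CDI method, no WebSocket-specific code leaks into the business logic; the theatre bean simply fires an event and the container takes care of the rest.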
These are all the changes required on the backend side. Now, we have to modify our AngularJS frontend to leverage the added feature. The good news is that JavaScript in the browser already includes the WebSocket API, which we can use directly. These are the few lines of code we have to add inside the module defined in the seat.js file:

var ws = new WebSocket("ws://localhost:8080/ticket-agency-websockets/tickets");

ws.onmessage = function (message) {
    var receivedData = message.data;
    var bookedSeat = JSON.parse(receivedData);
    $scope.$apply(function () {
        for (var i = 0; i < $scope.seats.length; i++) {
            if ($scope.seats[i].id === bookedSeat.id) {
                $scope.seats[i].booked = bookedSeat.booked;
                break;
            }
        }
    });
};

The code is very simple. We just create the WebSocket object using the URL of our endpoint, and then we define the onmessage function on that object. When the function executes, the received message is parsed from JSON into a JavaScript object. Then, in $scope.$apply, we just iterate through our seats, and if the ID matches, we update the booked state. We have to use $scope.$apply because we are touching an Angular object from outside the Angular world (the onmessage function). Modifications performed on $scope.seats are automatically visible on the website. With this, we can just open our ticket booking website in two browser sessions and see that when one user buys a ticket, the second user sees almost instantly that the seat's state has changed to booked.

We can enhance our application a little to inform users whether the WebSocket connection is really working. Let's just define onopen and onclose functions for this purpose:

ws.onopen = function (event) {
    $scope.$apply(function () {
        $scope.alerts.push({
            type: 'info',
            msg: 'Push connection from server is working'
        });
    });
};

ws.onclose = function (event) {
    $scope.$apply(function () {
        $scope.alerts.push({
            type: 'warning',
            msg: 'Error on push connection from server'
        });
    });
};

To inform users about the connection's state, we push different types of alerts. Of course, again we're touching the Angular world from the outside, so we have to perform all operations on Angular from the $scope.$apply function. Running the described code results in the notification visible in the following screenshot. However, if the server fails after opening the website, you might get an error as shown in the following screenshot:
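To make the page recover from the failure described above, the connection logic can be wrapped in a small retry loop. This is an optional sketch layered on top of the previous code, not part of the original example; the five-second delay is an arbitrary choice:

function connect() {
    var ws = new WebSocket("ws://localhost:8080/ticket-agency-websockets/tickets");

    ws.onmessage = function (message) {
        var bookedSeat = JSON.parse(message.data);
        $scope.$apply(function () {
            for (var i = 0; i < $scope.seats.length; i++) {
                if ($scope.seats[i].id === bookedSeat.id) {
                    $scope.seats[i].booked = bookedSeat.booked;
                    break;
                }
            }
        });
    };

    ws.onclose = function () {
        // try to re-establish the push channel after five seconds
        setTimeout(connect, 5000);
    };
}

connect();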
Transforming POJOs to JSON

In our current example, we transformed our Seat object to JSON manually. Normally, we don't want to do it this way; there are many libraries that will do the transformation for us. One of them is GSON from Google. Additionally, we can register an encoder/decoder class for a WebSocket endpoint that will do the transformation automatically. Let's look at how we can refactor our current solution to use an encoder.

First of all, we must add GSON to our classpath. The required Maven dependency is as follows:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.3</version>
</dependency>

Next, we need to provide an implementation of the javax.websocket.Encoder.Text interface. There are also variants of the javax.websocket.Encoder interface for binary and streamed data (for both binary and text formats). A corresponding hierarchy of interfaces is also available for decoders (javax.websocket.Decoder). Our implementation is rather simple. This is shown in the following code snippet:

public class JSONEncoder implements Encoder.Text<Object> {

    private Gson gson;

    @Override
    public void init(EndpointConfig config) {
        gson = new Gson(); [1]
    }

    @Override
    public void destroy() {
        // do nothing
    }

    @Override
    public String encode(Object object) throws EncodeException {
        return gson.toJson(object); [2]
    }
}

First, we create an instance of Gson in the init method; this action will be executed when the endpoint is created. Next, in the encode method, which is called every time we send an object through the endpoint, we use Gson to create JSON from the object. This is quite concise when we think how reusable this little class is. If you want more control over the JSON generation process, you can use the GsonBuilder class to configure the Gson object before it is created. We have the encoder in place. Now it's time to alter our endpoint:

@ServerEndpoint(value = "/tickets", encoders={JSONEncoder.class})[1]
public class TicketEndpoint {

    @Inject
    private SessionRegistry sessionRegistry;

    @OnOpen
    public void open(Session session, EndpointConfig conf) {
        sessionRegistry.add(session);
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        sessionRegistry.remove(session);
    }

    public void send(@Observes Seat seat) {
        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendObject(seat)); [2]
    }
}

The first change is done on the @ServerEndpoint annotation. We have to define a list of supported encoders; we simply pass our JSONEncoder.class wrapped in an array. Additionally, we have to pass the endpoint path using the value attribute. Earlier, we used the sendText method to pass a string containing manually created JSON. Now, we want to send an object and let the encoder handle the JSON generation; therefore, we use the getAsyncRemote().sendObject() method. That's all! Our endpoint is ready to be used. It will work the same as the earlier version, but now our objects will be fully serialized to JSON, so they will contain every field, not only id and booked.

After deploying the server, you can connect to the WebSocket endpoint using one of the Chrome extensions, for instance, the Dark WebSocket terminal from the Chrome store (use the ws://localhost:8080/ticket-agency-websockets/tickets address). When you book tickets using the web application, the WebSocket terminal should show something similar to the output shown in the following screenshot:

Of course, it is possible to use formats other than JSON. If you want to achieve better performance (when it comes to serialization time and payload size), you may want to try out binary serializers such as Kryo (https://github.com/EsotericSoftware/kryo). They may not be supported by JavaScript, but they may come in handy if you would like to use WebSockets for other clients too.
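To complement the encoder, here is a minimal sketch of what a matching text decoder (from the javax.websocket.Decoder hierarchy mentioned above) could look like. The Seat type is the same one used throughout this example; everything else is our own illustration:

import javax.websocket.DecodeException;
import javax.websocket.Decoder;
import javax.websocket.EndpointConfig;
import com.google.gson.Gson;

public class JSONDecoder implements Decoder.Text<Seat> {

    private Gson gson;

    @Override
    public void init(EndpointConfig config) {
        gson = new Gson();
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }

    @Override
    public boolean willDecode(String s) {
        // a very permissive check; decoding is attempted for any non-empty payload
        return s != null && !s.isEmpty();
    }

    @Override
    public Seat decode(String s) throws DecodeException {
        return gson.fromJson(s, Seat.class);
    }
}

It would be registered alongside the encoder, for example with @ServerEndpoint(value = "/tickets", encoders = {JSONEncoder.class}, decoders = {JSONDecoder.class}), after which an @OnMessage method can declare a Seat parameter instead of a raw String.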
Tyrus (https://tyrus.java.net/) is a reference implementation of the WebSocket standard for Java; you can use it in your standalone desktop applications. In that case, besides the encoder (which is used to send messages), you would also need to create a decoder, which can automatically transform incoming messages. An alternative to WebSockets The example we presented in this article is possible to be implemented using an older, lesser-known technology named Server-Sent Events (SSE). SSE allows for one-way communication from the server to client over HTTP. It is much simpler than WebSockets but has a built-in support for things such as automatic reconnection and event identifiers. WebSockets are definitely more powerful, but are not the only way to pass events, so when you need to implement some notifications from the server side, remember about SSE. Another option is to explore the mechanisms oriented around the Comet techniques. Multiple implementations are available and most of them use different methods of transportation to achieve their goals. A comprehensive comparison is available at http://cometdaily.com/maturity.html. Summary In this article, we managed to introduce the new low-level type of communication. We presented how it works underneath and compares to SOAP and REST introduced earlier. We also discussed how the new approach changes the development of web applications. Our ticket booking application was further enhanced to show users the changing state of the seats using push-like notifications. The new additions required very little code changes in our existing project when we take into account how much we are able to achieve with them. The fluent integration of WebSockets from Java EE 7 with the AngularJS application is another great showcase of flexibility, which comes with the new version of the Java EE platform. Resources for Article: Further resources on this subject: Various subsystem configurations [Article] Running our first web application [Article] Creating Java EE Applications [Article]

API Gateway and its Need

Packt
21 Feb 2018
9 min read
 In this article by Umesh R Sharma, author of the book Practical Microservices, we will cover API Gateway and its need with simple and short examples. (For more resources related to this topic, see here.) Dynamic websites show a lot on a single page, and there is a lot of information that needs to be shown on the page. The common success order summary page shows the cart detail and customer address. For this, frontend has to fire a different query to the customer detail service and order detail service. This is a very simple example of having multiple services on a single page. As a single microservice has to deal with only one concern, in result of that to show much information on page, there are many API calls on the same page. So, a website or mobile page can be very chatty in terms of displaying data on the same page. Another problem is that, sometimes, microservice talks on another protocol, then HTTP only, such as thrift call and so on. Outer consumers can't directly deal with microservice in that protocol. As a mobile screen is smaller than a web page, the result of the data required by the mobile or desktop API call is different. A developer would want to give less data to the mobile API or have different versions of the API calls for mobile and desktop. So, you could face a problem such as this: each client is calling different web services and keeping track of their web service and developers have to give backward compatibility because API URLs are embedded in clients like in mobile app. Why do we need the API Gateway? All these preceding problems can be addressed with the API Gateway in place. The API Gateway acts as a proxy between the API consumer and the API servers. To address the first problem in that scenario, there will only be one call, such as /successOrderSummary, to the API Gateway. The API Gateway, on behalf of the consumer, calls the order and user detail, then combines the result and serves to the client. So basically, it acts as a facade or API call, which may internally call many APIs. The API Gateway solves many purposes, some of which are as follows. Authentication API Gateways can take the overhead of authenticating an API call from outside. After that, all the internal calls remove security check. If the request comes from inside the VPC, it can remove the check of security, decrease the network latency a bit, and make the developer focus more on business logic than concerning about security. Different protocol Sometimes, microservice can internally use different protocols to talk to each other; it can be thrift call, TCP, UDP, RMI, SOAP, and so on. For clients, there can be only one rest-based HTTP call. Clients hit the API Gateway with the HTTP protocol and the API Gateway can make the internal call in required protocol and combine the results in the end from all web service. It can respond to the client in required protocol; in most of the cases, that protocol will be HTTP. Load-balancing The API Gateway can work as a load balancer to handle requests in the most efficient manner. It can keep a track of the request load it has sent to different nodes of a particular service. Gateway should be intelligent enough to load balances between different nodes of a particular service. With NGINX Plus coming into the picture, NGINX can be a good candidate for the API Gateway. It has many of the features to address the problem that is usually handled by the API Gateway. 
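To make the facade idea from the beginning of this section concrete, the following sketch shows a gateway endpoint that fans out to two internal services and merges their responses into the single /successOrderSummary call mentioned earlier. The service URLs, the use of Spring's RestTemplate, and the untyped payloads are illustrative assumptions, not a prescription:

import java.util.HashMap;
import java.util.Map;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class OrderSummaryController {

    private final RestTemplate restTemplate = new RestTemplate();

    @RequestMapping("/successOrderSummary")
    public Map<String, Object> successOrderSummary(@RequestParam("orderId") String orderId,
                                                   @RequestParam("userId") String userId) {
        // one external call fans out into two internal calls
        Object order = restTemplate.getForObject("http://order-service/orders/" + orderId, Object.class);
        Object customer = restTemplate.getForObject("http://customer-service/customers/" + userId, Object.class);

        // combine both results into a single payload for the client
        Map<String, Object> summary = new HashMap<>();
        summary.put("order", order);
        summary.put("customer", customer);
        return summary;
    }
}

In a real gateway, the two calls would typically be issued in parallel and protected by a circuit breaker, which is exactly what the following sections discuss.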
Request dispatching (including service discovery) One main feature of the gateway is to make less communication between client and microservcies. So, it initiates the parallel microservices if that is required by the client. From the client side, there will only be one hit. Gateway hits all the required services and waits for the results from all services. After obtaining the response from all the services, it combines the result and sends it back to the client. Reactive microservice designs can help you achieve this. Working with service discovery can give many extra features. It can mention which is the master node of service and which is the slave. Same goes for DB in case any write request can go to the master or read request can go to the slave. This is the basic rule, but users can apply so many rules on the basis of meta information provided by the API Gateway. Gateway can record the basic response time from each node of service instance. For higher priority API calls, it can be routed to the fastest responding node. Again, rules can be defined on the basis of the API Gateway you are using and how it will be implemented. Response transformation Being a first and single point of entry for all API calls, the API Gateway knows which type of client is calling a mobile, web client, or other external consumer; it can make the internal call to the client and give the data to different clients as per needs and configuration. Circuit breaker To handle the partial failure, the API Gateway uses a technique called circuit breaker pattern. A service failure in one service can cause the cascading failure in the flow to all the service calls in stack. The API Gateway can keep an eye on some threshold for any microservice. If any service passes that threshold, it marks that API as open circuit and decides not to make the call for a configured time. Hystrix (by Netflix) served this purpose efficiently. Default value in this is failing of 20 requests in 5 seconds. Developers can also mention the fall back for this open circuit. This fall back can be of dummy service. Once API starts giving results as expected, then gateway marks it as a closed service again. Pros and cons of API Gateway Using the API Gateway itself has its own pros and cons. In the previous section, we have described the advantages of using the API Gateway already. I will still try to make them in points as the pros of the API Gateway. Pros Microservice can focus on business logic Clients can get all the data in a single hit Authentication, logging, and monitoring can be handled by the API Gateway Gives flexibility to use completely independent protocols in which clients and microservice can talk It can give tailor-made results, as per the clients needs It can handle partial failure Addition to the preceding mentioned pros, some of the trade-offs are also to use this pattern. Cons It can cause performance degrade due to lots of happenings on the API Gateway With this, discovery service should be implemented Sometimes, it becomes the single point of failure Managing routing is an overhead of the pattern Adding additional network hope in the call Overall. it increases the complexity of the system Too much logic implementation in this gateway will lead to another dependency problem So, before using the API Gateway, both of the aspects should be considered. Decision of including the API Gateway in the system increases the cost as well. 
Before putting effort, cost, and management into this pattern, it is recommended to analyze how much you can gain from it.

Example of API Gateway

In this example, we will show only a sample product page that will fetch data from a product detail service to give information about the product. This example can be extended in many aspects. Our focus in this example is only to show how the API Gateway pattern works, so we will try to keep this example simple and small. This example will be using Zuul from Netflix as the API Gateway. Spring also has an integration for Zuul, so we are creating this example with Spring Boot. For a sample API Gateway implementation, we will be using http://start.spring.io/ to generate an initial template of our code. Spring Initializr is the project from Spring that helps beginners generate basic Spring Boot code. A user has to set a minimum configuration and can hit the Generate Project button. If any user wants to set more specific details regarding the project, then they can see all the configuration settings by clicking on the Switch to the full version button, as shown in the following screenshot:

Let's create a controller in the same package as the main application class and put the following code in the file:

@SpringBootApplication
@RestController
public class ProductDetailController {

    @Resource
    ProductDetailService pdService;

    @RequestMapping(value = "/product/{id}")
    public ProductDetail getAllProduct(@PathVariable("id") String id) {
        return pdService.getProductDetailById(id);
    }
}

In the preceding code, there is an assumption that the pdService bean will interact with a Spring Data repository for product details and get the result for the required product ID. Another assumption is that this service is running on port 10000. Just to make sure everything is running, a hit on a URL such as http://localhost:10000/product/1 should give some JSON as a response.

For the API Gateway, we will create another Spring Boot application with Zuul support. Zuul can be activated by just adding a simple @EnableZuulProxy annotation. The following is simple code to start the Zuul proxy:

@SpringBootApplication
@EnableZuulProxy
public class ApiGatewayExampleInSpring {
    public static void main(String[] args) {
        SpringApplication.run(ApiGatewayExampleInSpring.class, args);
    }
}

All the rest is managed in configuration. In the application.properties file of the API Gateway, the content will be something as follows:

zuul.routes.product.path=/product/**
zuul.routes.product.url=http://localhost:10000
ribbon.eureka.enabled=false
server.port=8080

With this configuration, we are defining a rule such as this: for any request for a URL such as /product/xxx, pass the request on to http://localhost:10000. To the outside world, it will look like http://localhost:8080/product/1, which will internally be transferred to port 10000. If we defined a spring.application.name variable as product in the product detail microservice, then we would not need to define the URL path property here (zuul.routes.product.path=/product/**), as Zuul, by default, will map the service to the /product URL. The example taken here for an API Gateway is not very intelligent, but it is a very capable API Gateway. Depending on the routes, filters, and caching defined in Zuul's properties, one can make a very powerful API Gateway.
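As an example of the filters just mentioned, the following sketch adds a simple pre-filter that logs every request passing through the gateway. This class is our own illustration on top of Spring Cloud's Zuul support, not part of the original example; registering it only requires declaring it as a Spring bean, for instance with @Component:

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import javax.servlet.http.HttpServletRequest;
import org.springframework.stereotype.Component;

@Component
public class LoggingPreFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre"; // run before the request is routed to the target service
    }

    @Override
    public int filterOrder() {
        return 1;
    }

    @Override
    public boolean shouldFilter() {
        return true; // apply to every request
    }

    @Override
    public Object run() {
        HttpServletRequest request = RequestContext.getCurrentContext().getRequest();
        System.out.println(request.getMethod() + " request to " + request.getRequestURL());
        return null; // the return value is ignored by Zuul
    }
}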
Summary

In this article, you learned about the API Gateway, its need, and its pros and cons with the code example.

Resources for Article: Further resources on this subject: What are Microservices? [article] Microservices and Service Oriented Architecture [article] Breaking into Microservices Architecture [article]

CORS in Node.js

Packt
20 Jun 2017
14 min read
In this article by Randall Goya, and Rajesh Gunasundaram the author of the book CORS Essentials, Node.js is a cross-platform JavaScript runtime environment that executes JavaScript code at server side. This enables to have a unified language across the web application development. JavaScript becomes the unified language that runs both on client side and server side. (For more resources related to this topic, see here.) In this article we will learn about: Node.js is a JavaScript platform for developing server-side web applications. Node.js can provide the web server for other frameworks including Express.js, AngularJS, Backbone,js, Ember.js and others. Some other JavaScript frameworks such as ReactJS, Ember.js and Socket.IO may also use Node.js as the web server. Isomorphic JavaScript can add server-side functionality for client-side frameworks. JavaScript frameworks are evolving rapidly. This article reviews some of the current techniques, and syntax specific for some frameworks. Make sure to check the documentation for the project to discover the latest techniques. Understanding CORS concepts, you may create your own solution, because JavaScript is a loosely structured language. All the examples are based on the fundamentals of CORS, with allowed origin(s), methods, and headers such as Content-Type, or preflight, that may be required according to the CORS specification. JavaScript frameworks are very popular JavaScript is sometimes called the lingua franca of the Internet, because it is cross-platform and supported by many devices. It is also a loosely-structured language, which makes it possible to craft solutions for many types of applications. Sometimes an entire application is built in JavaScript. Frequently JavaScript provides a client-side front-end for applications built with Symfony, Content Management Systems such as Drupal, and other back-end frameworks. Node.js is server-side JavaScript and provides a web server as an alternative to Apache, IIS, Nginx and other traditional web servers. Introduction to Node.js Node.js is an open-source and cross-platform library that enables in developing server-side web applications. Applications will be written using JavaScript in Node.js can run on many operating systems, including OS X, Microsoft Windows, Linux, and many others. Node.js provides a non-blocking I/O and an event-driven architecture designed to optimize an application's performance and scalability for real-time web applications. The biggest difference between PHP and Node.js is that PHP is a blocking language, where commands execute only after the previous command has completed, while Node.js is a non-blocking language where commands execute in parallel, and use callbacks to signal completion. Node.js can move files, payloads from services, and data asynchronously, without waiting for some command to complete, which improves performance. Most JS frameworks that work with Node.js use the concept of routes to manage pages and other parts of the application. Each route may have its own set of configurations. For example, CORS may be enabled only for a specific page or route. Node.js loads modules for extending functionality via the npm package manager. The developer selects which packages to load with npm, which reduces bloat. The developer community creates a large number of npm packages created for specific functions. JXcore is a fork of Node.js targeting mobile devices and IoTs (Internet of Things devices). 
JXcore can use both Google V8 and Mozilla SpiderMonkey as its JavaScript engine. JXcore can run Node applications on iOS devices using Mozilla SpiderMonkey. MEAN is a popular JavaScript software stack with MongoDB (a NoSQL database), Express.js and AngularJS, all of which run on a Node.js server. JavaScript frameworks that work with Node.js Node.js provides a server for other popular JS frameworks, including AngularJS, Express.js. Backbone.js, Socket.IO, and Connect.js. ReactJS was designed to run in the client browser, but it is often combined with a Node.js server. As we shall see in the following descriptions, these frameworks are not necessarily exclusive, and are often combined in applications. Express.js is a Node.js server framework Express.js is a Node.js web application server framework, designed for building single-page, multi-page, and hybrid web applications. It is considered the "standard" server framework for Node.js. The package is installed with the command npm install express –save. AngularJS extends static HTML with dynamic views HTML was designed for static content, not for dynamic views. AngularJS extends HTML syntax with custom tag attributes. It provides model–view–controller (MVC) and model–view–viewmodel (MVVM) architectures in a front-end client-side framework.  AngularJS is often combined with a Node.js server and other JS frameworks. AngularJS runs client-side and Express.js runs on the server, therefore Express.js is considered more secure for functions such as validating user input, which can be tampered client-side. AngularJS applications can use the Express.js framework to connect to databases, for example in the MEAN stack. Connect.js provides middleware for Node.js requests Connect.js is a JavaScript framework providing middleware to handle requests in Node.js applications. Connect.js provides middleware to handle Express.js and cookie sessions, to provide parsers for the HTML body and cookies, and to create vhosts (virtual hosts) and error handlers, and to override methods. Backbone.js often uses a Node.js server Backbone.js is a JavaScript framework with a RESTful JSON interface and is based on the model–view–presenter (MVP) application design. It is designed for developing single-page web applications, and for keeping various parts of web applications (for example, multiple clients and the server) synchronized. Backbone depends on Underscore.js, plus jQuery for use of all the available fetures. Backbone often uses a Node.js server, for example to connect to data storage. ReactJS handles user interfaces ReactJS is a JavaScript library for creating user interfaces while addressing challenges encountered in developing single-page applications where data changes over time. React handles the user interface in model–view–controller (MVC) architecture. ReactJS typically runs client-side and can be combined with AngularJS. Although ReactJS was designed to run client-side, it can also be used server-side in conjunction with Node.js. PayPal and Netflix leverage the server-side rendering of ReactJS known as Isomorphic ReactJS. There are React-based add-ons that take care of the server-side parts of a web application. Socket.IO uses WebSockets for realtime event-driven applications Socket.IO is a JavaScript library for event-driven web applications using the WebSocket protocol ,with realtime, bi-directional communication between web clients and servers. It has two parts: a client-side library that runs in the browser, and a server-side library for Node.js. 
Although it can be used as simply a wrapper for WebSocket, it provides many more features, including broadcasting to multiple sockets, storing data associated with each client, and asynchronous I/O. Socket.IO provides better security than WebSocket alone, since allowed domains must be specified for its server. Ember.js can use Node.js Ember is another popular JavaScript framework with routing that uses Moustache templates. It can run on a Node.js server, or also with Express.js. Ember can also be combined with Rack, a component of Ruby On Rails (ROR). Ember Data is a library for  modeling data in Ember.js applications. CORS in Express.js The following code adds the Access-Control-Allow-Origin and Access-Control-Allow-Headers headers globally to all requests on all routes in an Express.js application. A route is a path in the Express.js application, for example /user for a user page. app.all sets the configuration for all routes in the application. Specific HTTP requests such as GET or POST are handled by app.get and app.post. app.all('*', function(req, res, next) { res.header("Access-Control-Allow-Origin", "*"); res.header("Access-Control-Allow-Headers", "X-Requested-With"); next(); }); app.get('/', function(req, res, next) { // Handle GET for this route }); app.post('/', function(req, res, next) { // Handle the POST for this route }); For better security, consider limiting the allowed origin to a single domain, or adding some additional code to validate or limit the domain(s) that are allowed. Also, consider limiting sending the headers only for routes that require CORS by replacing app.all with a more specific route and method. The following code only sends the CORS headers on a GET request on the route/user, and only allows the request from http://www.localdomain.com. app.get('/user', function(req, res, next) { res.header("Access-Control-Allow-Origin", "http://www.localdomain.com"); res.header("Access-Control-Allow-Headers", "X-Requested-With"); next(); }); Since this is JavaScript code, you may dynamically manage the values of routes, methods, and domains via variables, instead of hard-coding the values. CORS npm for Express.js using Connect.js middleware Connect.js provides middleware to handle requests in Express.js. You can use Node Package Manager (npm) to install a package that enables CORS in Express.js with Connect.js: npm install cors The package offers flexible options, which should be familiar from the CORS specification, including using credentials and preflight. It provides dynamic ways to validate an origin domain using a function or a regular expression, and handler functions to process preflight. Configuration options for CORS npm origin: Configures the Access-Control-Allow-Origin CORS header with a string containing the full URL and protocol making the request, for example http://localdomain.com. Possible values for origin: Default value TRUE uses req.header('Origin') to determine the origin and CORS is enabled. When set to FALSE CORS is disabled. It can be set to a function with the request origin as the first parameter and a callback function as the second parameter. It can be a regular expression, for example /localdomain.com$/, or an array of regular expressions and/or strings to match. methods: Sets the Access-Control-Allow-Methods CORS header. Possible values for methods: A comma-delimited string of HTTP methods, for example GET, POST An array of HTTP methods, for example ['GET', 'PUT', 'POST'] allowedHeaders: Sets the Access-Control-Allow-Headers CORS header. 
Possible values for allowedHeaders:
A comma-delimited string of allowed headers, for example 'Content-Type, Authorization'
An array of allowed headers, for example ['Content-Type', 'Authorization']
If unspecified, it defaults to the value specified in the request's Access-Control-Request-Headers header

exposedHeaders: Sets the Access-Control-Expose-Headers header. Possible values for exposedHeaders:
A comma-delimited string of exposed headers, for example 'Content-Range, X-Content-Range'
An array of exposed headers, for example ['Content-Range', 'X-Content-Range']
If unspecified, no custom headers are exposed

credentials: Sets the Access-Control-Allow-Credentials CORS header. Possible values for credentials:
TRUE - the header is passed
FALSE or unspecified - the header is omitted

maxAge: Sets the Access-Control-Max-Age header. Possible values for maxAge:
An integer value in seconds for how long the preflight response may be cached
If unspecified, the request is not cached

preflightContinue: Passes the CORS preflight response to the next handler.

The default configuration without setting any values allows all origins and methods without preflight. Keep in mind that complex CORS requests other than GET, HEAD, POST will fail without preflight, so make sure you enable preflight in the configuration when using them. Without setting any values, the configuration defaults to:

{
  "origin": "*",
  "methods": "GET,HEAD,PUT,PATCH,POST,DELETE",
  "preflightContinue": false
}

Code examples for CORS npm

These examples demonstrate the flexibility of CORS npm for specific configurations. Note that the express and cors packages are always required.

Enable CORS globally for all origins and all routes

The simplest implementation of CORS npm enables CORS for all origins and all requests. The following example enables CORS for an arbitrary route "/product/:id" for a GET request by telling the entire app to use CORS for all routes:

var express = require('express')
  , cors = require('cors')
  , app = express();

app.use(cors()); // this tells the app to use CORS for all requests and all routes

app.get('/product/:id', function(req, res, next){
  res.json({msg: 'CORS is enabled for all origins'});
});

app.listen(80, function(){
  console.log('CORS is enabled on the web server listening on port 80');
});

Allow CORS for dynamic origins for a specific route

The following example uses corsOptions to check whether the domain making the request is in the whitelist array, using a callback function that passes false when it doesn't find a match. This CORS option is passed to the route "/product/:id", which is the only route that has CORS enabled. The allowed origins can be dynamic by changing the value of the variable "whitelist."
var express = require('express')
  , cors = require('cors')
  , app = express();

// define the whitelisted domains and set the CORS options to check them
var whitelist = ['http://localdomain.com', 'http://localdomain-other.com'];
var corsOptions = {
  origin: function(origin, callback){
    var originWhitelisted = whitelist.indexOf(origin) !== -1;
    callback(null, originWhitelisted);
  }
};

// add the CORS options to a specific route /product/:id for a GET request
app.get('/product/:id', cors(corsOptions), function(req, res, next){
  res.json({msg: 'A whitelisted domain matches and CORS is enabled for route product/:id'});
});

// log that CORS is enabled on the server
app.listen(80, function(){
  console.log('CORS is enabled on the web server listening on port 80');
});

You may set different CORS options for specific routes, or sets of routes, by defining the options assigned to unique variable names, for example "corsUserOptions." Pass the specific configuration variable to each route that requires that set of options.

Enabling CORS preflight

CORS requests that use an HTTP method other than GET, HEAD, POST (for example DELETE), or that use custom headers, are considered complex and require a preflight request before proceeding with the CORS requests. Enable preflight by adding an OPTIONS handler for the route:

var express = require('express')
  , cors = require('cors')
  , app = express();

// add the OPTIONS handler
app.options('/products/:id', cors()); // options is added to the route /products/:id

// use the OPTIONS handler for the DELETE method on the route /products/:id
app.del('/products/:id', cors(), function(req, res, next){
  res.json({msg: 'CORS is enabled with preflight on the route /products/:id for the DELETE method for all origins!'});
});

app.listen(80, function(){
  console.log('CORS is enabled on the web server listening on port 80');
});

You can enable preflight globally on all routes with the wildcard:

app.options('*', cors());

Configuring CORS asynchronously

One of the reasons to use Node.js frameworks is to take advantage of their asynchronous abilities, handling multiple tasks at the same time. Here we use a callback function, corsDelegateOptions, and add it to the cors parameter passed to the route /products/:id. The callback function can handle multiple requests asynchronously.

var express = require('express')
  , cors = require('cors')
  , app = express();

// define the allowed origins stored in a variable
var whitelist = ['http://example1.com', 'http://example2.com'];

// create the callback function
var corsDelegateOptions = function(req, callback){
  var corsOptions;
  if(whitelist.indexOf(req.header('Origin')) !== -1){
    corsOptions = { origin: true }; // the requested origin matches and is allowed
  }else{
    corsOptions = { origin: false }; // the requested origin doesn't match, and CORS is disabled for this request
  }
  callback(null, corsOptions); // callback expects two parameters: error and options
};

// add the callback function to the cors parameter for the route /products/:id for a GET request
app.get('/products/:id', cors(corsDelegateOptions), function(req, res, next){
  res.json({msg: 'A whitelisted domain matches and CORS is enabled for route product/:id'});
});

app.listen(80, function(){
  console.log('CORS is enabled on the web server listening on port 80');
});
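One option from the configuration table that these examples do not exercise is credentials. The following sketch enables cookie-carrying requests for a single whitelisted origin; the origin, route, and host names are placeholders you would replace with your own:

var express = require('express')
  , cors = require('cors')
  , app = express();

var corsWithCredentials = cors({
  origin: 'http://www.localdomain.com', // must be an explicit origin, not '*', when credentials are used
  credentials: true                     // sends Access-Control-Allow-Credentials: true
});

app.get('/profile', corsWithCredentials, function (req, res) {
  res.json({ msg: 'Cookies sent by the browser are available on this cross-origin route' });
});

app.listen(80);

// On the client side, the request must opt in as well, for example (api.localdomain.com
// being the placeholder host where this Express app runs):
// fetch('http://api.localdomain.com/profile', { credentials: 'include' });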
Summary

We have learned the important aspects of applying CORS in Node.js. Let us have a quick recap of what we have learnt:

Node.js provides a web server built with JavaScript, and can be combined with many other JS frameworks as the application server.
Although some frameworks have specific syntax for implementing CORS, they all follow the CORS specification by specifying allowed origin(s) and method(s).
More robust frameworks allow custom headers such as Content-Type, and preflight when required for complex CORS requests.
JavaScript frameworks may depend on the jQuery XHR object, which must be configured properly to allow cross-origin requests.
JavaScript frameworks are evolving rapidly. The examples here may become outdated. Always refer to the project documentation for up-to-date information.
With knowledge of the CORS specification, you may create your own techniques using JavaScript based on these examples, depending on the specific needs of your application.

https://en.wikipedia.org/wiki/Node.js

Resources for Article: Further resources on this subject: An Introduction to Node.js Design Patterns [article] Five common questions for .NET/Java developers learning JavaScript and Node.js [article] API with MongoDB and Node.js [article]

Troubleshooting FreeNAS server

Packt
28 Oct 2009
9 min read
Where to Look for Log Information

The first place to head whenever you have a configuration problem with FreeNAS is the related configuration section; check that it is configured as expected. If, having double-checked the settings, the problem persists, the next port of call is the log and information files in the Diagnostics: section of the web interface.

Keep Diagnostics Section Expanded
By default, the menu tree in the Diagnostics section of the web interface is collapsed, meaning the menu items aren't visible. To see the menu items, you need to click the word Diagnostics and the tree will expand. During initial setup, and if you are doing lots of troubleshooting, you can save yourself a click by having the Diagnostics section permanently expanded. To set this option, go to System: Advanced and click on the Navigation - Keep diagnostics in navigation expanded tick box.

The Diagnostics section has five subsections; the first two are logs and information pages about the status of the FreeNAS server. The other three are networking diagnostic tools and information.

Diagnostics: Logs

This section collates all the different log files that are generated by the FreeNAS server into one convenient place. There are several tabs, one for each service or log file type. Some of the information can be very technical, especially in the System tab. However, with some key information they can become more readable. The tabs are as follows:

System: When FreeBSD (the underlying OS of FreeNAS) boots, various log entries are recorded here about the hardware of the server and various messages about the boot process.
FTP: This shows the activity on the FTP server, including successful logins and failed logins.
RSYNC: The log information for the RSYNC server is divided into three sections: Server, Client, and Local. Depending on which type of RSYNC operation you are interested in, click the appropriate tab.
SSHD: Here you will find log entries from the SSH server, including some limited startup information and records of logins and failed login attempts.
SMARTD: This tab logs the output of the S.M.A.R.T daemon.
Daemon: Any other minor system service, like the built-in HTTP server, the Apple Filing Protocol server, and the Windows networking server (Samba), will log information to this page.
UPnP: The log information from the FreeNAS UPnP server, called "MediaTomb", is displayed here. The logging can be quite verbose, so careful attention is needed when reading it. Don't be distracted by entries such as "INFO: Config: option not found:" as this is just the server logging that it will be using a default value for that particular attribute.
Settings: The Settings tab allows you to change how the log information is displayed, including the sort order and the number of entries shown.

What is a Daemon?
In UNIX speak, a Daemon is a system service. It is a program that runs in the background performing certain tasks. The Daemons in FreeNAS don't work with the users in an interactive mode (via the monitor, mouse, and keyboard) and as such need a place to log the results (or problems) of their activities. The FreeNAS Daemons are launched automatically by FreeBSD when it boots, and some are dependent on being enabled in the web interface.

Understanding Diagnostics Logs: System

The most complicated of all the log pages is the System log page. Here, FreeBSD logs information about the system, its hardware, and the startup process.
At first, this page can seem intimidating but with a little help, this page can be very helpful particularly in tracking down hardware or driver related problems. 50 Log Entries Might Not be EnoughThe default number of log entries shown on the Diagnostics: Logs page is 50. For most situations, this will be sufficient but there can be times when it is not enough. For example in the Diagnostics: Logs: System tab, the total number of log entries made during the boot up process is more than 50. If you want to see how much system memory has been recognized by FreeBSD, you won't find it within the standard 50 entries. The solution is to increase the Number of log entries to show parameter on the Diagnostics: Logs: Setting tab. The best way to learn to read the Diagnostics: Logs: System page is by example, below are several different log entry examples including logs about the CPU, memory, disks, and disk controllers: kernel: FreeBSD 6.2-RELEASE-p11 #0: Wed Mar 12 18:17:49 CET 2008 This first entry shows the heritage of the FreeNAS server. It is based on FreeBSD and in this particular case, we see that this version of FreeNAS is using FreeBSD 6.2. There are plans (which may have already become reality) to use FreeBSD version 7.0 as the base for FreeNAS. kernel: CPU: Intel(R) Xeon(TM) CPU 1.70GHz (1680.52-MHz 686-class CPU) Here, the type of CPU that was detected by the FreeBSD is displayed. In this case, it is an Intel Xeon CPU running at 1.7GHz. kernel: FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs If your system has more than one CPU or is a dual core machine then you will see an entry in the log file (like the one above) recognizing the second CPU. If your machine has Hyper-threading technology, then the second logical processor will be reported like this: Logical CPUs per core:2 Apr 1 11:06:00 kernel: real memory = 268435456 (256 MB)Apr 1 11:06:00 kernel: avail memory = 252907520 (241 MB) These log entries show how much memory the system has detected. The difference in size between real memory and available memory is the difference between the amount of RAM physically installed in the computer and the amount of memory left over after the FreeBSD kernel is loaded. kernel: atapci0: <Intel PIIX4 UDMA33 controller> port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0x1050-0x105f at device 7.1 on pci0 kernel: ata0: <ATA channel 0> on atapci0 kernel: ata1: <ATA channel 1> on atapci0 For disks to work on your FreeNAS server, a disk controller is needed and it will be either a standard ATA/IDE controller, a SATA controller or a SCSI controller. Above are the log entries for a standard ATA controller built into the motherboard. You can see that it is an Intel controller and that two channels have been seen (the primary and the secondary). kernel: atapci1: <SiS 181 SATA150 controller> irq 17 at device 5.0 on pci0kernel: ata2: <ATA channel 0> on atapci1kernel: ata3: <ATA channel 1> on atapci1 Like the ATA controller listed a moment ago, SATA controllers are all recognized at boot up. Here is a SiS 181 SATA 150 controller with two channels. They are listed as devices ata2 and ata3 as ata0 and ata1 are used by the standard ATA/IDE controller. kernel: mpt0: <LSILogic 1030 Ultra4 Adapter> irq 17 at device 16.0 on pci0 Like IDE and SATA controllers, all recognized SCSI drivers are listed in the boot up system log. Here, the controller is an LSILogic 1030 Ultra4. 
kernel: ad0: 476940MB <WDC WD5000AAJB-00YRA0 12.01C02> at ata0-master UDMA100kernel: ad4: 476940MB <Seagate ST3500320AS SD04> at ata2-master SATA150 Once the disk controllers are recognized by the system, FreeBSD can search to see which disks are attached. Above is an example of a Western Digital 500GB hard drive using the standard ATA100 interface at 100MB/s. There is also a 500GB Seagate drive connected using the SATA interface. acd0: CDROM <TOSHIBA CD-ROM XM-7002B/1005> at ata1 as master UDMA33 When the CDROM (which is normally attached to an ATA/IDE controller) is recognized, it will look like the above. kernel: da0 at ahd0 bus 0 target 0 lun 0kernel: da0: <MAXTOR ATLAS10K4_73WLS DFL0> Fixed Direct Access SCSI-3 devicekernel: da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit), Tagged Queueing Enabledkernel: da0: 70149MB (143666192 512 byte sectors: 255H 63S/T 8942C) SCSI addressing is a little more complicated than that of ATA/IDE. In SCSI land, you have a controller, a channel (bus), a disk (target), and the Logical Unit Number (LUN). The example above shows that a disk (which has been assigned the device name da0) is found on the controller ahd0 on bus 0, as target 0 with the LUN 0. SCSI controllers can have multiple buses and multiple targets. Further down, you can see that the disk is a MAXTOR 73GB SCSI-3 disk. kernel: da0 at umass-sim0 bus 0 target 0 lun 0kernel: da0: <Verbatim Store 'n' Go 1.30> Removable Direct Access SCSI-2 devicekernel: da0: 40.000MB/s transferskernel: da0: 963MB (1974271 512 byte sectors: 64H 32S/T 963C) If you are using a USB flash disk for storing the configuration information, it will most likely appear in the log file as a type of SCSI disk. The above example shows a 1GB Verbatim Store 'n' Go disk. kernel: lnc0: <PCNet/PCI Ethernet adapter> irq 18 at device 17.0 on pci0kernel: lnc0: Ethernet address: 00:0c:29:a5:9a:28 Another important device that needs to work correctly on your system is the network interface card. Like disk controllers and disks, it will be logged in the log file when FreeBSD recognizes it. Above is an example of an AMD Lance/PCNet-based Ethernet adapter. Each Ethernet card has a unique address know as the Ethernet address or the MAC address. It is made up of 6 numbers specified using a colon notation. Once found, FreeBSD queries the card to find its MAC address and logs the result. In the above example, it is "00:0c:29:a5:9a:28". Converting between Device Names and the Real World In the SCSI example above, the SCSI controller listed is ahd0. The trick to understanding these log entries better is to know how to interpret the device name ahd0. First of all ahd0 means it is a device using the ahd driver and it is the first one in the system (with numbering starting from 0). So what is a ahd? The first place to look is further up in the log file. There should be an entry like: kernel: ahd0: <Adaptec 39320 Ultra320 SCSI adapter> irq 11 at device 1.0 on pci2 This shows that the particular device is an Adaptec 39320 SCSI 3 controller. You can also find out more about the the ahd driver (and all FreeBSD drivers) at: http://www.freebsd.org/releases/6.2R/hardware-i386.html Search for ahd and you will find which controllers this driver supports (in this case, they are all controllers from Adaptec. If you click on the link provided, you will be taken to a specific help page about this driver. When FreeNAS moves to FreeBSD 7, then the relevant web page will be: http://www.freebsd.org/releases/7.0R/hardware.html.

Getting Started with Zombie.js

Packt
22 May 2013
9 min read
(For more resources related to this topic, see here.) A brief history of software and user interface testing Software testing is a necessary activity for gathering information about the quality of a certain product or a service. In the traditional software development cycle, this activity had been delegated to a team whose sole job was to find problems in the software. This type of testing would be required if a generic product was being sold to a domestic end user or if a company was buying a licensed operating system. In most custom-built pieces of software, the testing team has the responsibility of manually testing the software, but often the client has to do the acceptance testing in which he or she has to make sure that the software behaves as expected. Every time someone in these teams finds a new problem in the software, the development team has to fix the software and put it back in the testing loop one more time. This implies that the cost and time required to deliver a final version of the software increases every time a bug is found. Furthermore, the later in the development process the problem is found, the more it will impact the final cost of the product. Also, the way software is delivered has changed in the last few years; the Web has enabled us to make the delivery of software and its upgrade easy, shortening the time between when new functionality is developed and when it is put in use. But once you have delivered the first version of a product and have a few customers using it, you can face a dilemma; fewer updates can mean the product quickly becomes obsolete. On the other hand, introducing many changes in the software increases the chance of something going wrong and your software becoming faulty, which may drive customers away. There are many versions and iterations over how a development process can mitigate the risk of shipping a faulty product and increase the chances of new functionalities to be delivered on time, and for the overall product to meet a certain quality standard, but all people involved in building software must agree that the sooner you catch a bug, the better. This means that you should catch the problems early on, preferably in the development cycle. Unfortunately, completely testing the software by hand every time the software changes, would be costly. The solution here is to automate the tests in order to maximize the test coverage (the percentage of the application code that is tested and the possible input variations) and minimize the time it takes to run each test. If your tests take just a few seconds to run, you can afford to run them every time you make a single change in the code base. Enter the automation era Test automation has been around for some years, even before the Web was around. As soon as graphical user interfaces (GUIs) started to become mainstream, the tools that allowed you to record, build, and run automated tests against a GUI started appearing. Since there were many languages and GUI libraries for building applications, many tools that covered some of these started showing up. Generally they allowed you to record a testing session that you could later recreate automatically. In this session, you could automate the pointer to click on things (buttons, checkboxes, places on a window, and so on), select values (from a select box, for instance), and input keyboard actions and test the results. All of these tools were fairly complex to operate and, worst of all, most of them were technology-specific. 
But, if you're building a web-based application that uses HTML and JavaScript, you have better alternatives. The most well known of these is likely to be Selenium, which allows you to record, change, and run testing scripts against all the major browsers. You can run tests using Selenium, but you need at least one browser for Selenium to attach itself to, in order to load and run the tests. If you run the tests with as many browsers as you possibly can, you will be able to guarantee that your application behaves correctly across all of them. But since Selenium plugs into a browser and commands it, running all the tests for a considerably complex application in as many browsers as possible can take some time, and the last thing you want is to not run the tests as often as possible. Unit tests versus integration tests Generally you can divide automated tests into two categories, namely unit tests and integration tests. Unit tests: These tests are where you select a small subset of your application—such as a class or a specific object—and test the interface the class or object provides to the rest of the application. In this way, you can isolate a specific component and make sure it behaves as expected so that other components in the application can use it safely. Integration tests: These tests are where individual components are combined together and tested as a working group. During these tests, you interact and manipulate the user interface that in turn interacts with the underlying blocks of your application. The kind of testing you do with Zombie.js falls in this category. What Zombie.js is Zombie.js allows you to run these tests without a real web browser. Instead, it uses a simulated browser where it stores the HTML code and runs the JavaScript you may have in your HTML page. This means that an HTML page doesn't need to be displayed, saving precious time that would otherwise be occupied rendering it. You can then use Zombie.js to conduct this simulated browser into loading pages and, once a page is loaded, doing certain actions and observing the results. And you can do all this using JavaScript, never having to switch languages between your client code and your test scripts. Understanding the server-side DOM Zombie.js runs on top of Node.js (http://nodejs.org), a platform where you can easily build networking servers using JavaScript. It runs on top of Google's fast V8 JavaScript engine that also powers their Chrome browsers. At the time of writing, V8 implements the JavaScript ECMA 3 standard and part of the ECMA 5 standard. Not all browsers implement all the features of all the versions of the JavaScript standards equally. This means that even if your tests pass in Zombie.js, it doesn't mean they will pass for all the target browsers. On top of Node.js, there is a third-party module named JSDOM (https://npmjs.org/package/jsdom) that allows you to parse an HTML document and use an API on top of a representation of that document; this allows you to query and manipulate it. The API provided is the standard Document Object Model (DOM). All browsers implement a subset of the DOM standard, which has been dictated as a set of recommendations by a working group inside the World Wide Web Consortium (W3C). They have three levels of recommendations. JSDOM implements all three. Web applications, directly or indirectly (by using tools such as jQuery), use this browser-provided DOM API to query and manipulate the document, enabling you to create browser applications that have complex behavior. 
This means that, by using JSDOM, you automatically support any JavaScript library that most modern browsers support.

Zombie.js is your headless browser

On top of Node.js and JSDOM lies Zombie.js. Zombie.js provides browser-like functionality and an API you can use for testing. For instance, a typical use of Zombie.js would be to open a browser, ask for a certain URL to be loaded, fill in some values on a form and submit it, and then query the resulting document to see whether a success message is present. To make it more concrete, here is a simple example of what the code for a simple Zombie.js test may look like:

browser.visit('http://localhost:8080/form', function() {
  browser
    .fill('Name', 'Pedro Teixeira')
    .select('Born', '1975')
    .check('Agree with terms and conditions')
    .pressButton('Submit', function() {
      assert.equal(browser.location.pathname, '/success');
      assert.equal(browser.text('#message'),
        'Thank you for submitting this form!');
    });
});

Here you are making typical use of Zombie.js: loading an HTML page that contains a form, filling in and submitting that form, and then verifying that the result is successful. Zombie.js can be used not only for testing your web app but also by applications that need to behave like browsers, such as HTML scrapers, crawlers, and all sorts of HTML bots. If you are going to use Zombie.js for any of these activities, please be a good Web citizen and use it ethically.

Summary

Creating automated tests is a vital part of the development process of any software application. When creating web applications using HTML, JavaScript, and CSS, you can use Zombie.js to create a set of tests; these tests load, query, manipulate, and provide input to any given web page. Given that Zombie.js simulates a browser and does not depend on the actual rendering of the HTML page, the tests run much faster than they would if you instrumented a real browser, so it is possible for you to run them whenever you make even a small change to your application. Zombie.js runs on top of Node.js, uses JSDOM to provide a DOM API on top of any HTML document, and simulates browser-like functionality with a simple API that you can use to create your tests in JavaScript.

Resources for Article:

Further resources on this subject:
Understanding and Developing Node Modules [Article]
An Overview of the Node Package Manager [Article]
Build iPhone, Android and iPad Applications using jQTouch [Article]

How to integrate a Medium editor in Angular 8

Guest Contributor
05 Sep 2019
5 min read
In the world of text editing, there is a new era of WYSIWYG (What You See Is What You Get). We all know how styling and formatting have become important elements of a website, but most of the time it is tough to pick a simple, easy-to-use, and powerful editor. The good days are coming back with the new Medium Editor!

Medium Editor is an independent JavaScript library that provides a floating editing toolbar which springs up when you select a piece of content on your page, inspired by the elegance of Medium.com. You can turn every field, from a message on your contact form to a whole article on the back end, into a professionally styled text paragraph containing quote blocks, headings, hyperlinks, or just a few selected words. You can also incorporate the editor into Angular 8 for easier updates and edits of your content.

Angular 8 has released its latest beta 6 feature set with attractive new functionality for testing your software and fixing bugs. One of them is Bazel, the open-source counterpart of Google's internal build tool called Blaze, which is capable of performing incremental builds and tests.

Let us check how you can integrate a Medium editor on the Angular 8 platform.

Also Read: Angular CLI 8.3.0 releases with a new deploy command, faster production builds, and more

Steps to create an editor using Angular 8

Step 1: First things first, create a project in Angular; you can also make use of Bootstrap to make it look good by adding its CDN links to index.html. Running the project-creation command generates an Angular starter application once it has finished installing all the dependencies.

Step 2: Install the Medium Editor npm package, and then include its CSS and JS files in the angular.json file.

Step 3: Create a component with a name of your choice; in this example it is named create.

Step 4: Open the newly created component's HTML file and add a div, giving it a template reference name. Wrap it in a few Bootstrap classes just to give it some basic styling.

Step 5: In the component class, declare an editor variable and use the @ViewChild decorator to grab the template reference from the previous step.

Step 6: Then, make use of the Angular lifecycle hook ngAfterViewInit to instantiate the editor. At this point you may get an error saying that MediumEditor is not defined; in that case, you have to declare it at the top of the file. After making these changes, you have a small Medium editor to use for yourself. Write anything you like, then select the content you have written to see the magic.

Step 7: After this, you may want some more options in your editor toolbar. To do so, pass a configuration object to the MediumEditor constructor. With these changes in place, you will see plenty of available options.

Step 8: So, now that you have an editor, you can easily get the data out of it. If someone writes a post, you need the HTML version of it. Once more, split the screen into two halves: one half holds the editor and the other half shows a preview of the post. In the preview half, bind the editor's content to the innerHTML of the preview element.

A consolidated sketch of these steps follows.
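To tie the steps together, here is a rough sketch of what such a component might look like. It is only an illustration, not the article's original snippets: it assumes the medium-editor package has been installed from npm with its stylesheet and script added to the "styles" and "scripts" arrays of angular.json, and names such as CreateComponent, app-create, and #editable are placeholders chosen for this example.

// create.component.ts -- an illustrative sketch of steps 2 to 8.
// Assumes medium-editor is loaded globally through the "scripts" entry
// in angular.json, which is why it is declared here instead of imported.
import { AfterViewInit, Component, ElementRef, ViewChild } from '@angular/core';

declare var MediumEditor: any;

@Component({
  selector: 'app-create',
  template: `
    <div class="row">
      <div class="col-6">
        <!-- The editable area that Medium Editor attaches to -->
        <div #editable class="border p-3"></div>
      </div>
      <div class="col-6">
        <!-- Live preview of the HTML produced by the editor -->
        <div [innerHTML]="content"></div>
      </div>
    </div>
  `
})
export class CreateComponent implements AfterViewInit {
  @ViewChild('editable', { static: true }) editable: ElementRef;
  editor: any;
  content = '';

  ngAfterViewInit() {
    // Instantiate the editor with a few extra toolbar buttons (step 7).
    this.editor = new MediumEditor(this.editable.nativeElement, {
      toolbar: {
        buttons: ['bold', 'italic', 'underline', 'anchor', 'h2', 'h3', 'quote']
      }
    });

    // Mirror the editor's HTML into the preview pane on every edit (step 8).
    this.editor.subscribe('editableInput', () => {
      this.content = this.editable.nativeElement.innerHTML;
    });
  }
}

The editor is instantiated inside ngAfterViewInit because MediumEditor needs the real DOM element, which only exists once the view has been initialized. Also note that Angular sanitizes anything bound through innerHTML, so some markup may be stripped from the preview.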
Wrap Up

Every system has its pros and cons, and Angular is no exception. Angular offers clean code development along with a high-performance framework that handles routing, provides seamless updates through its command-line interface, and keeps track of the state of location services. You can also debug templates in Angular 8, and it supports multiple applications in one domain. On the other hand, Angular can be confusing for newcomers, as there is no single manual that provides complete documentation of the framework; it also lacks a large developer community, and its routing is limited and can be hard to debug. However, Angular 8 supports multiple applications in one domain and is user-friendly across all versions of the operating system.

So, here we come to the end of the article. We hope you have learned how to integrate the latest Medium editor into Angular 8. Do give it a try! Till then, keep learning!

Author Bio

Dave Jarvis is working as a Business Development Executive at eTatvaSoft.com, an enterprise-level mobile & web application development company. He aims to sharpen his analytical skills, deepen his data understanding, and broaden his business knowledge in these years of his career. Click here to find more information about the company. Follow him on Twitter.

Other interesting news in Web development

Google Chrome 76 now supports native lazy-loading
Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more
#Reactgate forces React leaders to confront the community's toxic culture head on