
How-To Tutorials - Server-Side Web Development


Node.js Fundamentals

Packt
22 May 2015
17 min read
This article is written by Krasimir Tsonev, the author of Node.js By Example. Node.js is one of the most popular JavaScript-driven technologies nowadays. It was created in 2009 by Ryan Dahl and since then, the framework has evolved into a well-developed ecosystem. Its package manager is full of useful modules and developers around the world have started using Node.js in their production environments. In this article, we will learn about the following: Node.js building blocks The main capabilities of the environment The package management of Node.js (For more resources related to this topic, see here.) Understanding the Node.js architecture Back in the days, Ryan was interested in developing network applications. He found out that most high performance servers followed similar concepts. Their architecture was similar to that of an event loop and they worked with nonblocking input/output operations. These operations would permit other processing activities to continue before an ongoing task could be finished. These characteristics are very important if we want to handle thousands of simultaneous requests. Most of the servers written in Java or C use multithreading. They process every request in a new thread. Ryan decided to try something different—a single-threaded architecture. In other words, all the requests that come to the server are processed by a single thread. This may sound like a nonscalable solution, but Node.js is definitely scalable. We just have to run different Node.js processes and use a load balancer that distributes the requests between them. Ryan needed something that is event-loop-based and which works fast. As he pointed out in one of his presentations, big companies such as Google, Apple, and Microsoft invest a lot of time in developing high performance JavaScript engines. They have become faster and faster every year. There, event-loop architecture is implemented. JavaScript has become really popular in recent years. The community and the hundreds of thousands of developers who are ready to contribute made Ryan think about using JavaScript. Here is a diagram of the Node.js architecture: In general, Node.js is made up of three things: V8 is Google's JavaScript engine that is used in the Chrome web browser (https://developers.google.com/v8/) A thread pool is the part that handles the file input/output operations. All the blocking system calls are executed here (http://software.schmorp.de/pkg/libeio.html) The event loop library (http://software.schmorp.de/pkg/libev.html) On top of these three blocks, we have several bindings that expose low-level interfaces. The rest of Node.js is written in JavaScript. Almost all the APIs that we see as built-in modules and which are present in the documentation, are written in JavaScript. Installing Node.js A fast and easy way to install Node.js is by visiting and downloading the appropriate installer for your operating system. For OS X and Windows users, the installer provides a nice, easy-to-use interface. For developers that use Linux as an operating system, Node.js is available in the APT package manager. The following commands will set up Node.js and Node Package Manager (NPM): sudo apt-get updatesudo apt-get install nodejssudo apt-get install npm Running Node.js server Node.js is a command-line tool. After installing it, the node command will be available on our terminal. The node command accepts several arguments, but the most important one is the file that contains our JavaScript. 
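Before we create that server file in the next step, here is a small, self-contained script that makes the event-loop behaviour described earlier visible. It is an addition to the article, not Krasimir's code; the file name loop-demo.js is arbitrary, and it can be run with node ./loop-demo.js:

// loop-demo.js
// Demonstrates that callbacks are queued on the event loop and only run
// once the current synchronous code has finished executing.
var fs = require('fs');

console.log('1. synchronous code starts');

// A zero-millisecond timer still has to wait for the call stack to empty.
setTimeout(function () {
  console.log('timer callback, scheduled for "now", runs only after step 3');
}, 0);

// A non-blocking read: the thread pool does the work and the callback is
// queued on the event loop when the data is ready.
fs.readFile(__filename, function (err, data) {
  if (err) { throw err; }
  console.log('file callback: read ' + data.length + ' bytes of this script');
});

console.log('2. more synchronous code');
console.log('3. synchronous code ends; the event loop now processes callbacks');

The three numbered lines always print first; the two callbacks fire only after the synchronous code has finished and the event loop takes over. This is the non-blocking behaviour that lets a single thread serve many simultaneous requests.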
Let's create a file called server.js and put the following code inside: var http = require('http');http.createServer(function (req, res) {   res.writeHead(200, {'Content-Type': 'text/plain'});   res.end('Hello Worldn');}).listen(9000, '127.0.0.1');console.log('Server running at http://127.0.0.1:9000/'); If you run node ./server.js in your console, you will have the Node.js server running. It listens for incoming requests at localhost (127.0.0.1) on port 9000. The very first line of the preceding code requires the built-in http module. In Node.js, we have the require global function that provides the mechanism to use external modules. We will see how to define our own modules in a bit. After that, the scripts continue with the createServer and listen methods on the http module. In this case, the API of the module is designed in such a way that we can chain these two methods like in jQuery. The first one (createServer) accepts a function that is also known as a callback, which is called every time a new request comes to the server. The second one makes the server listen. The result that we will get in a browser is as follows: Defining and using modules JavaScript as a language does not have mechanisms to define real classes. In fact, everything in JavaScript is an object. We normally inherit properties and functions from one object to another. Thankfully, Node.js adopts the concepts defined by CommonJS—a project that specifies an ecosystem for JavaScript. We encapsulate logic in modules. Every module is defined in its own file. Let's illustrate how everything works with a simple example. Let's say that we have a module that represents this book and we save it in a file called book.js: // book.jsexports.name = 'Node.js by example';exports.read = function() {   console.log('I am reading ' + exports.name);} We defined a public property and a public function. Now, we will use require to access them: // script.jsvar book = require('./book.js');console.log('Name: ' + book.name);book.read(); We will now create another file named script.js. To test our code, we will run node ./script.js. The result in the terminal looks like this: Along with exports, we also have module.exports available. There is a difference between the two. Look at the following pseudocode. It illustrates how Node.js constructs our modules: var module = { exports: {} };var exports = module.exports;// our codereturn module.exports; So, in the end, module.exports is returned and this is what require produces. We should be careful because if at some point we apply a value directly to exports or module.exports, we may not receive what we need. Like at the end of the following snippet, we set a function as a value and that function is exposed to the outside world: exports.name = 'Node.js by example';exports.read = function() {   console.log('Iam reading ' + exports.name);}module.exports = function() { ... } In this case, we do not have an access to .name and .read. If we try to execute node ./script.js again, we will get the following output: To avoid such issues, we should stick to one of the two options—exports or module.exports—but make sure that we do not have both. We should also keep in mind that by default, require caches the object that is returned. So, if we need two different instances, we should export a function. 
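That caching can be observed directly through Node's require.cache object. The following snippet is an addition to the article (it assumes the book.js module defined above sits in the same directory) and shows that two require calls hand back the very same object, and how a cache entry can be removed if a fresh copy is really needed:

// cache-demo.js
var book1 = require('./book.js');
var book2 = require('./book.js');

// Both variables point to the exact same cached object.
console.log(book1 === book2); // true

// The cache is a plain object keyed by the resolved file name.
var resolved = require.resolve('./book.js');
console.log(resolved in require.cache); // true

// Deleting the entry forces the next require to re-evaluate the file.
delete require.cache[resolved];
var book3 = require('./book.js');
console.log(book1 === book3); // false

In everyday code, clearing the cache like this is rarely the right tool; exporting a factory function, as the next example demonstrates, is the idiomatic fix.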
Here is a version of the book class that provides API methods to rate the books and that do not work properly: // book.jsvar ratePoints = 0;exports.rate = function(points) {   ratePoints = points;}exports.getPoints = function() {   return ratePoints;} Let's create two instances and rate the books with different points value: // script.jsvar bookA = require('./book.js');var bookB = require('./book.js');bookA.rate(10);bookB.rate(20);console.log(bookA.getPoints(), bookB.getPoints()); The logical response should be 10 20, but we got 20 20. This is why it is a common practice to export a function that produces a different object every time: // book.jsmodule.exports = function() {   var ratePoints = 0;   return {     rate: function(points) {         ratePoints = points;     },     getPoints: function() {         return ratePoints;     }   }} Now, we should also have require('./book.js')() because require returns a function and not an object anymore. Managing and distributing packages Once we understand the idea of require and exports, we should start thinking about grouping our logic into building blocks. In the Node.js world, these blocks are called modules (or packages). One of the reasons behind the popularity of Node.js is its package management. Node.js normally comes with two executables—node and npm. NPM is a command-line tool that downloads and uploads Node.js packages. The official site, , acts as a central registry. When we create a package via the npm command, we store it there so that every other developer may use it. Creating a module Every module should live in its own directory, which also contains a metadata file called package.json. In this file, we have set at least two properties—name and version: {   "name": "my-awesome-nodejs-module",   "version": "0.0.1"} We can place whatever code we like in the same directory. Once we publish the module to the NPM registry and someone installs it, he/she will get the same files. For example, let's add an index.js file so that we have two files in the package: // index.jsconsole.log('Hello, this is my awesome Node.js module!'); Our module does only one thing—it displays a simple message to the console. Now, to upload the modules, we need to navigate to the directory containing the package.json file and execute npm publish. This is the result that we should see: We are ready. Now our little module is listed in the Node.js package manager's site and everyone is able to download it. Using modules In general, there are three ways to use the modules that are already created. All three ways involve the package manager: We may install a specific module manually. Let's say that we have a folder called project. We open the folder and run the following: npm install my-awesome-nodejs-module The manager automatically downloads the latest version of the module and puts it in a folder called node_modules. If we want to use it, we do not need to reference the exact path. By default, Node.js checks the node_modules folder before requiring something. So, just require('my-awesome-nodejs-module') will be enough. The installation of modules globally is a common practice, especially if we talk about command-line tools made with Node.js. It has become an easy-to-use technology to develop such tools. The little module that we created is not made as a command-line program, but we can still install it globally by running the following code: npm install my-awesome-nodejs-module -g Note the -g flag at the end. 
This is how we tell the manager that we want this module to be a global one. When the process finishes, we do not have a node_modules directory. The my-awesome-nodejs-module folder is stored in another place on our system. To be able to use it, we have to add another property to package.json, but we'll talk more about this in the next section. The resolving of dependencies is one of the key features of the package manager of Node.js. Every module can have as many dependencies as you want. These dependences are nothing but other Node.js modules that were uploaded to the registry. All we have to do is list the needed packages in the package.json file: {    "name": "another-module",    "version": "0.0.1",    "dependencies": {        "my-awesome-nodejs-module": "0.0.1"      } } Now we don't have to specify the module explicitly and we can simply execute npm install to install our dependencies. The manager reads the package.json file and saves our module again in the node_modules directory. It is good to use this technique because we may add several dependencies and install them at once. It also makes our module transferable and self-documented. There is no need to explain to other programmers what our module is made up of. Updating our module Let's transform our module into a command-line tool. Once we do this, users will have a my-awesome-nodejs-module command available in their terminals. There are two changes in the package.json file that we have to make: {   "name": "my-awesome-nodejs-module",   "version": "0.0.2",   "bin": "index.js"} A new bin property is added. It points to the entry point of our application. We have a really simple example and only one file—index.js. The other change that we have to make is to update the version property. In Node.js, the version of the module plays important role. If we look back, we will see that while describing dependencies in the package.json file, we pointed out the exact version. This ensures that in the future, we will get the same module with the same APIs. Every number from the version property means something. The package manager uses Semantic Versioning 2.0.0 (http://semver.org/). Its format is MAJOR.MINOR.PATCH. So, we as developers should increment the following: MAJOR number if we make incompatible API changes MINOR number if we add new functions/features in a backwards-compatible manner PATCH number if we have bug fixes Sometimes, we may see a version like 2.12.*. This means that the developer is interested in using the exact MAJOR and MINOR version, but he/she agrees that there may be bug fixes in the future. It's also possible to use values like >=1.2.7 to match any equal-or-greater version, for example, 1.2.7, 1.2.8, or 2.5.3. We updated our package.json file. The next step is to send the changes to the registry. This could be done again with npm publish in the directory that holds the JSON file. The result will be similar. We will see the new 0.0.2 version number on the screen: Just after this, we may run npm install my-awesome-nodejs-module -g and the new version of the module will be installed on our machine. The difference is that now we have the my-awesome-nodejs-module command available and if you run it, it displays the message written in the index.js file: Introducing built-in modules Node.js is considered a technology that you can use to write backend applications. As such, we need to perform various tasks. Thankfully, we have a bunch of helpful built-in modules at our disposal. 
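Before looking at those built-in modules, here is a consolidated package.json that ties together the name, version, bin, and dependency-range ideas from the previous sections. The module names are the hypothetical ones used throughout the article and the version ranges are purely illustrative:

{
  "name": "another-module",
  "version": "0.0.2",
  "bin": "index.js",
  "dependencies": {
    "my-awesome-nodejs-module": "0.0.1",
    "some-other-module": "2.12.*",
    "yet-another-module": ">=1.2.7"
  }
}

A single npm install run in this directory would resolve all three dependencies into node_modules at once.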
Creating a server with the HTTP module We already used the HTTP module. It's perhaps the most important one for web development because it starts a server that listens on a particular port: var http = require('http');http.createServer(function (req, res) {   res.writeHead(200, {'Content-Type': 'text/plain'});   res.end('Hello Worldn');}).listen(9000, '127.0.0.1');console.log('Server running at http://127.0.0.1:9000/'); We have a createServer method that returns a new web server object. In most cases, we run the listen method. If needed, there is close, which stops the server from accepting new connections. The callback function that we pass always accepts the request (req) and response (res) objects. We can use the first one to retrieve information about incoming request, such as, GET or POST parameters. Reading and writing to files The module that is responsible for the read and write processes is called fs (it is derived from filesystem). Here is a simple example that illustrates how to write data to a file: var fs = require('fs');fs.writeFile('data.txt', 'Hello world!', function (err) {   if(err) { throw err; }   console.log('It is saved!');}); Most of the API functions have synchronous versions. The preceding script could be written with writeFileSync, as follows: fs.writeFileSync('data.txt', 'Hello world!'); However, the usage of the synchronous versions of the functions in this module blocks the event loop. This means that while operating with the filesystem, our JavaScript code is paused. Therefore, it is a best practice with Node to use asynchronous versions of methods wherever possible. The reading of the file is almost the same. We should use the readFile method in the following way: fs.readFile('data.txt', function(err, data) {   if (err) throw err;   console.log(data.toString());}); Working with events The observer design pattern is widely used in the world of JavaScript. This is where the objects in our system subscribe to the changes happening in other objects. Node.js has a built-in module to manage events. Here is a simple example: var events = require('events'); var eventEmitter = new events.EventEmitter(); var somethingHappen = function() {    console.log('Something happen!'); } eventEmitter .on('something-happen', somethingHappen) .emit('something-happen'); The eventEmitter object is the object that we subscribed to. We did this with the help of the on method. The emit function fires the event and the somethingHappen handler is executed. The events module provides the necessary functionality, but we need to use it in our own classes. Let's get the book idea from the previous section and make it work with events. Once someone rates the book, we will dispatch an event in the following manner: // book.js var util = require("util"); var events = require("events"); var Class = function() { }; util.inherits(Class, events.EventEmitter); Class.prototype.ratePoints = 0; Class.prototype.rate = function(points) {    ratePoints = points;    this.emit('rated'); }; Class.prototype.getPoints = function() {    return ratePoints; } module.exports = Class; We want to inherit the behavior of the EventEmitter object. The easiest way to achieve this in Node.js is by using the utility module (util) and its inherits method. The defined class could be used like this: var BookClass = require('./book.js'); var book = new BookClass(); book.on('rated', function() {    console.log('Rated with ' + book.getPoints()); }); book.rate(10); We again used the on method to subscribe to the rated event. 
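One detail worth flagging in the snippet above: rate() and getPoints() refer to a bare ratePoints variable rather than this.ratePoints, so the value ends up in an accidental global rather than on the instance. As an addition to the article (class syntax is supported in Node.js versions newer than the one the original 2015 text targets), here is an equivalent emitter with the property kept on the instance:

// book.js - a corrected, class-based variant of the emitter example
var EventEmitter = require('events').EventEmitter;

class Book extends EventEmitter {
  constructor() {
    super();
    this.ratePoints = 0;
  }
  rate(points) {
    this.ratePoints = points; // stored on the instance, not in a global
    this.emit('rated');
  }
  getPoints() {
    return this.ratePoints;
  }
}

module.exports = Book;

// Usage, for example in script.js:
// var Book = require('./book.js');
// var book = new Book();
// book.on('rated', function () {
//   console.log('Rated with ' + book.getPoints());
// });
// book.rate(10); // prints "Rated with 10"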
The book class displays that message once we set the points. The terminal then shows the Rated with 10 text. Managing child processes There are some things that we can't do with Node.js. We need to use external programs for the same. The good news is that we can execute shell commands from within a Node.js script. For example, let's say that we want to list the files in the current directory. The file system APIs do provide methods for that, but it would be nice if we could get the output of the ls command: // exec.js var exec = require('child_process').exec; exec('ls -l', function(error, stdout, stderr) {    console.log('stdout: ' + stdout);    console.log('stderr: ' + stderr);    if (error !== null) {        console.log('exec error: ' + error);    } }); The module that we used is called child_process. Its exec method accepts the desired command as a string and a callback. The stdout item is the output of the command. If we want to process the errors (if any), we may use the error object or the stderr buffer data. The preceding code produces the following screenshot: Along with the exec method, we have spawn. It's a bit different and really interesting. Imagine that we have a command that not only does its job, but also outputs the result. For example, git push may take a few seconds and it may send messages to the console continuously. In such cases, spawn is a good variant because we get an access to a stream: var spawn = require('child_process').spawn; var command = spawn('git', ['push', 'origin', 'master']); command.stdout.on('data', function (data) {    console.log('stdout: ' + data); }); command.stderr.on('data', function (data) {    console.log('stderr: ' + data); }); command.on('close', function (code) {    console.log('child process exited with code ' + code); }); Here, stdout and stderr are streams. They dispatch events and if we subscribe to these events, we will get the exact output of the command as it was produced. In the preceding example, we run git push origin master and sent the full command responses to the console. Summary Node.js is used by many companies nowadays. This proves that it is mature enough to work in a production environment. In this article, we saw what the fundamentals of this technology are. We covered some of the commonly used cases. Resources for Article: Further resources on this subject: AngularJS Project [article] Exploring streams [article] Getting Started with NW.js [article]


Getting started with Leaflet

Packt
14 Jun 2013
9 min read
(For more resources related to this topic, see here.) Getting ready First, we need to get an Internet browser, if we don't have one already installed. Leaflet is tested with modern desktop browsers: Chrome, Firefox, Safari 5+, Opera 11.11+, and Internet Explorer 7-10. Internet Explorer 6 support is stated as not perfect but accessible. We can pick one of them, or all of them if we want to be thorough. Then, we need an editor. Editors come in many shapes and flavors: free or not free, with or without syntax highlighting, or remote file editing. A quick search on the Internet will provide thousands of capable editors. Notepad++ (http://notepad-plus-plus.org/) for Windows, Komodo Edit (http://www.activestate.com/komodo-edit) for Mac OS, or Vim (http://www.vim.org/) for Linux are among them. We can download Leaflet's latest stable release (v0.5.1 at the time of writing) and extract the content of the ZIP file somewhere appropriate. The ZIP file contains the sources as well as a prebuilt version of the library that can be found in the dist directory. Optionally, we can build from the sources included in the ZIP file; see this article's Building Leaflet from source section. Finally, let's create a new project directory on our hard drive and copy the dist folder from the extracted Leaflet package to it, ensuring we rename it to leaflet. How to do it... Note that the following code will constitute our code base throughout the rest of the article. Create a blank HTML file called index.html in the root of our project directory. Add the code given here and use the browser installed previously to execute it: <!DOCTYPE html> <html> <head> <link rel="stylesheet" type="text/css" href="leaflet/ leaflet.css" /> <!--[if lte IE 8]> <link rel="stylesheet" type="text/css" href=" leaflet/ leaflet.ie.css" /> <![endif]--> <script src = "leaflet/leaflet.js"></script> <style> html, body, #map { height: 100%; } body { padding: 0; margin: 0; } </style> <title>Getting Started with Leaflet</title> </head> <body> <div id="map"></div> <script type="text/javascript"> var map = L.map('map', { center: [52.48626, -1.89042], zoom: 14 }); L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/ {x}/{y}.png', { attribution: '© OpenStreetMap contributors' }).addTo(map); </script> </body> </html> The following screenshot is of the first map we have created: How it works... The index.html file we created is a standardized file that all Internet browsers can read and display the contents. Our file is based on the HTML doctype standard produced by the World Wide Web Consortium (W3C), which is only one of many that can be used as seen at http://www.w3.org/QA/2002/04/valid-dtd-list.html. Our index file specifies the doctype on the first line of code as required by the W3C, using the <!DOCTYPE HTML> markup. We added a link to Leaflet's main CSS file in the head section of our code: <link rel="stylesheet" type="text/css" href="leaflet/leaflet.css" /> We also added a conditional statement to link an Internet Explorer 8 or lower only stylesheet when these browsers interpret the HTML code: <!--[if lte IE 8]> <link rel="stylesheet" type="text/css" href="leaflet/leaflet.ie.css" /> <![endif]--> This stylesheet mainly addresses Internet Explorer specific issues with borders and margins. Leaflet's JavaScript file is then referred to using a script tag: <script src = "leaflet/leaflet.js"></script> We are using the compressed JavaScript file that is appropriate for production but very inefficient for debugging. 
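When debugging with the uncompressed build (discussed next), it helps to have something interactive on the map to poke at. The following lines are a small, hypothetical extension of the recipe, appended to the script block in index.html after the tileLayer call; the coordinates simply reuse the map's center and the popup text is only an example:

var marker = L.marker([52.48626, -1.89042]).addTo(map);
marker.bindPopup('Hello from Leaflet!').openPopup();

// Clicking anywhere on the map logs the coordinates, which is handy
// while stepping through the uncompressed library in the browser tools.
map.on('click', function (e) {
  console.log('Map clicked at ' + e.latlng.lat + ', ' + e.latlng.lng);
});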
In the compressed version, every white space character has been removed, as shown in the following bullet list, which is a straight copy-paste from the source of both files for the function onMouseClick: compressed: _onMouseClick:function(t){!this._loaded||this.dragging&& this.dragging.moved()||(this.fire("preclick"),this._ fireMouseEvent(t))}, uncompressed: _onMouseClick: function (e) { if (!this._loaded || (this.dragging && this.dragging.moved())) { return; } this.fire('preclick'); this._fireMouseEvent(e); }, To make things easier, we can replace leaflet.js with leaflet-src.js—an uncompressed version of the library. We also added styles to our document to make the map fit nicely in our browser window: html, body, #map { height: 100%; } body { padding: 0; margin: 0; } The <div> tag with the id attribute map in the document's body is the container of our map. It must be given a height otherwise the map won't be displayed: <div id="map" style="height: 100%;" ></div> Finally, we added a script section enclosing the map's initialization code, instantiating a Map object using the L.map(…) constructor and a TileLayer object using the L.tileLayer(…) constructor. The script section must be placed after the map container declaration otherwise Leaflet will be referencing an element that does not yet exist when the page loads. When instantiating a Map object, we pass the id of the container of our map and an array of Map options: var map = L.map('map', { center: [52.48626, -1.89042], zoom: 14 }); There are a number of Map options affecting the state, the interactions, the navigation, and the controls of the map. See the documentation to explore those in detail at http://leafletjs.com/reference.html#map-options. Next, we instantiated a TileLayer object using the L.tileLayer(…) constructor and added to the map using the TileLayer.addTo(…) method: L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', { attribution: '© OpenStreetMap contributors' }).addTo(map); Here, the first parameter is the URL template of our tile provider—that is OpenStreetMap— and the second a noncompulsory array of TileLayer options including the recommended attribution text for our map tile's source. The TileLayer options are also numerous. Refer to the documentation for the exhaustive list at http://leafletjs.com/reference.html#tilelayer-options. There's more... Let's have a look at some of the Map options, as well as how to build Leaflet from source or use different tile providers. More on Map options We have encountered a few Map options in the code for this recipe, namely center and zoom. We could have instantiated our OpenStreetMap TileLayer object before our Map object and passed it as a Map option using the layers option. We also could have specified a minimum and maximum zoom or bounds to our map, using minZoom and maxZoom (integers) and maxBounds, respectively. The latter must be an instance of LatLngBounds: var bounds = L.latLngBounds([ L.latLng([52.312, -2.186]), L.latLng([52.663, -1.594]) ]); We also came across the TileLayer URL template that will be used to fetch the tile images, replacing { s} by a subdomain and { x}, {y}, and {z} by the tiles coordinate and zoom. The subdomains can be configured by setting the subdomains property of a TileLayer object instance. Finally, the attribution property was set to display the owner of the copyright of the data and/or a description. Building Leaflet from source A Leaflet release comes with the source code that we can build using Node.js. 
This will be a necessity if we want to fix annoying bugs or add awesome new features. The source code itself can be found in the src directory of the extracted release ZIP file. Feel free to explore and look at how things get done within a Leaflet. First things first, go to http://nodejs.org and get the install file for your platform. It will install Node.js along with npm, a command line utility that will download and install Node Packaged Modules and resolve their dependencies for us. Following is the list of modules we are going to install: Jake: A JavaScript build program similar to make JSHint: It will detect potential problems and errors in JavaScript code UglifyJS: A mangler and compressor library for JavaScript Hopefully, we won't need to delve into the specifics of these tools to build Leaflet from source. So let's open a command line interpreter— cmd.exe on Windows, or a terminal on Mac OSX or Linux—and navigate to the Leaflet's src directory using the cd command, then use npm to install Jake, JSHint and UglifyJS: cd leaflet/src npm install –g jake npm install jshint npm install uglify-js We can now run Jake in Leaflet's directory: jake What about tile providers? We could have chosen a different tile provider as OpenStreetMap is free of charge but has its limitations in regard of a production environment. A number of web services provide tiles but might come at a price depending on your usage: CloudMade, MapQuest. These three providers serve tiles use the OpenStreetMap tile scheme described at http://wiki.openstreetmap.org/wiki/Slippy_map_tilenames. Remember the way we added the OpenStreetMap layer to the map? L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', { attribution: '© OpenStreetMap contributors' }).addTo(map); Remember the way we added the OpenStreetMap layer to the map? Cloudmade: L.tileLayer(' http://{s}.tile.cloudmade.com/API-key/997/256/{z}/ {x}/{y}.png', { attribution: ' Map data © <a href="http://openstreetmap. org">OpenStreetMap</a> contributors, <a href="http:// creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>, Imagery © <a href="http://cloudmade.com">CloudMade</a>' }).addTo(map); MapQuest: L.tileLayer('http://{s}.mqcdn.com/tiles/1.0.0/map/{z}/{x}/{y}. png', { attribution: ' Tiles Courtesy of <a href="http://www.mapquest. com/" target="_blank">MapQuest</a> <img src = "http://developer.mapquest.com/content/osm/mq_logo.png">', subdomains: ['otile1', 'otile2', 'otile3', 'otile4'] }).addTo(map); You will learn more about the Layer URL template and subdomains option in the documentation at http://leafletjs.com/reference.html#tilelayer. Leaflet also supports Web Map Service (WMS) tile layers—read more about it at http://leafletjs.com/reference.html#tilelayer-wms—and GeoJSON layers in the documentation at http://leafletjs.com/reference.html#geojson. Summary In this article we have learned how to create map using Leaflet and created our first map. We learned about different map options and also how to build a leaflet from source. Resources for Article : Further resources on this subject: Using JavaScript Effects with Joomla! [Article] Getting Started with OpenStreetMap [Article] Quick start [Article]


Testing Single Page Applications (SPAs) using Vue.js developer tools

Pravin Dhandre
25 May 2018
8 min read
Testing, especially for big applications, is paramount – especially when deploying your application to a development environment. Whether you choose unit testing or browser automation, there are a host of articles and books available on the subject. In this tutorial, we have covered the usage of Vue developer tools to test Single Page Applications. We will also touch upon other alternative tools like Nightwatch.js, Selenium, and TestCafe for testing. This article is an excerpt from a book written by Mike Street, titled Vue.js 2.x by Example.  Using the Vue.js developer tools The Vue developer tools are available for Chrome and Firefox and can be downloaded from GitHub. Once installed, they become an extension of the browser developer tools. For example, in Chrome, they appear after the Audits tab. The Vue developer tools will only work when you are using Vue in development mode. By default, the un-minified version of Vue has the development mode enabled. However, if you are using the production version of the code, the development tools can be enabled by setting the devtools variable to true in your code: Vue.config.devtools = true We've been using the development version of Vue, so the dev tools should work with all three of the SPAs we have developed. Open the Dropbox example and open the Vue developer tools. Inspecting Vue components data and computed values The Vue developer tools give a great overview of the components in use on the page. You can also drill down into the components and preview the data in use on that particular instance. This is perfect for inspecting the properties of each component on the page at any given time. For example, if we inspect the Dropbox app and navigate to the Components tab, we can see the <Root> Vue instance and we can see the <DropboxViewer> component. Clicking this will reveal all of the data properties of the component – along with any computed properties. This lets us validate whether the structure is constructed correctly, along with the computed path property: Drilling down into each component, we can access individual data objects and computed properties. Using the Vue developer tools for inspecting your application is a much more efficient way of validating data while creating your app, as it saves having to place several console.log() statements. Viewing Vuex mutations and time-travel Navigating to the next tab, Vuex, allows us to watch store mutations taking place in real time. Every time a mutation is fired, a new line is created in the left-hand panel. This element allows us to view what data is being sent, and what the Vuex store looked like before and after the data had been committed. It also gives you several options to revert, commit, and time-travel to any point. Loading the Dropbox app, several structure mutations immediately populate within the left-hand panel, listing the mutation name and the time they occurred. This is the code pre-caching the folders in action. Clicking on each one will reveal the Vuex store state – along with a mutation containing the payload sent. The state display is after the payload has been sent and the mutation committed. To preview what the state looked like before that mutation, select the preceding option: On each entry, next to the mutation name, you will notice three symbols that allow you to carry out several actions and directly mutate the store in your browser: Commit this mutation: This allows you to commit all the data up to that point. 
This will remove all of the mutations from the dev tools and update the Base State to this point. This is handy if there are several mutations occurring that you wish to keep track of. Revert this mutation: This will undo the mutation and all mutations after this point. This allows you to carry out the same actions again and again without pressing refresh or losing your current place. For example, when adding a product to the basket in our shop app, a mutation occurs. Using this would allow you to remove the product from the basket and undo any following mutations without navigating away from the product page. Time-travel to this state: This allows you to preview the app and state at that particular mutation, without reverting any mutations that occur after the selected point. The mutations tab also allows you to commit or revert all mutations at the top of the left-hand panel. Within the right-hand panel, you can also import and export a JSON encoded version of the store's state. This is particularly handy when you want to re-test several circumstances and instances without having to reproduce several steps. Previewing event data The Events tab of the Vue developer tools works in a similar way to the Vuex tab, allowing you to inspect any events emitted throughout your app. Changing the filters in this app emits an event each time the filter type is updated, along with the filter query: The left-hand panel again lists the name of the event and the time it occurred. The right panel contains information about the event, including its component origin and payload. This data allows you to ensure the event data is as you expected it to be and, if not, helps you locate where the event is being triggered. The Vue dev tools are invaluable, especially as your JavaScript application gets bigger and more complex. Open the shop SPA we developed and inspect the various components and Vuex data to get an idea of how this tool can help you create applications that only commit mutations they need to and emit the events they have to. Testing your Single Page Application The majority of Vue testing suites revolve around having command-line knowledge and creating a Vue application using the CLI (command-line interface). Along with creating applications in frontend-compatible JavaScript, Vue also has a CLI that allows you to create applications using component-based files. These are files with a .vue extension and contain the template HTML along with the JavaScript required for the component. They also allow you to create scoped CSS – styles that only apply to that component. If you chose to create your app using the CLI, all of the theory and a lot of the practical knowledge you have learned in this book can easily be ported across. Command-line unit testing Along with component files, the Vue CLI allows you to integrate with command-line unit tests easier, such as Jest, Mocha, Chai, and TestCafe (https://testcafe.devexpress.com/). For example, TestCafe allows you to specify several different tests, including checking whether content exists, to clicking buttons to test functionality. An example of a TestCafe test checking to see if our filtering component in our first app contains the work Field would be: test('The filtering contains the word "filter"', async testController => { const filterSelector = await new Selector('body > #app > form > label:nth-child(1)'); await testController.expect(paragraphSelector.innerText).eql('Filter'); }); This test would then equate to true or false. 
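For comparison with the TestCafe example, a component-level unit test (the approach discussed next) written with Jest and @vue/test-utils might look like the following. The Filtering.vue component and its expected output are assumptions based on the filtering example mentioned above, not code from the book:

// filtering.spec.js - a hypothetical unit test for a Filtering.vue
// single-file component whose template renders a label with the text "Filter".
import { mount } from '@vue/test-utils';
import Filtering from './Filtering.vue';

describe('Filtering component', () => {
  it('renders a label containing the word "Filter"', () => {
    const wrapper = mount(Filtering);
    expect(wrapper.text()).toContain('Filter');
  });
});

Because the component is mounted on its own, nothing outside it can influence the result, which is exactly the isolation benefit described next.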
Unit tests are generally written in conjunction with components themselves, allowing components to be reused and tested in isolation. This allows you to check that external factors have no bearing on the output of your tests. Most command-line JavaScript testing libraries will integrate with Vue.js; there is a great list available in the awesome Vue GitHub repository (https://github.com/vuejs/awesome-vue#test). Browser automation The alternative to using command-line unit testing is to automate your browser with a testing suite. This kind of testing is still triggered via the command line, but rather than integrating directly with your Vue application, it opens the page in the browser and interacts with it like a user would. A popular tool for doing this is Nightwatch.js (http://nightwatchjs.org/). You may use this suite for opening your shop and interacting with the filtering component or product list ordering and comparing the result. The tests are written in very colloquial English and are not restricted to being on the same domain name or file network as the site to be tested. The library is also language agnostic – working for any website regardless of what it is built with. The example Nightwatch.js gives on their website is for opening Google and ensuring the first result of a Google search for rembrandt van rijn is the Wikipedia entry: module.exports = { 'Demo test Google' : function (client) { client .url('http://www.google.com') .waitForElementVisible('body', 1000) .assert.title('Google') .assert.visible('input[type=text]') .setValue('input[type=text]', 'rembrandt van rijn') .waitForElementVisible('button[name=btnG]', 1000) .click('button[name=btnG]') .pause(1000) .assert.containsText('ol#rso li:first-child', 'Rembrandt - Wikipedia') .end(); } }; An alternative to Nightwatch is Selenium (http://www.seleniumhq.org/). Selenium has the advantage of having a Firefox extension that allows you to visually create tests and commands. We covered usage of Vue.js dev tools and learned to build automated tests for your web applications. If you found this tutorial useful, do check out the book Vue.js 2.x by Example and get complete knowledge resource on the process of building single-page applications with Vue.js. Building your first Vue.js 2 Web application 5 web development tools will matter in 2018


Creating Controllers with Blueprints

Packt
21 Sep 2015
8 min read
In this article by Jack Stouffer, author of the book Mastering Flask, the more complex and powerful versions will be introduced, and we will turn our disparate view functions in cohesive wholes. We will also discuss the internals of how Flask handles the lifetime of an HTTP request and advanced ways to define Flask views. (For more resources related to this topic, see here.) Request setup, teardown, and application globals In some cases, a request-specific variable is needed across all view functions and needs to be accessed from the template as well. To achieve this, we can use Flask's decorator function @app.before_request and the object g. The function @app.before_request is executed every time before a new request is made. The Flask object g is a thread-safe store of any data that needs to be kept for each specific request. At the end of the request, the object is destroyed, and a new object is spawned at the start of a new request. For example, this code checks whether the Flask session variable contains an entry for a logged in user; if it exists, it adds the User object to g: from flask import g, session, abort, render_template @app.before_request def before_request(): if 'user_id' in session: g.user = User.query.get(session['user_id']) @app.route('/restricted') def admin(): if g.user is None: abort(403) return render_template('admin.html') Multiple functions can be decorated with @app.before_request, and they all will be executed before the requested view function is executed. There also exists a decorator @app.teardown_request, which is called after the end of every request. Keep in mind that this method of handling user logins is meant as an example and is not secure. Error pages Displaying browser's default error pages to the end user is jarring as the user loses all context of your app, and they must hit the back button to return to your site. To display your own templates when an error is returned with the Flask abort() function, use the errorhandler decorator function: @app.errorhandler(404) def page_not_found(error): return render_template('page_not_found.html'), 404 The errorhandler is also useful to translate internal server errors and HTTP 500 code into user friendly error pages. The app.errorhandler() function may take either one or many HTTP status code to define which code it will act on. The returning of a tuple instead of just an HTML string allows you to define the HTTP status code of the Response object. By default, this is set to 200. Class-based views In most Flask apps, views are handled by functions. However, when many views share common functionality or there are pieces of your code that could be broken out into separate functions, it would be useful to implement our views as classes to take advantage of inheritance. For example, if we have views that render a template, we could create a generic view class that keeps our code DRY: from flask.views import View class GenericView(View): def __init__(self, template): self.template = template super(GenericView, self).__init__() def dispatch_request(self): return render_template(self.template) app.add_url_rule( '/', view_func=GenericView.as_view( 'home', template='home.html' ) ) The first thing to note about this code is the dispatch_request() function in our view class. This is the function in our view that acts as the normal view function and returns an HTML string. The app.add_url_rule() function mimics the app.route() function as it ties a route to a function call. 
The first argument defines the route of the function, and the view_func parameter defines the function that handles the route. The View.as_view() method is passed to the view_func parameter because it transforms the View class into a view function. The first argument defines the name of the view function, so functions such as url_for() can route to it. The remaining parameters are passed to the __init__ function of the View class. Like the normal view functions, HTTP methods other than GET must be explicitly allowed for the View class. To allow other methods, a class variable containing the list of methods named methods must be added: class GenericView(View): methods = ['GET', 'POST'] … def dispatch_request(self): if request.method == 'GET': return render_template(self.template) elif request.method == 'POST': … Method class views Often, when functions handle multiple HTTP methods, the code can become difficult to read due to large sections of code nested within if statements: @app.route('/user', methods=['GET', 'POST', 'PUT', 'DELETE']) def users(): if request.method == 'GET': … elif request.method == 'POST': … elif request.method == 'PUT': … elif request.method == 'DELETE': … This can be solved with the MethodView class. MethodView allows each method to be handled by a different class method to separate concerns: from flask.views import MethodView class UserView(MethodView): def get(self): … def post(self): … def put(self): … def delete(self): … app.add_url_rule( '/user', view_func=UserView.as_view('user') ) Blueprints In Flask, a blueprint is a method of extending an existing Flask app. They provide a way of combining groups of views with common functionality and allow developers to break their app down into different components. In our architecture, the blueprints will act as our controllers. Views are registered to a blueprint; a separate template and static folder can be defined for it, and when it has all the desired content on it, it can be registered on the main Flask app to add blueprints' content. A blueprint acts much like a Flask app object, but is not actually a self-contained app. This is how Flask extensions provide views function. To get an idea of what blueprints are, here is a very simple example: from flask import Blueprint example = Blueprint( 'example', __name__, template_folder='templates/example', static_folder='static/example', url_prefix="/example" ) @example.route('/') def home(): return render_template('home.html') The blueprint takes two required parameters—the name of the blueprint and the name of the package—which are used internally in Flask, and passing __name__ to it will suffice. The other parameters are optional and define where the blueprint will look for files. Because templates_folder was specified, the blueprint will not look in the default template folder, and the route will render templates/example/home.html and not templates/home.html. The url_prefix option automatically adds the provided URI to the start of every route in the blueprint. So, the URL for the home view is actually /example/. The url_for() function will now have to be told which blueprint the requested route is in: {{ url_for('example.home') }} Also, the url_for() function will now have to be told whether the view is being rendered from within the same blueprint: {{ url_for('.home') }} The url_for() function will also look for static files in the specified static folder as well. 
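Blueprints and class-based views combine naturally. The short sketch below is an illustration rather than code from the book; it registers a MethodView-style view on a blueprint instead of on the app:

from flask import Blueprint
from flask.views import MethodView

api = Blueprint('api', __name__, url_prefix='/api')

class UserView(MethodView):
    def get(self):
        return 'list users'

    def post(self):
        return 'create a user', 201

# as_view() turns the class into a view function, exactly as it does
# when registering directly on the app object.
api.add_url_rule('/user', view_func=UserView.as_view('user'))

Once the blueprint is registered on the app (shown next), url_for('api.user') resolves to /api/user.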
To add the blueprint to our app:

app.register_blueprint(example)

Let's transform our current app to one that uses blueprints. We will first need to define our blueprint before all of our routes:

blog_blueprint = Blueprint(
    'blog',
    __name__,
    template_folder='templates/blog',
    url_prefix="/blog"
)

Now, because the templates folder was defined, we need to move all of our templates into a subfolder of the templates folder named blog. Next, all of our routes need to have the @app.route decorator changed to @blog_blueprint.route, and any class view assignments now need to be registered to blog_blueprint. Remember that the url_for() function calls in the templates will also have to be changed to have a period prepended to them to indicate that the route is in the same blueprint.

At the end of the file, right before the if __name__ == '__main__': statement, add the following:

app.register_blueprint(blog_blueprint)

Now all of our content is back on the app, which is registered under the blueprint. Because our base app no longer has any views, let's add a redirect on the base URL:

@app.route('/')
def index():
    return redirect(url_for('blog.home'))

Why blog and not blog_blueprint? Because blog is the name of the blueprint, and the name is what Flask uses internally for routing; blog_blueprint is the name of the variable in the Python file.

Summary

We now have our app working inside a blueprint, but what does this give us? Let's say that we wanted to add a photo-sharing function to our site: we would be able to group all the view functions into one blueprint with its own templates, static folder, and URL prefix, without any fear of disrupting the functionality of the rest of the site.

Resources for Article:

Further resources on this subject: More about Julia [article], Optimization in Python [article], Symbolizers [article]


Securing a Moodle Instance

Packt
14 Feb 2011
7 min read
Moodle Security Moodle is an open source CMS (Course Management System)/LMS (Learning Management System)/VLE (Virtual Learning Environment). Its primary purpose is to enable educational institutions and individuals to create and publish learning content in a coherent and pedagogically valuable manner, so that it can be used for successful knowledge transfer towards students. That sounds harmless enough. Why would anybody want to illegally access an educational platform? There are various motives of computer criminals. In general, they are people committed to the circumvention of computer security. This primarily concerns unauthorized remote computer break-ins via a communication network such as the Internet. Some of the motives could be: Financial: Stealing user and/or course information and selling it to other third-parties Personal: Personal grudge, infantile display of power, desire to alter assigned grades, and so on Weak points Moodle is a web application and as such must be hosted on a computer connected to some kind of network (private or public—Internet / Intranet). This computer must have the following components: Operating System (OS) Web server PHP Database server Moodle Each of these pieces can be used as a point of attack by a malicious user(s) in order to obtain access to the protected information. Therefore, it is our task to make all of them as secure as possible. The main focus will be directed towards our Moodle and PHP configuration. The secure installation of Moodle In this section we follow a secure installation of Moodle. In case you do not already have an installed instance of Moodle, we will show you the quickest way to do that, and at the same time focus on security. If you already have Moodle installed, go to the following section where you will see how to secure an existing installation of Moodle Starting from scratch In order to install Moodle on your server you need to install and configure the web server with support for PHP and the database server. We will not go into the specifics of setting up a particular web server, PHP, and/or database server right now, since it depends on the OS your server has installed. Also we will not explain in detail tasks like creating directories, setting up file permissions, etc as they are OS specific and out of the scope of this article. This section assumes you already know about your OS and have already configured your web server with an empty database. Every installation of Moodle must have: Web server with PHP support Dedicated database Two dedicated directories—one for Moodle and another for platform data We assume that your web server is Apache (Linux) or IIS (Windows), and that you use PHP 5.1.x or later and MySQL 5.0 or later. Installation checklist The following checklist will guide you through the basic installation procedure for Moodle. Download the latest stable version of Moodle from http://download. moodle.org/. (At the time of writing this article it is 1.9.8+). You have two options available on the download page—moodle-weekly-19.tgz or moodle-weekly-19.zip archive. In case you use Linux you can choose either. In case of Windows, ZIP file is the preferred choice. The reason for this is simple. Every Windows server comes, by default, with installed support for managing Zip archives. On the other hand, TGZ is readily available on every Linux distribution. Unpack the compressed file you just downloaded. This will produce a directory with the name moodle which contains all of the platform files. 
Move that directory to the web-root of your web server. After doing that it is recommended to make all files read-only for safety reasons. Create a directory called moodledata somewhere on the disk. Make sure that it is not in the web-root of your web server since that would incur a serious security breach. Doing that might expose all platform files submitted by course participants and teachers together with the course content to the outside world. Create an empty database (we suggest the name moodle or moodledb). The default database character set must be configured to utf8 and collation set to utf8_general_ci. It is recommended to have a special user for accessing this database with limited permissions. In case of credentials theft, a malicious user could only operate on data from one database, minimizing the potential damage. That database user account will need permissions for creating, altering, and deleting the tables, creating/dropping the indexes and reading/writing the data. Here is what you need to execute in your MySQL console for creating a database and user: CREATE DATABASE moodle CHARSET 'utf8' COLLATION 'utf8_general_ ci'; CREATE USER 'moodle'@'localhost' IDENTIFIED BY 'somepass'; GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER ON loomdb.* TO loom@localhost IDENTIFIED BY 'somepass'; FLUSH PRIVILEGES; Start the installation by opening the http://url to local installation of the moodle (for example http://localhost/moodle) in your browser. Make sure it is a more recent browser with pop ups and JavaScript enabled. We recommend Internet Explorer 8+ or Firefox 3.6+. You will see the following screenshot. On the next screen, we need to specify the web address of the platform and the location of the moodle directory on the disk. Now, we must configure database access. Choose MySQL as database type, localhost as host server, set the name of the database (moodle), database user, and its password (moodle/moodle). You should leave the table prefix as is. Moodle checks the server configuration on this screen and displays the outcome. We can proceed with the installation only if all of the minimal requirements are met. During installation, Moodle generates a configuration file within the moodle directory called config.php. It is important to make this file read-only after installation for security reasons. In case Moodle cannot save config.php it will offer to download or copy content of the file and manually place it in the appropriate location on the server. See the following screenshot: We are now presented with terms of usage and license agreement. To proceed click yes. We can now start the installation itself. During that process Moodle will create all of the tables in the database, session files in the moodledata directory, and load some initial information. Make sure you check Unattended operation at the bottom. That way, the process will be executed without user intervention. After the database setup is finished, we are offered a new screen where we must configure the administrative account. With this user you manage your platform, so be careful about disclosing this information to other users. Field name Description Recommended action Username Defines user name inside the Moodle. By default it is admin. We recommend leaving the default value unchanged. New password Defines user logon password. Must supply valid password. First name Defines name of the admin. Must supply valid name. Surname Defines surname of the admin. Must supply valid name. 
E-mail address Defines user e-mail address. Must supply valid e-mail. E-mail display Define the visibility of your e-mail address within the platform. We recommend leaving it as is (visible to all). E-mail active Defines whether e-mail is activated or not. Set it to enable. City/Town Defines name of the city where you live. Moodle requires this value. Select Country Name of your country. Set it to your country name. Timezone Sets your time zone so that server can display time calculated for your location in some reports. If not sure what your time zone is, leave it as is.   Preferred language Choose the platform language. By default, Moodle comes only with support for English language. If you want to add more languages visit http://download.moodle.org/ lang16/ and download and install the appropriate files.   After configuring administrative user there is just one more step to complete and that is setting up the site title and short name. In the Full site name field, place the long name you would like to set for your website; it can have multiple words. In the Short name for the site field put one word without spaces which will represent your website. In the Front Page Description field put a longer description (one paragraph) that explains in more detail the purpose of your site. This is optional and does not affect the Moodle functionality at all You have now finished installing Moodle and should see the following screenshot:
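The screenshot itself is not reproduced here. As a final reference, the config.php file written by the installer typically contains entries along the following lines; this is an illustration only, since the exact settings vary between Moodle versions, and the file should be made read-only as noted earlier:

<?php  // config.php - illustrative example, not actual installer output
unset($CFG);
$CFG = new stdClass();

$CFG->dbtype    = 'mysql';         // database driver
$CFG->dbhost    = 'localhost';
$CFG->dbname    = 'moodle';
$CFG->dbuser    = 'moodle';
$CFG->dbpass    = 'somepass';
$CFG->prefix    = 'mdl_';          // table prefix left at its default

$CFG->wwwroot   = 'http://localhost/moodle';  // web address of the platform
$CFG->dirroot   = '/var/www/moodle';          // the moodle directory
$CFG->dataroot  = '/srv/moodledata';          // kept outside the web root
$CFG->admin     = 'admin';

require_once($CFG->dirroot . '/lib/setup.php');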

article-image-creating-extension-yii-2
Packt
24 Sep 2014
22 min read
Save for later

Creating an Extension in Yii 2

Packt
24 Sep 2014
22 min read
In this article by Mark Safronov, co-author of the book Web Application Development with Yii 2 and PHP, we'll learn to create our own extension using a simple way of installation. There is a process we have to follow, though, and some preparation will be needed to wire up your classes to the Yii application. The whole article will be devoted to this process. (For more resources related to this topic, see here.) Extension idea So, how are we going to extend the Yii 2 framework as an example for this article? Let's become vile this time and make a malicious extension, which will provide a sort of phishing backdoor for us. Never do exactly the thing we'll describe in this article! It'll not give you instant access to the attacked website anyway, but a skilled black hat hacker can easily get enough information to achieve total control over your application. The idea is this: our extension will provide a special route (a controller with a single action inside), which will dump the complete application configuration to the web page. Let's say it'll be reachable from the route /app-info/configuration. We cannot, however, reliably get the contents of the configuration file itself. At the point where we can attach ourselves to the application instance, the original configuration array is inaccessible, and even if it were accessible, we can't be sure about where it came from anyway. So, we'll inspect the runtime status of the application and return the most important pieces of information we can fetch at the stage of the controller action resolution. That's the exact payload we want to introduce:

public function actionConfiguration()
{
    $app = Yii::$app;
    $config = [
        'components' => $app->components,
        'basePath' => $app->basePath,
        'params' => $app->params,
        'aliases' => Yii::$aliases
    ];
    return \yii\helpers\Json::encode($config);
}

The preceding code is the core of the extension and is assumed in the following sections. In fact, if you know the value of the basePath setting of the application, a list of its aliases, the settings for the components (among which the DB connection may reside), and all the custom parameters that developers set manually, you can map the target application quite reliably. Given that you know all the credentials this way, you have an enormous amount of highly valuable information about the application now. All you need to do now is make the user install this extension. Creating the extension contents Our plan is as follows: We will develop our extension in a folder which is different from our example CRM application. This extension will be named yii2-malicious, to be consistent with the naming of other Yii 2 extensions. Given the kind of payload we saw earlier, our extension will consist of a single controller and some special wiring code (which we haven't learned about yet) to automatically attach this controller to the application. Finally, to consider this subproject a true Yii 2 extension and not just some random library, we want it to be installable in the same way as other Yii 2 extensions. Preparing the boilerplate code for the extension Let's make a separate directory, initialize the Git repository there, and add the AppInfoController to it.
In the bash command line, this can be achieved by the following commands:

$ mkdir yii2-malicious && cd $_
$ git init
$ > AppInfoController.php

Inside the AppInfoController.php file, we'll write the usual boilerplate code for a Yii 2 controller as follows:

namespace malicious;

use yii\web\Controller;

class AppInfoController extends Controller
{
    // Action here
}

Put the action defined in the preceding code snippet inside this controller and we're done with it. Note the namespace: it is not the same as the folder this controller is in, and this is not according to our usual auto-loading rules. We will explore later in this article why this is not an issue, because of how Yii 2 treats the auto-loading of classes from extensions. Now this controller needs to be wired to the application somehow. We already know that the application has a special property called controllerMap, in which we can manually attach controller classes. However, how do we do this automatically, better yet, right at the application startup time? Yii 2 has a special feature called bootstrapping to support exactly this: to attach some activity at the beginning of the application lifetime, though not at the very beginning but certainly before handling the request. This feature is tightly related to the extensions concept in Yii 2, so it's a perfect time to explain it. FEATURE – bootstrapping To explain the bootstrapping concept in short, you can declare some components of the application in the yii\base\Application::$bootstrap property. They'll be properly instantiated at the start of the application. If any of these components implements the BootstrapInterface interface, its bootstrap() method will be called, so you'll get the application initialization enhancement for free. Let's elaborate on this. The yii\base\Application::$bootstrap property holds the array of generic values that you tell the framework to initialize beforehand. It's basically an improvement over the preload concept from Yii 1.x. You can specify four kinds of values to initialize, as follows:
The ID of an application component
The ID of some module
A class name
A configuration array
If it's the ID of a component, this component is fully initialized. If it's the ID of a module, this module is fully initialized. It matters greatly because Yii 2 employs lazy loading on the components and modules system, and they are usually initialized only when explicitly referenced. Being bootstrapped means that their initialization, regardless of whether it's slow or resource-consuming, always happens, and always at the start of the application. If you have a component and a module with identical IDs, then the component will be initialized and the module will not be initialized! If the value mentioned in the bootstrap property is a class name or a configuration array, then an instance of the class in question is created using the yii\BaseYii::createObject() facility. The instance created will be thrown away immediately if it doesn't implement the yii\base\BootstrapInterface interface. If it does, its bootstrap() method will be called. Then, the object will be thrown away. So, what's the effect of this bootstrapping feature? We already used this feature while installing the debug extension. We had to bootstrap the debug module using its ID, for it to be able to attach the event handler so that we would get the debug toolbar at the bottom of each page of our web application.
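For illustration, a bootstrap declaration in an application configuration might look roughly like the following sketch; the component and module IDs and the app\components\Warmup class are made-up examples rather than code from this article:

'bootstrap' => [
    'log',                    // ID of an application component: fully initialized at startup
    'debug',                  // ID of a module: fully initialized at startup
    'app\components\Warmup',  // class name: instantiated, bootstrap() is called if it
                              // implements yii\base\BootstrapInterface, then the object is discarded
],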
This feature is indispensable if you need to be sure that some activity will always take place at the start of the application lifetime. The BootstrapInterface interface is basically the incarnation of the command pattern. By implementing this interface, we gain the ability to attach any activity, not necessarily bound to a component or module, to the application initialization. FEATURE – extension registering The bootstrapping feature is repeated in the handling of the yii\base\Application::$extensions property. This property is the only place where the concept of an extension can be seen in the Yii framework. Extensions in this property are described as a list of arrays, and each of them should have the following fields:
name: This field holds the name of the extension.
version: This field holds the extension's version (nothing will really check it, so it's only for reference).
bootstrap: This field holds the data for this extension's bootstrap. It is filled with the same elements as Yii::$app->bootstrap described previously and has the same semantics.
alias: This field holds the mapping from Yii 2 path aliases to real directory paths.
When the application registers the extension, it does two things in the following order: It registers the aliases from the extension, using the Yii::setAlias() method. It initializes the thing mentioned in the bootstrap of the extension in exactly the same way we described in the previous section. Note that the extensions' bootstraps are processed before the application's bootstraps. Registering aliases is crucial to the whole concept of extensions in Yii 2. It's because of the Yii 2 PSR-4 compatible autoloader. Here is the quote from the documentation block for the yii\BaseYii::autoload() method: If the class is namespaced (e.g. yii\base\Component), it will attempt to include the file associated with the corresponding path alias (e.g. @yii/base/Component.php). This autoloader allows loading classes that follow the PSR-4 standard and have its top-level namespace or sub-namespaces defined as path aliases. The PSR-4 standard is available online at http://www.php-fig.org/psr/psr-4/. Given that behavior, the alias setting of the extension is basically a way to tell the autoloader the name of the top-level namespace of the classes in your extension code base. Let's say you have the following value of the alias setting of your extension:

"alias" => ["@companyname/extensionname" => "/some/absolute/path"]

If you have the /some/absolute/path/subdirectory/ClassName.php file, and, according to PSR-4 rules, it contains the class whose fully qualified name is companyname\extensionname\subdirectory\ClassName, Yii 2 will be able to autoload this class without problems. Making the bootstrap for our extension – hideous attachment of a controller We have a controller already prepared in our extension. Now we want this controller to be automatically attached to the application under attack when the extension is processed. This is achievable using the bootstrapping feature we just learned.
Let's create the malicious\Bootstrap class for this cause inside the code base of our extension, with the following boilerplate code:

<?php

namespace malicious;

use yii\base\BootstrapInterface;

class Bootstrap implements BootstrapInterface
{
    /** @param \yii\web\Application $app */
    public function bootstrap($app)
    {
        // Controller addition will be here.
    }
}

With this preparation, the bootstrap() method will be called at the start of the application, provided we wire everything up correctly. But first, we should consider how we manipulate the application to make use of our controller. This is easy, really, because there's the yii\web\Application::$controllerMap property (don't forget that it's inherited from yii\base\Module, though). We'll just do the following inside the bootstrap() method:

$app->controllerMap['app-info'] = 'malicious\AppInfoController';

We will rely on the composer and Yii 2 autoloaders to actually find malicious\AppInfoController. Just imagine that you can do anything inside the bootstrap. For example, you can open a cURL connection with some botnet and send the accumulated application information there. Never believe random extensions on the Web. This actually concludes what we need to do to complete our extension. All that's left now is to make our extension installable in the same way as the other Yii 2 extensions we were using up until now. If you need to attach this malicious extension to your application manually, and you have a folder that holds the code base of the extension at the path /some/filesystem/path, then all you need to do is write the following code inside the application configuration:

'extensions' => array_merge(
    (require __DIR__ . '/../vendor/yiisoft/extensions.php'),
    [
        'malicious/app-info' => [
            'name' => 'Application Information Dumper',
            'version' => '1.0.0',
            'bootstrap' => 'malicious\Bootstrap',
            'alias' => ['@malicious' => '/some/filesystem/path'] // that's the path to the extension
        ]
    ]
)

Please note the exact way of specifying the extensions setting. We're merging the contents of the extensions.php file supplied by the Yii 2 distribution from composer and our own manual definition of the extension. This extensions.php file is what allows Yiisoft to distribute the extensions in such a way that you are able to install them by a simple, single invocation of a require composer command. Let's learn now what we need to do to repeat this feature. Making the extension installable as... erm, extension First, to make it clear, we are talking here only about the situation when Yii 2 is installed by composer, and we want our extension to be installable through composer as well. This gives us the baseline under all of our assumptions. Let's see the extensions that we need to install:
Gii the code generator
The Twitter Bootstrap extension
The Debug extension
The SwiftMailer extension
We can install all of these extensions using composer. We introduce the extensions.php file reference when we install the Gii extension. Have a look at the following code:

'extensions' => (require __DIR__ . '/../vendor/yiisoft/extensions.php')

If we open the vendor/yiisoft/extensions.php file (given that all extensions from the preceding list were installed) and look at its contents, we'll see the following code (note that in your installation, it can be different):
<?php

$vendorDir = dirname(__DIR__);

return array (
  'yiisoft/yii2-bootstrap' => array (
    'name' => 'yiisoft/yii2-bootstrap',
    'version' => '9999999-dev',
    'alias' => array (
      '@yii/bootstrap' => $vendorDir . '/yiisoft/yii2-bootstrap',
    ),
  ),
  'yiisoft/yii2-swiftmailer' => array (
    'name' => 'yiisoft/yii2-swiftmailer',
    'version' => '9999999-dev',
    'alias' => array (
      '@yii/swiftmailer' => $vendorDir . '/yiisoft/yii2-swiftmailer',
    ),
  ),
  'yiisoft/yii2-debug' => array (
    'name' => 'yiisoft/yii2-debug',
    'version' => '9999999-dev',
    'alias' => array (
      '@yii/debug' => $vendorDir . '/yiisoft/yii2-debug',
    ),
  ),
  'yiisoft/yii2-gii' => array (
    'name' => 'yiisoft/yii2-gii',
    'version' => '9999999-dev',
    'alias' => array (
      '@yii/gii' => $vendorDir . '/yiisoft/yii2-gii',
    ),
  ),
);

One extension was highlighted to stand out from the others. So, what does all this mean to us?
First, it means that Yii 2 somehow generates the required configuration snippet automatically when you install the extension's composer package.
Second, it means that each extension provided by the Yii 2 framework distribution will ultimately be registered in the extensions setting of the application.
Third, all the classes in the extensions are made available in the main application code base by the carefully crafted alias settings inside the extension configuration.
Fourth, ultimately, easy installation of Yii 2 extensions is made possible by some integration between the Yii framework and the composer distribution system.
The magic is hidden inside the composer.json manifest of the extensions built into Yii 2. The details about the structure of this manifest are written in the documentation of composer, which is available at https://getcomposer.org/doc/04-schema.md. We'll need only one field, though, and that is type. Yii 2 employs a special type of composer package, named yii2-extension. If you check the manifests of yii2-debug, yii2-swiftmailer, and other extensions, you'll see that they all have the following line inside:

"type": "yii2-extension",

Normally composer will not understand that this type of package is to be installed. But the main yii2 package, containing the framework itself, depends on the special auxiliary yii2-composer package:

"require": {
    … other requirements ...
    "yiisoft/yii2-composer": "*",

This package provides a Composer Custom Installer (read about it at https://getcomposer.org/doc/articles/custom-installers.md), which enables this package type. The whole point of the yii2-extension package type is to automatically update the extensions.php file with the information from the extension's manifest file. Basically, all we need to do now is craft the correct composer.json manifest file inside the extension's code base. Let's write it step by step. Preparing the correct composer.json manifest We first need a block with an identity. Have a look at the following lines of code:

"name": "malicious/app-info",
"version": "1.0.0",
"description": "Example extension which reveals important information about the application",
"keywords": ["yii2", "application-info", "example-extension"],
"license": "CC-0",

Technically, we must provide only name. Even version can be omitted if our package meets two prerequisites:
It is distributed from some version control system repository, such as a Git repository
It has tags in this repository, correctly identifying the versions in the commit history
And we do not want to bother with it right now. Next, we need to depend on the Yii 2 framework just in case.
Normally, users will install the extension after the framework is already in place, but in the case of the extension already being listed in the require section of composer.json, among other things, we cannot be sure about the exact ordering of the require statements, so it's better (and easier) to just declare the dependency explicitly as follows:

"require": {
    "yiisoft/yii2": "*"
},

Then, we must provide the type as follows:

"type": "yii2-extension",

After this, for the Yii 2 extension installer, we have to provide two additional blocks; autoload will be used to correctly fill the alias section of the extension configuration. Have a look at the following code:

"autoload": {
    "psr-4": {
        "malicious\\": ""
    }
},

What we basically mean is that our classes are laid out according to PSR-4 rules in such a way that the classes in the malicious namespace are placed right inside the root folder. The second block is extra, in which we tell the installer that we want to declare a bootstrap section for the extension configuration:

"extra": {
    "bootstrap": "malicious\\Bootstrap"
},

Our manifest file is complete now. Commit everything to the version control system:

$ git commit -a -m "Added the Composer manifest file to repo"

Now, we'll add the tag at last, corresponding to the version we declared, as follows:

$ git tag 1.0.0

We already mentioned earlier the purpose for which we're doing this. All that's left is to tell composer from where to fetch the extension contents. Configuring the repositories We need to configure some kind of repository for the extension now so that it is installable. The easiest way is to use the Packagist service, available at https://packagist.org/, which has seamless integration with composer. It has the following pro and con:
Pro: You don't need to declare anything additional in the composer.json file of the application you want to attach the extension to.
Con: You must have a public VCS repository (either Git, SVN, or Mercurial) where your extension is published.
In our case, where we are in fact just learning about how to install things using composer, we certainly do not want to make our extension public. Do not use Packagist for the extension example we are building in this article. Let's recall our goal. Our goal is to be able to install our extension by calling the following command at the root of the code base of some Yii 2 application:

$ php composer.phar require "malicious/app-info:*"

After that, we should see something like the following screenshot after requesting the /app-info/configuration route: This corresponds to the following structure (the screenshot is from the http://jsonviewer.stack.hu/ web service): Put the extension in some public repository, for example, GitHub, and register a package at Packagist. This command will then work without any preparation in the composer.json manifest file of the target application. But in our case, we will not make this extension public, and so we have two options left for us. The first option, which is perfectly suited to our learning cause, is to use the archived package directly.
For this, you have to add the repositories section to composer.json in the code base of the application you want to add the extension to:

"repositories": [
    // definitions of repositories for the packages required by this application
]

To specify the repository for the package that should be installed from the ZIP archive, you have to grab the entire contents of the composer.json manifest file of this package (in our case, our malicious/app-info extension) and put them as an element of the repositories section, verbatim. This is the most complex way to set up the composer package requirement, but this way, you can depend on absolutely any folder with files (packaged into an archive). Of course, the contents of composer.json of the extension do not specify the actual location of the extension's files. You have to add this to repositories manually. In the end, you should have the following additional section inside the composer.json manifest file of the target application:

"repositories": [
    {
        "type": "package",
        "package": {
            // … skipping whatever was copied verbatim from the composer.json of the extension ...
            "dist": {
                "url": "/home/vagrant/malicious.zip", // example file location
                "type": "zip"
            }
        }
    }
]

This way, we specify the location of the package in the filesystem of the same machine and tell composer that this package is a ZIP archive. Now, you should just zip the contents of the yii2-malicious folder we have created for the extension, put them somewhere on the target machine, and provide the correct URL. Please note that it's necessary to archive only the contents of the extension and not the folder itself. After this, you run composer on the machine that really has this URL accessible (you can use http:// type URLs, of course, too), and then you get the following response from composer: To check that Yii 2 really installed the extension, you can open the file vendor/yiisoft/extensions.php and check whether it contains the following block now:

'malicious/app-info' =>array ('name' => 'malicious/app-info','version' => '1.0.0.0','alias' =>array ('@malicious' => $vendorDir . '/malicious/app-info',),'bootstrap' => 'malicious\Bootstrap',),

(The indentation was preserved as is from the actual file.) If this block is indeed there, then all you need to do is open the /app-info/configuration route and see whether it reports JSON to you. It should. The pros and cons of the file-based installation are as follows:
Pros: You can specify any file as long as it is reachable by some URL, and ZIP archive management capabilities exist on virtually any kind of platform today. Also, you don't need to set up any version control system repository, although that is of dubious benefit.
Cons: There is too much work in the composer.json manifest file of the target application; the requirement to copy the entire manifest to the repositories section is overwhelming and leads to code duplication. In addition, the manifest from the extension package will not be processed at all, which means that you cannot just strip the entry in repositories, leaving only the dist and name sections there, because the Yii 2 installer will not be able to get to the autoload and extra sections.
The last method is to use a local version control system repository. We already have everything committed to the Git repository, and we have the correct tag placed there, corresponding to the version we declared in the manifest. This is everything we need to prepare inside the extension itself.
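Putting the pieces from the previous sections together, the extension's own composer.json manifest would look roughly like the following; this is a sketch assembled from the snippets shown earlier, not a verbatim listing:

{
    "name": "malicious/app-info",
    "version": "1.0.0",
    "description": "Example extension which reveals important information about the application",
    "keywords": ["yii2", "application-info", "example-extension"],
    "license": "CC-0",
    "type": "yii2-extension",
    "require": {
        "yiisoft/yii2": "*"
    },
    "autoload": {
        "psr-4": {
            "malicious\\": ""
        }
    },
    "extra": {
        "bootstrap": "malicious\\Bootstrap"
    }
}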
Now, we need to modify the target application's manifest to add the repositories section in the same way we did previously, but this time we will introduce a lot less code there:

"repositories": [
    {
        "type": "git",
        "url": "/home/vagrant/yii2-malicious/" // put your own URL here
    }
]

All that's needed from you is to specify the correct URL to the Git repository of the extension we were preparing at the beginning of this article. After you specify this repository in the target application's composer manifest, you can just issue the desired command:

$ php composer.phar require "malicious/app-info:1.0.0"

Everything will be installed as usual. Confirm the successful installation again by having a look at the contents of vendor/yiisoft/extensions.php and by accessing the /app-info/configuration route in the application. The pros and con of the repository-based installation are as follows:
Pro: Relatively little code to write in the application's manifest.
Pro: You don't need to really publish your extension (or the package in general). In some settings, it's really useful, for closed-source software, for example.
Con: You still have to meddle with the manifest of the application itself, which can be out of your control, and in this case, you'll have to guide your users about how to install your extension, which is not good for PR.
In short, the following pieces inside the composer.json manifest turn an arbitrary composer package into a Yii 2 extension:
First, we tell composer to use the special Yii 2 installer for packages, as follows: "type": "yii2-extension"
Then, we tell the Yii 2 extension installer where the bootstrap for the extension (if any) is, as follows: "extra": {"bootstrap": "<Fully qualified name>"}
Next, we tell the Yii 2 extension installer how to prepare aliases for your extension so that classes can be autoloaded, as follows: "autoload": {"psr-4": { "namespace": "<folder path>"}}
Finally, we add the explicit requirement of the Yii 2 framework itself, so we'll be sure that the Yii 2 extension installer will be installed at all: "require": {"yiisoft/yii2": "*"}
Everything else is the details of the installation of any other composer package, which you can read about in the official composer documentation. Summary In this article, we looked at how Yii 2 implements its extensions so that they're easily installable by a single composer invocation and can be automatically attached to the application afterwards. We learned that this requires some level of integration between these two systems, Yii 2 and composer, and in turn this requires some additional preparation from you as a developer of the extension. We used a really silly, even a bit dangerous, example of an extension. This was for three reasons:
The extension was fun to make (we hope)
We showed that using the bootstrap mechanics, we can basically wire up the pieces of the extension to the target application automatically, without any need for elaborate manual installation instructions
We showed the potential danger in installing random extensions from the Web, as an extension can run absolutely arbitrary code right at the application initialization and, more than that, at each request made to the application
We have discussed three methods of distribution of composer packages, which also apply to Yii 2 extensions. The general rule of thumb is this: if you want your extension to be publicly available, just use the Packagist service.
In any other case, use the local repositories, as you can use both local filesystem paths and web URLs. We looked at the option to attach the extension completely manually, not using the composer installation at all. Resources for Article: Further resources on this subject: Yii: Adding Users and User Management to Your Site [Article] Meet Yii [Article] Yii 1.1: Using Zii Components [Article]
article-image-creating-quiz-moodle
Packt
29 Mar 2011
16 min read
Save for later

Creating a quiz in Moodle

Packt
29 Mar 2011
16 min read
Getting started with Moodle tests To start with, we need to select a topic or theme for our test. We are going to choose general science, since the subject matter will make it easy to incorporate each of the item types we have seen previously. Now that we have an idea of what our topic is going to be, we will get started on the creation of the test. We will be creating all new questions for this test, which will give us the added benefit of a bit more practice in item creation. So, let's get started and work on making our first real test! Let's open our Moodle course, go to the Activity drop-down, and select Create a new Quiz. Once it has been selected, we will be taken to the Quiz creation page and we'll be looking at the General section. The General section Here we need to give the test a name that describes what the test is going to cover. Let's call it 'General Science Final Exam', as it describes what we will be doing in the test. The introduction is also important. This is a test students will take, and an effective description of what they will be doing is an important point for them. It helps get their minds thinking about the topic at hand, which can help them prepare, and a person who is prepared can usually perform better. For our introduction, we will write the following: 'This test will see how much you learned in our science class this term. The test will cover all the topics we have studied, including geology, chemistry, biology, and physics. In this test, there are a variety of question types (True/False, Matching, and others). Please look carefully at the sample questions before you move on. If you have any questions during the test, raise your hand. You will have x attempts with the quiz.' We have now given the test an effective name and we have given the students a description of what the test will cover. This will be shown in the Info tab to all the students before they take the test, and, if we want, in the days running up to the test. That's all we need to do in this section. Timing In this section, we need to make some decisions about when we are going to give the test to the students. We will also need to make a decision about how long we will give the students to complete the test. These are important decisions, and we need to make sure we give our students enough time to complete the test. The default Timing section is shown in the next screenshot: We probably know when our final exam will be. So, when we are creating the test, we can set the date that the test will be available to the students and the date it will stop being accessible to them. Because this is our final exam, we only want it to be available for one day, for a specified time period. We will start by clicking on the Disable checkboxes next to the Open the Quiz and Close the Quiz dates. This step will enable the date/time drop-down menus and allow us to set them for the test. For us, the test will start on March 20, 2010 at 16:55 and it will end the same day, one hour later. So we will change the appropriate menus to reflect our needs. If these dates are not set, a student in the course will be able to take the quiz any time after you finish creating it. We will need to give the students time to get in class, settle down, and have their computers ready. However, we also need to make sure the students finish the test in our class, so we have decided to create a time limit of 45 minutes.
This means that the test will be open for one hour, and in that one-hour time frame, once they start the test, they will have 45 minutes to finish it. To do this, we need to click on the Enable checkbox next to the Time Limit (minutes) textbox. Clicking on this will enable the textbox, and in it we will enter 45. This value will limit the quiz time to 45 minutes, and will show a floating, count-down timer in the test, causing it to auto-submit 45 minutes after it is started. It is good to note that many students get annoyed by the floating timer and its placement on the screen. The other alternative is to have the test proctor have the students submit the quiz at a specified time. Now, we have decided to give a 45-minute time limit on the test, but without any open-ended questions, the test is highly unlikely to take that long. There is also going to be a big difference in the speed at which different students work. The test proctor should explain to the students how much time they should spend on each question and on reviewing their answers. Under the Time Limit (minutes) textbox, we see the Time delay between first and second attempt and Time delay between later attempts menus. If we are going to offer the test more than once, we can set these, which would force the students to wait until they could try again. The time delays range from 30 minutes to 7 days, and the None setting will not require any waiting between attempts on the quiz. We are going to leave these set to None because this is a final exam and we are only giving it once. Once all the information has been entered into the Timing section, this dialog box is what we have, as shown in the next screenshot: Display Here, we will make some decisions about the way the quiz will look to the students. We will be dividing questions over several pages, which we will use to create divisions in the test. We will also be making decisions about the Shuffle Questions and Shuffle within Questions settings here. Firstly, as the test creators, we should already have a rough idea of how many questions we are going to have on the test. Looking at the Questions Per Page drop-down menu, we have the option of 1 to 50 questions per page. We have decided that we will be displaying six questions per page on the test. Actually, we will only have five questions the students will answer, but we also want to include a description and a sample question for the students to see how the questions look and how to answer them; thus, we will have six on each page. We have the option to shuffle questions within pages and to shuffle within questions. By default, Shuffle Questions is set to No and Shuffle within Questions is set to Yes. We have decided that we want to have our questions shuffled. But wait, we can't, because we are using Description questions to give examples, and if we chose to shuffle, these examples would not be where they need to be. So, we will leave the Shuffle Questions setting at the default No. However, we do want to shuffle the responses within the question, which will give each student a slightly different test using the same questions and answers. When the display settings are finished, we can see the output shown in the next screenshot: Attempts In this section, we will be setting the number of attempts possible and how further attempts are dealt with. We will also make a decision about the Adaptive Mode. Looking at the Attempts allowed drop-down menu, we have the option to set the number from 1 to 10, or we can set it to Unlimited attempts.
For our test, we have already decided to set the value to 1 attempt, so we will select 1 from the drop-down menu. We have the option of setting the Each Attempt Builds on the Last drop-down menu to Yes or No. This feature does nothing now, because we have only set the test to have a single attempt. If we had decided to allow multiple attempts, a Yes setting would have shown the test taker all the previous answers, as if the student were taking the test again, as well as indicating whether he or she was correct or not. If we were giving our students multiple attempts on the test, but we did not want them to see their previous answers, we would set this to No. We are also going to be setting Adaptive mode to No. We do not want our students to be able to immediately see or correct their responses during the test; we want the students to review their answers before submitting anything. However, if we did want the students to check their answers and correct any mistakes during the test, we would set the Attempts Allowed to a number above 1 and the Adaptive Mode to Yes, which would give us the small Submit button where the students would check and correct any mistakes after each question. If multiple attempts are not allowed, the Submit button will be just that, a button to submit your answer. Here is what the Attempts section looks like after we have set our choices: Grades In this section, we will set the way Moodle will score the student. We see three choices in this section, Grading method, Apply penalties, and Decimal digits in grades; however, because we have only selected a single attempt, two of these options will not be used. Grading method allows us to determine which of the scores we want to give our student after multiple tries. We have four options here: Highest Grade, Average Grade, First Attempt, and Last Attempt. Highest Grade uses the highest grade achieved from any attempt on any individual question. The Average Grade will take the total number of tries and grades and average them. The First Attempt will use the grade from the first attempt and the Last Attempt will use the grade from the final attempt. Since we are only giving one try on our test, this setting has no function and we will leave it set at its default, Highest Grade, because any of the options would give the same result. Apply penalties is similar to Grading method, in that it does not function because we have turned off Adaptive Mode. If we had set Adaptive Mode to Yes, then this feature would give us the option of applying penalties, which are set in the individual question setup pages. If we were using Adaptive Mode and this option were set to No, then there would be no penalties for mistakes made in previous attempts. If it were set to Yes, the penalty amount decided on in the question would be subtracted from the total points available on the question for each incorrect response. However, our test is not set to Adaptive Mode, so we will leave it at the default setting, Yes. It is important to note here that no matter how often a student is penalized for an incorrect response, their grade will never go below zero. The Decimal digits in grades setting shows the final grade the student receives with the number of decimal places selected here. There are four choices available in this setting: 0, 1, 2, and 3. If, for example, the number is set to 1, the student will receive a score calculated to 1 decimal place, and the same follows for 2 and 3. If the number is set to 0, the final score will be rounded.
We will set our Decimal digits in grades to 0. After we have finished, the Grades section appears as shown in the next screenshot: Review options This section is where we set when and what our students will see when they look back at the test. There are three categories: Immediately after the attempt; Later, while quiz is still open; and After the quiz is closed. The first category, Immediately after the attempt, will allow students to see whatever feedback we have selected to display immediately after they click on the Submit all and finish button at the end of the test, or Submit, in the case of Adaptive mode. The second category, Later, while quiz is still open, allows students to view the selected review options any time after the test is finished, that is, when no more attempts are left, but before the test closes. Using the After the quiz is closed setting will allow the student to see the review options after the test closes, meaning that students are no longer able to access the test because a close date was set. The After the quiz is closed option is only useful if a time has been set for the test to close, otherwise the review never happens because the test doesn't ever close. Each of these three categories contains the same review options: Responses, Answers, Feedback, General feedback, Scores, and Overall feedback. Here is what these options do:
Responses are the student's responses to the questions and whether he or she was wrong or correct.
Answers are the correct responses to the questions.
Feedback is the feedback you enter based on the answer the student gives. This feedback is different from the general quiz feedback they may receive.
General feedback is the comments all students receive, regardless of their answers.
Scores are the scores the student received on the questions.
Overall feedback is the comments based on the overall grade on the test.
We want to give our students all of this information, so they can look it over and find out where they made their mistakes, but we don't want someone who finishes early to have access to all the correct answers. So, we are going to eliminate all feedback on the test until after it closes. That way there is no possibility for the students to see the answers while other students might still be taking the test. To remove such feedback, we simply unclick all the options available in the categories we don't want. Here is what we have when we are finished: Regardless of the options and categories we select in the Review options, students will always be able to see their overall scores. Looking at our settings, the only thing a student will be able to view immediately after the test is complete is the score. Only after the test closes will the student be able to see the full range of review material we will be providing. If we had allowed multiple attempts, we would want to have different settings. So, instead of After the quiz is closed, we would want to set our Review options to Immediately after the attempt, because this setting would let the student know where he or she had problems and which areas of the quiz need to be focused on. One final point here is that even a single checkbox in any of the categories will allow the student to open and view the test, giving the selected review information to the student. This may or may not be what you want. Be careful to ensure that you have only selected the options and categories you want to use.
Security This section is where we can increase quiz security, but it is important to note that these settings will not eliminate the ability of tech-savvy students to cheat. What this section does is provide a few options that make cheating a bit more difficult to do. We have three options in this section: Browser security, Require password, and Require network address. The Browser security drop-down has two options: None and Full screen popup with some JavaScript security. The None option is the default setting and is appropriate for most quizzes. This setting doesn't make any changes in browser security and is the setting you will most likely want to use for in-class quizzes, review quizzes, and others. Using the fullscreen option will create a browser window that limits the students' options to fiddle with things. This option will open a fullscreen browser window with limited navigation options. In addition to limiting the number of navigation options available, this option will also limit the keyboard and mouse commands available. This option is more appropriate for high-stakes tests and shouldn't be used unless there is a reason. This setting also requires that JavaScript is used. Browser security is more a safety measure against students pressing the wrong button than a way of preventing cheating, but it can help reduce it. The Require password option does exactly what you think it would. It requires the students to enter a password before taking the test. To keep all your material secure, I recommend using a password for all quizzes that you create. This setting is especially important if you are offering different versions of the quiz to different classes or different tests in the same class and you want to make sure only those who should be accessing the quiz can. There is also an Unmask checkbox next to the password textbox. This option will show you the password, just in case you forget! Finally, we have the Require network address option, which will only allow those at certain IP addresses to access the test. These settings can be useful to ensure that only students in the lab or classroom are taking the test. This setting allows you to enter either complete IP addresses (for example, 192.168.1.10), which require that specific address to begin the test; partial IP addresses (for example, 192.168), which will accept any address as long as it begins with that prefix; or what is known as Classless Inter-Domain Routing (CIDR) notation (for example, 192.168.1.0/24), which only allows specific subnets. You might want to consult with your network administrator if you want to use this security option. By combining these settings, we can attempt to cut down on cheating and improper access to our test. In our case here, we are only going to use the fullscreen option. We will be giving the test in our classroom, using our computers, so there is no need to turn on the IP address function or require a password. When we have finished, the Security section appears as shown in the next screenshot:

article-image-understanding-and-developing-node-modules
Packt
11 Aug 2011
5 min read
Save for later

Understanding and Developing Node Modules

Packt
11 Aug 2011
5 min read
Node Web Development A practical introduction to Node, the exciting new server-side JavaScript web development stack What's a module? Modules are the basic building block for constructing Node applications. We have already seen modules in action; every JavaScript file we use in Node is itself a module. It's time to see what they are and how they work. The following code pulls in the fs module, giving us access to its functions:

var fs = require('fs');

The require function searches for modules, and loads the module definition into the Node runtime, making its functions available. The fs object (in this case) contains the code (and data) exported by the fs module. Let's look at a brief example of this before we start diving into the details. Ponder over this module, simple.js:

var count = 0;
exports.next = function() { return count++; }

This defines an exported function and a local variable. Now let's use it, for example by loading it with var s = require('./simple') and calling s.next() a few times. The object returned from require('./simple') is the same object, exports, we assigned a function to inside simple.js. Each call to s.next calls the function next in simple.js, which returns (and increments) the value of the count variable, explaining why s.next returns progressively bigger numbers. The rule is that anything (functions, objects) assigned as a field of exports is exported from the module, and objects inside the module but not assigned to exports are not visible to any code outside the module. This is an example of encapsulation. Now that we've got a taste of modules, let's take a deeper look. Node modules Node's module implementation is strongly inspired by, but not identical to, the CommonJS module specification. The differences between them might only be important if you need to share code between Node and other CommonJS systems. A quick scan of the Modules/1.1.1 spec indicates that the differences are minor, and for our purposes it's enough to just get on with the task of learning to use Node without dwelling too long on the differences. How does Node resolve require('module')? In Node, modules are stored in files, one module per file. There are several ways to specify module names, and several ways to organize the deployment of modules in the file system. It's quite flexible, especially when used with npm, the de-facto standard package manager for Node. Module identifiers and path names Generally speaking, the module name is a path name, but with the file extension removed. That is, when we write require('./simple'), Node knows to add .js to the file name and load in simple.js. Modules whose file names end in .js are of course expected to be written in JavaScript. Node also supports binary code native libraries as Node modules. In this case, the file name extension to use is .node. It's outside our scope to discuss the implementation of a native code Node module, but this gives you enough knowledge to recognize them when you come across them. Some Node modules are not files in the file system, but are baked into the Node executable. These are the Core modules, the ones documented on nodejs.org. Their original existence is as files in the Node source tree, but the build process compiles them into the binary Node executable. There are three types of module identifiers: relative, absolute, and top-level. Relative module identifiers begin with "./" or "../" and absolute identifiers begin with "/". These are identical to POSIX file system semantics, with path names being relative to the file being executed.
Absolute module identifiers obviously are relative to the root of the file system. Top-level module identifiers do not begin with "." , "..", or "/" and instead are simply the module name. These modules are stored in one of several directories, such as a node_modules directory, or those directories listed in the array require.paths, designated by Node to hold these modules. Local modules within your application The universe of all possible modules is split neatly into two kinds, those modules that are part of a specific application, and those modules that aren't. Hopefully the modules that aren't part of a specific application were written to serve a generalized purpose. Let's begin with implementation of modules used within your application. Typically your application will have a directory structure of module files sitting next to each other in the source control system, and then deployed to servers. These modules will know the relative path to their sibling modules within the application, and should use that knowledge to refer to each other using relative module identifiers. For example, to help us understand this, let's look at the structure of an existing Node package, the Express web application framework. It includes several modules structured in a hierarchy that the Express developers found to be useful. You can imagine creating a similar hierarchy for applications reaching a certain level of complexity, subdividing the application into chunks larger than a module but smaller than an application. Unfortunately there isn't a word to describe this, in Node, so we're left with a clumsy phrase like "subdivide into chunks larger than a module". Each subdivided chunk would be implemented as a directory with a few modules in it. In this example, the most likely relative module reference is to utils.js. Depending on the source file which wants to use utils.js it would use one of the following require statements: var utils = require('./lib/utils'); var utils = require('./utils'); var utils = require('../utils');  
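To make that mapping concrete, here is a small sketch (the file layout is a hypothetical one modeled on the Express example, not the actual Express source tree) showing which form each file would use to reach lib/utils.js:

// app.js, at the project root
var utils = require('./lib/utils');

// lib/express.js, in the same directory as lib/utils.js
var utils = require('./utils');

// lib/router/index.js, one directory below lib/
var utils = require('../utils');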

article-image-using-nodejs-dependencies-nwjs
Max Gfeller
19 Nov 2015
6 min read
Save for later

Using Node.js dependencies in NW.js

Max Gfeller
19 Nov 2015
6 min read
NW.js (formerly known as node-webkit) is a framework that makes it possible to write multi-platform desktop applications using the technologies you already know well: HTML, CSS and JavaScript. It bundles a Chromium and a Node (or io.js) runtime and provides additional APIs to implement native-like features like real menu bars or desktop notifications. A big advantage of having a Node/io.js runtime is being able to make use of all the modules that are available for Node developers. We can categorize three different types of modules that we can use. Internal modules Node comes with a solid set of internal modules like fs or http. It is built on the UNIX philosophy of doing only one thing and doing it very well. Therefore you won't find too much functionality in Node core. The following modules are shipped with Node:
assert: used for writing unit tests
buffer: raw memory allocation used for dealing with binary data
child_process: spawn and use child processes
cluster: take advantage of multi-core systems
crypto: cryptographic functions
dgram: use datagram sockets
dns: perform DNS lookups
domain: handle multiple different IO operations as a single group
events: provides the EventEmitter
fs: operations on the file system
http: perform http queries and create http servers
https: perform https queries and create https servers
net: asynchronous network wrapper
os: basic operating-system related utility functions
path: handle and transform file paths
punycode: deal with punycode domain names
querystring: deal with query strings
stream: abstract interface implemented by various objects in Node
timers: setTimeout, setInterval etc.
tls: encrypted stream communication
url: URL resolution and parsing
util: various utility functions
vm: sandbox to run Node code in
zlib: bindings to Gzip/Gunzip, Deflate/Inflate, and DeflateRaw/InflateRaw
Those are documented in the official Node API documentation and can all be used within NW.js. Please take care that Chromium already defines a crypto global, so when using the crypto module in the webkit context you should assign it to a variable like crypt rather than crypto:

var crypt = require('crypto');

The following example shows how we would read a file and use its contents using Node's modules:

var fs = require('fs');
fs.readFile(__dirname + '/file.txt', function (error, contents) {
  if (error) return console.error(error);
  console.log(contents);
});

3rd party JavaScript modules Soon after Node itself was started, Isaac Schlueter, who was a friend of creator Ryan Dahl, started working on a package manager for Node itself. While Node's popularity reached new highs, a lot of packages got added to the npm registry and it soon became the fastest growing package registry. At the time of this writing there are over 169,000 packages on the registry and nearly two billion downloads each month. The npm registry is now also slowly evolving from being "only" a package manager for Node into a package manager for all things JavaScript. Most of these packages can also be used inside NW.js applications. Your application's dependencies are defined in your package.json file in the dependencies (or devDependencies) section:

{
  "name": "my-cool-application",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^3.1.2"
  },
  "devDependencies": {
    "uglify-js": "^2.4.3"
  }
}

In the dependencies field you find all the modules that are required to run your application, while in the devDependencies field only the modules required while developing the application are found.
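Once a dependency like the lodash entry above has been installed into node_modules/ (installation is covered next), it can be required in the same way from either context; the following one-liner is just an illustrative sketch:

var _ = require('lodash');
console.log(_.capitalize('hello nw.js')); // prints "Hello nw.js"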
Installing a module is fairly easy and the best way to do this is with the npm install command:

npm install lodash --save

The install command directly downloads the latest version into your node_modules/ folder. The --save flag means that this dependency should also directly be written into your package.json file. You can also define a specific version to download by using the following notation: npm install lodash@1.* or even an exact version, for example npm install lodash@1.0.0. How does node's require() work? You need to deal with two different contexts in NW.js and it is really important to always know which context you are currently in, as it changes the way the require() function works. When you load a module using Node's require() function, then this module runs in the Node context. That means you have the same globals as you would have in a pure Node script, but you can't access the globals from the browser, e.g. document or window. If you write JavaScript code inside a <script> tag in your HTML, or when you include a script inside your HTML using <script src="">, then this code runs in the webkit context. There you have access to all browser globals. In the webkit context The require() function is a module loading system defined by the CommonJS Modules 1.0 standard and directly implemented in Node core. To offer the same smooth experience you get a modified require() method that works in webkit, too. Whenever you want to include a certain module from the webkit context, e.g. directly from an inline script in your index.html file, you need to specify the path directly from the root of your project. Let's assume the following folder structure:

- app/
  - app.js
  - foo.js
  - bar.js
- index.html

If you want to include the app/app.js file directly in your index.html, you need to include it like this:

<script type="text/javascript">
  var app = require('./app/app.js');
</script>

If you need to use a module from npm then you can simply require() it and NW.js will figure out where the corresponding node_modules/ folder is located. In the node context In Node, when you use relative paths, it will always try to locate the module relative to the file you are requiring it from. If we take the example from above, then we could require the foo.js module from app.js like this:

var foo = require('./foo');

About the Author Max Gfeller is a passionate web developer and JavaScript enthusiast. He is making awesome things at Cylon and can be found on Twitter @mgefeller.

article-image-creating-and-using-composer-packages
Packt
29 Oct 2013
7 min read
Save for later

Creating and Using Composer Packages

Packt
29 Oct 2013
7 min read
(For more resources related to this topic, see here.) Using Bundles One of the great features in Laravel is the ease with which we can include the class libraries that others have made using bundles. On the Laravel site, there are already many useful bundles, some of which automate certain tasks while others easily integrate with third-party APIs. A recent addition to the PHP world is Composer, which allows us to use libraries (or packages) that aren't specific to Laravel. In this article, we'll get up and running with using bundles, and we'll even create our own bundle that others can download. We'll also see how to incorporate Composer into our Laravel installation to open up a wide range of PHP libraries that we can use in our application. Downloading and installing packages One of the best features of Laravel is how modular it is. Most of the framework is built using libraries, or packages, that are well tested and widely used in other projects. By using Composer for dependency management, we can easily include other packages and seamlessly integrate them into our Laravel app. For this recipe, we'll be installing two popular packages into our app: Jeffrey Way's Laravel 4 Generators and the Imagine image processing package. Getting ready For this recipe, we need a standard installation of Laravel using Composer. How to do it... For this recipe, we will follow these steps: Go to https://packagist.org/. In the search box, search for way generator as shown in the following screenshot: Click on the link for way/generators: View the details at https://packagist.org/packages/way/generators and take notice of the require line to get the package's version. For our purposes, we'll use "way/generators": "1.0.*". In our application's root directory, open up the composer.json file and add the package to the require section so it looks like this:

"require": {
    "laravel/framework": "4.0.*",
    "way/generators": "1.0.*"
},

Go back to http://packagist.org and perform a search for imagine as shown in the following screenshot: Click on the link to imagine/imagine and copy the require code for dev-master: Go back to our composer.json file and update the require section to include the imagine package. It should now look similar to the following code:

"require": {
    "laravel/framework": "4.0.*",
    "way/generators": "1.0.*",
    "imagine/imagine": "dev-master"
},

Open the command line, and in the root of our application, run the Composer update as follows:

php composer.phar update

Finally, we'll add the Generator Service Provider, so open the app/config/app.php file and in the providers array, add the following line:

'Way\Generators\GeneratorsServiceProvider'

How it works... To get our package, we first go to packagist.org and search for the package we want. We could also click on the Browse packages link. It will display a list of the most recent packages as well as the most popular. After clicking on the package we want, we'll be taken to the detail page, which lists various links including the package's repository and home page. We could also click on the package's maintainer link to see other packages they have released. Underneath, we'll see the various versions of the package. If we open that version's detail page, we'll find the code we need to use for our composer.json file. We could either choose to use a strict version number, add a wildcard to the version, or use dev-master, which will install whatever is updated on the package's master branch.
For the Generators package, we'll only use Version 1.0, but allow any minor fixes to that version. For the imagine package, we'll use dev-master, so whatever is in their repository's master branch will be downloaded, regardless of version number. We then run update on Composer and it will automatically download and install all of the packages we chose. Finally, to use Generators in our app, we need to register the service provider in our app's config file.

Using the Generators package to set up an app

Generators is a popular Laravel package that automates quite a bit of file creation. In addition to controllers and models, it can also generate views, migrations, seeds, and more, all through a command-line interface.

Getting ready

For this recipe, we'll be using the Laravel 4 Generators package maintained by Jeffrey Way that was installed in the Downloading and installing packages recipe. We'll also need a properly configured MySQL database.

How to do it...

Follow these steps for this recipe:

1. Open the command line in the root of our app and, using the generator, create a scaffold for our cities as follows:

   php artisan generate:scaffold cities --fields="city:string"

2. In the command line, create a scaffold for our superheroes as follows:

   php artisan generate:scaffold superheroes --fields="name:string, city_id:integer:unsigned"

3. In our project, look in the app/database/seeds directory and find a file named CitiesTableSeeder.php. Open it and add some data to the $cities array as follows:

   <?php

   class CitiesTableSeeder extends Seeder {

       public function run()
       {
           DB::table('cities')->delete();

           $cities = array(
               array(
                   'id'         => 1,
                   'city'       => 'New York',
                   'created_at' => date('Y-m-d g:i:s', time())
               ),
               array(
                   'id'         => 2,
                   'city'       => 'Metropolis',
                   'created_at' => date('Y-m-d g:i:s', time())
               ),
               array(
                   'id'         => 3,
                   'city'       => 'Gotham',
                   'created_at' => date('Y-m-d g:i:s', time())
               )
           );

           DB::table('cities')->insert($cities);
       }
   }

4. In the app/database/seeds directory, open SuperheroesTableSeeder.php and add some data to it:

   <?php

   class SuperheroesTableSeeder extends Seeder {

       public function run()
       {
           DB::table('superheroes')->delete();

           $superheroes = array(
               array(
                   'name'       => 'Spiderman',
                   'city_id'    => 1,
                   'created_at' => date('Y-m-d g:i:s', time())
               ),
               array(
                   'name'       => 'Superman',
                   'city_id'    => 2,
                   'created_at' => date('Y-m-d g:i:s', time())
               ),
               array(
                   'name'       => 'Batman',
                   'city_id'    => 3,
                   'created_at' => date('Y-m-d g:i:s', time())
               ),
               array(
                   'name'       => 'The Thing',
                   'city_id'    => 1,
                   'created_at' => date('Y-m-d g:i:s', time())
               )
           );

           DB::table('superheroes')->insert($superheroes);
       }
   }

5. In the command line, run the migration then seed the database as follows:

   php artisan migrate
   php artisan db:seed

6. Open up a web browser and go to http://{your-server}/cities. We will see our data as shown in the following screenshot:
7. Now, navigate to http://{your-server}/superheroes and we will see our data as shown in the following screenshot:

How it works...

We begin by running the scaffold generator for our cities and superheroes tables. Using the --fields tag, we can determine which columns we want in our table and also set options such as data type. For our cities table, we'll only need the name of the city. For our superheroes table, we'll want the name of the hero as well as the ID of the city where they live. When we run the generator, many files will automatically be created for us.
For example, with cities, we'll get City.php in our models, CitiesController.php in controllers, and a cities directory in our views with the index, show, create, and edit views. We then get a migration named Create_cities_table.php, a CitiesTableSeeder.php seed file, and CitiesTest.php in our tests directory. We'll also have our DatabaseSeeder.php file and our routes.php file updated to include everything we need.

To add some data to our tables, we opened the CitiesTableSeeder.php file and updated our $cities array with arrays that represent each row we want to add. We did the same thing for our SuperheroesTableSeeder.php file. Finally, we run the migrations and seeder, and our database will be created and all the data will be inserted.

The Generators package has already created the views and controllers we need to manipulate the data, so we can easily go to our browser and see all of our data. We can also create new rows, update existing rows, and delete rows.

Building Queries

Packt
12 Dec 2013
10 min read
(For more resources related to this topic, see here.)

Understanding DQL

DQL is the acronym of Doctrine Query Language. It's a domain-specific language that is very similar to SQL, but is not SQL. Instead of querying the database tables and rows, DQL is designed to query the object model's entities and mapped properties. DQL is inspired by and similar to HQL, the query language of Hibernate, a popular ORM for Java. For more details you can visit this website: http://www.hibernate.org/.

Learn more about domain-specific languages at: http://en.wikipedia.org/wiki/Domain-specific_language

To better understand what it means, let's run our first DQL query. Doctrine command-line tools are a genuine Swiss Army knife. They include a command called orm:run-dql that runs a DQL query and displays its result. Use it to retrieve the title and all the comments of the post with 1 as an identifier:

php vendor/bin/doctrine.php orm:run-dql "SELECT p.title, c.body FROM Blog\Entity\Post p JOIN p.comments c WHERE p.id = 1"

It looks like a SQL query, but it's definitely not a SQL query. Examine the FROM and the JOIN clauses; they contain the following aspects:

- A fully qualified entity class name is used in the FROM clause as the root of the query
- All the Comment entities associated with the selected Post entities are joined, thanks to the presence of the comments property of the Post entity class in the JOIN clause

As you can see, data from the entities associated with the main entity can be requested in an object-oriented way. Properties holding the associations (on the owning or the inverse side) can be used in the JOIN clause.

Despite some limitations (especially in the field of subqueries), DQL is a powerful and flexible language to retrieve object graphs. Internally, Doctrine parses the DQL queries, generates the corresponding SQL queries, executes them through the Database Abstraction Layer (DBAL), and hydrates the data structures with the results.

Until now, we only used Doctrine to retrieve PHP objects. Doctrine is able to hydrate other types of data structures, especially arrays and basic types. It's also possible to write custom hydrators to populate any data structure. If you look closely at the return of the previous call of orm:run-dql, you'll see that it's an array, and not an object graph, that has been hydrated.

As with all the topics covered in this book, more information about built-in hydration modes and custom hydrators is available in the Doctrine documentation on the following website: http://docs.doctrine-project.org/en/latest/reference/dql-doctrine-query-language.html#hydration-modes

Using the entity repositories

Entity repositories are classes responsible for accessing and managing entities. Just like entities are related to the database rows, entity repositories are related to the database tables. All the DQL queries should be written in the entity repository related to the entity type they retrieve. It hides the ORM from other components of the application and makes it easier to re-use, refactor, and optimize the queries.

Doctrine entity repositories are an implementation of the Table Data Gateway design pattern. For more details, visit the following website: http://martinfowler.com/eaaCatalog/tableDataGateway.html

A base repository, available for every entity, provides useful methods for managing the entities in the following manner:

- find($id): It returns the entity with $id as an identifier, or null. It is used internally by the find() method of the Entity Managers.
- findAll(): It retrieves an array that contains all the entities in this repository.
- findBy(['property1' => 'value', 'property2' => 1], ['property3' => 'DESC', 'property4' => 'ASC']): It retrieves an array that contains entities matching all the criteria passed in the first parameter and ordered by the second parameter.
- findOneBy(['property1' => 'value', 'property2' => 1]): It is similar to findBy(), but retrieves only the first entity, or null if none of the entities match the criteria.

Entity repositories also provide shortcut methods that allow a single property to filter entities. They follow this pattern: findBy*() and findOneBy*(). For instance, calling findByTitle('My title') is equivalent to calling findBy(['title' => 'My title']).

This feature uses the magical __call() PHP method. For more details visit the following website: http://php.net/manual/en/language.oop5.overloading.php#object.call

In our blog app, we want to display comments in the detailed post view, but it is not necessary to fetch them from the list of posts. Eager loading through the fetch attribute is not a good choice for the list, and lazy loading slows down the detailed view. A solution to this would be to create a custom repository with extra methods for executing our own queries. We will write a custom method that retrieves the comments along with the post for the detailed view.

Creating custom entity repositories

Custom entity repositories are classes extending the base entity repository class provided by Doctrine. They are designed to receive custom methods that run the DQL queries. As usual, we will use the mapping information to tell Doctrine to use a custom repository class. This is the role of the repositoryClass attribute of the @Entity annotation.

Kindly perform the following steps to create a custom entity repository:

1. Reopen the Post.php file at the src/Blog/Entity/ location and add a repositoryClass attribute to the existing @Entity annotation like the following line of code:

   @Entity(repositoryClass="PostRepository")

2. Doctrine command-line tools also provide an entity repository generator. Type the following command to use it:

   php vendor/bin/doctrine.php orm:generate:repositories src/

3. Open this new empty custom repository, which we just generated in the PostRepository.php file, at the src/Blog/Entity/ location. Add the following method for retrieving the posts and comments:

   /**
    * Finds a post with its comments
    *
    * @param  int $id
    * @return Post
    */
   public function findWithComments($id)
   {
       return $this
           ->createQueryBuilder('p')
           ->addSelect('c')
           ->leftJoin('p.comments', 'c')
           ->where('p.id = :id')
           ->orderBy('c.publicationDate', 'ASC')
           ->setParameter('id', $id)
           ->getQuery()
           ->getOneOrNullResult()
       ;
   }

Our custom repository extends the default entity repository provided by Doctrine. The standard methods, described earlier in the article, are still available.

Getting started with Query Builder

QueryBuilder is an object designed to help build the DQL queries through a PHP API with a fluent interface. It allows us to retrieve the generated DQL queries through the getDql() method (useful for debugging) or directly use the Query object (provided by Doctrine). To increase performance, QueryBuilder caches the generated DQL queries and manages an internal state.
The full API and states of the DQL query are documented on the following website: http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/query-builder.html

We will give an in-depth explanation of the findWithComments() method that we created in the PostRepository class.

Firstly, a QueryBuilder instance is created with the createQueryBuilder() method inherited from the base entity repository. The QueryBuilder instance takes a string as a parameter. This string will be used as an alias of the main entity class. By default, all the fields of the main entity class are selected and no other clauses except SELECT and FROM are populated.

The leftJoin() call creates a JOIN clause that retrieves comments associated with the posts. Its first argument is the property to join and its second is the alias; these will be used in the query for the joined entity class (here, the letter c will be used as an alias for the Comment class).

Unlike the SQL JOIN clause, the DQL query automatically fetches the entities associated with the main entity. There is no need for keywords like ON or USING. Doctrine automatically knows whether a join table or a foreign-key column must be used.

The addSelect() call appends comment data to the SELECT clause. The alias of the entity class is used to retrieve all the fields (this is similar to the * operator in SQL). As in the first DQL query of this article, specific fields can be retrieved with the notation alias.propertyName.

You guessed it, the call to the where() method sets the WHERE part of the query. Under the hood, Doctrine uses prepared SQL statements. They are more efficient than the standard SQL queries. The id parameter will be populated by the value set by the call to setParameter(). Thanks again to prepared statements and this setParameter() method, SQL injection attacks are automatically avoided.

SQL injection attacks are a way to execute malicious SQL queries using user inputs that have not been escaped. Let's take the following example of a bad DQL query to check if a user has a specific role:

$query = $entityManager->createQuery('SELECT ur FROM UserRole ur WHERE ur.username = "' . $username . '" AND ur.role = "' . $role . '"');
$hasRole = count($query->getResult());

This DQL query will be translated into SQL by Doctrine. If someone types the following username:

" OR "a"="a

the SQL code contained in the string will be injected and the query will always return some results. The attacker has now gained access to a private area. The proper way should be to use the following code:

$query = $entityManager->createQuery("SELECT ur FROM UserRole ur WHERE ur.username = :username AND ur.role = :role");
$query->setParameters([
    'username' => $username,
    'role'     => $role
]);
$hasRole = count($query->getResult());

Thanks to prepared statements, special characters (like quotes) contained in the username are not dangerous, and this snippet will work as expected.

The orderBy() call generates an ORDER BY clause that orders results as per the publication date of the comments, older first. Most SQL instructions also have an object-oriented equivalent in DQL. The most common join types can be made using DQL; they generally have the same name.

The getQuery() call tells the Query Builder to generate the DQL query (if needed, it will get the query from its cache if possible), to instantiate a Doctrine Query object, and to populate it with the generated DQL query.
This generated DQL query will be as follows:

SELECT p, c FROM Blog\Entity\Post p LEFT JOIN p.comments c WHERE p.id = :id ORDER BY c.publicationDate ASC

The Query object exposes another useful method for the purpose of debugging: getSql(). As its name implies, getSql() returns the SQL query corresponding to the DQL query, which Doctrine will run on the DBMS. For our DQL query, the underlying SQL query is as follows:

SELECT p0_.id AS id0, p0_.title AS title1, p0_.body AS body2, p0_.publicationDate AS publicationDate3, c1_.id AS id4, c1_.body AS body5, c1_.publicationDate AS publicationDate6, c1_.post_id AS post_id7 FROM Post p0_ LEFT JOIN Comment c1_ ON p0_.id = c1_.post_id WHERE p0_.id = ? ORDER BY c1_.publicationDate ASC

The getOneOrNullResult() method executes it, retrieves the first result, and returns it as a Post entity instance (this method returns null if no result is found). Like the QueryBuilder object, the Query object manages an internal state to generate the underlying SQL query only when necessary.

Performance is something to be very careful about while using Doctrine. When set in production mode, the ORM is able to cache the generated queries (DQL through the QueryBuilder objects, SQL through the Query objects) and the results of the queries. The ORM must be configured to use one of the blazing fast supported systems (APC, Memcache, XCache, or Redis) as shown on the following website: http://docs.doctrine-project.org/en/latest/reference/caching.html

We still need to update the view layer to take care of our new findWithComments() method. Open the view-post.php file at the web/ location, where you will find the following code snippet:

$post = $entityManager->getRepository('Blog\Entity\Post')->find($_GET['id']);

Replace the preceding line of code with the following code snippet:

$post = $entityManager->getRepository('Blog\Entity\Post')->findWithComments($_GET['id']);

Simple ToDo list web application with node.js, Express, and Riot

Pedro Narciso García Revington
07 Nov 2016
10 min read
The frontend space is indeed crowded, but none of the more popular solutions are really convincing to me. I feel Angular is bloated and the double binding is not for me. I also do not like React and its syntax. Riot is, as stated by their creators, "A React-like user interface micro-library" with simpler syntax that is five times smaller than React.

What we are going to learn

We are going to build a simple Riot application backed by Express, using Jade as our template language. The backend will expose a simple REST API, which we will consume from the UI. We are not going to use any other dependency like jQuery, so this is also a good chance to try XMLHttpRequest2. I deliberately omitted the inclusion of a client package manager like webpack or jspm because I want to focus on Express.js + Riot.js. For the same reason, the application data is persisted in memory.

Requirements

You just need to have any recent version of node.js (4+), a text editor of your choice, and some JS, Express, and website development knowledge.

Project layout

Under my project directory we are going to have 3 directories:

- public: For assets like the riot.js library itself.
- views: Common in most Express setups, this is where we put the markup.
- client: This directory will host the Riot tags (we will see more of that later).

We will also have the package.json, our project manifesto, and an app.js file, containing the Express application. Our Express server exposes a REST API; its code can be found in api.js. Here is how the layout of the final project looks:

├── api.js
├── app.js
├── client
│   ├── tag-todo.jade
│   └── tag-todo.js
├── package.json
├── node_modules
├── public
│   └── js
│       ├── client.js
│       └── riot.js
└── views
    └── index.jade

Project setup

Create your project directory and from there run the following to install the node.js dependencies:

$ npm init -y
$ npm install --save body-parser express jade

And the application directories:

$ mkdir -p views public/js client

Start!

Let's start by creating the Express application file, app.js:

'use strict';
const express = require('express'),
    app = express(),
    bodyParser = require('body-parser');

// Set the views directory and template engine
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');

// Set our static directory for public assets like client scripts
app.use(express.static('public'));

// Parses the body on incoming requests
app.use(bodyParser.json());

// Pretty prints HTML output
app.locals.pretty = true;

// Define our main route, HTTP "GET /" which will print "hello"
app.get('/', function (req, res) {
    res.send('hello');
});

// Start listening for connections
app.listen(3000, function (err) {
    if (err) {
        console.error('Cannot listen at port 3000', err);
    }
    console.log('Todo app listening at port 3000');
});

The #app object we just created is just a plain Express application. After setting up the application, we call the listen function, which will create an HTTP server listening at port 3000.

To test our application setup, we open another terminal, cd to our project directory and run $ node app.js. Open a web browser and load http://localhost:3000; can you read "hello"?

Node.js will not reload the site if you change the files, so I recommend you to install nodemon. Nodemon monitors your code and reloads the site on every change you perform on the JS source code. The command $ npm install -g nodemon installs the program on your computer globally, so you can run it from any directory.
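As a small convenience (my own addition, not part of the original walkthrough), you can record both ways of starting the server as npm scripts in package.json, so the commands are easier to remember; the script names below are arbitrary:

{
  "scripts": {
    "start": "node app.js",
    "dev": "nodemon app.js"
  }
}

With these in place, $ npm start runs the plain server and $ npm run dev runs it under nodemon.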
Okay, kill our previously created server and start a new one with $ nodemon app.js.

Our first Riot tag

Riot allows you to encapsulate your UI logic in "custom tags". Tag syntax is pretty straightforward. Judge for yourself:

<employee>
  <span>{ name }</span>
</employee>

Custom tags can contain code and can be nested as shown in the next code snippet:

<employeeList>
  <employee each="{ items }" onclick={ gotoEmployee } />
  <script>
    gotoEmployee (e) {
      var item = e.item;
      // do something
    }
  </script>
</employeeList>

This mechanism enables you to build complex functionality from simple units. Of course, you can find more information at their documentation.

In the next steps we will create our first tag: ./client/tag-todo.jade. Oh, we have not yet downloaded Riot! Here is the non-minified Riot + compiler download. Download it to ./public/js/riot.js.

The next step is to create our index view and tell our app to serve it. Locate the / route handler, remove the res.send('hello'), and update it to:

// Define our main route, HTTP "GET /" which will render the index view
app.get('/', function (req, res) {
    res.render('index');
});

Now, create the ./views/index.jade file:

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    todo

Go to your browser and reload the page. You can read the big "ToDo App" but nothing else. There is a <todo></todo> tag there, but since the browser does not understand it, this tag is not rendered. Let's tell Riot to mount the tag. Mount means Riot will use <todo></todo> as a placeholder for our, not yet there, todo tag.

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    script(type="riot/tag" src="/tags/todo.tag")
    todo
    script.
      riot.mount('todo');

Open your browser's dev console and reload the page. riot.mount failed because there was no todo.tag. Tags can be served in many ways, but I chose to serve them as regular Express templates. Of course, you can serve them as static assets or bundled. Just below the / route handler, add the /tags/:name.tag handler:

// "/" route handler
app.get('/', function (req, res) {
    res.render('index');
});

// tag route handler
app.get('/tags/:name.tag', function (req, res) {
    var name = 'tag-' + req.params.name;
    res.render('../client/' + name);
});

Now create the tag in ./client/tag-todo.jade:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")

And reload the browser again. Errors gone, and a new form in your browser. onsubmit="{ add }" is part of Riot's syntax and means "on submit, call the add function". You can mix implementation with the markup, but I rather prefer to split markup from code. In Jade (and any other template language), it is trivial to include other files, which is exactly what we are going to do. Update the file as:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")
  script
    include tag-todo.js

And create ./client/tag-todo.js with this snippet:

'use strict';
var self = this;
var api = self.opts;

When the tag gets mounted by Riot, it gets a context. That is the reason for var self = this;. That context can include the opts object. The opts object can be anything of your choice, defined at the time you mount the tag. Let's say we have an API object and we pass it to riot.mount as the second option at the time we mount the tag, that is, riot.mount('todo', api). Then, at the time the tag is rendered, this.opts will point to the api object. This is the mechanism we are going to use to expose our client api with the todo tag.
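To see that opts mechanism in isolation, here is a tiny hypothetical illustration (not part of the project files); whatever object is passed as the second argument of riot.mount becomes this.opts inside the tag:

// In the page: mount the tag and hand it an arbitrary object.
var settings = { greeting: 'hello from opts' };
riot.mount('todo', settings);

// Inside the tag's script section, that same object is reachable as this.opts.
var self = this;
console.log(self.opts.greeting); // prints "hello from opts"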
Our form is still waiting for the add function, so edit the tag-todo.js again and append the following:

self.add = function (e) {
    var title = self.todo.value;
    console.log('New ToDo', title);
};

Reload the page, type something at the text field, and hit enter. The expected message should appear in your browser's dev console.

Implementing our REST API

We are ready to implement our REST API on the Express side. Create the ./api.js file and add:

'use strict';
const express = require('express');
var app = module.exports = express();

// simple in memory DB
var db = [];

// handle ToDo creation
app.post('/', function (req, res) {
    db.push({
        title: req.body.title,
        done: false
    });
    let todoID = db.length - 1;
    // mountpath = /api/todos/
    res.location(app.mountpath + todoID);
    res.status(201).end();
});

// handle ToDo updates
app.put('/', function (req, res) {
    db[req.body.id] = req.body;
    res.location('/' + req.body.id);
    res.status(204).end();
});

Our API supports ToDo creation/update, and it is architected as an Express sub application. To mount it, we just need to update app.js for the last time. Update the require block at app.js to:

const express = require('express'),
    api = require('./api'),
    app = express(),
    bodyParser = require('body-parser');
...

And mount the api sub application just before the app.listen...

// Mount the api sub application
app.use('/api/todos/', api);

We said we will implement a client for our API. It should expose two functions, create and update, located at ./public/client.js. Here is its source:

'use strict';
(function (api) {
    var url = '/api/todos/';

    function extractIDFromResponse(xhr) {
        var location = xhr.getResponseHeader('location');
        var result = +location.slice(url.length);
        return result;
    }

    api.create = function createToDo(title, callback) {
        var xhr = new XMLHttpRequest();
        var todo = {
            title: title,
            done: false
        };
        xhr.open('POST', url);
        xhr.setRequestHeader('Content-Type', 'application/json');
        xhr.onload = function () {
            if (xhr.status === 201) {
                todo.id = extractIDFromResponse(xhr);
            }
            return callback(null, xhr, todo);
        };
        xhr.send(JSON.stringify(todo));
    };

    api.update = function updateToDo(todo, callback) {
        var xhr = new XMLHttpRequest();
        xhr.open('PUT', url);
        xhr.setRequestHeader('Content-Type', 'application/json');
        xhr.onload = function () {
            if (xhr.status === 200) {
                console.log('200');
            }
            return callback(null, xhr, todo);
        };
        xhr.send(JSON.stringify(todo));
    };
})(this.todoAPI = {});
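The client above deliberately sticks to XMLHttpRequest, as discussed at the beginning of the article. Purely for comparison, and not as part of this project, a rough sketch of the same create call written against the newer fetch API could look like this (browser support for fetch was still patchy at the time of writing):

// Hypothetical alternative to api.create, using fetch instead of XMLHttpRequest.
function createToDo(title) {
    var todo = { title: title, done: false };
    return fetch('/api/todos/', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(todo)
    }).then(function (response) {
        if (response.status === 201) {
            // The server answers with a Location header such as /api/todos/0
            var location = response.headers.get('location');
            todo.id = +location.slice('/api/todos/'.length);
        }
        return todo;
    });
}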
Okay, time to load the API client into the UI and share it with our tag. Modify the index view, including it as a dependency:

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    script(type="riot/tag" src="/tags/todo.tag")
    script(src="/js/client.js")
    todo
    script.
      riot.mount('todo', todoAPI);

We are now loading the API client and passing it as a reference to the todo tag. Our last change today is to update the add function to consume the API. Reload the browser again, type something into the textbox, and hit enter. Nothing new happens because our add function is not yet using the API. We need to update ./client/tag-todo.js as:

'use strict';
var self = this;
var api = self.opts;
self.items = [];

self.add = function (e) {
    var title = self.todo.value;
    api.create(title, function (err, xhr, todo) {
        if (xhr.status === 201) {
            self.todo.value = '';
            self.items.push(todo);
            self.update();
        }
    });
};

We have augmented self with an array of items. Every time we create a new ToDo task (after we get the 201 code from the server), we push that new ToDo object into the array because we are going to print that list of items. In Riot, we can loop the items by adding the each attribute to any tag. Last, update ./client/tag-todo.jade:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")
  ul
    li(each="{items}")
      span {title}
  script
    include tag-todo.js

Finally! Reload the page and create a ToDo!

Next steps

You can find the complete source code for this article here. The final version of the code also implements a done/undone button, which you can try to implement by yourself.

About the author

Pedro Narciso García Revington is a Senior Full Stack Developer with 10+ years of experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T,B,D)DD, and polyglot persistence.

Creating a Camel project (Simple)

Packt
27 Aug 2013
8 min read
(For more resources related to this topic, see here.)

Getting ready

For the examples in this article, we are going to use Apache Camel version 2.11 (http://camel.apache.org/) and Apache Maven version 2.2.1 or newer (http://maven.apache.org/) as a build tool. Both of these projects can be downloaded for free from their websites.

The complete source code for all the examples in this article is available on github in the https://github.com/bibryam/camel-message-routing-examples repository. It contains Camel routes in Spring XML and Java DSL with accompanying unit tests. The source code for this tutorial is located under the project: camel-message-routing-examples/creating-camel-project.

How to do it...

In a new Maven project, add the following Camel dependency to the pom.xml:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-core</artifactId>
    <version>${camel-version}</version>
</dependency>

With this dependency in place, creating our first route requires only a couple of lines of Java code:

public class MoveFileRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("file://source")
            .to("log://org.apache.camel.howto?showAll=true")
            .to("file://target");
    }
}

Once the route is defined, the next step is to add it to CamelContext, which is the actual routing engine, and run it as a standalone Java application:

public class Main {
    public static void main(String[] args) throws Exception {
        CamelContext camelContext = new DefaultCamelContext();
        camelContext.addRoutes(new MoveFileRoute());
        camelContext.start();
        Thread.sleep(10000);
        camelContext.stop();
    }
}

That's all it takes to create our first Camel application. Now, we can run it using a Java IDE or from the command line with Maven: mvn exec:java.

How it works...

Camel has a modular architecture; its core (the camel-core dependency) contains all the functionality needed to run a Camel application: DSLs for various languages, the routing engine, implementations of EIPs, a number of data converters, and core components. This is the only dependency needed to run this application. Then there are optional technology-specific connector dependencies (called components) such as JMS, SOAP, JDBC, Twitter, and so on, which are not needed for this example, as the file and log components we used are all part of the camel-core.

Camel routes are created using a Domain Specific Language (DSL), specifically tailored for application integration. Camel DSLs are high-level languages that allow us to easily create routes, combining various processing steps and EIPs without going into low-level implementation details. In the Java DSL, we create a route by extending RouteBuilder and overriding the configure method.

A route represents a chain of processing steps applied to a message based on some rules. The route has a beginning defined by the from endpoint, and one or more processing steps commonly called "Processors" (which implement the Processor interface).

Most of these ideas and concepts originate from the Pipes and Filters pattern from the Enterprise Integration Patterns book by Gregor Hohpe and Bobby Woolf. The book provides an extensive list of patterns, which are also available at http://www.enterpriseintegrationpatterns.com, and the majority of which are implemented by Camel. With the Pipes and Filters pattern, a large processing task is divided into a sequence of smaller independent processing steps (Filters) that are connected by channels (Pipes).
Each filter processes messages received from the inbound channel and publishes the result to the outbound channel. In our route, the processing steps are reading the file using a polling consumer, logging it, and writing the file to the target folder, all of them piped by Camel in the sequence specified in the DSL. We can visualize the individual steps in the application with the following diagram:

A route has exactly one input, called a consumer and identified by the keyword from. A consumer receives messages from producers or external systems, wraps them in a Camel-specific format called Exchange, and starts routing them. There are two types of consumers: a polling consumer that fetches messages periodically (for example, reading files from a folder) and an event-driven consumer that listens for events and gets activated when a message arrives (for example, an HTTP server).

All the other processor nodes in the route are either a type of integration pattern or producers used for sending messages to various endpoints. Producers are identified by the keyword to and they are capable of converting exchanges and delivering them to other channels using the underlying transport mechanism. In our example, the log producer logs the files using the log4J API, whereas the file producer writes them to a target folder.

The route is not enough to have a running application; it is only a template that defines the processing steps. The engine that runs and manages the routes is called Camel Context. A high-level view of CamelContext looks like the following diagram:

CamelContext is a dynamic multithreaded route container, responsible for managing all aspects of the routing: route lifecycle, message conversions, configurations, error handling, monitoring, and so on. When CamelContext is started, it starts the components and endpoints, and activates the routes. The routes are kept running until CamelContext is stopped, at which point it performs a graceful shutdown, giving time for all the in-flight messages to complete processing.

CamelContext is dynamic; it allows us to start and stop routes, add new routes, or remove running routes at runtime. In our example, after adding the MoveFileRoute, we start CamelContext and let it copy files for 10 seconds, and then the application terminates. If we check the target folder, we should see files copied from the source folder.

There's more...

Camel applications can run as standalone applications or can be embedded in other containers such as Spring or Apache Karaf. To make development and deployment to various environments easy, Camel provides a number of DSLs, including Spring XML, Blueprint XML, Groovy, and Scala. Next, we will have a look at the Spring XML DSL.

Using Spring XML DSL

Java and Spring XML are the two most popular DSLs in Camel. Both provide access to all Camel features and the choice is mostly a matter of taste. Java DSL is more flexible and requires fewer lines of code, but can easily become complicated and harder to understand with the use of anonymous inner classes and other Java constructs. Spring XML DSL, on the other hand, is easier to read and maintain, but it is more verbose and testing it requires a little more effort. My rule of thumb is to use Spring XML DSL only when Camel is going to be part of a Spring application (to benefit from other Spring features available in Camel), or when the routing logic has to be easily understood by many people.
For the routing examples in the article, we are going to show a mixture of Java and Spring XML DSL, but the source code accompanying this article has all the examples in both DSLs. In order to use Spring, we also need the following dependency in our projects:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-spring</artifactId>
    <version>${camel-version}</version>
</dependency>

The same application for copying files, written in Spring XML DSL, looks like the following:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://camel.apache.org/schema/spring
         http://camel.apache.org/schema/spring/camel-spring.xsd">

  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
      <from uri="file://source"/>
      <to uri="log://org.apache.camel.howto?showAll=true"/>
      <to uri="file://target"/>
    </route>
  </camelContext>

</beans>

Notice that this is a standard Spring XML file with an additional camelContext element containing the route. We can launch the Spring application as part of a web application, an OSGI bundle, or as a standalone application:

public static void main(String[] args) throws Exception {
    AbstractApplicationContext springContext = new ClassPathXmlApplicationContext("META-INF/spring/move-file-context.xml");
    springContext.start();
    Thread.sleep(10000);
    springContext.stop();
}

When the Spring container starts, it will instantiate a CamelContext, start it, and add the routes without any other code required. That is the complete application written in Spring XML DSL. More information about Spring support in Apache Camel can be found at http://camel.apache.org/spring.html.

Summary

This article provides a high-level overview of Camel architecture, and demonstrates how to create a simple message driven application.

Resources for Article:

Further resources on this subject:

- Binding Web Services in ESB—Web Services Gateway [Article]
- Drools Integration Modules: Spring Framework and Apache Camel [Article]
- Routing to an external ActiveMQ broker [Article]

Securing and Authenticating Web API

Packt
21 Oct 2015
9 min read
In this article by Rajesh Gunasundaram, author of ASP.NET Web API Security Essentials, we will cover how to secure a Web API using forms authentication and Windows authentication. You will also get to learn the advantages and disadvantages of using the forms and Windows authentication in Web API.

In this article, we will cover the following topics:

- The working of forms authentication
- Implementing forms authentication in the Web API
- Discussing the advantages and disadvantages of using the integrated Windows authentication mechanism
- Configuring Windows authentication
- Enabling Windows authentication in Katana
- Discussing Hawk authentication

(For more resources related to this topic, see here.)

The working of forms authentication

The user credentials will be submitted to the server using HTML forms in forms authentication. This can be used in the ASP.NET Web API only if it is consumed from a web application. Forms authentication is built on ASP.NET and uses the ASP.NET membership provider to manage user accounts. Forms authentication requires a browser client to pass the user credentials to the server. It sends the user credentials in the request and uses HTTP cookies for the authentication.

Let's list out the process of forms authentication step by step:

1. The browser tries to access a restricted action that requires an authenticated request.
2. If the browser sends an unauthenticated request, then the server responds with an HTTP status 302 Found and triggers a URL redirection to the login page.
3. To send the authenticated request, the user enters the username and password and submits the form.
4. If the credentials are valid, the server responds with an HTTP 302 status code that initiates the browser to redirect the page to the original requested URI with the authentication cookie in the response.
5. Any request from the browser will now include the authentication cookie and the server will grant access to any restricted resource.

The following image illustrates the workflow of forms authentication:

Fig 1 – Illustrates the workflow of forms authentication

Implementing forms authentication in the Web API

To send the credentials to the server, we need an HTML form to submit. Let's use the HTML form or view of an ASP.NET MVC application. The steps to implement forms authentication in an ASP.NET MVC application are as follows:

1. Create New Project from the Start page in Visual Studio.
2. Select the Visual C# installed template named Web.
3. Choose ASP.NET Web Application from the middle panel.
4. Name the project Chapter06.FormsAuthentication and click OK.

Fig 2 – We have named the ASP.NET Web Application as Chapter06.FormsAuthentication

5. Select the MVC template in the New ASP.NET Project dialog.
6. Tick Web API under Add folders and core references and press OK, leaving Authentication set to Individual User Accounts.
Fig 3 – Select MVC template and check Web API in add folders and core references

7. In the Models folder, add a class named Contact.cs with the following code:

   namespace Chapter06.FormsAuthentication.Models
   {
       public class Contact
       {
           public int Id { get; set; }
           public string Name { get; set; }
           public string Email { get; set; }
           public string Mobile { get; set; }
       }
   }

8. Add a Web API controller named ContactsController with the following code snippet:

   namespace Chapter06.FormsAuthentication.Api
   {
       public class ContactsController : ApiController
       {
           IEnumerable<Contact> contacts = new List<Contact>
           {
               new Contact { Id = 1, Name = "Steve", Email = "[email protected]", Mobile = "+1(234)35434" },
               new Contact { Id = 2, Name = "Matt", Email = "[email protected]", Mobile = "+1(234)5654" },
               new Contact { Id = 3, Name = "Mark", Email = "[email protected]", Mobile = "+1(234)56789" }
           };

           [Authorize]
           // GET: api/Contacts
           public IEnumerable<Contact> Get()
           {
               return contacts;
           }
       }
   }

As you can see in the preceding code, we decorated the Get() action in ContactsController with the [Authorize] attribute. So, this Web API action can only be accessed by an authenticated request. An unauthenticated request to this action will make the browser redirect to the login page and enable the user to either register or log in.

Once logged in, any request that tries to access this action will be allowed, as it is authenticated. This is because the browser automatically sends the session cookie along with the request, and forms authentication uses this cookie to authenticate the request. It is very important to secure the website using SSL, as forms authentication sends unencrypted credentials.

Discussing the advantages and disadvantages of using the integrated Windows authentication mechanism

First let's see the advantages of Windows authentication. Windows authentication is built into Internet Information Services (IIS). It doesn't send the user credentials along with the request. This authentication mechanism is best suited for intranet applications and doesn't need a user to enter their credentials.

However, with all these advantages, there are a few disadvantages in the Windows authentication mechanism. It requires Kerberos, which works based on tickets, or NTLM, a Microsoft security protocol, to be supported by the client. The client's PC must be under an Active Directory domain. Windows authentication is not suitable for internet applications, as the client may not necessarily be on the same domain.

Configuring Windows authentication

Let's implement Windows authentication in an ASP.NET MVC application, as follows:

1. Create New Project from the Start page in Visual Studio.
2. Select the Visual C# installed template named Web.
3. Choose ASP.NET Web Application from the middle panel.
4. Name the project Chapter06.WindowsAuthentication and click OK.

Fig 4 – We have named the ASP.NET Web Application as Chapter06.WindowsAuthentication

5. Change the Authentication mode to Windows Authentication.

Fig 5 – Select Windows Authentication in Change Authentication window

6. Select the MVC template in the New ASP.NET Project dialog.
7. Tick Web API under Add folders and core references and click OK.
Fig 6 – Select MVC template and check Web API in add folders and core references

8. Under the Models folder, add a class named Contact.cs with the following code:

   namespace Chapter06.WindowsAuthentication.Models
   {
       public class Contact
       {
           public int Id { get; set; }
           public string Name { get; set; }
           public string Email { get; set; }
           public string Mobile { get; set; }
       }
   }

9. Add a Web API controller named ContactsController with the following code:

   namespace Chapter06.WindowsAuthentication.Api
   {
       public class ContactsController : ApiController
       {
           IEnumerable<Contact> contacts = new List<Contact>
           {
               new Contact { Id = 1, Name = "Steve", Email = "[email protected]", Mobile = "+1(234)35434" },
               new Contact { Id = 2, Name = "Matt", Email = "[email protected]", Mobile = "+1(234)5654" },
               new Contact { Id = 3, Name = "Mark", Email = "[email protected]", Mobile = "+1(234)56789" }
           };

           [Authorize]
           // GET: api/Contacts
           public IEnumerable<Contact> Get()
           {
               return contacts;
           }
       }
   }

The Get() action in ContactsController is decorated with the [Authorize] attribute. However, in Windows authentication, any request is considered an authenticated request if the client relies on the same domain. So no explicit login process is required to send an authenticated request to call the Get() action.

Note that the Windows authentication is configured in the Web.config file:

<system.web>
    <authentication mode="Windows" />
</system.web>

Enabling Windows authentication in Katana

The following steps will create a console application and enable Windows authentication in Katana:

1. Create New Project from the Start page in Visual Studio.
2. Select the Visual C# installed template named Windows Desktop.
3. Select Console Application from the middle panel.
4. Name the project Chapter06.WindowsAuthenticationKatana and click OK.

Fig 7 – We have named the Console Application as Chapter06.WindowsAuthenticationKatana

5. Install the NuGet package named Microsoft.Owin.SelfHost from the NuGet Package Manager.

Fig 8 – Install NuGet Package named Microsoft.Owin.SelfHost

6. Add a Startup class with the following code snippet:

   namespace Chapter06.WindowsAuthenticationKatana
   {
       class Startup
       {
           public void Configuration(IAppBuilder app)
           {
               var listener = (HttpListener)app.Properties["System.Net.HttpListener"];
               listener.AuthenticationSchemes = AuthenticationSchemes.IntegratedWindowsAuthentication;
               app.Run(context =>
               {
                   context.Response.ContentType = "text/plain";
                   return context.Response.WriteAsync("Hello Packt Readers!");
               });
           }
       }
   }

7. Add the following code to the Main function in Program.cs:

   using (WebApp.Start<Startup>("http://localhost:8001"))
   {
       Console.WriteLine("Press any Key to quit Web App.");
       Console.ReadKey();
   }

8. Now run the application and open http://localhost:8001/ in the browser:

Fig 8 – Open the Web App in a browser

If you capture the request using Fiddler, you will notice an Authorization Negotiate entry in the header of the request.

Try calling http://localhost:8001/ in Fiddler and you will get a 401 Unauthorized response with WWW-Authenticate headers that indicate that the server attaches a Negotiate protocol that consumes either Kerberos or NTLM, as follows:

HTTP/1.1 401 Unauthorized
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/8.0
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
X-Powered-By: ASP.NET
Date: Tue, 01 Sep 2015 19:35:51 IST
Content-Length: 6062
Proxy-Support: Session-Based-Authentication

Discussing Hawk authentication

Hawk authentication is a message authentication code-based HTTP authentication scheme that facilitates the partial cryptographic verification of HTTP messages. Hawk authentication requires a symmetric key to be shared between the client and server. Instead of sending the username and password to the server in order to authenticate the request, Hawk authentication uses these credentials to generate a message authentication code that is passed to the server in the request for authentication.

Hawk authentication is mainly implemented in those scenarios where you need to pass the username and password via an unsecured layer and no SSL is implemented on the server. In such cases, Hawk authentication protects the username and password and passes the message authentication code instead. For example, if you are building a small product that has control over both the server and client, and implementing SSL is too expensive for such a small project, then Hawk is the best option to secure the communication between your server and client.

Summary

Voila! We just secured our Web API using forms- and Windows-based authentication. In this article, you learned how forms authentication works and how it is implemented in the Web API. You also learned about configuring Windows authentication and got to know the advantages and disadvantages of using Windows authentication. Then you learned about implementing the Windows authentication mechanism in Katana. Finally, we had an introduction to Hawk authentication and the scenarios of using Hawk authentication.

Resources for Article:

Further resources on this subject:

- Working with ASP.NET Web API [article]
- Creating an Application using ASP.NET MVC, AngularJS and ServiceStack [article]
- Enhancements to ASP.NET [article]

What is OpenLayers?

Packt
13 May 2013
4 min read
(For more resources related to this topic, see here.)

As Christopher Schmidt, one of the main project developers, wrote on the OpenLayers users mailing list:

OpenLayers is not designed to be usable out of the box. It is a library designed to help you to build applications, so it's your job as an OpenLayers user to build the box.

Don't be scared! Building the box could be very easy and fun! The only two things you actually need to write your code and see it up and running are a text editor and a common web browser. With these tools you can create your Hello World web map, even without downloading anything and writing no more than a basic HTML template and a dozen lines of JavaScript code.
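To make that "dozen lines of JavaScript" claim concrete, here is a minimal sketch of such a Hello World map. It is only an illustration, assuming the OpenLayers 2.x API that was current at the time of writing, a page that already loads the OpenLayers script, and a <div id="map"></div> element to hold the map:

// Minimal "Hello World" map sketch (OpenLayers 2.x API assumed).
// The page is expected to include the OpenLayers script and a <div id="map"></div>.
var map = new OpenLayers.Map('map');      // attach the map to the div with id="map"
var osm = new OpenLayers.Layer.OSM();     // ready-made OpenStreetMap base layer
map.addLayer(osm);

// Center on longitude/latitude (0, 0) at zoom level 2; the OSM layer uses
// spherical Mercator, so the coordinates are transformed from EPSG:4326 first.
map.setCenter(
    new OpenLayers.LonLat(0, 0).transform(
        new OpenLayers.Projection('EPSG:4326'),
        map.getProjectionObject()
    ),
    2
);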
Going forward, step-by-step, you will realize that OpenLayers is not only easy to learn but also very powerful. So, whether you want to embed a simple web map in your website or you want to develop an advanced mash-up application by importing spatial data from different sources and in different formats, OpenLayers will probably prove to be a very good choice.

The strengths of OpenLayers are many and reside, first of all, in its compliance with the Open Geospatial Consortium (OGC) standards, making it capable of working together with all major and most common spatial data servers. This means you can connect your client application to web services such as WMS, WFS, or GeoRSS, add data from a bunch of raster and vector file formats such as GeoJSON and GML, and organize them in layers to create your original web mapping applications.

From what has been said until now, it is clear that OpenLayers is incredibly flexible in reading spatial data, but another very important characteristic is that it is also very effective in helping you optimize the performance of your web maps, by easily defining the strategies with which spatial data are requested and (for vectors) imported on the client side. Fast maps are important, and OpenLayers makes it possible to obtain them!

As we already said at the beginning, web maps created with OpenLayers are interactive, so users can (and want to) do more than simply look at your creation. To build this interactivity, OpenLayers provides you with a variety of controls that you can make available to your users. Tools to pan, zoom, or query the map give users the possibility to actually explore the content of the map and the spatial data displayed on it. We could say that controls bring maps to life, and you will learn how to take advantage of them in a few easy steps.

Fast loading and interactivity are important, but in many cases a crucial aspect in the process of developing a web map is to make it instantly readable. Is it really useful to build web maps if the users they are dedicated to need to spend too much time before understanding what they are looking at? Fortunately, OpenLayers comes with a wide range of possibilities for styling features in vector layers. You can choose between different vector features and rendering strategies, and customize every aspect of their graphics to make your maps expressive, actually "talking" and, why not, cool!

Finally, as you probably remember, OpenLayers is pure JavaScript, and JavaScript is also the language of a lot of fantastic Rich Internet Application (RIA) frameworks. Mixing OpenLayers and one of these frameworks opens a wide range of possibilities to obtain very advanced and attractive web mapping applications.

Resources for Article:

Further resources on this subject:

- Getting Started with OpenLayers [Article]
- OpenLayers: Overview of Vector Layer [Article]
- Getting Started with OpenStreetMap [Article]