How-To Tutorials - Programming


Improving proximity filtering with KNN

Packt
24 Jan 2014
7 min read
The basic question we seek to answer in this article is the fundamental distance question, "Which are the closest (name what you are searching for) to me?", for example, "Which are the five coffee shops closest to me?" It turns out that, while this is a fundamental question, it is not always easy to answer; we will make it possible in this article. We will take two approaches. First, we will use a simple heuristic that lets us reach a solution quickly. Then, we will take advantage of deeper PostGIS functionality to make the solution faster and more general with a K-Nearest Neighbor (KNN) approach.

A concept we need to understand from the outset is the spatial index. A spatial index, like other database indexes, functions like a book index: it is a special construct that makes looking for things inside our table easier, much as a book index helps us find content in a book faster. In the case of a spatial index, it helps us find where things are in space more quickly. By using a spatial index in our geographic searches, we can therefore speed them up by many orders of magnitude. To learn more about spatial indexes, see http://en.wikipedia.org/wiki/Spatial_index#Spatial_index.

Getting ready

We will start by loading our data. Our data are the address records from Cuyahoga County, Ohio, USA:

    shp2pgsql -s 3734 -d -i -I -W LATIN1 -g the_geom CUY_ADDRESS_POINTS chp04.knn_addresses | psql -U me -d postgis_cookbook

As this dataset may take a while to load, you can alternatively load a subset:

    shp2pgsql -s 3734 -d -i -I -W LATIN1 -g the_geom CUY_ADDRESS_POINTS_subset chp04.knn_addresses | psql -U me -d postgis_cookbook

We specified the -I flag in order to request that a spatial index be created upon import. Let us start by seeing how many records we are dealing with:

    SELECT COUNT(*) FROM chp04.knn_addresses;
    --484958

We have almost half a million address records in this table, which is no trivial number of records to query against.

How to do it...

KNN is an approach to searching for an arbitrary number of points closest to a given point. Without the right tools, this can be a very slow process that requires testing the distance between the point of interest and every possible neighbor, and such a search only gets slower as the number of points grows. Let's start with this naive approach and then improve upon it.

Suppose we were interested in finding the 10 records closest to the geographic location -81.738624, 41.396679. The naive approach is to transform this value into our local coordinate system, compute the distance from the search point to each point in the database, order those values by distance, and limit the results to the first 10 records (it is not recommended that you run the following query; it could run for a very long time):

    SELECT ST_Distance(searchpoint.the_geom, addr.the_geom) AS dist, *
      FROM chp04.knn_addresses addr,
        (SELECT ST_Transform(ST_SetSRID(ST_MakePoint(-81.738624, 41.396679), 4326), 3734) AS the_geom) searchpoint
      ORDER BY ST_Distance(searchpoint.the_geom, addr.the_geom)
      LIMIT 10;

This is a logical, simple approach that is adequately fast for relatively small numbers of records. It scales very poorly, however, slowing sharply with the addition of records; with 500,000 points, it would take a very long time.

An alternative is to compare the search point only to the points we already know are close, by setting a search distance. Suppose, for example, that the addresses fall on a 100-foot grid around my current location and I want to know the 10 closest ones. I can search for the points within 200 feet, measure the distance to each of those points, and return the 10 closest to my search location. In other words, we limit the search with the ST_DWithin operator so that only records within a certain distance are considered. ST_DWithin uses our spatial index, so the initial distance search is fast, and the list of returned records should be short enough to apply the same pairwise distance comparison we used earlier. In our case, we could limit the search to within 200 feet as follows:

    SELECT ST_Distance(searchpoint.the_geom, addr.the_geom) AS dist, *
      FROM chp04.knn_addresses addr,
        (SELECT ST_Transform(ST_SetSRID(ST_MakePoint(-81.738624, 41.396679), 4326), 3734) AS the_geom) searchpoint
      WHERE ST_DWithin(searchpoint.the_geom, addr.the_geom, 200)
      ORDER BY ST_Distance(searchpoint.the_geom, addr.the_geom)
      LIMIT 10;

This approach performs well so long as our search window, ST_DWithin, is the right size for the data. The problem is that, to optimize the query, we need to know how to set a search window of roughly the right size. Any larger than that and the query runs more slowly than we'd like; any smaller and we might not get back all the points we need. Inherently, we don't know this ahead of time, so we can only make a best guess. In this same dataset, applying the same query in another location can return no points at all, because the 10 closest points there are further than 200 feet away.

Fortunately, as of PostGIS 2.0, we can leverage the distance operators (<-> and <#>) to do indexed nearest-neighbor searches. This makes for very fast KNN searches that don't require us to guess ahead of time how far away we need to search. Why are the searches fast? The spatial index helps, of course, but in the case of the distance operators, we use the structure of the index itself, which is hierarchical, to sort our neighbors very quickly. When used in an ORDER BY clause, the distance operator uses the index:

    SELECT ST_Distance(searchpoint.the_geom, addr.the_geom) AS dist, *
      FROM chp04.knn_addresses addr,
        (SELECT ST_Transform(ST_SetSRID(ST_MakePoint(-81.738624, 41.396679), 4326), 3734) AS the_geom) searchpoint
      ORDER BY addr.the_geom <-> searchpoint.the_geom
      LIMIT 10;

This approach requires no a priori knowledge of how far away the nearest neighbors might be, and it scales very well, returning thousands of records in little more than the time it takes to return a few. Depending on how small our search distance is and how large our dataset is, it can sometimes be slower than ST_DWithin; the tradeoff is that we don't need to guess a correct search distance, and for large queries it can be much faster than the naive approach.

How it works...

What makes this magic possible is that PostGIS uses an R-tree index. This means that the index itself is sorted hierarchically based on spatial information.
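You can watch this happen by asking the planner for the query plan. The following is a sketch (the exact plan output varies with your PostgreSQL and PostGIS versions, and the index consulted is whichever GiST index the -I flag created at load time); the point is that the ORDER BY ... <-> form produces an index scan rather than a sequential scan followed by a full sort:

    EXPLAIN ANALYZE
    SELECT addr.*
      FROM chp04.knn_addresses addr
      ORDER BY addr.the_geom <->
        ST_Transform(ST_SetSRID(ST_MakePoint(-81.738624, 41.396679), 4326), 3734)
      LIMIT 10;
    -- Expect an Index Scan on the table's spatial (GiST) index in the plan,
    -- not a Seq Scan feeding a Sort node.

Of the two operators, <-> orders by the distance between the geometries themselves (for points, the points as such), while <#> orders by the distance between their bounding boxes, which can serve as a cheaper first pass when the geometries are not simple points.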
As demonstrated, we can leverage the structure of the index to sort distances from a given arbitrary location and thus use the index to return the sorted records directly. In other words, the structure of the spatial index itself helps us answer such fundamental questions quickly and inexpensively. More information about KNN and R-trees can be found at http://workshops.boundlessgeo.com/postgis-intro/knn.html and https://en.wikipedia.org/wiki/R-tree.

Summary

This article introduced KNN searches and showed how to use them to increase the performance of proximity queries.


Organizing Backbone Applications - Structure, Optimize, and Deploy

Packt
21 Jan 2014
9 min read
Creating application architecture

"The essential premise at the heart of Backbone has always been to try and discover the minimal set of data-structuring (Models and Collections) and user interface (Views and URLs) primitives that are useful when building web applications with JavaScript."

- Jeremy Ashkenas, creator of Backbone.js, Underscore.js, and CoffeeScript

As Jeremy mentions, Backbone.js has no intention, at least in the near future, of raising its bar to provide application architecture. Backbone will continue to be a lightweight tool that yields the minimal features required for web development. So, should we blame Backbone.js for not including such functionality even though there is huge demand for it in the developer community? Certainly not! Backbone.js only provides the set of components necessary to create the backbone of an application, and it gives us complete freedom to build the app architecture in whichever way we want.

"If working on a significantly large JavaScript application, remember to dedicate sufficient time to planning the underlying architecture that makes the most sense. It's often more complex than you may initially imagine."

- Addy Osmani, author of Patterns For Large-Scale JavaScript Application Architecture

So, as we dig into the details of creating an application architecture, we are not going to talk about trivial applications or anything similar to a to-do-list app. Rather, we will investigate how to structure a medium- or large-scale application. After discussions with a number of developers, we found that the main issue they face is that online blog posts and tutorials offer several different methodologies for structuring an application. While most of these tutorials describe good practices, it becomes difficult to choose exactly one of them. Keeping that in mind, we will explore a number of steps you should follow to make your app robust and maintainable in the long run.

Managing a project directory

This is the first step towards creating a solid app architecture. We have already discussed it in detail in the previous sections. If you are comfortable using another directory layout, go ahead with it; the directory structure will not matter much if the rest of your application is organized properly.

Organizing code with AMD

We will use RequireJS for our project. As discussed earlier, it comes with a number of facilities, such as the following:

- Adding a lot of script tags in one HTML file and managing all of the dependencies on your own may work for a medium-level project, but will gradually fail for a large one. Such a project may have thousands of lines of code, and managing a code base of that size requires small modules to be defined in individual files. With RequireJS, you do not need to worry about how many files you have; you just know that if the standard is followed properly, it is bound to work.
- The global namespace is never touched, and you are free to give every module the name that fits it best.
- Debugging RequireJS modules is a lot easier than with other approaches, because every module definition states its dependencies and the path to each of them.
- You can use r.js, an optimization tool for RequireJS that minifies all the JavaScript and CSS files, to create the production-ready build.

Setting up an application

For a Backbone app, there must be a centralized object that holds together all the components of the application. In a simple application, most people generally just make the main router work as the central object, but that will surely not work for a large application; you need an Application object that works as the parent component. This object should have a method (usually init()) that works as the entry point to your application and initializes the main router along with the Backbone history. In addition, either your Application class should extend Backbone.Events, or it should include a property that points to a Backbone.Events-based object. The benefit of doing this is that the app, or the Backbone.Events object, can act as a central event aggregator, and you can trigger application-level events on it. A very basic Application class will look like the following code snippet:

    // File: application.js
    define([
      'underscore',
      'backbone',
      'router'
    ], function (_, Backbone, Router) {
      // The application-level event aggregator
      var PubSub = _.extend({}, Backbone.Events);

      var Application = function () {
        // Do useful stuff here
      };

      _.extend(Application.prototype, {
        // PubSub is already a Backbone.Events-based object, not a
        // constructor, so it is assigned directly rather than
        // instantiated with "new"
        pubsub: PubSub,

        init: function () {
          Backbone.history.start();
        }
      });

      return Application;
    });

Application is a simple class with an init() method and a pubsub property. The init() method acts as the starting point of the application, and pubsub works as the application-level event manager. You can add more functionality to the Application class, such as starting and stopping modules or adding a region manager for view layout management, but it is advisable to keep this class as short as you can.

Using the module pattern

We often see that intermediate-level developers find it a bit confusing to start using a module-based architecture; the transition from a simple MVC architecture to a modular MVC architecture can be difficult. While the points we discuss in this article are valid for both architectures, we should always prefer a modular concept in nontrivial applications for better maintainability and organization. In the directory structure section, we saw how a module consists of a main.js file together with its views, models, and collections. The main.js file defines the module and provides methods to manage the other components of that module; it works as the starting point of the module. A simple main.js file will look like the following code:

    // File: main.js
    define([
      'app/modules/user/views/userlist',
      'app/modules/user/views/userdetails'
    ], function (UserList, UserDetails) {
      return {
        initialize: function () {
          this.showUsersList();
        },

        showUsersList: function () {
          var userList = new UserList();
          userList.show();
        },

        showUserDetails: function (userModel) {
          var userDetails = new UserDetails({
            model: userModel
          });
          userDetails.show();
        }
      };
    });

As you can see, the responsibility of this file is to initiate the module and manage its components. We have to make sure that it handles only parent-level tasks; it shouldn't contain a method that one of its views should ideally have. The concept is not very complex, but you need to set it up properly in order to use it for a large application.
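Tying the two pieces together, a top-level entry point might look like the following. This is a minimal sketch, and the file name and module path are hypothetical; the shape simply follows the Application class and the module main.js shown above:

    // File: app.js (hypothetical entry point)
    define([
      'application',
      'app/modules/user/main'
    ], function (Application, UserModule) {
      var app = new Application();

      // Start the main router and Backbone history first...
      app.init();

      // ...then hand control to the module's own entry point.
      UserModule.initialize();

      return app;
    });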
You can even take an existing app and module setup and integrate it with your Backbone app. For instance, Marionette provides an application infrastructure for Backbone apps; you can use its inbuilt Application and Module classes to structure your application. It also provides a general-purpose Controller class, something that doesn't come with the Backbone library but that can be used as a mediator to provide generic methods and work as a common medium among the modules.

You can also use AuraJS (https://github.com/aurajs/aura), a framework-agnostic, event-driven architecture developed by Addy Osmani (http://addyosmani.com) and many others; it works quite well with Backbone.js. A thorough discussion of AuraJS is beyond the scope of this book, but you can find a lot of useful information in its documentation and examples (https://github.com/aurajs/todomvc). It is an excellent boilerplate tool that gives your app a kick-start, and we highly recommend it, especially if you are not using the Marionette application infrastructure. The following are a few benefits of using AuraJS; they may help you choose this framework for your application:

- AuraJS is framework-agnostic. Though it works great with Backbone.js, you can use it for your JavaScript module architecture even if you aren't using Backbone.js.
- It utilizes the module pattern, with application-level and module-level communication using the facade (sandbox) and mediator patterns.
- It abstracts away the utility libraries that you use (such as templating and DOM manipulation) so you can swap in alternatives anytime you want.

Managing objects and module communication

One of the most important ways to keep application code maintainable is to reduce the tight coupling between modules and objects. If you are following the module pattern, you should never let one module communicate with another directly. Loose coupling adds a level of restriction to your code: a change in one module will never force a change in the rest of the application. Moreover, it lets you reuse the same modules elsewhere. But how can modules communicate if there is no direct relationship between them? The two important patterns we use in this case are the observer and mediator patterns.

Using the observer/PubSub pattern

The PubSub pattern is nothing but an event dispatcher. It works as a messaging channel between the object (publisher) that fires an event and another object (subscriber) that receives the notification. We mentioned earlier that we can have an application-level event aggregator as a property of the Application object. This event aggregator can work as the common channel via which modules communicate without interacting directly. Even at the module level, you may need a common event dispatcher just for that module; its views, models, and collections can use it to communicate with each other. However, publishing too many events via one dispatcher can make them difficult to manage, and you must be careful to understand which events you should publish via a generic dispatcher and which ones you should fire on a certain component only. Either way, this pattern is one of the best tools for designing a decoupled system, and you should always have one ready for use in your module-based application.

Summary

This article dealt with one of the most important topics of Backbone.js-based application development. At the framework level, learning Backbone is quite easy, and developers get a complete grasp of it in a very short period of time; it is the architecture built on top of it that needs careful planning.
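To make the PubSub idea concrete, here is a minimal sketch of two modules communicating through the pubsub property of the Application object shown earlier (the event name and payload are hypothetical):

    var app = new Application();

    // The user module subscribes to an application-level event...
    app.pubsub.on('user:selected', function (user) {
      console.log('Showing details for user ' + user.id);
    });

    // ...and another module publishes it without ever referencing
    // the user module directly.
    app.pubsub.trigger('user:selected', { id: 42 });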


Dart Server with Dartling and MongoDB

Packt
21 Jan 2014
10 min read
Server Side Dart

Creating a server in Dart is surprisingly simple once asynchronous programming with Futures is understood.

Starting a server

To start the server, run the main function in todo_mongodb/todo_server_dartling_mongodb/bin/server.dart:

    void main() {
      db = new TodoDb();
      db.open().then((_) {
        start();
      });
    }

Access to a MongoDB database is prepared in the TodoDb constructor in todo_mongodb/todo_server_dartling_mongodb/lib/persistence/mongodb.dart. The database is opened, and then the server is started:

    start() {
      HttpServer.bind(HOST, PORT)
        .then((server) {
          server.listen((HttpRequest request) {
            switch (request.method) {
              case 'GET':
                handleGet(request);
                break;
              case 'POST':
                handlePost(request);
                break;
              case 'OPTIONS':
                handleOptions(request);
                break;
              default:
                defaultHandler(request);
            }
          });
        })
        .catchError(print)
        .whenComplete(() => print('Server at http://$HOST:$PORT'));
    }

If there are no problems, the following message is displayed in the console of Dart Editor:

    Server at http://127.0.0.1:8080

The server accepts either GET or POST requests:

    void handleGet(HttpRequest request) {
      HttpResponse res = request.response;
      print('${request.method}: ${request.uri.path}');
      addCorsHeaders(res);
      res.headers.contentType =
          new ContentType("application", "json", charset: 'utf-8');
      List<Map> jsonList = db.tasks.toJson();
      String jsonString = convert.JSON.encode(jsonList);
      print('JSON list in GET: ${jsonList}');
      res.write(jsonString);
      res.close();
    }

In its response to a GET request, the server sends CORS headers to the client, allowing a browser to send requests to a server other than the one that served the page:

    void handlePost(HttpRequest request) {
      print('${request.method}: ${request.uri.path}');
      request.listen((List<int> buffer) {
        var jsonString = new String.fromCharCodes(buffer);
        List<Map> jsonList = convert.JSON.decode(jsonString);
        print('JSON list in POST: ${jsonList}');
        _integrateDataFromClient(jsonList);
      }, onError: print);
    }

A POST request integrates data from a client into the model:

    _integrateDataFromClient(List<Map> jsonList) {
      var clientTasks = new Tasks.fromJson(db.tasks.concept, jsonList);
      var serverTaskList = db.tasks.toList();
      for (var serverTask in serverTaskList) {
        var clientTask =
            clientTasks.singleWhereAttributeId('title', serverTask.title);
        if (clientTask == null) {
          new RemoveAction(db.session, db.tasks, serverTask).doit();
        }
      }
      for (var clientTask in clientTasks) {
        var serverTask =
            db.tasks.singleWhereAttributeId('title', clientTask.title);
        if (serverTask != null) {
          if (serverTask.updated.millisecondsSinceEpoch <
              clientTask.updated.millisecondsSinceEpoch) {
            new SetAttributeAction(db.session, serverTask, 'completed',
                clientTask.completed).doit();
          }
        } else {
          new AddAction(db.session, db.tasks, clientTask).doit();
        }
      }
    }

MongoDB database

MongoDB is used to load all data from the database into the Dartling model. In general, there may be more than one domain in a Dartling repository, and more than one model in a domain. A model has concepts with attributes and relationships between concepts; the TodoMVC model has only one concept, Task, and no relationships. A model in Dartling may also be considered an in-memory graphical database. It has actions, action pre- and post-validations, error handling, select data views, view update propagation, reaction events, transactions, and sessions with an action history, so that undos and redos on the model may be done. You can add, remove, update, validate, find, select, and order data. Actions or transactions may be used to support unrestricted undos and redos in a domain session; a transaction is an action that contains other actions. The domain allows any object to react to actions in its models.

The empty Dartling model is prepared in the TodoDb constructor:

    TodoDb() {
      var repo = new TodoRepo();
      domain = repo.getDomainModels('Todo');
      domain.startActionReaction(this);
      session = domain.newSession();
      model = domain.getModelEntries('Mvc');
      tasks = model.tasks;
    }

It is in the open method that the data are loaded into the model:

    Future open() {
      Completer completer = new Completer();
      db = new Db('${DEFAULT_URI}${DB_NAME}');
      db.open().then((_) {
        taskCollection = new TaskCollection(this);
        taskCollection.load().then((_) {
          completer.complete();
        });
      }).catchError(print);
      return completer.future;
    }

In the MongoDB database there is one collection of tasks, in which each task is a JSON document. This collection is defined in the TaskCollection class in mongodb.dart. The load method in this class transfers tasks from the database to the model:

    Future load() {
      Completer completer = new Completer();
      dbTasks.find().toList().then((taskList) {
        taskList.forEach((taskMap) {
          var task = new Task.fromDb(todo.tasks.concept, taskMap);
          todo.tasks.add(task);
        });
        completer.complete();
      }).catchError(print);
      return completer.future;
    }

There is only one concept in the model. Thus, the concept is an entry concept and its entities are tasks (of the Tasks class). After the data are loaded, only the tasks entities need to be used.

The TodoDb class implements the ActionReactionApi of Dartling. A reaction to an action in the model is defined in the react method of the TodoDb class:

    react(ActionApi action) {
      if (action is AddAction) {
        taskCollection.insert(action.entity);
      } else if (action is RemoveAction) {
        taskCollection.delete(action.entity);
      } else if (action is SetAttributeAction) {
        taskCollection.update(action.entity);
      }
    }

Tasks are inserted, deleted, and updated in the MongoDB database in the following methods of the TaskCollection class:

    Future<Task> insert(Task task) {
      var completer = new Completer();
      var taskMap = task.toDb();
      dbTasks.insert(taskMap).then((_) {
        print('inserted task: ${task.title}');
        completer.complete();
      }).catchError(print);
      return completer.future;
    }

    Future<Task> delete(Task task) {
      var completer = new Completer();
      var taskMap = task.toDb();
      dbTasks.remove(taskMap).then((_) {
        print('removed task: ${task.title}');
        completer.complete();
      }).catchError(print);
      return completer.future;
    }

    Future<Task> update(Task task) {
      var completer = new Completer();
      var taskMap = task.toDb();
      dbTasks.update({"title": taskMap['title']}, taskMap).then((_) {
        print('updated task: ${task.title}');
        completer.complete();
      }).catchError(print);
      return completer.future;
    }

Dartling tasks

The TodoMVC model is designed in Model Concepts, and the graphical model is transformed into a JSON document:

    {
      "width": 990,
      "height": 580,
      "boxes": [
        {
          "name": "Task",
          "entry": true,
          "x": 85,
          "y": 67,
          "width": 80,
          "height": 80,
          "items": [
            {
              "sequence": 10,
              "name": "title",
              "category": "identifier",
              "type": "String",
              "init": "",
              "essential": true,
              "sensitive": false
            },
            {
              "sequence": 20,
              "name": "completed",
              "category": "required",
              "type": "bool",
              "init": "false",
              "essential": true,
              "sensitive": false
            },
            {
              "sequence": 30,
              "name": "updated",
              "category": "required",
              "type": "DateTime",
              "init": "now",
              "essential": false,
              "sensitive": false
            }
          ]
        }
      ],
      "lines": []
    }

This JSON document is used in dartling_gen to generate the model in Dart. The lib/gen and lib/todo folders contain the generated model. The gen folder contains generic code that should not be changed by a programmer; the todo folder contains specific code that may be changed. The specific code has Task and Tasks classes that are augmented by some hand-written code:

    class Task extends TaskGen {
      Task(Concept concept) : super(concept);

      Task.withId(Concept concept, String title)
          : super.withId(concept, title);

      // begin: added by hand
      Task.fromDb(Concept concept, Map value) : super(concept) {
        title = value['title'];
        completed = value['completed'];
        updated = value['updated'];
      }

      Task.fromJson(Concept concept, Map value) : super(concept) {
        title = value['title'];
        completed = value['completed'] == 'true' ? true : false;
        updated = DateTime.parse(value['updated']);
      }

      bool get left => !completed;

      bool get generate => title.contains('generate') ? true : false;

      Map toDb() {
        return {
          'title': title,
          'completed': completed,
          'updated': updated
        };
      }

      bool preSetAttribute(String name, Object value) {
        bool validation = super.preSetAttribute(name, value);
        if (name == 'title') {
          String title = value;
          if (validation) {
            validation = title.trim() != '';
            if (!validation) {
              var error = new ValidationError('pre');
              error.message = 'The title should not be empty.';
              errors.add(error);
            }
          }
          if (validation) {
            validation = title.length <= 64;
            if (!validation) {
              var error = new ValidationError('pre');
              error.message =
                  'The "${title}" title should not be longer than 64 characters.';
              errors.add(error);
            }
          }
        }
        return validation;
      }
      // end: added by hand
    }

    class Tasks extends TasksGen {
      Tasks(Concept concept) : super(concept);

      // begin: added by hand
      Tasks.fromJson(Concept concept, List<Map> jsonList) : super(concept) {
        for (var taskMap in jsonList) {
          add(new Task.fromJson(concept, taskMap));
        }
      }

      Tasks get completed => selectWhere((task) => task.completed);

      Tasks get left => selectWhere((task) => task.left);

      Task findByTitleId(String title) {
        return singleWhereId(new Id(concept)..setAttribute('title', title));
      }
      // end: added by hand
    }

Client Side Dart

The Todo web application may be run in the Dartium virtual machine within Dart Editor, or as a JavaScript application in any modern browser (todo_mongodb/todo_client_idb/web/app.html). The client application has both a model, in todo_mongodb/todo_client_idb/lib/model, and a view of the model, in todo_mongodb/todo_client_idb/lib/view. The model has two Dart files: idb.dart for IndexedDB and model.dart for plain objects created from scratch without any model framework such as Dartling. The view is done in DOM. The application delegates local storage to IndexedDB.

A user of the application communicates with the Dart server through two buttons. The To server button sends local data to the server, while the From server button brings changes to the local data from the MongoDB database:

    ButtonElement toServer = querySelector('#to-server');
    toServer.onClick.listen((MouseEvent e) {
      var request = new HttpRequest();
      request.onReadyStateChange.listen((_) {
        if (request.readyState == HttpRequest.DONE &&
            request.status == 200) {
          serverResponse = 'Server: ' + request.responseText;
        } else if (request.readyState == HttpRequest.DONE &&
            request.status == 0) {
          // Status is 0...most likely the server isn't running.
          serverResponse = 'No server';
        }
      });
      var url = 'http://127.0.0.1:8080';
      request.open('POST', url);
      request.send(_tasksStore.tasks.toJsonString());
    });

    ButtonElement fromServer = querySelector('#from-server');
    fromServer.onClick.listen((MouseEvent e) {
      var request = new HttpRequest();
      request.onReadyStateChange.listen((_) {
        if (request.readyState == HttpRequest.DONE &&
            request.status == 200) {
          String jsonString = request.responseText;
          serverResponse = 'Server: ' + request.responseText;
          if (jsonString != '') {
            List<Map> jsonList = JSON.decode(jsonString);
            print('JSON list from the server: ${jsonList}');
            _tasksStore.loadDataFromServer(jsonList)
              .then((_) {
                var tasks = _tasksStore.tasks;
                _clearElements();
                loadElements(tasks);
              })
              .catchError((e) {
                print('error in loading data into IndexedDB from JSON list');
              });
          }
        } else if (request.readyState == HttpRequest.DONE &&
            request.status == 0) {
          // Status is 0...most likely the server isn't running.
          serverResponse = 'No server';
        }
      });
      var url = 'http://127.0.0.1:8080';
      request.open('GET', url);
      request.send('update-me');
    });

Server data are loaded in the loadDataFromServer method of the TasksStore class in todo_mongodb/todo_client_idb/lib/model/idb.dart:

    Future loadDataFromServer(List<Map> jsonList) {
      Completer completer = new Completer();
      Tasks integratedTasks = _integrateDataFromServer(jsonList);
      clear()
        .then((_) {
          int count = 0;
          for (Task task in integratedTasks) {
            addTask(task)
              .then((_) {
                if (++count == integratedTasks.length) {
                  completer.complete();
                }
              });
          }
        });
      return completer.future;
    }

The server data are integrated into the local data by the _integrateDataFromServer method of the TasksStore class:

    Tasks _integrateDataFromServer(List<Map> jsonList) {
      var serverTasks = new Tasks.fromJson(jsonList);
      var clientTasks = tasks.copy();
      var clientTaskList = clientTasks.toList();
      for (var clientTask in clientTaskList) {
        if (!serverTasks.contains(clientTask.title)) {
          clientTasks.remove(clientTask);
        }
      }
      for (var serverTask in serverTasks) {
        if (clientTasks.contains(serverTask.title)) {
          var clientTask = clientTasks.find(serverTask.title);
          clientTask.completed = serverTask.completed;
          clientTask.updated = serverTask.updated;
        } else {
          clientTasks.add(serverTask);
        }
      }
      return clientTasks;
    }

Summary

The TodoMVC client-server application is developed in Dart. The web application is done in DOM, with local data from a simple model stored in an IndexedDB database. The local data are sent to the server as a JSON document, and data from the server are received as a JSON document as well. Both on the client and on the server, incoming data are integrated into the model. The server uses the Dartling domain framework for its model, which is stored in the MongoDB database; the action events from Dartling are used to propagate changes from the model to the database.
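As a quick smoke test of the server's GET contract described above, a tiny command-line client could look like the following. This is a hypothetical sketch, not part of the original application; only the host, port, and JSON response format are taken from the server code above:

    // File: check_server.dart (hypothetical)
    import 'dart:io';
    import 'dart:convert';

    void main() {
      new HttpClient()
          .get('127.0.0.1', 8080, '/')
          .then((HttpClientRequest request) => request.close())
          .then((HttpClientResponse response) =>
              response.transform(UTF8.decoder).join())
          .then((String body) {
            // The server answers a GET with the task list as a JSON array.
            List tasks = JSON.decode(body);
            print('${tasks.length} task(s) on the server');
          })
          .catchError(print);
    }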


Prototyping Recipes

Packt
21 Jan 2014
16 min read
Sketching, scanning, and prototyping

Most folks start the design process by developing quick sketches of their concepts. These sketches can be elaborate or rudimentary, and they often evolve into paper prototypes that illustrate the flow, or steps, a user would take to complete a task. By scanning your drawings, making adjustments with your favorite image-editing software (GIMP, Adobe Photoshop, and so on), and using Axure, you can quickly create a clickable prototype.

Getting ready

To follow this recipe, you will need digital scans of your sketches and access to the image-editing software of your choice.

How to do it...

You will now create a carousel, including thumbnails, from digital scans of simple freehand-drawn sketches:

1. Using your image-editing tool, organize your images and crop them appropriately. You will have to organize the images and visualize the user flow just as you would for paper prototypes.
2. Start Axure and, under Create New, select RP File. If you already have Axure open, select File in the main menu and then click on New in the drop-down menu to create a new RP document.
3. In the Sitemap, add as many child or sibling pages as your flow requires by clicking on the Add Page button icon, or by right-clicking on any page in the sitemap, mousing over Add in the menu that appears, and clicking on the Child or Sibling page.
4. Double-click on any page title in the Sitemap to select that page. You will see the wireframe for the associated page.
5. While holding down the mouse button, drag the Image widget and place it on the wireframe.
6. Double-click on the Image widget on the wireframe and select the appropriate scanned sketch.
7. While holding down the mouse button, drag the Hot Spot widget and place it over the item you would like to make clickable.
8. While holding down the mouse button, drag the corners of the Hot Spot widget on the wireframe to the desired size.
9. With the Hot Spot selected, in the Widget Interactions and Notes pane, click on Create Link….
10. In the Sitemap pop up, click on the associated page in the user flow.
11. Repeat steps 7 through 10 for each region on your wireframe that you would like to make clickable.
12. Repeat steps 4 through 11 for each page in the Sitemap that you would like to make part of the prototype.
13. You can now preview or save a copy of the prototype. To preview it, click on the Preview button in the toolbar. To save a copy, click on the Publish button in the toolbar and select Generate HTML Files…. You can also generate the prototype by going to the main menu, selecting Publish, and clicking on Generate HTML Files….

How it works...

Using this recipe, you converted your paper sketches into a clickable digital prototype. Each paper sketch became a page in the Sitemap through the use of the Image widget: you opened the scanned image with the Image widget to display your paper sketch on the page. To create clickable regions, you used a Hot Spot and associated the next page in the flow using Create Link…, with as many hot-spot regions as there are clickable elements in the interactions on a page.

Creating a dynamic Breadcrumb Master

Using Masters in Axure allows you to create reusable components. When you make a change to a Master, the change is applied to all wireframes that contain that Master, so leveraging Masters can ensure the consistency of elements across your prototypes.

Getting ready

In this recipe you will create a dynamic Breadcrumb Master. In Axure, verify that the Widget Manager and Page Properties panes are shown. To verify this, click on View in the main menu and mouse over Panes; in the pop-up menu, make sure that a check mark appears next to all items, including the Widget Manager and Page Properties panes.

How to do it...

To create a dynamic Breadcrumb Master, you will first create new pages in your sitemap and four empty Masters (Template, Header, Menu, and BreadCrumb). Next, you will place widgets on the Header, Menu, and BreadCrumb Masters; place those three Masters onto the Template Master; and finally drag the Template Master onto all of the pages in the Sitemap:

1. Start Axure and, under Create New, select RP File.
2. In the Sitemap, create the pages used in this recipe: Home, Primary Page, Secondary Page, Tertiary Page, Category Page, Product Detail Page, and Content Page (the original screenshot of the page tree is not reproduced here).
3. In the Masters pane, create four individual Masters, titled Template, Header, Menu, and BreadCrumb, respectively.
4. Right-click on each Master you created in step 3, mouse over Drop Behaviour, and click on Lock to Master Location. This causes the widgets in each Master to keep their x and y coordinates no matter where the Master is placed in a wireframe.
5. In the Masters pane, double-click on the Header Master to select it.
6. While holding down the mouse button, drag the Placeholder widget and place it on the wireframe.
7. With the Placeholder widget selected, type Home and set x: 10, y: 12, w: 96, and h: 30 (at the top left of the window).
8. In the Widget Interactions and Notes pane, click on the Shape Name text field and type HomeLink.
9. While holding down the mouse button, drag the Label widget and place it at the coordinates (130,18) on the wireframe.
10. With the Label widget selected, type BreadCrumb Prototype; in the Widget Interactions and Notes pane, click on the Shape Name text field and type HeaderLabel; then, in the Widget Properties and Style pane, click on the Style tab, scroll to the Font section, and increase the font size to 18 using the font size dropdown.
11. In the Masters pane, double-click on the Menu Master to select it.
12. While holding down the mouse button, drag the Classic Menu - Horizontal widget and place it at the coordinates (10,52) on the wireframe.
13. In the Widget Interactions and Notes pane, click on the Menu Name text field and type MainMenu.
14. To name and link the primary menu item, perform the following steps: click on the first menu item (labeled File) to select it and type Primary. In the Widget Interactions and Notes pane, click on the Menu Item Name text field and type MenuPrimary. On the Interactions tab of the same pane, click on Add Case…. In the Case Editor (OnClick) pop up, rename the case description OpenPrimaryPage. In Click to add actions, click on Open Link; in Organize actions, the interaction description updates to Open Link in Current Window. In Configure actions, click on the radio button next to Link to a page in this design, click on Primary Page, and click on OK. Finally, in the Widget Properties and Style pane, on the Style tab, scroll to the Font section and increase the font size to 16.
15. To name and link the category menu item, repeat the previous step for the second menu item (labeled Edit), using the text Category, the name MenuCategory, the case description OpenCategoryPage, and Category Page as the link target.
16. To name and link the content menu item, repeat the same step for the third menu item (labeled View), using the text Content, the name MenuContent, the case description OpenContentPage, and Content Page as the link target.
17. To add a submenu item, right-click on the Primary menu item and click on Add Submenu.
18. Click on the first submenu item and enter Secondary; then, following the same pattern as before, give it the name MenuSecondary and a case named OpenSecondaryPage that opens Secondary Page.
19. Right-click on each of the second and third submenu items and click on Delete Menu Item.
20. In the Masters pane, double-click on the BreadCrumb Master to select it.
21. While holding down the mouse button, drag the Dynamic Panel widget and place it on the wireframe, then set its position and size (the exact values appear in the book's screenshot, which is not reproduced here).
22. With the Dynamic Panel selected, in the Widget Interactions and Notes pane, click on the Dynamic Panel Name text field and type BreadCrumb; in the Widget Manager, rename State1 to Home.
23. Add states to the Dynamic Panel as follows: Primary, Secondary, Tertiary, Category, Product, and Content.
24. With the Dynamic Panel selected, double-click on the state labeled Primary in the Dynamic Panel Manager. While holding down the mouse button, drag a Label widget and place it on the wireframe at the coordinates (0,6). Enter Home as the text on the Label widget. In the Widget Interactions and Notes pane, click on the Shape Name text field and type HomeBreadCrumbLink. On the Interactions tab, click on Add Case…; in the Case Editor (OnClick) pop up, rename the case description OpenHomePage. In Click to add actions, click on Open Link; in Organize actions, the interaction description updates to Open Link in Current Window. In Configure actions, click on the radio button next to Link to a page in this design, click on Home page, and click on OK.
25. You will now build out the Dynamic Panel states Primary, Secondary, Tertiary, Category, Product, and Content (the book illustrates the finished Primary state in a screenshot). With the Dynamic Panel selected, repeat the previous step once for each row of the following table:

    Panel state | Label coordinates | Label text     | Shape name              | Case description  | Link to
    Primary     | (0,6)             | Home           | HomeBreadCrumbLink      | OpenHomePage      | Home
    Primary     | (55,6)            | Primary        | PrimaryBreadCrumbLink   | OpenPrimaryPage   | Primary Page
    Secondary   | (0,6)             | Home           | HomeBreadCrumbLink      | OpenHomePage      | Home
    Secondary   | (55,6)            | Primary        | PrimaryBreadCrumbLink   | OpenPrimaryPage   | Primary Page
    Secondary   | (115,6)           | Secondary      | SecondaryBreadCrumbLink | OpenSecondaryPage | Secondary Page
    Tertiary    | (0,6)             | Home           | HomeBreadCrumbLink      | OpenHomePage      | Home
    Tertiary    | (55,6)            | Primary        | PrimaryBreadCrumbLink   | OpenPrimaryPage   | Primary Page
    Tertiary    | (115,6)           | Secondary      | SecondaryBreadCrumbLink | OpenSecondaryPage | Secondary Page
    Tertiary    | (200,6)           | Tertiary       | TertiaryBreadCrumbLink  | OpenTertiaryPage  | Tertiary Page
    Category    | (0,6)             | Home           | HomeBreadCrumbLink      | OpenHomePage      | Home
    Category    | (55,6)            | Category       | CategoryBreadCrumbLink  | OpenCategoryPage  | Category Page
    Product     | (0,6)             | Home           | HomeBreadCrumbLink      | OpenHomePage      | Home
    Product     | (55,6)            | Category       | CategoryBreadCrumbLink  | OpenCategoryPage  | Category Page
    Product     | (125,6)           | Product Detail | DetailBreadCrumbLink    | OpenDetailPage    | Product Detail Page
    Content     | (0,6)             | Home           | HomeBreadCrumbLink      | OpenHomePage      | Home
    Content     | (55,6)            | Content        | ContentBreadCrumbLink   | OpenContentPage   | Content Page

26. To populate the Template Master with the component Masters (Header, Menu, and BreadCrumb), double-click on the Template Master in the Masters pane to select it. While holding down the mouse button, drag the Header Master and place it anywhere on the wireframe; because you specified Lock to Master Location for the Drop Behaviour of each Master in step 4, the widgets in each Master keep their x and y coordinates no matter where the Master is placed. Drag the Menu Master and the BreadCrumb Master onto the wireframe in the same way.
27. While holding down the mouse button, drag the Template Master onto a page and place it anywhere on the wireframe; it will align to its fixed x and y coordinates.
28. Below the wireframe, click on the Page Interactions tab and double-click on the OnPageLoad interaction.
29. In the Case Editor (OnPageLoad) pop up, rename the case description SetBreadCrumbState. In Click to add actions, click on Dynamic Panels to expand it, and then click on Set Panel State; in Organize actions, the interaction description updates to Set Panel to State. In Configure actions, under Select the panels to set the state, check the box next to the label for the BreadCrumb (Dynamic Panel). Click on the Select the state dropdown and select Home; the interaction description under Organize actions updates to read Set Template/BreadCrumb/BreadCrumb Home. Click on OK.
30. Repeat steps 27 to 29 for the remaining pages in the Sitemap, modifying each OnPageLoad case to set the BreadCrumb state to the panel state corresponding to the selected page (for example, for the Primary page, the corresponding state is Primary, and so on).
31. You can now preview or save a copy of the prototype, as described in the first recipe.

How it works...

For this recipe, you used Masters, a dynamic panel, a menu widget, and label widgets to create a dynamic BreadCrumb Master. You created four Masters (Template, Header, Menu, and BreadCrumb) and set the behavior of each to Lock to Master Location, which retains the coordinates of the widgets placed on each Master when it is used on a page. The Template Master contains the Header, Menu, and BreadCrumb Masters. The Menu was labeled and linked to the corresponding pages in the Sitemap. The BreadCrumb Master contains a dynamic panel with a panel state for each of the pages in the Sitemap; within each state, label widgets are linked to the corresponding pages. When a page loads, the OnPageLoad page interaction sets the state of the BreadCrumb dynamic panel to show the correct breadcrumb trail.

Generating a dynamic welcome message

Using variables, you can set widget values and text dynamically. For example, when a page loads, you can show the user a welcome message based on the day of the week.

Getting ready

In this recipe, you are going to explore using built-in variables in expressions. You will show the user a welcome message using the day-of-week value.

How to do it...

Perform the following steps:

1. Start Axure and, under Create New, select RP File. If you already have Axure open, select File in the main menu and then click on New in the drop-down menu to create a new RP document.
2. While holding down the mouse button, drag the Label widget and place it on the wireframe at (53,13).
3. With the Label widget selected, in the Widget Interactions and Notes pane, click on the field below Shape Name and type WelcomeText.
4. In the Page Properties pane, click on the Page Interactions tab and double-click on the OnPageLoad interaction.
5. In the Case Editor (OnPageLoad) pop up, under Case description, type DisplayMessage.
6. In Click to add actions, click on Set Text; in Organize actions, the interaction description updates to Set Text.
7. In Configure actions, under Select the widgets to set text, check the box next to the label for the WelcomeText (Shape).
8. In Configure actions, under Set text to, set the dropdown to value and click on the fx button to bring up the Edit Text pop up.
9. In the Edit Text pop up, enter the text Welcome. Today is in the text field.
10. Click on the Insert Variable or Function... link to open the drop-down menu; scroll to Date; click on Date to expand the selection; and click on getDayOfWeek().
11. Click on OK.
12. You can now preview or save a copy of the prototype, as before.

How it works...

For this recipe, you used the Label widget and, in the Widget Interactions and Notes pane, gave it a name. Next, in the Page Properties pane, you set a case on the OnPageLoad interaction to set the widget's text. Finally, in the Edit Text pop up, you used a built-in variable.

There's more...

At times you may find that your Label widget does not display the built-in variable. One possible cause is that the Label widget is not long enough to display all of the characters. For a list of the variables available in Axure 7, visit http://www.axure.com/forum/tips-tricks-examples/8030-v7-variables-list.html
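For reference, after step 10 of the last recipe, the Edit Text field should contain an expression along these lines (a sketch based on Axure 7's double-bracket expression syntax; verify it against your own Edit Text pop up):

    Welcome. Today is [[Now.getDayOfWeek()]]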


Exploring Advanced Interactions of WebDriver

Packt
21 Jan 2014
9 min read
(For more resources related to this topic, see here.) Understanding actions, build, and perform We know how to take some basic actions, such as clicking on a button and typing text into a textbox; however, there are many scenarios where we have to perform multiple actions at the same time. For example, keeping the Shift button pressed and typing text for uppercase letters, and the dragging and dropping mouse actions. Let's see a simple scenario here. Open the selectable.html file that is attached with this book. You will see tiles of numbers from 1 to 12. If you inspect the elements with Firebug, you will see an ordered list tag (<ol>) and 12 list items (<li>) under it, as shown in the following code: <ol id="selectable" class="ui-selectable"> <li class="ui-state-default ui-selectee" name="one">1</li> <li class="ui-state-default ui-selectee" name="two">2</li> <li class="ui-state-default ui-selectee" name="three">3</li> <li class="ui-state-default ui-selectee" name="four">4</li> <li class="ui-state-default ui-selectee" name="five">5</li> <li class="ui-state-default ui-selectee" name="six">6</li> <li class="ui-state-default ui-selectee" name="seven">7</li> <li class="ui-state-default ui-selectee" name="eight">8</li> <li class="ui-state-default ui-selectee" name="nine">9</li> <li class="ui-state-default ui-selectee" name="ten">10</li> <li class="ui-state-default ui-selectee" name="eleven">11</li> <li class="ui-state-default ui-selectee" name="twelve">12</li> </ol> If you click a number, it's background color changes to orange. Try selecting the 1, 3, and 5 numbered tiles. You do that by holding the Ctrl key + 1 numbered tile + 3 numbered tile + 5 numbered tile. So, this involves performing multiple actions, that is, holding the Ctrl key continuously and clicking on 1, 3, and 5 tiles. How do we perform these multiple actions using WebDriver? The following code demonstrates that: public class ActionBuildPerform {     public static void main(String... args) {       WebDriver driver = new FirefoxDriver();       driver.get("file://C:/selectable.html");       WebElement one = driver.findElement(By.name("one"));       WebElement three = driver.findElement(By.name("three"));      WebElement five = driver.findElement(By.name("five"));       // Add all the actions into the Actions builder.      Actions builder = new Actions(driver);         builder.keyDown( Keys.CONTROL )               .click(one)              .click(three)              .click(five)              .keyUp(Keys.CONTROL);        // Generate the composite action.        Action compositeAction = builder.build();        // Perform the composite action.        compositeAction.perform( );       }    } Now, if you see the code, line number 9 is where we are getting introduced to a new class named Actions. This Actions class is the one that is used to emulate all the complex user events. Using this, the developer of the test script could combine all the necessary user gestures into one composite action. From line 9 to line 14, we have declared all the actions that are to be executed to achieve the functionality of clicking on the numbers 1, 3, and 5. Once all the actions are grouped together, we build that into a composite action. This is contained on line 16. Action is an interface that has only the perform() method, which executes the composite action. Line 18 is where we are actually executing the action using the perform() method. 
So, to make WebDriver perform multiple actions at the same time, you need to follow a three-step process of using the user-facing API of the Actions class to group all the actions, then build the composite action, and then the perform the action. This process can be made into a two-step process as the perform() method internally calls the build() method. So the previous code will look as follows: public class ActionBuildPerform {     public static void main(String... args) {       WebDriver driver = new FirefoxDriver();       driver.get("file://C:/selectable.html");       WebElement one = driver.findElement(By.name("one"));       WebElement three = driver.findElement(By.name("three"));      WebElement five = driver.findElement(By.name("five"));       // Add all the actions into the Actions builder.     Actions builder = new Actions(driver);         builder.keyDown( Keys.CONTROL )               .click(one)              .click(three)              .click(five)              .keyUp(Keys.CONTROL);        // Perform the action.        builder.perform( );   } } In the preceding code, we have directly invoked the perform() method on the Actions instance, which internally calls the build() method to create a composite action before executing it. In the subsequent sections of this article, we will take a closer look at the Actions class. All the actions are basically divided into two categories: mouse-based actions and keyboard-based actions. In the following sections, we will discuss all the actions that are specific to the mouse and keyboard available in the Actions class. Learning mouse-based interactions There are around eight different mouse actions that can be performed using the Actions class. We will see each of their syntax and a working example. The moveByOffset action The moveByOffset method is used to move the mouse from its current position to another point on the web page. Developers can specify the X distance and Y distance the mouse has to be moved. When the page is loaded, generally the initial position of a mouse would be (0, 0), unless there is an explicit focus declared by the page. The API syntax for the moveByOffset method is as follows: public Actions moveByOffset(int xOffSet, int yOffSet) In the preceding code, xOffSet is the input parameter providing the WebDriver the amount of offset to be moved along the x axis. A positive value is used to move the cursor to the right, and a negative value is used to move the cursor to the left. yOffSet is the input parameter providing the WebDriver the amount of offset to be moved along the y axis. A positive value is used to move the cursor down along the y axis and a negative value is used to move the cursor toward the top. When the xOffSet and yOffSet values result in moving the cursor out of the document, a MoveTargetOutOfBoundsException is raised. Let's see a working example of it. The objective of the following code is to move the cursor on to the number 3 tile on the web page:  public class MoveByOffSet{   public static void main(String... 
public class MoveByOffSet {
  public static void main(String... args) {
    WebDriver driver = new FirefoxDriver();
    driver.get("file://C:/Selectable.html");
    WebElement three = driver.findElement(By.name("three"));
    System.out.println("X coordinate: " + three.getLocation().getX() +
                       " Y coordinate: " + three.getLocation().getY());
    Actions builder = new Actions(driver);
    builder.moveByOffset(three.getLocation().getX() + 1, three.getLocation().getY() + 1);
    builder.perform();
  }
}

We have added +1 to the coordinates because, if you observe the element in Firebug, it has a style border of 1 px. Border is a CSS style attribute which, when applied to an element, adds a border of the specified color and thickness around the element. Though the previous code does move your mouse over tile 3, we don't notice it, because no action is performed there. We will see the moveByOffset() method used in combination with the click method shortly. The moveByOffset() method may not work in Mac OSX and may raise a JavaScript error when used independently like the previous code.

The click at current location action

The click method is used to simulate the left-click of your mouse at its current location. This method doesn't really know where or on which element it is clicking; it just blindly clicks wherever the cursor is at that point of time. Hence, this method is used in combination with some other action, rather than independently, to create a composite action. The API syntax for the click method is as follows:

public Actions click()

The click method doesn't have any context about where it is performing its action; hence, it doesn't take any input parameter. Let's see a code example of the click method:

public class MoveByOffsetAndClick {
  public static void main(String... args) {
    WebDriver driver = new FirefoxDriver();
    driver.get("file://C:/Selectable.html");
    WebElement seven = driver.findElement(By.name("seven"));
    System.out.println("X coordinate: " + seven.getLocation().getX() +
                       " Y coordinate: " + seven.getLocation().getY());
    Actions builder = new Actions(driver);
    builder.moveByOffset(seven.getLocation().getX() + 1, seven.getLocation().getY() + 1).click();
    builder.perform();
  }
}

The statement chaining moveByOffset() and click() is where we move the cursor from point (0, 0) to the location of tile 7 and click on it. Because the initial position of the mouse is (0, 0), the X, Y offset provided to the moveByOffset() method is nothing but the location of the tile 7 element. Now, let's try to move the cursor from tile 1 to tile 11, and from there to tile 5, and see how the code looks. Before we get into the code, let's inspect the selectable.html page using Firebug. The following is the style of each tile:

#selectable li {
  float: left;
  font-size: 4em;
  height: 80px;
  text-align: center;
  width: 100px;
}
.ui-state-default, .ui-widget-content .ui-state-default, .ui-widget-header .ui-state-default {
  background: url("images/ui-bg_glass_75_e6e6e6_1x400.png") repeat-x scroll 50% 50% #E6E6E6;
  border: 1px solid #D3D3D3;
  color: #555555;
  font-weight: normal;
}

The three values we are concerned with for our offset movement in the preceding style code are the height (80px), the width (100px), and the border thickness (1px). Use these three factors to calculate the offset needed to navigate from one tile to another.
Note that the border thickness between any two adjacent tiles adds up to 2 px, that is, 1 px from each tile. The following is the code that uses the moveByOffset and click() methods to navigate from tile 1 to tile 11, and from there to tile 5:

public class MoveByOffsetAndClick {
  public static void main(String... args) {
    WebDriver driver = new FirefoxDriver();
    driver.get("file://C:/Selectable.html");
    WebElement one = driver.findElement(By.name("one"));
    WebElement eleven = driver.findElement(By.name("eleven"));
    WebElement five = driver.findElement(By.name("five"));
    int border = 1;
    int tileWidth = 100;
    int tileHeight = 80;
    Actions builder = new Actions(driver);
    // Click on one.
    builder.moveByOffset(one.getLocation().getX() + border, one.getLocation().getY() + border).click();
    builder.build().perform();
    // Click on eleven: two tiles to the right and two rows down.
    builder.moveByOffset(2 * tileWidth + 4 * border, 2 * tileHeight + 4 * border).click();
    builder.build().perform();
    // Click on five: two tiles to the left and one row up.
    builder.moveByOffset(-2 * tileWidth - 4 * border, -tileHeight - 2 * border).click();
    builder.build().perform();
  }
}
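The article opened by mentioning drag-and-drop as another gesture that needs multiple actions at the same time. The following is a minimal sketch of how such a gesture could be composed with the same Actions class; dragAndDrop() is part of the Actions API, but the draggable.html page and the element names here are hypothetical placeholders, not files from this book:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.interactions.Actions;

public class DragAndDropSketch {
  public static void main(String... args) {
    WebDriver driver = new FirefoxDriver();
    // Hypothetical page and element names, for illustration only.
    driver.get("file://C:/draggable.html");
    WebElement source = driver.findElement(By.name("dragMe"));
    WebElement target = driver.findElement(By.name("dropHere"));
    Actions builder = new Actions(driver);
    // dragAndDrop() is itself a composite action: it chains
    // clickAndHold(source), moveToElement(target), and release().
    builder.dragAndDrop(source, target).perform();
  }
}

As with the earlier examples, perform() internally builds the composite action before executing it.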
An overview of architecture and modeling in Cassandra

Packt
21 Jan 2014
5 min read
(For more resources related to this topic, see here.) Cassandra uses a peer-to-peer architecture, unlike a master-slave architecture, which is prone to single point of failure (SPOF) problems. Cassandra is deployed on multiple machines, with each machine acting as a node in a cluster. Data is autosharded, that is, automatically distributed across the nodes of the cluster using key-based sharding. Each key-value data element in Cassandra is replicated across the cluster on other nodes (the default replication factor is 3) for high availability and fault tolerance. If a node goes down, the data can be served from another node holding a copy of the original data. Sharding is an old concept used for distributing data across different systems; it can be horizontal or vertical. In horizontal sharding, in the case of an RDBMS, data is distributed on the basis of rows, with some rows residing on one machine and the other rows residing on other machines. Vertical sharding is similar to columnar storage, where columns can be stored separately in different locations. The Hadoop Distributed File System (HDFS) uses data-volume-based sharding, where a single big file is split and distributed across multiple machines using the block size; so, as an example, if the block size is 64 MB, a 640 MB file will be split into 10 chunks and placed on multiple machines. The same autosharding capability is used when new nodes are added to Cassandra, where the new node becomes responsible for a specific key range of data. The details of which node holds which key ranges are coordinated and shared across the cluster using the gossip protocol, so whenever a client wants to access a specific key, each node can locate the key and its associated data quickly, within a few milliseconds. When the client writes data to the cluster, the data will be written to the nodes responsible for that key range. However, if a node responsible for that key range is down or unreachable, Cassandra uses a clever solution called Hinted Handoff that allows the data to be held temporarily by another node in the cluster and written back to the responsible node once it rejoins the cluster. The replication of data raises the concern of data inconsistency, since the replicas might have different states for the same data. Cassandra uses mechanisms such as anti-entropy and read repair for solving this problem and synchronizing data across the replicas. Anti-entropy is used at the time of compaction, where compaction is a concept borrowed from Google BigTable. Compaction in Cassandra refers to the merging of SSTables; it helps optimize data storage and increases read performance by reducing the number of seeks across SSTables. Another problem that compaction solves is handling deletion in Cassandra. Unlike in a traditional RDBMS, all deletes in Cassandra are soft deletes, which means that the records still exist in the underlying data store but are marked with a special flag so that they do not appear in query results. The records marked as deleted are called tombstone records. Major compactions handle these soft deletes, or tombstones, by removing them from the SSTables in the underlying file stores. Cassandra, like Dynamo, uses a Merkle tree data structure to represent the data state at a column family level in a node. This Merkle tree representation is used during major compactions to find the differences in the data states across nodes and reconcile them.
The Merkle tree, or hash tree, is a data structure in the form of a tree where every non-leaf node is labeled with the hash of the labels of its children, allowing efficient and secure verification of the contents of a large data structure. Cassandra, like Dynamo, falls under the AP part of the CAP theorem and offers a tunable consistency level. Cassandra provides multiple consistency levels, as illustrated in the following table:

Operation | ZERO | ANY | ONE | QUORUM | ALL
Read | Not supported | Not supported | Reads from one node | Reads from a majority of nodes with replicas | Reads from all the nodes with replicas
Write | Asynchronous write | Writes on one node, including hints | Writes on one node with commit log and Memtable | Writes on a majority of nodes with replicas | Writes on all the nodes with replicas

A summary of the features in Cassandra

The following table summarizes the key features of Cassandra with respect to its origins in Google BigTable and Amazon Dynamo:

Feature | Cassandra implementation | Google BigTable | Amazon Dynamo
Architecture | Peer-to-peer architecture, ring-based deployment | No | Yes
Data model | Multidimensional map (row, column, timestamp) -> bytes | Yes | No
CAP theorem | AP with tunable consistency | No | Yes
Storage architecture | SSTables, Memtables | Yes | No
Storage layer | Local filesystem storage | No | No
Fast reads and efficient storage | Bloom filters, compactions | Yes | No
Programming language | Java | No | Yes
Client programming language | Multiple languages supported: Java, PHP, Python, REST, C++, .NET, and so on | Not known | Not known
Scalability model | Horizontal scalability; multiple-node deployment rather than a single machine | Yes | Yes
Version conflicts | Timestamp field (not a vector clock, as usually assumed) | No | No
Hard deletes/updates | Data is always appended using the timestamp field; deletes/updates are soft appends and are cleaned asynchronously as part of major compactions | Yes | No

Summary

Cassandra packs the best features of two technologies proven at scale: Google BigTable and Amazon Dynamo. However, today Cassandra has evolved beyond these origins with new, unique, and enterprise-ready features such as Cassandra Query Language (CQL), support for collection columns, lightweight transactions, and triggers. A short cqlsh sketch after the resource links below illustrates the replication factor and tunable consistency discussed above.

Resources for Article: Further resources on this subject: Basic Concepts and Architecture of Cassandra [Article] About Cassandra [Article] Getting Started with Apache Cassandra [Article]
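To make the replication factor and the tunable consistency concrete, here is a minimal cqlsh sketch; the keyspace and table names are hypothetical examples, while CREATE KEYSPACE, the replication map, and the cqlsh CONSISTENCY command are standard CQL:

-- Replication factor 3 mirrors the default replication mentioned above;
-- the keyspace and table are made-up examples.
CREATE KEYSPACE demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

USE demo;
CREATE TABLE users (id int PRIMARY KEY, name text);

-- Tunable consistency: require acknowledgement from a majority of replicas.
CONSISTENCY QUORUM;
INSERT INTO users (id, name) VALUES (1, 'alice');
SELECT * FROM users WHERE id = 1;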
Server Logs

Packt
20 Jan 2014
9 min read
(For more resources related to this topic, see here.) Monitoring a live system is crucial to maintaining stability and performance, not only to avoid potential failures but also for debugging and tracing back an event. That is why having a system record its activities results in a rich database of logs that can be used for investigation. Logfiles tell a fascinating story to those who can read them; they carry the history of all events, narrated in thorough detail. ArcGIS for Server, like any other system, keeps logfiles for all events, from the basic "connection established" event to the severe "service failed to initiate" event.

Logging levels

Recording events on Server is done at different levels. You can tell Server to log every single event as it happens, or filter it to record only fatal errors. Consequently, recording fine events generates more logs than recording only messages with errors. There are seven logging levels, described in the following Esri table:

Severe: This level logs serious problems that require immediate attention. It only includes severe messages.
Warning: This level logs moderate problems that require attention. It also includes severe-level messages.
Info: This level logs common administrative messages from Server, including messages about service creation and startup. It also includes severe and warning messages.
Fine: This level logs common messages from users of Server, such as the names of operation requests received. It includes severe, warning, and info messages.
Verbose: This level logs messages providing more details about how Server completes an operation, such as whether each layer in a map service was drawn successfully, how fast the layer was drawn, and how long it took Server to access the layer's source data. This level includes severe, warning, info, and fine messages.
Debug: This level logs highly verbose messages designed for developers and support technicians who want to obtain a better understanding of the state of Server when troubleshooting. This level should not be used in a production environment, as it may cause a significant decrease in Server performance.
Off: At this level, logging is turned off. Events are not logged with Server.

As you can see, Debug is the finest level and keeps Server busy with logging events, making other important tasks suffer.

Log analysis

Logs can be viewed and refreshed actively from the ArcGIS for Server Manager window as they are written. To see your current logfiles, go to Manager and activate the Logs tab. Naturally, each GIS server generates its own logs, which are saved by default at C:\arcgisserver\logs. You cannot use a shared folder for this; each GIS server should generate its own logs in its own directory. ArcGIS for Server aggregates those logs into the Server site in a table view with filter options, which allows you to search through the logs. From the View Log Messages panel, click on Query to view the current logfiles, as shown in the following screenshot. You might get messages different from mine; you might not have any messages at all if your current log level is set to record only errors and there are no errors. To change the log level, click on Settings. From the Log Settings form, select Verbose from the Log Level drop-down list. You can set the logs to be cleared automatically if you want to. Keep the rest of the settings intact and click on the Save button. By default, the logs are kept on the GIS server for three months.
If you are planning to keep the logs for longer than that, perhaps for offline analysis, you may want to archive them periodically and delete them. Generally, clearing the logs is better for performance; this will be discussed in the coming pages.

Best practice

Since logs are saved to disk frequently, they generate heavy disk I/O. It is recommended that you point the log path to a local directory, preferably on a Solid State Drive (SSD), for best performance.

Now, let us see how the logs are generated. First of all, let us clear all the logs to start afresh. To do that, click on Delete Logs from ArcGIS for Server Manager and then click on Yes, as shown in the following screenshot. Now that the logs are cleared, we will activate the parcels service by simply visiting the REST URL and then checking the log. Type the REST URL on a new browser page and press Enter. You should see something like the following screenshot if you have access to the service. Go back to your logs and click on Query to refresh the page. You should see one message in the table. You might see other messages from Server that happened to be logged at that particular time, but look for this one:

Level: INFO
Time: Nov 17, 2013, 11:18:26 AM
Message: Request user: Anonymous user, Service: Parcels/MapServer
Source: Rest
Machine: GIS-SERVER01
User Name: Anonymous user

The level is INFO, which marks an informational event; it says a REST request by an anonymous user consumed the Parcels map service, and GIS-SERVER01 served that request. If you have security enabled, you would even know which user consumed the service. Now, let us take it to the next level. On the Parcels REST page, click on ArcGIS JavaScript to view the map with the service loaded. Go back to the log view and click on Query to refresh; make sure the Log Filter dropdown is set to Verbose. A fleet of messages was generated by our last action; we will take a look at each line and analyze it. There are many columns that can be displayed in the log table, and you can show or hide them using the Columns button. For a better view, you can click on the Printer Friendly View link, which will display a text-format version of this table on a new page. This is the log we are going to analyze; we will start from the first line:

INFO, Nov 17, 2013, 11:29:17 AM, Request user: Anonymous user, Service: Parcels/MapServer, Rest

This is a request to consume the service. You can use this identifier to measure how many times a service has been requested.

FINE Nov 17, 2013, 11:29:17 AM REST request received. Request size is 178 characters. Parcels.MapServer

The preceding line is appended if there is more work to follow; it shows the request size in characters.

FINE Nov 17, 2013, 11:29:17 AM Begin ExportMapImage Parcels.MapServer

The process is so fast that we are still within the same second. The preceding line tells us that the Export Map Image process has just started. This is the big process where Server exports an image of the desired area; however, there is still more work to follow to create the actual image. You can start measuring the drawing time of a certain service from this line.

VERBOSE Nov 17, 2013, 11:29:17 AM Beginning of preparation. Parcels.MapServer
VERBOSE Nov 17, 2013, 11:29:17 AM End of preparation. Parcels.MapServer

The preceding two lines highlight the preparation phase of the export image process. It usually happens very fast.
FINE Nov 17, 2013, 11:29:17 AM Extent:1467314.863829,2191233.084700,2574598.328396,2702665.79038; Size:1574,727; Scale:2658831.00 Parcels.MapServer

A map needs initial extent coordinates to draw. At the first call of the service, Server implicitly sends the default full extent to draw the map. After that, the user explicitly requests a new extent each time he or she zooms in or pans the map.

VERBOSE Nov 17, 2013, 11:29:17 AM Beginning of layer draw: Parcels Parcels.MapServer

Since we only have one layer, you will see one occurrence of this line; with more layers, you will see these lines reappear, and there will be more logs to follow.

VERBOSE Nov 17, 2013, 11:29:17 AM Execute Query Parcels.MapServer

I consider this one of the most important lines; this is where the database is accessed and queried to get the actual features. You can take a good measurement here by monitoring how long the query takes to execute. If it takes a long time, you might want to consult your DBA to look into tuning the database.

VERBOSE Nov 17, 2013, 11:29:17 AM Symbol Drawing Parcels.MapServer
VERBOSE Nov 17, 2013, 11:29:17 AM Data Access Parcels.MapServer
VERBOSE Nov 17, 2013, 11:29:17 AM Symbolizing Parcels.MapServer

Symbology work can be executed either on the server or on the client, depending on the client. Since we are running in a browser, the symbology drawing will be carried out in the client's browser by JavaScript. Note that this is only the symbology drawing; the labeling is done in another step.

VERBOSE Nov 17, 2013, 11:29:17 AM Number of features drawn: 10 Parcels.MapServer

This message shows the number of features that have been drawn. This line is useful if you want to know how many features are retrieved for each request and monitor the performance.

VERBOSE Nov 17, 2013, 11:29:17 AM End of layer draw: Parcels Parcels.MapServer

This line signifies the end of the layer drawing; you should now start seeing the map, but with no labels.

VERBOSE Nov 17, 2013, 11:29:17 AM Beginning of labeling phase (labeling and label draw) Parcels.MapServer

Now that the symbology work is done, the labeling will start. This gives you even more measurement indicators for performance.

VERBOSE Nov 17, 2013, 11:29:17 AM Symbol Drawing Parcels.MapServer

This draws the font symbol as described in the layer description, which can be found in the layer properties.

VERBOSE Nov 17, 2013, 11:29:17 AM Number of features drawn: 10 Parcels.MapServer

The preceding line indicates that the features have been labeled successfully.

VERBOSE Nov 17, 2013, 11:29:17 AM End of labeling phase (labeling and label draw) Parcels.MapServer

The preceding line marks the end of the labeling phase.

FINE Nov 17, 2013, 11:29:17 AM End ExportMapImage Parcels.MapServer

The map image has been exported successfully; we will attempt to deliver it to the client after this.

FINE Nov 17, 2013, 11:29:17 AM REST request successfully processed. Response size is 6364 characters. Parcels.MapServer

The last message describes the response, which is a 6 KB map image. My Server is so fast that the whole thing happened within the same second. This is not much by way of log analysis; however, in the next topic we will analyze a much richer log and try to answer some questions.
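Beyond Manager, the same logs can be pulled programmatically, which is handy for the kind of offline analysis mentioned earlier. The following is a minimal Python sketch against the ArcGIS Server Administrator API; the host name and token are placeholders, and the exact parameter names may vary slightly between Server versions, so treat it as an outline rather than a verified recipe:

# Query the Server site logs through the Administrator API (sketch).
# 'gis-server01' and '<admin token>' are placeholders.
import urllib, urllib2, json

params = urllib.urlencode({
    'level': 'VERBOSE',    # same levels as in Manager
    'filter': json.dumps({'services': ['Parcels.MapServer']}),
    'token': '<admin token>',
    'f': 'json'
})
url = 'http://gis-server01:6080/arcgis/admin/logs/query'
response = urllib2.urlopen(url, params)        # POST request
messages = json.loads(response.read())['logMessages']
for m in messages:
    print m['type'], m['time'], m['message']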
Grunt in Action

Packt
20 Jan 2014
5 min read
(For more resources related to this topic, see here.) Step 4 – optimizing our build files At this point, we should have a structured set of source files and can now perform additional transformations on the result. Let's start by downloading the plugins from npm and saving them in our package.json file:

$ npm install --save-dev grunt-contrib-uglify grunt-contrib-cssmin grunt-contrib-htmlmin

Then, at the top of our Gruntfile.js file, where we have loaded our other Grunt plugins, we will load our new additions with:

grunt.loadNpmTasks("grunt-contrib-uglify");
grunt.loadNpmTasks("grunt-contrib-cssmin");
grunt.loadNpmTasks("grunt-contrib-htmlmin");

Scripts

We will start by compressing our scripts. In this example, we use the grunt-contrib-uglify plugin (http://gswg.io#grunt-contrib-uglify), which is a wrapper around the popular UglifyJS library (http://gswg.io#uglifyjs). Now that we have loaded the plugin, which provides the uglify task, we just need to configure it:

uglify: {
  compress: {
    src: "<%= coffee.build.dest %>",
    dest: "<%= coffee.build.dest %>"
  }
}

Here, inside the uglify property, we have made a compress target, which has src and dest set to the same file. Instead of entering the actual filename, we make use of Grunt templates to retrieve the value at the given configuration path (coffee.build.dest), which in this case resolves to build/js/app.js. Grunt templates make it easy to have a single source of truth within our configuration; therefore, if we ever want to change the file path of our JavaScript, we only need to change one configuration entry. Since we have set the source and destination to the same file path, we are, in effect, overwriting our JavaScript with the compressed version of itself. However, if we were writing a JavaScript library instead of a web application, we'd most likely want to compress our app.js file into an app.min.js file, so its users could download either an uncompressed or a compressed version. Running the uglify task with this basic configuration should result in the following app.js file:

(function(){var a,b;a=function(a,b){return a+b},b=function(a,b){return a-b},alert(a(7,b(4,1)))}).call(this);

Generally, this will suffice; however, UglifyJS also offers advanced features. For example, in some cases we might have portions of code that are only used during development. We can remove this unnecessary code with the following technique: define a DEBUG variable and place the debug-related code inside an if block as follows:

if(DEBUG) {
  //do things here
}

Then, if we use the following options object inside our uglify configuration:

options: {
  compress: {
    global_defs: {
      "DEBUG": false
    },
    dead_code: true
  }
}

this results in UglifyJS locking the value of DEBUG to false and also removing the inaccessible (dead) code. Therefore, in addition to compressing code, we also have the ability to completely remove code from our builds. The documentation for this feature can be found at http://gswg.io#grunt-contrib-uglify-conditional-compilation.

Styles

To compress our styles, we use the grunt-contrib-cssmin plugin (http://gswg.io#grunt-contrib-cssmin), which is a wrapper around the clean-css library (http://gswg.io#clean-css).
Since we have installed this plugin, we just need to include the cssmin task configuration:

cssmin: {
  compress: {
    src: "<%= stylus.build.dest %>",
    dest: "<%= stylus.build.dest %>"
  }
}

Similar to our scripts configuration, the only real difference is that we point to the stylus task's output instead of the coffee task's output. When we run grunt cssmin, our css/app.css file should be modified to the following:

html,body{margin:0;padding:0}.content .middle{font-size:16pt}@media (max-width:768px){.content .middle{font-size:8pt}}

Views

Finally, to compress our views, we will use the grunt-contrib-htmlmin plugin (http://gswg.io#grunt-contrib-htmlmin), which is a wrapper around the html-minifier library (http://gswg.io#html-minifier). The htmlmin configuration has a little more to it: since its compression options are disabled by default, we need to enable the rules we wish to use:

htmlmin: {
  options: {
    removeComments: true,
    collapseWhitespace: true,
    collapseBooleanAttributes: true,
    removeAttributeQuotes: true,
    removeRedundantAttributes: true,
    removeOptionalTags: true
  },
  compress: {
    src: "<%= jade.build.dest %>",
    dest: "<%= jade.build.dest %>"
  }
}

Now that our htmlmin task is configured, we can run it with grunt htmlmin, which should modify our build/app.html to the following:

<!DOCTYPE html><html><head><link rel=stylesheet href=css/app.css><body><section class=header>this is the <b>amazing</b> header section</section><section class=content><div class=top>some content with this on top</div><div class=middle>and this in the middle</div><div class=bottom>and this on the bottom</div></section><section class=footer>and this is the footer, with an awesome copyright symbol with the year next to it - © 2013</section><script src=js/app.js></script>

In addition to the GitHub repository, we can read more about html-minifier on Juriy "Kangax" Zaytsev's blog at http://gswg.io#experimenting-with-html-minifier.

Summary

In this article, we performed additional transformations on a set of source files using Grunt; a short sketch following the resource links below shows how to run all three compression tasks with a single command.

Resources for Article: Further resources on this subject: Trapping Errors by Using Built-In Objects in JavaScript Testing [article] Developing Wiki Seek Widget Using Javascript [article] Working with JavaScript in Drupal 6: Part 1 [article]
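As promised in the summary, here is the alias-task sketch; grunt.registerTask is standard Grunt API, while the task name compress is our own choice, not something defined by the plugins:

// Register an alias task that runs all three compression steps in order;
// the name "compress" is arbitrary.
grunt.registerTask("compress", ["uglify", "cssmin", "htmlmin"]);

With this line at the bottom of Gruntfile.js, a single grunt compress command minifies our scripts, styles, and views in one go.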
Working with common architectures

Packt
17 Jan 2014
10 min read
(For more resources related to this topic, see here.) Working with a case structure A case structure is the equivalent of a conditional statement in a text-based programming language. We will create a few case structures that take different kinds of inputs, such as Boolean, numeric, string, enum, and error, to present the different features of a case structure.

How to do it...

We will start with a Boolean case structure. The case structure in the following block diagram takes a Boolean input and consists of False and True cases. The select node is also in the diagram to show that it can be used instead of a case structure when the input is Boolean; the select node chooses which input to output based on the Boolean input, similar to the case structure. The case structure in the following block diagram takes an integer as input. Keep in mind that when the input is a floating point value, it is converted into an integer. The ..-1 case will be executed when the input is less than or equal to -1. The 1, 2 case will be executed when the input is 1 or 2. The 3..5 case will be executed when the input value is between 3 and 5 inclusively. The 6.. case will be executed when the input is greater than or equal to 6. The 0, Default case will be executed when the input is 0 or does not meet the conditions of any of the other cases, which is what Default means here. The following block diagram shows a case structure with a string input. The ''a''..''f'' case will be executed when the ASCII value of the input string is between a and f, including a but excluding f. The ''f''..''j'' case will be executed when the ASCII value of the input string is between f and j, including f but excluding j. If the input value does not meet the conditions of the previous cases, the Default case will run. The following block diagram shows a case structure with an enum input; the cases will be executed based on the input value. The Case 1 case is assigned as the default case: if the input does not meet the conditions of Case 2 and Case 3, Case 1 will run by default. Enums are used for state machines, as they allow for self-documenting code. The value of an enum is also part of its type, so if we add a value in an enum type-def, the change will propagate to the rest of the block diagram. The following block diagram contains an error cluster input. It has two cases: No Error and Error. It is used extensively in SubVIs to bypass the SubVI's processing when an error comes in, so that the data doesn't get corrupted inside the SubVI.

How it works...

The case structure is the main way to make decisions in LabVIEW code. It can take different types of input, such as Boolean, numeric, string, enum, and error cluster. For the Boolean case structure, it is sometimes more convenient to use the select node. It is important that case structures are not nested too many layers deep and that each case is documented.

Working with an event structure

An event structure consists of one or more cases. Code contained within a case is executed when a control event (mouse click, key stroke, and so on) or a user event (software-based event) occurs.

How to do it...

We will create an example that demonstrates using a control event and a user event with the event structure. The following example contains a numeric control (Input Num). When a number is entered, an event is triggered. For the Input Text string control, if a string is entered, an event is triggered, but no text will show up, as all the events (entering text) are discarded.
When the Switch Boolean control is clicked, an event is triggered. If any event is triggered, the string indicator (Action) will update with a string that states which event has occurred. The following screenshot shows the front panel with the controls and indicator. The next screenshot shows the block diagram of the example. On the left, a Create User Event node is used to create a user event that can be generated within the code. The input user event data type is the data type used for data passing for a user event. The label of the data type in our example is Stop User, which will be used as the name of the user event. The while loop at the bottom iterates once every 500 ms, and it will generate a user event if the stop Boolean control is set to true. The event reference is registered with the Register for Events node and fed into the dynamic event terminal, which needs to be enabled by right-clicking on the frame of the event structure and then selecting Show Dynamic Event Terminals. In the top while loop, we see the event case that handles the event when the value of the Switch Boolean control changes. It is good practice to put the control associated with an event case inside that case, so that the control is easy to find and is read by the program every time the event is triggered. When the Boolean value changes, the Action string indicator updates to show which event has occurred. The event case in the following screenshot will be executed when a key is pressed within the Input Num numeric control; the Action string indicator will update to show that the event has occurred. To create the previous event case, right-click on the event structure and select Add Event Case…. The following screenshot shows how to set up the case: select the Input Num numeric control under Event Source and then choose which type of event to handle. The event case in the following screenshot will execute when a key is pressed within the string control, similar to the event case for the numeric control. However, notice the ? behind the label of the Key Down event. This is a filter event, which can discard the outcome of the event, contrary to all the previous event cases, which use notify events. While our example runs, as we enter values into the string control, we see in the Action string indicator that a key down event happened at the string control. The entered values do not appear in the string control, as the events are discarded. Filter events give us the ability to trigger on an event while discarding the event as though it never happened; notify events trigger on an event without interfering with its occurrence. The event case in the following screenshot will execute when a timeout event occurs. In this example, the timeout event will occur after 10000 ms if no other events occur. We can change the timeout value as we wish; if we do not want the timeout event to trigger at all, we can wire a -1 to the timeout input. The event case in the following screenshot will execute when a user event is generated by the bottom while loop (refer to the screenshot of the complete example). Recall that the name of the user event is the label of the data type specified when we created the user event. The user event is generated by the bottom loop when the stop Boolean control is set to true; this way, both loops can stop each other's execution. If we had to create thirty event cases manually, it would be a lot of work. The following screenshot shows an example with thirty Boolean controls.
In this example, we don't have to create thirty event cases, one for each Boolean control. The example gets the references of all the controls on the front panel as an array and registers all of them as one dynamic event. In this event case, if any of the Boolean controls fires a value change event, the case will trigger. To get more resolution, we get the reference of the control from which the event originated and print out a corresponding text.

How it works...

Whenever we find ourselves wanting to use a while loop to poll the user for data, we should use the event structure instead. When the event structure is waiting for an event, it does not consume any CPU resources.

Working with loops

Loops are a common element in programming. In LabVIEW, there are for loops, while loops, and timed loops, with features that facilitate LabVIEW programming. We will go over the for loop; the while loop's features are very similar, so it is omitted.

How to do it...

The for loop is used when a predetermined number of iterations is needed; for an undetermined number of iterations, use the while loop instead. In the following screenshot, all the features of a for loop are shown on the left, and the result of the example is shown on the right. The input of the for loop is an array with the elements 3 and 6. The entry point where the array enters the for loop carries a [] symbol, which means autoindexing. When the array is autoindexed, each iteration of the for loop gets one element of the array in order. Since the loop is autoindexed, the N symbol (number of iterations) in the upper-left corner does not need to be wired; the loop will iterate once per element of the array, in our case two times. If multiple arrays with different lengths are wired into the for loop through autoindexing, the for loop iterates as many times as the size of the array with the least number of elements. The i terminal outputs the current iteration of the loop, and the stop symbol allows the program to stop the loop before completion; to enable the conditional stop, right-click on the for loop and enable the Conditional terminal. The example shows four output options. To select an option, right-click on the output terminal, select Tunnel Mode, and then select the desired option. For the Last Value option, the value from the final iteration is output. For the Indexing option, an array with one element per iteration is output. For the Conditional option, we can set conditions determining which elements are built into the output array. For the Concatenation option, we can concatenate to the end of a 1D array.

How it works...

The for loop iterates over the same code a predetermined number of times. If the Conditional terminal is enabled, the for loop can be stopped prematurely. The for loop has many features, such as outputting the value of the last iteration, indexing through an array (with and without a condition), and concatenating arrays, that are useful for array processing.
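For readers mapping these ideas onto a text-based language, the following is a rough analogue of an autoindexed for loop with a conditional terminal. This is an illustration, not LabVIEW code; the array values match the example above, and the stop condition is made up:

#include <stdio.h>

/* Rough text-language analogue of the LabVIEW for loop above. */
int main(void) {
    double input[] = {3.0, 6.0};                /* the input array */
    int n = sizeof(input) / sizeof(input[0]);   /* N: number of iterations */
    for (int i = 0; i < n; i++) {               /* i: current iteration */
        double element = input[i];              /* autoindexing: one element per pass */
        printf("iteration %d: %.1f\n", i, element);
        if (element > 5.0)                      /* conditional (stop) terminal */
            break;
    }
    return 0;
}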
Adding Raster Layers

Packt
24 Dec 2013
9 min read
(For more resources related to this topic, see here.) This article covers working with raster layers. The collection of sections is composed of the most common use cases regarding the handling of raster layers in the Google Maps JavaScript API. Raster is one of the prime data types in the GIS world, and the Google Maps JavaScript API presents an extensive set of tools to integrate external sources of imagery. The API also enables application developers to change the styling of its own base maps with a practically unlimited palette of choices. This article will introduce you to changing the styling of base maps and then continue with the display of raster data, focusing on external Tiled Map Services (TMS), where the raster layer is composed of organized tiles in the map display (for example, OpenStreetMap). Lastly, a number of raster layers (traffic, transit, weather, bicycle, and Panoramio) are presented along with their integration into the Google Maps JavaScript API.

Styling of Google base maps

Google base maps show a variety of details, such as water bodies (oceans, seas, rivers, lakes, and so on), roads, parks, and built-up areas (residential, industrial, and so on). As you have observed in the first article, all these are shown with predefined cartographic parameters. With the styling capability of base maps, you have a virtually unlimited set of choices for the cartographic representation of base maps. In your web or mobile applications, it is very beneficial to have a diversity of representations (in different color schemes with different emphasis), keeping your audience more involved and letting maps blend nicely into your website design. The following steps will guide you through the process of changing the styling of base maps.

Getting ready…

We can continue from the first part of Article 1 – Google Maps JavaScript API Basics – as we do not need to revisit the basics of getting the map and so on.

How to do it…

Your result will look like a bluish Google Maps if you follow these steps:

Create an array of styles as follows:

var bluishStyle = [
  {
    stylers: [
      { hue: "#009999" },
      { saturation: -5 },
      { lightness: -40 }
    ]
  },
  {
    featureType: "road",
    elementType: "geometry",
    stylers: [
      { lightness: 100 },
      { visibility: "simplified" }
    ]
  },
  {
    featureType: "water",
    elementType: "geometry",
    stylers: [
      { hue: "#0000FF" },
      { saturation: -40 }
    ]
  },
  {
    featureType: "administrative.neighborhood",
    elementType: "labels.text.stroke",
    stylers: [
      { color: "#E80000" },
      { weight: 1 }
    ]
  },
  {
    featureType: "road",
    elementType: "labels.text",
    stylers: [
      { visibility: "off" }
    ]
  },
  {
    featureType: "road.highway",
    elementType: "geometry.fill",
    stylers: [
      { color: "#FF00FF" },
      { weight: 2 }
    ]
  }
]

Add your styles array to the initMap() function. Within the initMap() function, create a StyledMapType object with its name, referencing your styles array:

var bluishStyledMap = new google.maps.StyledMapType(bluishStyle, {name: "Bluish Google Base Maps with Pink Highways"});

Add a mapTypeControlOptions object having the mapTypeIds property to your original mapOptions object:

var mapOptions = {
  center: new google.maps.LatLng(39.9078, 32.8252),
  zoom: 10,
  mapTypeControlOptions: {mapTypeIds: [google.maps.MapTypeId.ROADMAP, 'new_bluish_style']}
};

Relate the new mapTypeId to your StyledMapType object:

map.mapTypes.set('new_bluish_style', bluishStyledMap);

And lastly, set this new mapTypeId to be displayed:

map.setMapTypeId('new_bluish_style');

You can observe the bluish styled Google base maps, as seen above.
How it works...

Firstly, let's observe the bluishStyle array, consisting of one or more google.maps.MapTypeStyle objects arranged like this:

var bluishStyle = [
  {
    featureType: '',
    elementType: '',
    stylers: [
      {hue: ''},
      {saturation: ''},
      {lightness: ''},
      // etc...
    ]
  },
  {
    featureType: '',
    // etc...
  }
]

In this array, you can include several stylers (specified as google.maps.MapTypeStyler objects) for different map features (specified in google.maps.MapTypeStyleFeatureType) and their respective elements, such as their geometries, labels, and so on (specified in google.maps.MapTypeStyleElementType). Map features embrace the types of geographic representations that are found in the base maps; administrative areas, landscape features, points of interest, roads, and water bodies are examples of map features. In addition to these general definitions of map features, the Google Maps JavaScript API enables you to specify subtypes of these features. For example, you may wish to specify which POI types get a changed style by giving the featureType property as follows:

featureType: 'poi.school'

Or, you can be more specific on landscape map features:

featureType: 'landscape.man_made'

More about the google.maps.MapTypeStyleFeatureType object: a complete, detailed listing of the MapTypeStyleFeatureType object can be found at https://developers.google.com/maps/documentation/javascript/reference#MapTypeStyleFeatureType.

Please note that the first element of our styles array does not include any featureType property, making the styler options valid for the entire base map:

{
  stylers: [
    { hue: "#009999" },
    { saturation: -5 },
    { lightness: -40 }
  ]
}

In addition to google.maps.MapTypeStyleFeatureType and its constants, you can also detail google.maps.MapTypeStyleElementType for each map feature: the geometries, geometry strokes and fills, labels, label texts (also text fill and stroke), and label icons. With this opportunity, you can style the geometries of roads with different settings than their related icons. In our article, we have disabled the visibility of all road label texts, not touching their geometry or label icons:

{
  featureType: "road",
  elementType: "labels.text",
  stylers: [
    { visibility: "off" }
  ]
}

More about the google.maps.MapTypeStyleElementType object: a complete, detailed listing of the MapTypeStyleElementType object can be found at https://developers.google.com/maps/documentation/javascript/reference#MapTypeStyleElementType.

For every map feature type and its element type, you can specify a google.maps.MapTypeStyler that covers the hue, lightness, saturation, gamma, inverse_lightness, visibility, and weight options as an array. In our article, the styler options that make the highway roads appear pink are:

{
  featureType: "road.highway",
  elementType: "geometry.fill",
  stylers: [
    { color: "#FF00FF" },
    { weight: 2 }
  ]
}

where the color option in the stylers array is an RGB hex string of a pink tone and weight defines the weight of the feature in pixels.

More about the google.maps.MapTypeStyler object: a complete, detailed listing of the MapTypeStyler object can be found at https://developers.google.com/maps/documentation/javascript/reference#MapTypeStyler.

After defining the styles array in our initMap() function, we have created a StyledMapType object:

var bluishStyledMap = new google.maps.StyledMapType(bluishStyle, {name: "Bluish Google Base Maps with Pink Highways"});

This object takes two arguments: the first one is the styles array; the second one is a google.maps.StyledMapTypeOptions object.
Here, we have included only the name property; however, you can additionally include maxZoom and minZoom properties between which this StyledMapType object will be displayed. You can note that the value we have assigned to the name property is displayed in the interface, as you can see in the screen snapshot of this article. Once we have created the StyledMapType object, we add an additional object called mapTypeControlOptions, which takes a mapTypeIds array, to the mapOptions object, replacing the mapTypeId property:

var mapOptions = {
  center: new google.maps.LatLng(39.9078, 32.8252),
  zoom: 10,
  mapTypeControlOptions: {mapTypeIds: [google.maps.MapTypeId.ROADMAP, 'new_bluish_style']}
};

This enables us to add multiple styles in addition to the standard ROADMAP map type. Next comes the step of linking the mapTypeId ('new_bluish_style') that we have specified in the mapTypeIds array with the StyledMapType object (bluishStyledMap):

map.mapTypes.set('new_bluish_style', bluishStyledMap);

After linking the mapTypeId with the StyledMapType, we can finish with:

map.setMapTypeId('new_bluish_style');

so that the map interface opens with a base map styled to our intentions. In this section, we have covered how to style the base maps according to our taste. We have made use of the google.maps.MapTypeStyle object to select the feature types (google.maps.MapTypeStyleFeatureType) and related elements (google.maps.MapTypeStyleElementType) and styled them using the google.maps.MapTypeStyler object. Then, we have added our StyledMapType object to the map, showing our own styling of the base maps of Google Maps.

There's more...

Using the StyledMapType object is only one way of handling user-defined styles for base maps in the Google Maps JavaScript API. Another, simpler usage is through specifying the styles array in the mapOptions object's styles property:

var mapOptions = {
  center: new google.maps.LatLng(39.9078, 32.8252),
  zoom: 10,
  mapTypeId: google.maps.MapTypeId.ROADMAP,
  styles: bluishStyle
};

Or, after defining our mapOptions object, we can add the styles property later by:

map.setOptions({styles: bluishStyle});

There is an important difference between using a StyledMapType object and the mapOptions object's styles property. Using a StyledMapType object enables us to define a number of (virtually infinite) styles as map types. In addition, these map types can be seen in the map type control in the map interface, so it is very easy for the user to change back and forth between map types. However, if the styles are attached to the map through the mapOptions object's styles property, there is no way for the user to change between multiple styles. In fact, there will be no option in the map type control for selecting your new styles, because they are not attached to a StyledMapType object and therefore cannot be identified as map types.

Styled Map Wizard
http://gmaps-samples-v3.googlecode.com/svn/trunk/styledmaps/wizard/index.html

Preparing style arrays is a detailed job with many cartographic decisions. Finding the correct combination of stylers for each feature and element type would take too much time, especially if your only way of editing is a text editor. Google has done a great job in preparing the Styled Map Wizard to ease this time-consuming task; it enables you to perform all your styling in an interface, so you can preview what you are changing in real time. After you finish your work, you can export your changes as JSON to be used as the styles array in your application.
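The summary below also mentions the addition of external raster data; since those steps fall outside this excerpt, here is a minimal sketch of registering an external tiled map service as a map type. It assumes the standard OpenStreetMap tile URL scheme and would be placed inside initMap() after the map object is created:

// A sketch of adding OpenStreetMap tiles as a map type; the 'osm' id is
// our own choice, and the tile URL follows the usual OSM z/x/y scheme.
var osmMapType = new google.maps.ImageMapType({
  getTileUrl: function(coord, zoom) {
    return "http://tile.openstreetmap.org/" +
           zoom + "/" + coord.x + "/" + coord.y + ".png";
  },
  tileSize: new google.maps.Size(256, 256),
  name: "OSM",
  maxZoom: 18
});
map.mapTypes.set('osm', osmMapType);
map.setMapTypeId('osm');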
Summary In this article, we presented the addition of external raster data through a series of steps alongside Google layers such as the Tile, Traffic, Transit, and Weather layers. Resources for Article: Further resources on this subject: Including Google Maps in your Posts Using Apache Roller 4.0 [Article] QR Codes, Geolocation, Google Maps API, and HTML5 Video [Article] Google Earth, Google Maps and Your Photos: a Tutorial Part II [Article]
Handling the DOM in Dart

Packt
24 Dec 2013
15 min read
(For more resources related to this topic, see here.) A Dart web application runs inside the browser (HTML) page that hosts the app; a single-page web app is more and more common. This page may already contain some HTML elements or nodes, such as <div> and <input>, and your Dart code will manipulate and change them, but it can also create new elements. The user interface may even be built up entirely through code. Besides that, Dart is responsible for implementing interactivity with the user (the handling of events, such as button-clicks) and the dynamic behavior of the program, for example, fetching data from a server and showing it on the screen. We explored some simple examples of these techniques. Compared to JavaScript, Dart has simplified the way in which code interacts with the collection of elements on a web page (called the DOM tree). This article teaches you this new method using a number of simple examples, culminating with a Ping Pong game. The following are the topics:

Finding elements and changing their attributes
Creating and removing elements
Handling events
Manipulating the style of page elements
Animating a game
Ping Pong using style(s)
How to draw on a canvas – Ping Pong revisited

Finding elements and changing their attributes

All web apps import the Dart library dart:html; this is a huge collection of functions and classes needed to program the DOM (look it up at api.dartlang.org). Let's discuss the base classes, which are as follows:

The Navigator class contains info about the browser running the app, such as the product (the name of the browser), its vendor, the MIME types supported by the installed plugins, and also the geolocation object.

Every browser window corresponds to an object of the Window class, which contains, amongst many others, a navigator object; the close, print, scroll, and moveTo methods; and a whole bunch of event handlers, such as onLoad, onClick, onKeyUp, onMouseOver, onTouchStart, and onSubmit. Use an alert to get a pop-up message in the web page, such as in todo_v2.dart:

window.onLoad.listen( (e) => window.alert("I am at your disposal") );

If your browser has tabs, each tab opens in a separate window. From the Window class, you can access local storage or IndexedDB to store app data on the client.

The Window object also contains an object document of the Document class, which corresponds to the HTML document. It is used to query for, create, and manipulate elements within the document. The document also has a list of stylesheets (objects of the StyleSheet class); we will use this in our first version of the Ping Pong game.

Everything that appears on a web page can be represented by an object of the Node class; so not only are tags and their attributes nodes, but also text, comments, and so on. The Document object in a Window class contains a List<Node> element of the nodes in the document tree (DOM) called childNodes.

The Element class, being a subclass of Node, represents web page elements (tags such as <p>, <div>, and so on); it has subclasses, such as ButtonElement, InputElement, TableElement, and so on, each corresponding to a specific HTML tag, such as <button>, <input>, <table>, and so on. Every element can have embedded tags, so it contains a List<Element> element called children.
Let us make this more concrete by looking at todo_v2.dart, solely for didactic purposes—the HTML file contains an <input> tag with the id value task, and a <ul> tag with the id value list:

<div><input id="task" type="text" placeholder="What do you want to do?"/>
  <p id="para">Initial paragraph text</p>
</div>
<div id="btns">
  <button class="backgr">Toggle background color of header</button>
  <button class="backgr">Change text of paragraph</button>
  <button class="backgr">Change text of placeholder in input field and the background color of the buttons</button>
</div>
<div><ul id="list"/>
</div>

In our Dart code, we declare the following objects representing them:

InputElement task;
UListElement list;

The following list object contains objects of the LIElement class, which are made in addItem():

var newTask = new LIElement();

You can see the different elements and their layout in the following screenshot: The screen of todo_v2

Finding elements

Now we must bind these objects to the corresponding HTML elements. For that, we use the top-level functions querySelector and querySelectorAll; for example, the InputElement task is bound to the <input> tag with the id value task using:

task = querySelector('#task');

Both functions take a string (a CSS selector) that identifies the element, where the id value task will be preceded by #. CSS selectors are patterns that are used in .css files to select elements that you want to style. There are a number of them, but generally we only need a few basic selectors (for an overview, visit http://www.w3schools.com/cssref/css_selectors.asp):

If the element has an id attribute with the value abc, use querySelector('#abc')
If the element has a class attribute with the value abc, use querySelector('.abc')
To get a list of all elements with the tag <button>, use querySelectorAll('button')
To get a list of all text elements, use querySelectorAll('input[type="text"]')

and all sorts of combinations of selectors; for example, querySelectorAll('#btns .backgr') will get a list of all elements with the backgr class that are inside a tag with the id value btns. These functions are defined on the document object of the web page, so in code you will also see document.querySelector() and document.querySelectorAll().
Changing the attributes of elements

All objects of the Element class have properties in common, such as classes, hidden, id, innerHtml, style, text, and title; specialized subclasses have additional properties, such as value for a ProgressElement. Changing the value of a property in an element makes the browser re-render the page to show the changed user interface. Experiment with todo_v2.dart:

import 'dart:html';

InputElement task;
UListElement list;
Element header;
Element el;    // declared top-level so that replacePar() can access it
List<ButtonElement> btns;

main() {
  task = querySelector('#task');
  list = querySelector('#list');
  task.onChange.listen( (e) => addItem() );
  // find the h2 header element:
  header = querySelector('.header');                                      (1)
  // find the buttons:
  btns = querySelectorAll('button');                                      (2)
  // attach event handler to 1st and 2nd buttons:
  btns[0].onClick.listen( (e) => changeColorHeader() );                   (3)
  btns[1].onDoubleClick.listen( (e) => changeTextPara() );                (4)
  // another way to get the same buttons with class backgr:
  var btns2 = querySelectorAll('#btns .backgr');                          (5)
  btns2[2].onMouseOver.listen( (e) => changePlaceHolder() );              (6)
  btns2[2].onClick.listen( (e) => changeBtnsBackColor() );                (7)
  addElements();
}

changeColorHeader() => header.classes.toggle('header2');                  (8)
changeTextPara() => querySelector('#para').text = "You changed my text!"; (9)
changePlaceHolder() => task.placeholder = 'Come on, type something in!';  (10)
changeBtnsBackColor() => btns.forEach( (b) => b.classes.add('btns_backgr')); (11)

void addItem() {
  var newTask = new LIElement();                                          (12)
  newTask.text = task.value;                                              (13)
  newTask.onClick.listen( (e) => newTask.remove());
  task.value = '';
  list.children.add(newTask);                                             (14)
}

addElements() {
  var ch1 = new CheckboxInputElement();                                   (15)
  ch1.checked = true;
  document.body.children.add(ch1);                                        (16)
  var par = new Element.tag('p');                                         (17)
  par.text = 'I am a newly created paragraph!';
  document.body.children.add(par);
  el = new Element.html('<div><h4><b>A small divsection</b></h4></div>'); (18)
  document.body.children.add(el);
  var btn = new ButtonElement();
  btn.text = 'Replace';
  btn.onClick.listen(replacePar);
  document.body.children.add(btn);
  var btn2 = new ButtonElement();
  btn2.text = 'Delete all list items';
  btn2.onClick.listen( (e) => list.children.clear() );                    (19)
  document.body.children.add(btn2);
}

replacePar(Event e) {
  var el2 = new Element.html('<div><h4><b>I replaced this div!</b></h4></div>');
  el.replaceWith(el2);                                                    (20)
}

Comments for the numbered lines are as follows:

We find the <h2> element via its class (line (1)).
We get a list of all the buttons via their tag (line (2)).
We attach an event handler to the Click event of the first button, which toggles the class of the <h2> element, changing its background color at each click (lines (3) and (8)).
We attach an event handler to the DoubleClick event of the second button, which changes the text in the <p> element (lines (4) and (9)).
We get the same list of buttons via a combination of CSS selectors (line (5)).
We attach an event handler to the MouseOver event of the third button, which changes the placeholder in the input field (lines (6) and (10)).
We attach a second event handler to the third button; clicking on it changes the background color of all buttons by adding a new CSS class to their classes collection (lines (7) and (11)).

Every HTML element also has an attributes Map where the keys are the attribute names; you can use this Map to change an attribute, for example:

btn.attributes['disabled'] = 'true';

Please refer to the following document to see which attributes apply to which element: https://developer.mozilla.org/en-US/docs/HTML/Attributes
A web page can start its life with an initial DOM tree, marked up in its HTML file, and then the tree can be changed using code; or, it can start off with an empty tree, which is then entirely created using code in the app, that is, every element is created through a constructor and its properties are set in code. Elements are subclasses of Node; they take up a rectangular space on the web page (with a width and height). They have, at most, one parent Element in which they are enclosed, and can contain a list of Elements, their children (you can check this with the function hasChildNodes(), which returns a bool). Furthermore, they can receive events. Elements must first be created before they can be added to the children list of a parent element. Elements can also be removed from a node. When elements are added or removed, the DOM tree is changed and the browser has to re-render the web page. An Element object is either bound to an existing node with the querySelector method of the document object, or it can be created with its specific constructor, such as that in line (12) (where newTask belongs to the LIElement class, a list item element) or line (15). If useful, we could also specify the id in the code, such as in newTask.id = 'newTask'; If you need a DOM element in different places in your code, you can improve the performance of your app by querying it only once, binding it to a variable, and then working with that variable. After being created, the element properties can be given a value, such as that in line (13). Then, the object (let's name it elem) is added to an existing node, for example, to the body node with document.body.children.add(elem), as in line (16), or to another existing node, such as list in line (14). Elements can also be created with two named constructors from the Element class: Element.tag('tagName'), as in line (17), where tagName is any valid HTML tag, such as <p>, <div>, <input>, <select>, and so on; and Element.html('htmlSnippet'), as in line (18), where htmlSnippet is any valid combination of HTML tags. Use the first constructor if you want to create everything dynamically at runtime; use the second constructor when you know what the HTML for that element will be like and you won't need to reference its child elements in your code (but by using the querySelector method, you can always find them if needed). You can leave the type of the created object open using var, or use the type Element, or use the specific class name (such as InputElement); use the latter if you want your IDE to give you more specific code completion and warnings/errors against the possible misuse of types. When hovering over a list item, the item changes color and the cursor becomes a hand icon; this could be done in code (try it), but it is easier to do in the CSS file: #list li:hover { color: aqua; font-size: 20px; font-weight: bold; cursor: pointer; } To delete an Element elem from the DOM tree, use elem.remove(). We can delete list items by clicking on them, which is coded with only one line: newTask.onClick.listen( (e) => newTask.remove() ); To remove all the list items, use the List function clear(), such as in line (19). Replace elem with another element elem2 using elem.replaceWith(elem2), such as in line (20). Handling events When the user interacts with the web form, such as when clicking on a button or filling in a text field, an event fires; any element on the page can have events.
The DOM contains hooks for these events, and the developer can write code (an event handler) that the browser must execute when the event fires. How do we add an event handler to an element (which is also called registering an event handler)? The general format is: element.onEvent.listen( event_handler ) (The spaces are not needed, but can be used to make the code more readable.) Examples of events are Click, Change, Focus, Drag, MouseDown, Load, KeyUp, and so on. View this as the browser listening to events on elements and, when they occur, executing the indicated event handler. The argument that is passed to the listen() method is a callback function and has to be of the type EventListener; it has the signature: void EventListener(Event e) The event handler gets passed an Event parameter, succinctly called e or ev, that contains more specific info on the event, such as which mouse button was pressed in the case of a mouse event, on which object the event took place using e.target, and so on. If an event is not handled on the target object itself, you can still write the event handler in its parent, or its parent's parent, and so on up the DOM tree, where it will also get executed; in such a situation, the target property can be useful in determining the original event object. In todo_v2.dart, we examine the various event-coding styles. Using the general format, the Click event on btns2[2] can be handled using the following code: btns2[2].onClick.listen( changeBtnsBackColor ); where changeBtnsBackColor is the event handler (also called the callback function). This function is written as: changeBtnsBackColor(Event e) => btns.forEach( (b) => b.classes.add('btns_backgr')); Another, shorter way to write this (such as in line (7)) is: btns2[2].onClick.listen( (e) => changeBtnsBackColor() ); changeBtnsBackColor() => btns.forEach( (b) => b.classes.add('btns_backgr')); When a Click event occurs on btns2[2], the handler changeBtnsBackColor is called. In case the event handler needs more code lines, use the brace syntax as follows: changeBtnsBackColor(Event e) { btns.forEach( (b) => b.classes.add('btns_backgr')); // possibly other code } Familiarize yourself with these different ways of writing event handlers. Of course, when the handler needs only one line of code, there is no need for a separate method, as in the following code: newTask.onClick.listen( (e) => newTask.remove() ); For clarity, we use the function expression syntax => whenever possible, but you can inline the event handler and use the brace syntax along with an anonymous function, thus avoiding a separate method. So instead of executing the following code: task.onChange.listen( (e) => addItem() ); we could have executed: task.onChange.listen( (e) { var newTask = new LIElement(); newTask.text = task.value; newTask.onClick.listen( (e) => newTask.remove()); task.value = ''; list.children.add(newTask); } ); JavaScript developers will find the preceding code very familiar, but it is also used frequently in Dart code, so make yourself acquainted with the code pattern ( (e) {...} );. The following is an example of how you can respond to key events (in this case, on the window object) using the keyCode and ctrlKey properties of the event e: window.onKeyPress.listen( (e) { if (e.keyCode == KeyCode.ENTER) { window.alert("You pressed ENTER"); } if (e.ctrlKey && e.keyCode == CTRL_ENTER) { window.alert("You pressed CTRL + ENTER"); } }); In this code, the constant const int CTRL_ENTER = 10; is used. (The list of keyCodes can be found at http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes.)
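The listen() method also returns a StreamSubscription object; keeping a reference to it lets you detach the handler again later, which the examples here do not need but which is useful in larger apps. A small sketch (btn stands for any existing ButtonElement):

import 'dart:async';
import 'dart:html';

StreamSubscription sub;

void attach(ButtonElement btn) {
  // remember the subscription so we can cancel it later
  sub = btn.onClick.listen( (e) => window.alert('clicked!') );
}

void detach() {
  sub.cancel(); // the button no longer reacts to clicks
}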
Manipulating the style of page elements CSS style properties can be changed in code as well: every element elem has a classes property, which is a set of CSS classes. You can add a CSS class as follows: elem.classes.add('cssclass'); as we did in changeBtnsBackColor (line (11)); by adding this class, the new style is immediately applied to the element. Or, we can remove it to take the style away: elem.classes.remove('cssclass'); The toggle method (line (8)) elem.classes.toggle('cssclass'); is a combination of both: the first call applies (adds) the cssclass, the next call removes it, the call after that applies it again, and so on. Working with CSS classes is the best way to change the style, because the CSS definition is kept separate from the HTML markup. If you do want to change the style of an element directly, use its style property elem.style, where the cascade style of coding is very appropriate, for example: newTask.style ..fontWeight = 'bold' ..fontSize = '3em' ..color = 'red';
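As a closing sketch that contrasts the two techniques (highlight is a hypothetical CSS class you would define in your own .css file):

// preferred: toggle a predefined CSS class, keeping the styling in the CSS file
void mark(Element elem) => elem.classes.toggle('highlight');

// one-off tweaks: set style properties directly with the cascade operator
void emphasize(Element elem) {
  elem.style
    ..fontStyle = 'italic'
    ..backgroundColor = 'yellow';
}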

GLSL – How to Set up the Shaders from the Host Application Side

Packt
23 Dec 2013
5 min read
(For more resources related to this topic, see here.) Setting up geometry Let's say that our mesh (a quad formed by two triangles) has the following information: vertex positions and texture coordinates. Also, we will arrange the data interleaved in the array.

struct MyVertex { float x, y, z; float s, t; };
MyVertex geometry[] = {{0,1,0,0,1}, {0,0,0,0,0}, {1,1,0,1,1}, {1,0,0,1,0}};
// Let's create the objects that will encapsulate our geometry data
GLuint vaoID, vboID;
glGenVertexArrays(1, &vaoID);
glBindVertexArray(vaoID);
glGenBuffers(1, &vboID);
glBindBuffer(GL_ARRAY_BUFFER, vboID);
// Attach our data to the OpenGL objects
glBufferData(GL_ARRAY_BUFFER, 4 * sizeof(MyVertex), &geometry[0].x, GL_DYNAMIC_DRAW);
// Specify the format of each vertex attribute
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(MyVertex), NULL);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(MyVertex), (void*)(sizeof(float)*3));

At this point, we have created the OpenGL objects, correctly set up each vertex attribute's format, and uploaded the data to the GPU. Setting up textures Setting up textures follows the same pattern: first create the OpenGL objects, then fill in the data and describe the format in which it is provided.

const int width = 512;
const int height = 512;
const int bpp = 32;
struct RGBColor { unsigned char R, G, B, A; };
RGBColor textureData[width * height];
for(size_t y = 0; y < height; ++y)
    for(size_t x = 0; x < width; ++x)
        textureData[y * width + x] = …; // fill your texture here
// Create GL object
GLuint texID;
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_2D, texID);
// Fill up the data and set the texel format
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, &textureData[0].R);
// Set texture sampling parameters: interpolation and clamping modes
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

And that's all about textures. Later we will bind the texture to a slot, and that slot number is what we will pass to the shader so it can locate the texture. Setting up shaders In order to set up the shaders, we have to carry out some steps: load the source code, compile it and associate it with a shader object, and link all shaders together into a program object.
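Before walking through the inline version, here is a small helper, offered as a sketch only (the function name compileShader is our own, not from the article), that wraps the compile step with robust error-log handling: query the log length first, then allocate the buffer.

GLuint compileShader(GLenum type, const char* source) {
    GLuint id = glCreateShader(type);
    glShaderSource(id, 1, &source, NULL);
    glCompileShader(id);
    GLint status = 0;
    glGetShaderiv(id, GL_COMPILE_STATUS, &status);
    if (!status) {
        // Ask for the log size before allocating the buffer
        GLint length = 0;
        glGetShaderiv(id, GL_INFO_LOG_LENGTH, &length);
        char* infolog = new char[length + 1];
        glGetShaderInfoLog(id, length, NULL, infolog);
        infolog[length] = 0;
        printf("Shader compile errors / warnings: %s\n", infolog);
        delete [] infolog;
        glDeleteShader(id);
        return 0; // signal failure to the caller
    }
    return id;
}

With such a helper, the vertex and fragment shaders below could be created with compileShader(GL_VERTEX_SHADER, ...) and compileShader(GL_FRAGMENT_SHADER, ...).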
Here is the full inline sequence:

char* vs[1];
vs[0] = "#version 430\n layout (location = 0) in vec3 PosIn; layout (location = 1) in vec2 TexCoordIn; smooth out vec2 TexCoordOut; uniform mat4 MVP; void main() { TexCoordOut = TexCoordIn; gl_Position = MVP * vec4(PosIn, 1.0); }";
char* fs[1];
fs[0] = "#version 430\n uniform sampler2D Image; smooth in vec2 TexCoordOut; out vec4 FBColor; void main() { FBColor = texture(Image, TexCoordOut); }";
// Upload source code and compile it
GLuint pId, vsId, fsId;
vsId = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vsId, 1, (const char**)vs, NULL);
glCompileShader(vsId);
// Check for compilation errors
GLint status = 0, bufferLength = 0;
glGetShaderiv(vsId, GL_COMPILE_STATUS, &status);
if(!status) {
    glGetShaderiv(vsId, GL_INFO_LOG_LENGTH, &bufferLength);
    char* infolog = new char[bufferLength + 1];
    glGetShaderInfoLog(vsId, bufferLength, NULL, infolog);
    infolog[bufferLength] = 0;
    printf("Shader compile errors / warnings: %s\n", infolog);
    delete [] infolog;
}

Note that the fragment shader's input variable (TexCoordOut) must have the same name as the vertex shader's output for the two stages to link. The process for the fragment shader is exactly the same. The only change is that the shader object must be created as fsId = glCreateShader(GL_FRAGMENT_SHADER);

// Now let's proceed to link the shaders into the program object
pId = glCreateProgram();
glAttachShader(pId, vsId);
glAttachShader(pId, fsId);
glLinkProgram(pId);
glGetProgramiv(pId, GL_LINK_STATUS, &status);
if(!status) {
    glGetProgramiv(pId, GL_INFO_LOG_LENGTH, &bufferLength);
    char* infolog = new char[bufferLength + 1];
    glGetProgramInfoLog(pId, bufferLength, NULL, infolog);
    infolog[bufferLength] = 0;
    printf("Shader linking errors / warnings: %s\n", infolog);
    delete [] infolog;
}
// We do not need the vs and fs objects anymore, so it is safe to mark them for deletion.
glDeleteShader(vsId);
glDeleteShader(fsId);

The last things to upload to the shader are two uniform variables: one for the view-projection matrix and one for the texture. They are set in the following way:

// First bind the program object where we want to upload the variables
glUseProgram(pId);
// Obtain the "slot number" where the uniform is located
GLint location = glGetUniformLocation(pId, "MVP");
float mvp[16] = {1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1};
// Set the data into the uniform's location
glUniformMatrix4fv(location, 1, GL_FALSE, mvp);
// Activate texture slot 0
glActiveTexture(GL_TEXTURE0);
// Bind the texture to the active slot
glBindTexture(GL_TEXTURE_2D, texID);
location = glGetUniformLocation(pId, "Image");
// Upload the texture's slot number to the uniform variable
int imageSlot = 0;
glUniform1i(location, imageSlot);

And that's all. For the other types of shaders, the process is the same: create the shader object, upload the source code, compile and link, but using the proper OpenGL types such as GL_GEOMETRY_SHADER or GL_COMPUTE_SHADER. The last step, to draw all these things, is to make them active and issue the draw call:

glBindVertexArray(vaoID);
glBindTexture(GL_TEXTURE_2D, texID);
glUseProgram(pId);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

Resources for Article: Further resources on this subject: The Basics of GLSL 4.0 Shaders [Article] GLSL 4.0: Using Subroutines to Select Shader Functionality [Article] Getting Started with GLSL [Article]

Apache Camel: Transformation

Packt
20 Dec 2013
23 min read
(For more resources related to this topic, see here.) The latest version of the example code for this article can be found at http://github.com/CamelCookbook/camel-cookbook-examples. You can also download the example code files for all Packt books you have purchased from your account at https://www.packtpub.com. If you purchased this book elsewhere, you can visit https://www.packtpub.com/books/content/support and register to have the files e-mailed directly to you. In this article we will explore a number of ways in which Camel performs message content transformation. Let us first look at some important concepts regarding the transformation of messages in Camel: Using the transform statement. This allows you to reference Camel Expression Language code within the route to do message transformations. Calling a templating component, such as Camel's XSLT or Velocity template style components. This will typically reference an external template resource that is used in transforming your message. Calling a Java method (for example, a bean reference), defined by you, within a Camel route to perform the transformation. This is a special case processor that can invoke any referenced Java object method. Camel's Type Converter capability that can automatically cast data from one type to another transparently within your Camel route. This capability is extensible, so you can add your own Type Converters. Camel's Data Format capability that allows us to use built-in, or add our own, higher-order message format converters. A Camel Data Format goes beyond simple data type converters, which handle simple data type translations such as String to int, or File to String. Data Formats are used to translate between a low-level representation (XML) and a high-level one (Java objects). Other examples include encrypting/decrypting data and compressing/decompressing data. For more, see http://camel.apache.org/data-format.html. A number of Camel architectural concepts are used throughout this article. Full details can be found at the Apache Camel website at http://camel.apache.org. The code for this article is contained within the camel-cookbook-transformation module of the examples. Transforming using a Simple Expression When you want to transform a message in a relatively straightforward way, you use Camel's transform statement along with one of the Expression Languages provided by the framework. For example, Camel's Simple Expression Language provides you with a quick, inline mechanism for straightforward transformations. This recipe will show you how to use Camel's Simple Expression Language to transform the message body. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.simple package. Spring XML files are located under src/main/resources/META-INF/spring and are prefixed with simple. How to do it... In a Camel route, use a transform DSL statement containing the Expression Language code to do your transformation. In the XML DSL, this is written as follows: <route> <from uri="direct:start"/> <transform> <simple>Hello ${body}</simple> </transform> </route> In the Java DSL, the same route is expressed as: from("direct:start") .transform(simple("Hello ${body}")); In this example, the message transformation prefixes the incoming message with the phrase Hello using the Simple Expression Language. The processing step after the transform statement will see the transformed message content in the body of the exchange. How it works...
Camel's Simple Expression Language is quite good at manipulating String content: it has access to all aspects of the message being processed, and offers rich String and logical operators. The result of your Simple Expression becomes the new message body after the transform step. This includes predicates: if you use Simple's logical operators to evaluate a true or false condition, the result of that Boolean operation becomes the new message body, containing the String "true" or "false". The advantage of using a distinct transform step within a route, as opposed to embedding it within a processor, is that the logic is clearly visible to other programmers. Ensure that the expression embedded within your route is kept simple so as to not distract the next developer from the overall purpose of the integration. It is best to move more complex (or just lengthy) transformation logic into its own subroute, and invoke it using direct: or seda:. There's more... The transform statement will work with any Expression Language available in Camel, so if you need more powerful message processing capabilities you can leverage scripting languages such as Groovy or JavaScript (among many others) as well. The Transforming inline with XQuery recipe will show you how to use the XQuery Expression Language to do transformations on XML messages. See also Message Translator: http://camel.apache.org/message-translator.html Camel Expression capabilities: http://camel.apache.org/expression.html Camel Simple Expression Language: http://camel.apache.org/simple.html Languages supported by Camel: http://camel.apache.org/languages.html The Transforming inline with XQuery recipe   Transforming inline with XQuery Camel supports the use of Camel's XQuery Expression Language along with the transform statement as a quick and easy way to transform an XML message within a route. This recipe will show you how to use an XQuery Expression to do in-route XML transformation. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.xquery package. Spring XML files are located under src/main/resources/META-INF/spring and prefixed with xquery. To use the XQuery Expression Language, you need to add a dependency element for the camel-saxon library, which provides the implementation for the XQuery Expression Language. Add the following to the dependencies section of your Maven POM: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-saxon</artifactId> <version>${camel-version}</version> </dependency> How to do it... In the Camel route, specify a transform statement followed by the XQuery Expression Language code to do your transformation. In the XML DSL, this is written as: <route> <from uri="direct:start"/> <transform> <xquery> <books>{ for $x in /bookstore/book where $x/price>30 order by $x/title return $x/title }</books> </xquery> </transform> </route> When using the XML DSL, remember to XML encode the XQuery embedded XML elements. Therefore, < becomes &lt; and > becomes &gt;. In the Java DSL, the same route is expressed as: from("direct:start") .transform(xquery("<books>{ for $x in /bookstore/book " + "where $x/price>30 order by $x/title " + "return $x/title }</books>")); Feed the following input XML message through the transformation: <bookstore> <book category="COOKING"> <title lang="en">Everyday Italian</title> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> </book> <book category="CHILDREN"> <title lang="en">Harry Potter</title> <author>J K.
Rowling</author> <year>2005</year> <price>29.99</price> </book> <book category="PROGRAMMING"> <title lang="en">Apache Camel Developer's Cookbook</title> <author>Scott Cranton</author> <author>Jakub Korab</author> <year>2013</year> <price>49.99</price> </book> <book category="WEB"> <title lang="en">Learning XML</title> <author>Erik T. Ray</author> <year>2003</year> <price>39.95</price> </book> </bookstore> The resulting message will be: <books> <title lang="en">Apache Camel Developer's Cookbook</title> <title lang="en">Learning XML</title> </books> The processing step after the transform statement will see the transformed message content in the body of the exchange. How it works... Camel's XQuery Expression Language is a good way to inline XML transformation code within your route. The result of the XQuery Expression becomes the new message body after the transform step. All of the message's body, headers, and properties are made available to the XQuery Processor, so you can reference them directly within your XQuery statement. This provides you with a powerful mechanism for transforming XML messages. If you are more comfortable with XSLT, take a look at the Transforming with XSLT recipe. Inlining the transformation within your integration route can sometimes be an advantage, as you can clearly see what is being changed. However, when the transformation expression becomes so complex that it starts to overwhelm the integration route, you may want to consider moving the transformation expression outside of the route. See the Transforming using a Simple Expression recipe for another inline transformation example, and see the Transforming with XSLT recipe for an example of externalizing your transformation. You can fetch the XQuery Expression from an external file using Camel's resource reference syntax. To reference an XQuery file on the classpath you can specify: <transform> <xquery>resource:classpath:/path/to/myxquery.xml</xquery> </transform> This is equivalent to using XQuery as an endpoint: <to uri="xquery:classpath:/path/to/myxquery.xml"/>   There's more... The XQuery Expression Language allows you to pass in headers associated with the message. These will show up as XQuery variables that can be referenced within your XQuery statements. Continuing the previous example, to allow the price threshold of the books being filtered to be passed in with the message, that is, to parameterize the XQuery, you can modify the XQuery statement as follows: <transform> <xquery> declare variable $in.headers.myParamValue as xs:integer external; <books value='{$in.headers.myParamValue}'>{ for $x in /bookstore/book where $x/price>$in.headers.myParamValue order by $x/title return $x/title }</books> </xquery> </transform> Message headers will be associated with an XQuery variable called in.headers.<name of header>. To use this in your XQuery, you need to explicitly declare an external variable of the same name and XML Schema (xs:) type as the value of the message header. The transform statement will work with any Expression Language enabled within Camel, so if you need more powerful message processing capabilities you can leverage scripting languages such as Groovy or JavaScript (among many others) as well. The Transforming using a Simple Expression recipe will show you how to use the Simple Expression Language to do transformations on String messages.
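As a hypothetical usage sketch (the direct:start endpoint is taken from the example route, while the bookstoreXml variable is our own assumption), you could exercise the parameterized route from a ProducerTemplate, supplying the price threshold as a message header:

ProducerTemplate template = camelContext.createProducerTemplate();
String result = template.requestBodyAndHeader(
    "direct:start",       // the route entry point from the example
    bookstoreXml,         // the <bookstore> document as a String
    "myParamValue", 30,   // header read by the XQuery as $in.headers.myParamValue
    String.class);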
See also Message Translator: http://camel.apache.org/message-translator.html Camel Expression capabilities: http://camel.apache.org/expression.html Camel XQuery Expression Language: http://camel.apache.org/xquery.html XQuery language: http://www.w3.org/XML/Query/ Languages supported by Camel: http://camel.apache.org/languages.html The Transforming with XSLT recipe The Transforming using a Simple Expression recipe   Transforming with XSLT When you want to transform an XML message using XSLT, use Camel's XSLT Component. This is similar to the Transforming inline with XQuery recipe except that there is no XSLT Expression Language, so it can only be used as an endpoint. This recipe will show you how to transform a message using an external XSLT resource. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.xslt package. Spring XML files are located under src/main/resources/META-INF/spring and prefixed with xslt. How to do it... In a Camel route, add the xslt processor step into the route at the point where you want the XSLT transformation to occur. The XSLT file must be referenced as an external resource and, depending on where the file is located, prefixed with either classpath: (the default if no prefix is used), file:, or http:. In the XML DSL, this is written as: <route> <from uri="direct:start"/> <to uri="xslt:books.xslt"/> </route> In the Java DSL, the same route is expressed as: from("direct:start") .to("xslt:books.xslt"); The next processing step in the route will see the transformed message content in the body of the exchange. How it works... The following example shows how the preceding steps will process an XML file. Consider the following input XML message: <bookstore> <book category="COOKING"> <title lang="en">Everyday Italian</title> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> </book> <book category="CHILDREN"> <title lang="en">Harry Potter</title> <author>J K. Rowling</author> <year>2005</year> <price>29.99</price> </book> <book category="PROGRAMMING"> <title lang="en">Apache Camel Developer's Cookbook</title> <author>Scott Cranton</author> <author>Jakub Korab</author> <year>2013</year> <price>49.99</price> </book> <book category="WEB"> <title lang="en">Learning XML</title> <author>Erik T. Ray</author> <year>2003</year> <price>39.95</price> </book> </bookstore> Process this with the following XSLT contained in books.xslt: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output omit-xml-declaration="yes"/> <xsl:template match="/"> <books> <xsl:apply-templates select="/bookstore/book/title[../price>30]"> <xsl:sort select="."/> </xsl:apply-templates> </books> </xsl:template> <xsl:template match="node()|@*"> <xsl:copy> <xsl:apply-templates select="node()|@*"/> </xsl:copy> </xsl:template> </xsl:stylesheet> The result will appear as follows: <books> <title lang="en">Apache Camel Developer's Cookbook</title> <title lang="en">Learning XML</title> </books> The Camel XSLT Processor internally runs the message body through a registered Java XML transformer using the XSLT file referenced by the endpoint.
This processor uses Camel's Type Converter capabilities to convert the input message body type to one of the supported XML source models, in the following order of priority: StAXSource (off by default; this can be enabled by setting allowStAX=true on the endpoint URI), SAXSource, StreamSource, DOMSource. Camel's Type Converter can convert from most input types (String, File, byte[], and so on) to one of the XML source types for most XML content loaded through other Camel endpoints, with no extra work on your part. The output data type for the message is, by default, a String, and is configurable using the output parameter on the xslt endpoint URI. There's more... The XSLT Processor passes in headers, properties, and parameters associated with the message. These will show up as XSLT parameters that can be referenced within your XSLT statements. You can pass the price threshold used to select the books as a parameter to the XSLT template; to do so, modify the previous XSLT as follows: <xsl:param name="myParamValue"/> <xsl:template match="/"> <books> <xsl:attribute name="value"> <xsl:value-of select="$myParamValue"/> </xsl:attribute> <xsl:apply-templates select="/bookstore/book/title[../price>$myParamValue]"> <xsl:sort select="."/> </xsl:apply-templates> </books> </xsl:template> The Exchange instance will be associated with a parameter called exchange; the IN message with a parameter called in; and the message headers, properties, and parameters will be associated with XSLT parameters of the same name. To use these in your XSLT, you need to explicitly declare a parameter of the same name in your XSLT file. In the previous example, it is possible to use either a message header or an exchange property called myParamValue. See also Message Translator: http://camel.apache.org/message-translator.html Camel XSLT Component: http://camel.apache.org/xslt Camel Type Converter: http://camel.apache.org/type-converter.html XSL working group: http://www.w3.org/Style/XSL/ The Transforming inline with XQuery recipe   Transforming from Java to XML with JAXB Camel's JAXB Component is one of a number of components that can be used to convert your XML data back and forth from Java objects. It provides a Camel Data Format that allows you to use JAXB annotated Java classes, and then marshal (Java to XML) or unmarshal (XML to Java) your data. JAXB is a Java standard for translating between XML data and Java that is used by creating annotated Java classes that bind, or map, to your XML data schema. The framework takes care of the rest. This recipe will show you how to use the JAXB Camel Data Format to convert back and forth from Java to XML. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.jaxb package. The Spring XML files are located under src/main/resources/META-INF/spring and prefixed with jaxb. To use Camel's JAXB Component, you need to add a dependency element for the camel-jaxb library, which provides the implementation for the JAXB Data Format. Add the following to the dependencies section of your Maven POM: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jaxb</artifactId> <version>${camel-version}</version> </dependency> How to do it... The main steps for converting between Java and XML are as follows: Given a JAXB annotated model, reference that model within a named Camel Data Format. Use that named Data Format within your Camel route using the marshal and unmarshal DSL statements. Create an annotated Java model using standard JAXB annotations.
There are a number of external tools that can automate this creation from existing XML or XSD (XML Schema) files: @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "", propOrder = { "title", "author", "year", "price" } ) @XmlRootElement(name = "book") public class Book { @XmlElement(required = true) protected Book.Title title; @XmlElement(required = true) protected List<String> author; protected int year; protected double price; // getters and setters } Instantiate a JAXB Data Format within your Camel route that refers to the Java package(s) containing your JAXB annotated classes. In the XML DSL, this is written as: <camelContext xmlns="http://camel.apache.org/schema/spring"> <dataFormats> <jaxb id="myJaxb" contextPath="org.camelcookbook.transformation.myschema"/> </dataFormats> <!-- route definitions here --> </camelContext> In the Java DSL, the Data Format is defined as: public class JaxbRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { DataFormat myJaxb = new JaxbDataFormat( "org.camelcookbook.transformation.myschema"); // route definitions here } } Reference the Data Format within your route, choosing marshal (Java to XML) or unmarshal (XML to Java) as appropriate. In the XML DSL, this routing logic is written as: <route> <from uri="direct:unmarshal"/> <unmarshal ref="myJaxb"/> </route> In the Java DSL, this is expressed as: from("direct:unmarshal").unmarshal(myJaxb);   How it works... Using Camel JAXB to translate your XML data back and forth to Java makes it much easier for the Java processors defined later on in your route to do custom message processing. This is useful when the built-in XML translators (for example, XSLT or XQuery) are not enough, or you just want to call existing Java code. Camel JAXB eliminates the boilerplate code from your integration flows by providing a wrapper around the standard JAXB mechanisms for instantiating the Java binding for the XML data. There's more... Camel JAXB works just fine with existing JAXB tooling like the maven-jaxb2-plugin plugin, which can automatically create JAXB-annotated Java classes from an XML Schema (XSD). See also Camel JAXB: http://camel.apache.org/jaxb.html Available Data Formats: http://camel.apache.org/data-format.html JAXB Specification: http://jcp.org/en/jsr/detail?id=222   Transforming from Java to JSON Camel's JSON Component is used when you need to convert your JSON data back and forth from Java. It provides a Camel Data Format that, without any requirement for an annotated Java class, allows you to marshal (Java to JSON) or unmarshal (JSON to Java) your data. There is only one step to using Camel JSON to marshal and unmarshal JSON data. Within your Camel route, insert the marshal (Java to JSON) or unmarshal (JSON to Java) statement, and configure it to use the JSON Data Format. This recipe will show you how to use the camel-xstream library to convert from Java to JSON, and back. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.json package. The Spring XML files are located under src/main/resources/META-INF/spring and prefixed with json. To use Camel's JSON Component, you need to add a dependency element for the camel-xstream library, which provides an implementation for the JSON Data Format using the XStream library. Add the following to the dependencies section of your Maven POM: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xstream</artifactId> <version>${camel-version}</version> </dependency>   How to do it...
Reference the Data Format within your route, choosing the marshal (Java to JSON) or unmarshal (JSON to Java) statement, as appropriate. In the XML DSL, this is written as follows: <route> <from uri="direct:marshal"/> <marshal> <json/> </marshal> <to uri="mock:marshalResult"/> </route> In the Java DSL, this same route is expressed as: from("direct:marshal") .marshal().json() .to("mock:marshalResult");   How it works... Using Camel JSON simplifies translating your data between JSON and Java. This is convenient when you are dealing with REST endpoints and need Java processors in Camel to do custom message processing later on in the route. Camel JSON provides a wrapper around the JSON libraries for instantiating the Java binding for the JSON data, eliminating more boilerplate code from your integration flows. There's more... Camel JSON works with the XStream library by default, and can be configured to use other JSON libraries, such as Jackson or Gson. These other libraries provide additional features, more customization, and more flexibility that can be leveraged by Camel. To use them, include their respective Camel components, for example, camel-jackson, and specify the library within the json element: <dataFormats> <json id="myJson" library="Jackson"/> </dataFormats>   See also Camel JSON: http://camel.apache.org/json.html Available Data Formats: http://camel.apache.org/data-format.html   Transforming from XML to JSON Camel provides an XML JSON Component that converts your data back and forth between XML and JSON in a single step, without an intermediate Java object representation. It provides a Camel Data Format that allows you to marshal (XML to JSON) or unmarshal (JSON to XML) your data. This recipe will show you how to use the XML JSON Component to convert from XML to JSON, and back. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.xmljson package. Spring XML files are located under src/main/resources/META-INF/spring and prefixed with xmljson. To use Camel's XML JSON Component, you need to add a dependency element for the camel-xmljson library, which provides an implementation for the XML JSON Data Format. Add the following to the dependencies section of your Maven POM: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xmljson</artifactId> <version>${camel-version}</version> </dependency>   How to do it... Reference the xmljson Data Format within your route, choosing the marshal (XML to JSON) or unmarshal (JSON to XML) statement, as appropriate. In the XML DSL, this is written as follows: <route> <from uri="direct:marshal"/> <marshal> <xmljson/> </marshal> <to uri="mock:marshalResult"/> </route> In the Java DSL, this same route is expressed as: from("direct:marshal") .marshal().xmljson() .to("mock:marshalResult");   How it works... Using the Camel XML JSON Component simplifies translating your data between XML and JSON, making it convenient to use when you are dealing with REST endpoints. The XML JSON Data Format wraps around the Json-lib library, which provides the core translation capabilities, eliminating more boilerplate code from your integration flows. There's more... You may need to configure XML JSON if you want to fine-tune the output of your transformation.
For example, consider the following JSON: [{"@category":"PROGRAMMING","title":{"@lang":"en","#text": "Apache Camel Developer's Cookbook"},"author":[ "Scott Cranton","Jakub Korab"],"year":"2013","price":"49.99"}] This will be converted as follows, by default, which may not be exactly what you want (notice the <a> and <e> elements): <?xml version="1.0" encoding="UTF-8"?> <a> <e category="PROGRAMMING"> <author> <e>Scott Cranton</e> <e>Jakub Korab</e> </author> <price>49.99</price> <title lang="en">Apache Camel Developer's Cookbook</title> <year>2013</year> </e> </a> To configure XML JSON to use <bookstore> as the root element instead of <a>, use <book> for the individual elements instead of <e>, and expand the multiple author values to use a sequence of <author> elements, you would need to tune the configuration of the Data Format before referencing it in your route. In the XML DSL, the definition of the Data Format and the route that uses it is written as follows: <dataFormats> <xmljson id="myXmlJson" rootName="bookstore" elementName="book" expandableProperties="author author"/> </dataFormats> <route> <from uri="direct:unmarshalBookstore"/> <unmarshal ref="myXmlJson"/> <to uri="mock:unmarshalResult"/> </route> In the Java DSL, the same thing is expressed as: XmlJsonDataFormat xmlJsonFormat = new XmlJsonDataFormat(); xmlJsonFormat.setRootName("bookstore"); xmlJsonFormat.setElementName("book"); xmlJsonFormat.setExpandableProperties( Arrays.asList("author", "author")); from("direct:unmarshalBookstore") .unmarshal(xmlJsonFormat) .to("mock:unmarshalResult"); This will result in the previous JSON being unmarshalled as follows: <?xml version="1.0" encoding="UTF-8"?> <bookstore> <book category="PROGRAMMING"> <author>Scott Cranton</author> <author>Jakub Korab</author> <price>49.99</price> <title lang="en">Apache Camel Developer's Cookbook</title> <year>2013</year> </book> </bookstore>   See also Camel XML JSON: http://camel.apache.org/xmljson.html Available Data Formats: http://camel.apache.org/data-format.html Json-lib: http://json-lib.sourceforge.net   Summary We saw how Apache Camel is flexible in allowing us to transform and convert messages in various formats. This makes Apache Camel an ideal choice for integrating different systems together. Resources for Article: Further resources on this subject: Drools Integration Modules: Spring Framework and Apache Camel [Article] Installing Apache Karaf [Article] Working with AMQP [Article]

JBoss EAP6 Overview

Packt
19 Dec 2013
5 min read
(For more resources related to this topic, see here.) Understanding high availability To understand the term high availability, here is its definition from Wikipedia: "High availability is a system design approach and associated service implementation that ensures that a prearranged level of operational performance will be met during a contractual measurement period. Users want their systems, for example, hospitals, production computers, and the electrical grid to be ready to serve them at all times. If a user cannot access the system, it is said to be unavailable." In the IT field, when we mention the words "high availability", we usually think of the uptime of the server, and technologies such as clustering and load balancing can be used to achieve this. Clustering means using multiple servers to form a group; users see the cluster as a single entity and access it as if it were a single point. The following figure shows the structure of a cluster: To achieve the previously mentioned goal, we usually use a controller of the cluster, called a load balancer, that sits in front of the cluster. Its job is to receive user requests and dispatch them to a node inside the cluster, and the node does the real work of processing the requests. After the node processes a user request, the response is sent to the load balancer, and the load balancer sends it back to the user. The following figure shows the workflow: Besides load balancing user requests, the clustering system can also do failover inside itself. Failover means that when a node crashes, the load balancer can switch to other running nodes to process user requests. In a cluster, some nodes may fail during runtime. If this happens, the requests to the failed nodes should be redirected to the healthy nodes. The process is shown in the following figure: To make failover possible, the nodes in a cluster should be able to replicate user data from one to another. In JBoss EAP6, the Infinispan module, which is a data-grid solution provided by the JBoss community, does the web session replication. If one node fails, the user request can be redirected to another node; however, the session with the user won't be lost. The following figure illustrates failover: To achieve the previously mentioned goals, the JBoss community has provided us with a powerful set of tools. In the next section, we'll have an overview of them. JBoss EAP6 high availability As a Java EE application server, JBoss EAP6 uses modules coming from different open source projects: Web server (JBossWeb) EJB (JBoss EJB3) Web service (JBossWS/RESTEasy) Messaging (HornetQ) JPA and transaction management (Hibernate/Narayana) As we can see, JBoss EAP6 uses many open source projects, and each part has its own approach to achieving high availability. Now let's take a brief look at these parts with respect to high availability: JBoss Web, Apache httpd, mod_jk, and mod_cluster Clustering a web server may be the most popular topic, and it is well understood by the majority; there are a lot of good solutions on the market. The solution JBoss EAP6 adopted is to use Apache httpd as the load balancer; httpd dispatches the user requests to the EAP servers. Red Hat has led two open source projects to work with httpd, called mod_jk and mod_cluster. In this article we'll learn how to use these two projects.
EJB session bean JBoss EAP6 provides the @org.jboss.ejb3.annotation.Clustered annotation that we can use on both @Stateless and @Stateful session beans. The @Clustered annotation is a JBoss EAP6/WildFly-specific implementation. When using @Clustered with @Stateless, the session bean can be load balanced; and when @Clustered is used with a @Stateful bean, the state of the bean will be replicated in the cluster. JBossWS and RESTEasy JBoss EAP6 provides two web service solutions out of the box. One is JBossWS and the other is RESTEasy. JBossWS is a web service framework that implements the JAX-WS specification. RESTEasy is an implementation of the JAX-RS specification to help you to build RESTful web services. HornetQ HornetQ is a high-performance messaging system provided by the JBoss community. The messaging system is designed to be asynchronous and has its own considerations for load balancing and failover. Hibernate and Narayana In the database and transaction management field, high availability is a huge topic. Each database vendor may have its own solutions for load balancing database queries. PostgreSQL, for example, has open source solutions such as Slony and pgpool, which let us replicate the database from master to slave and distribute user queries to different database nodes in a cluster. In the ORM layer, Hibernate also has projects such as Hibernate Shards that can deploy a database in a distributed way. JGroups and JBoss Remoting JGroups and JBoss Remoting are the cornerstones of JBoss EAP6's clustering features, which enable it to support high availability. JGroups is a reliable communication system based on IP multicasting, though it is not limited to multicast and can use TCP too. JBoss Remoting is the underlying communication framework for multiple parts of JBoss EAP6. Summary In this article we learned the basic concepts of high availability and had an overview of the basic functions of JBoss EAP6. This will help you understand JBoss EAP6 better. Resources for Article: Further resources on this subject: Introduction to JBoss Clustering [Article] JBoss RichFaces 3.3 Supplemental Installation [Article] JBoss AS plug-in and the Eclipse Web Tools Platform [Article]

Fast Array Operations with NumPy

Packt
19 Dec 2013
10 min read
(For more resources related to this topic, see here.) Getting started with NumPy NumPy is founded around its multidimensional array object, numpy.ndarray. NumPy arrays are a collection of elements of the same data type; this fundamental restriction allows NumPy to pack the data in an efficient way. By storing the data in this way, NumPy can handle arithmetic and mathematical operations at high speed. Creating arrays You can create NumPy arrays using the numpy.array function. It takes a list-like object (or another array) as input and, optionally, a string expressing its data type. You can interactively test array creation using an IPython shell as follows: In [1]: import numpy as np In [2]: a = np.array([0, 1, 2]) Every NumPy array has a data type that can be accessed by the dtype attribute, as shown in the following code. In the following code example, dtype is a 64-bit integer. In [3]: a.dtype Out[3]: dtype('int64') If we want those numbers to be treated as float values, we can either pass the dtype argument in the np.array function or cast the array to another data type using the astype method, as shown in the following code: In [4]: a = np.array([0, 1, 2], dtype='float32') In [5]: a.astype('float32') Out[5]: array([ 0.,  1.,  2.], dtype=float32) To create an array with two dimensions (an array of arrays), we can initialize the array using a nested sequence, shown as follows: In [6]: a = np.array([[0, 1, 2], [3, 4, 5]]) In [7]: print(a) Out[7]: [[0 1 2]         [3 4 5]] The array created in this way has two dimensions, called axes in NumPy's jargon. Such an array is like a table that contains two rows and three columns. We can access the axes structure using the ndarray.shape attribute: In [7]: a.shape Out[7]: (2, 3) Arrays can also be reshaped, as long as the product of the shape dimensions is equal to the total number of elements in the array. For example, we can reshape an array containing 16 elements in the following ways: (2, 8), (4, 4), or (2, 2, 4). To reshape an array, we can either use the ndarray.reshape method or directly change the ndarray.shape attribute. The following code illustrates the use of the ndarray.reshape method: In [7]: a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8,                       9, 10, 11, 12, 13, 14, 15]) In [7]: a.shape Out[7]: (16,) In [8]: a.reshape(4, 4) # Equivalent: a.shape = (4, 4) Out[8]: array([[ 0,  1,  2,  3],        [ 4,  5,  6,  7],        [ 8,  9, 10, 11],        [12, 13, 14, 15]]) Thanks to this property, you are also free to add dimensions of size one. You can reshape an array with 16 elements to (16, 1), (1, 16), (16, 1, 1), and so on. NumPy provides convenience functions, shown in the following code, to create arrays filled with zeros, filled with ones, or without an initialization value (empty; their actual values are meaningless and depend on the memory state). Those functions take the array shape as a tuple and, optionally, its dtype. In [8]: np.zeros((3, 3)) In [9]: np.empty((3, 3)) In [10]: np.ones((3, 3), dtype='float32') In our examples, we will use the numpy.random module to generate random floating point numbers in the (0, 1) interval. The numpy.random module is shown as follows: In [11]: np.random.rand(3, 3) Sometimes it is convenient to initialize arrays that have a similar shape to other arrays. Again, NumPy provides some handy functions for that purpose, such as zeros_like, empty_like, and ones_like.
These functions are as follows: In [12]: np.zeros_like(a) In [13]: np.empty_like(a) In [14]: np.ones_like(a) Accessing arrays The NumPy array interface is, on a superficial level, similar to that of Python lists. Arrays can be indexed using integers, and can also be iterated using a for loop. In [15]: A = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8]) In [16]: A[0] Out[16]: 0 In [17]: [a for a in A] Out[17]: [0, 1, 2, 3, 4, 5, 6, 7, 8] It is also possible to index an array in multiple dimensions. If we take a (3, 3) array (an array containing 3 triplets) and we index the first element, we obtain the first triplet, shown as follows: In [18]: A = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) In [19]: A[0] Out[19]: array([0, 1, 2]) We can index the triplet again by adding the other index separated by a comma. To get the second element of the first triplet we can index using [0, 1], as shown in the following code: In [20]: A[0, 1] Out[20]: 1 NumPy allows you to slice arrays in single and multiple dimensions. If we index on the first dimension we will get a collection of triplets, shown as follows: In [21]: A[0:2] Out[21]: array([[0, 1, 2],                [3, 4, 5]]) If we slice the array with [0:2, 0:2], for every selected triplet we extract the first two elements, resulting in a (2, 2) array, shown in the following code: In [22]: A[0:2, 0:2] Out[22]: array([[0, 1],                 [3, 4]]) Intuitively, you can update values in the array by using both numerical indexes and slices. The syntax is as follows: In [23]: A[0, 1] = 8 In [24]: A[0:2, 0:2] = [[1, 1], [1, 1]] Indexing with the slicing syntax is fast because it doesn't make copies of the array. In NumPy terminology it returns a view over the same memory area. If we take a slice of the original array and then change one of its values, the original array will be updated as well. The following code illustrates an example of the same: In [25]: a = np.array([1, 1, 1, 1]) In [26]: a_view = a[0:2] In [27]: a_view[0] = 2 In [28]: print(a) Out[28]: [2 1 1 1] We can take a look at another example that shows how the slicing syntax can be used in a real-world scenario. We define an array r_i, shown in the following line of code, which contains a set of 10 coordinates (x, y); its shape will be (10, 2): In [29]: r_i = np.random.rand(10, 2) A typical operation is extracting the x component of each coordinate. In other words, you want to extract the items [0, 0], [1, 0], [2, 0], and so on, resulting in an array with shape (10,). It is helpful to think that the first index is moving while the second one is fixed (at 0). With this in mind, we will slice every index on the first axis (the moving one) and take the first element (the fixed one) on the second axis, as shown in the following line of code: In [30]: x_i = r_i[:, 0] On the other hand, the following line of code will keep the first index fixed and the second index moving, giving the first (x, y) coordinate: In [31]: r_0 = r_i[0, :] Slicing all the indexes over the last axis is optional; using r_i[0] has the same effect as r_i[0, :]. NumPy allows you to index an array by using another NumPy array made of either integer or Boolean values, a feature called fancy indexing. If you index with an array of integers, NumPy will interpret the integers as indexes and will return an array containing their corresponding values. If we index an array containing 10 elements with [0, 2, 3], we obtain an array of size 3 containing the elements at positions 0, 2 and 3.
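One detail worth calling out, although it is not covered in the recipe itself, is that fancy indexing, unlike slicing, returns a copy of the data rather than a view. A minimal sketch demonstrating the difference:

import numpy as np

a = np.array([9, 8, 7, 6])

sub = a[[0, 1]]    # fancy indexing: sub is a copy
sub[0] = 100
print(a[0])        # still 9; the original array is untouched

view = a[0:2]      # slicing: view shares memory with a
view[0] = 100
print(a[0])        # now 100; the original array was modified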
The following code illustrates basic integer fancy indexing: In [32]: a = np.array([9, 8, 7, 6, 5, 4, 3, 2, 1, 0]) In [33]: idx = np.array([0, 2, 3]) In [34]: a[idx] Out[34]: array([9, 7, 6]) You can use fancy indexing on multiple dimensions by passing an array for each dimension. If we want to extract the elements at positions [0, 2] and [1, 2], we have to pack all the indexes acting on the first axis in one array, and the ones acting on the second axis in another. This can be seen in the following code: In [35]: a = np.array([[0, 1, 2], [3, 4, 5],                        [6, 7, 8], [9, 10, 11]]) In [36]: idx1 = np.array([0, 1]) In [37]: idx2 = np.array([2, 2]) In [38]: a[idx1, idx2] Out[38]: array([2, 5]) You can also use normal lists as index arrays, but not tuples. For example, the following two statements are equivalent: >>> a[np.array([0, 1])] # is equivalent to >>> a[[0, 1]] However, if you use a tuple, NumPy will interpret the following statement as an index on multiple dimensions: >>> a[(0, 1)] # is equivalent to >>> a[0, 1] The index arrays are not required to be one-dimensional; we can extract elements from the original array in any shape. For example, we can select elements from the original array to form a (2, 2) array, shown as follows: In [39]: idx1 = [[0, 1], [3, 2]] In [40]: idx2 = [[0, 2], [1, 1]] In [41]: a[idx1, idx2] Out[41]: array([[ 0,  5],                 [10,  7]]) The array slicing and fancy indexing features can be combined. For example, this is useful if we want to swap the x and y columns in a coordinate array. In the following code, the first index will be running over all the elements (a slice), and for each of those we extract the element in position 1 (the y) first and then the one in position 0 (the x): In [42]: r_i = np.random.rand(10, 2) In [43]: r_i[:, [0, 1]] = r_i[:, [1, 0]] When the index array is Boolean, the rules are slightly different. The Boolean array acts like a mask; every element corresponding to True will be extracted and put in the output array. This procedure is shown as follows: In [44]: a = np.array([0, 1, 2, 3, 4, 5]) In [45]: mask = np.array([True, False, True, False, False, False]) In [46]: a[mask] Out[46]: array([0, 2]) The same rules apply when dealing with multiple dimensions. Furthermore, if the index array has the same shape as the original array, the elements corresponding to True will be selected and put in the resulting array. Indexing in NumPy is a reasonably fast operation. In any case, when speed is critical, you can use the slightly faster numpy.take and numpy.compress functions to squeeze out a little more speed. The first argument of numpy.take is the array we want to operate on, and the second is the list of indexes we want to extract. The last argument is axis; if not provided, the indexes will act on the flattened array, otherwise they will act along the specified axis. In [47]: r_i = np.random.rand(100, 2) In [48]: idx = np.arange(50) # integers 0 to 49 In [49]: %timeit np.take(r_i, idx, axis=0) 1000000 loops, best of 3: 962 ns per loop In [50]: %timeit r_i[idx] 100000 loops, best of 3: 3.09 us per loop The similar, but faster, version for Boolean arrays is numpy.compress, which works in the same way.
The use of numpy.compress is shown as follows: In [51]: idx = np.ones(100, dtype='bool') # all True values In [52]: %timeit np.compress(idx, r_i, axis=0) 1000000 loops, best of 3: 1.65 us per loop In [53]: %timeit r_i[idx] 100000 loops, best of 3: 5.47 us per loop Summary This article covered the basics of NumPy arrays: how to create them and how to access their contents. Resources for Article: Further resources on this subject: Getting Started with Spring Python [Article] Python Testing: Installing the Robot Framework [Article] Python Multimedia: Fun with Animations using Pyglet [Article]