
How-To Tutorials - Programming


Rules and Events

Packt
12 Jul 2013
10 min read
Handling specific events is something everybody expects from an application. While JavaScript has its own event handling model, Dynamics CRM offers a different set of events that we can take advantage of. The JavaScript event model, while it might work, is not supported, and definitely not the approach you want to take when working within the context of Dynamics CRM. Some of the most notable events and their counterparts in JavaScript are described in the following table:

- OnLoad (JavaScript: onload): A form event. Executes when a form is loaded. Most commonly used to filter and hide elements on the form.
- OnSave (JavaScript: onsubmit): A form event. Executes when a form is saved. Most commonly used to stop an operation from executing as a result of a failed validation procedure.
- TabStateChange (no JavaScript equivalent): A form event. Executes when the DisplayState of a tab changes.
- OnChange (JavaScript: onchange): A field-specific event. Executes when tabbing out of a field whose value you have changed. Note that there are no equivalents for onfocus and onblur.
- OnReadyStateComplete (no JavaScript equivalent): Indicates that the content of an IFrame has completed loading.

Additional details on Dynamics CRM 2011-specific events can be found on MSDN at http://msdn.microsoft.com/en-us/library/gg334481.aspx.

Form load event usage

In this recipe, we will focus on executing a few operations triggered by the form load event. We can check the value of a specific field on the form, and based on that we can decide to hide a tab, hide a field, and prepopulate a text field with a predefined value.

Getting ready

Just as with any of the previous recipes, you will need access to an environment and permissions to make customizations. You should be a system administrator, a system customizer, or have a custom role configured to allow you to perform the following operations.

How to do it...

For the purpose of this exercise, we will add to the Contact entity a new tab called Special Customer, with some additional custom fields. We will also add an option set that we will check to determine whether or not to hide the fields, as well as two new fields: one text field and one lookup field. So let's get started!

1. Open the contact's main form for editing.
2. Add a new tab by going to Insert | Tab | One Column.
3. Double-click on the newly added tab to open the Tab Properties window.
4. Change the Label field of the tab to Special Customer. Make sure the show label, expanded by default, and visible checkboxes are checked. Click on OK.
5. Add a few additional text fields on this tab. We will be hiding the tab along with the content within the tab.
6. Add a new field, named Is Special Customer (new_IsSpecialCustomer). Leave the default yes/no values.
7. Add the newly created field to the general form for the contact.
8. Add another new text field, named Customer Classification (new_CustomerClassification). Leave the Format as Text, and the default Maximum Length of 100, as shown in the following screenshot.
9. Add the newly created text field to the general form, under the previously added field.
10. Add a new lookup field, called Partner (new_Partner). Make it a lookup for a contact, as shown in the following screenshot.
11. Add this new field to the general form, under the other two fields.
12. Save and Publish the Contact form. Your form should look similar to the following screenshot.

Observe that I have ordered the three fields one on top of the other.
The reason for this is that the default tab order in CRM is vertical and across. This way, when all the fields are visible, I can tab right from one to another.

13. In the solution where you made the previous changes, add a new web resource named FormLoader (new_FormLoader). Set the Type to JScript.
14. Click on the Text Editor button and insert the following function:

    function IsSpecialCustomer() {
        var _isSpecialSelection = null;
        var _isSpecial = Xrm.Page.getAttribute("new_isspecialcustomer");
        if (_isSpecial != null) {
            _isSpecialSelection = _isSpecial.getValue();
        }
        if (_isSpecialSelection == false) {
            // hide the Special Customer tab
            Xrm.Page.ui.tabs.get("tab_5").setVisible(false);
            // hide the Customer Classification field
            Xrm.Page.ui.controls.get("new_customerclassification").setVisible(false);
            // hide the Partner field
            Xrm.Page.ui.controls.get("new_partner").setVisible(false);
        }
    }

15. Save and Publish the web resource.
16. Go back to the Contact form, and on the ribbon select Form Properties.
17. On the Events tab, add the library created as a web resource in the Form Libraries section, and in the Event Handlers area, on the Form OnLoad event, add the function we created.
18. Click on OK, then click on Save and Publish the form.
19. Test your configuration by opening a new contact and setting the Is Special Customer field to No. Save and close the contact. Open it again, and the tab and fields should be hidden.

How it works...

The idea of this script is not much different from what we have demonstrated in some of the previous recipes: based on a set form value, we hide a tab and some fields. The difference is where we set the script to execute. Working with scripts that execute when the form loads gives us a whole new way of handling various scenarios.

There's more...

In many scenarios, working with the form load event in conjunction with the other field events can potentially result in a very complex solution. When debugging, always pay close attention to the type of event you associate your script function with.

See also

See the Combining events recipe towards the end of this article for a more complex recipe detailing how to work with multiple events to achieve the expected result.

Form save event usage

While working with the Form OnLoad event can help us format and arrange the user interface, working with the Form OnSave event opens up a new door towards validation of user input and execution of business processes, amongst others.

Getting ready

Using the same solution we worked on in the previous recipe, we will continue to demonstrate a few other aspects of working with forms in Dynamics CRM 2011. In this recipe the focus is on handling the Form OnSave event.

How to do it...

First off, in order to kick this off, we might want to verify a set of fields for a condition, or perform a calculation based on a formula. In order to simplify this process, we can just check a simple yes/no condition on a form.

How it works...

Using the previously customized solution, we will be taking advantage of the Contact entity and the fields that we have already customized on that form. If you are starting with this recipe fresh, take the following step before delving into this recipe: add a new two-options field, named Is Special Customer (new_IsSpecialCustomer), and leave the default yes/no values.

Using this field, if the answer is No, we will stop the save process. In your solution, add a new web resource. I have named it new_ch4rcp2. Set its type to JScript.
Enter the following function in your resource:

    function StopSave(context) {
        var _isSpecialSelection = null;
        var _isSpecial = Xrm.Page.getAttribute("new_isspecialcustomer");
        if (_isSpecial != null) {
            _isSpecialSelection = _isSpecial.getValue();
        }
        if (_isSpecialSelection == false) {
            alert("You cannot save your record while the Customer is not a friend!");
            context.getEventArgs().preventDefault();
        }
    }

The function basically checks the value of our Is Special Customer field. If a value is retrieved, and that value is No, we bring up an alert and stop the Save and Close event. Now, back on the contact's main form, we attach this new function to the form's OnSave event. Save and Publish your solution.

In order to test this functionality, we will create a new contact, populate all the required fields, and set the Is Special Customer field to No. Now try to click on Save and Close. You will get an alert as seen in the following screenshot, and the form will neither close nor be saved. Changing the Is Special Customer selection to Yes and saving the form will now save and close the form.

There's more...

While this recipe only describes in a very simplistic manner the way to stop a form from saving and closing, the possibilities here are immense. Think about what you can do on form save, and what you can achieve if a condition must be met in order to allow the form to be saved.

Starting a process instead of saving the form

Another good use for blocking the save and close action is to take a different path. Let's say we want to kick off a workflow when we block the form save. We can call from the previous function a new function as follows:

    function launchWorkflow(dialogID, typeName, recordId) {
        var serverUri = Mscrm.CrmUri.create('/cs/dialog/rundialog.aspx');
        window.showModalDialog(serverUri + '?DialogId=' + dialogID +
            '&EntityName=' + typeName +
            '&ObjectId=' + recordId, null,
            'width=615,height=480,resizable=1,status=1,scrollbars=1');
        // Reload form
        window.location.reload(true);
    }

We pass this function the following three parameters:

- The GUID of the Workflow or Dialog
- The type name of the entity
- The ID of the record

See also

For more details on these parameters, see the following article on MSDN: http://msdn.microsoft.com/en-us/library/gg309332.aspx

Field change event usage

In this recipe we will drill down to a lower level. We have handled form events, and now it is time to handle field events. The following recipe will show you how to bring all these together and achieve exactly the result you need.

Getting ready

For the purpose of this recipe, let's focus on reusing the previous solution. We will check the value of a field, and act upon it.

How to do it...

In order to walk through this recipe, follow these steps:

1. Create a new form field called new_changeevent, with a label of Change Event, and a Type of Two Options. Leave the default values of No and Yes, and leave the Default Value as No.
2. Add this field to your main Contact form.
3. Add the following script to a new JScript web resource:

    function ChangeEvent() {
        var _changeEventSelection = null;
        var _isChanged = Xrm.Page.getAttribute("new_changeevent");
        if (_isChanged != null) {
            _changeEventSelection = _isChanged.getValue();
        }
        if (_changeEventSelection == true) {
            alert("Change event is set to True");
            // perform other actions here
        } else {
            alert("Change event is set to False");
        }
    }

This function, as seen in the previous recipes, checks the value of the Two Options field and performs an action based on the user selection. The action in this example is simply bringing up an alert message.
4. Add the new web resource to the form libraries.
5. Associate this new function with the OnChange event of the field we have just created.
6. Save and Publish your solution.
7. Create a new contact, and try changing the Change Event value from No to Yes and back. Every time the selection is changed, a different message comes up in the alert.

How it works...

Handling events at the field level, specifically the OnChange event, allows us to dynamically execute various other functions. We can easily take advantage of this functionality to modify the form displayed to a user dynamically, based on a selection. Based on a field value, we can define areas or fields on the form to be hidden or shown.
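Since the form load recipe and this field change recipe share the same show/hide logic, a single handler can be registered for both the Form OnLoad and the field OnChange events, so the form reacts immediately when the selection changes. The following is a minimal sketch, not part of the original recipes; it reuses the sample names from above (the new_isspecialcustomer field, the tab_5 tab, and the new_customerclassification and new_partner controls):

    // Minimal sketch: one handler wired to both Form OnLoad and the
    // field OnChange event. All names are the sample ones used in
    // these recipes and would need to match your own customizations.
    function toggleSpecialCustomerSection() {
        var attribute = Xrm.Page.getAttribute("new_isspecialcustomer");
        if (attribute == null) {
            return; // the field is not present on this form
        }
        var isSpecial = (attribute.getValue() == true);
        // Show or hide the tab and both fields from one place
        Xrm.Page.ui.tabs.get("tab_5").setVisible(isSpecial);
        Xrm.Page.ui.controls.get("new_customerclassification").setVisible(isSpecial);
        Xrm.Page.ui.controls.get("new_partner").setVisible(isSpecial);
    }

Registering the same function for both events keeps the logic in one place; the Combining events recipe referenced earlier explores this idea further.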

Database, Active Record, and Model Tricks

Packt
11 Jul 2013
14 min read
Getting data from a database

Most applications today use databases. Be it a small website or a social network, at least some parts are powered by databases. Yii introduces three ways that allow you to work with databases:

- Active Record
- Query builder
- SQL via DAO

We will use all these methods to get data from the film, film_actor, and actor tables and show it in a list. We will measure the execution time and memory usage to determine when to use these methods.

Getting ready

1. Create a new application by using yiic webapp as described in the official guide at http://www.yiiframework.com/doc/guide/en/quickstart.first-app
2. Download the Sakila database from http://dev.mysql.com/doc/index-other.html
3. Execute the downloaded SQLs; first the schema, then the data.
4. Configure the DB connection in protected/config/main.php to use the Sakila database.
5. Use Gii to create models for the actor and film tables.

How to do it...

We will create protected/controllers/DbController.php as follows:

    <?php
    class DbController extends Controller
    {
        protected function afterAction($action)
        {
            $time = sprintf('%0.5f', Yii::getLogger()->getExecutionTime());
            $memory = round(memory_get_peak_usage()/(1024*1024), 2)."MB";
            echo "Time: $time, memory: $memory";
            parent::afterAction($action);
        }

        public function actionAr()
        {
            $actors = Actor::model()->findAll(array(
                'with' => 'films',
                'order' => 't.first_name, t.last_name, films.title',
            ));
            echo '<ol>';
            foreach($actors as $actor) {
                echo '<li>';
                echo $actor->first_name.' '.$actor->last_name;
                echo '<ol>';
                foreach($actor->films as $film) {
                    echo '<li>';
                    echo $film->title;
                    echo '</li>';
                }
                echo '</ol>';
                echo '</li>';
            }
            echo '</ol>';
        }

        public function actionQueryBuilder()
        {
            $rows = Yii::app()->db->createCommand()
                ->from('actor')
                ->join('film_actor', 'actor.actor_id=film_actor.actor_id')
                ->leftJoin('film', 'film.film_id=film_actor.film_id')
                ->order('actor.first_name, actor.last_name, film.title')
                ->queryAll();
            $this->renderRows($rows);
        }

        public function actionSql()
        {
            $sql = "SELECT * FROM actor a
                JOIN film_actor fa ON fa.actor_id = a.actor_id
                JOIN film f ON fa.film_id = f.film_id
                ORDER BY a.first_name, a.last_name, f.title";
            $rows = Yii::app()->db->createCommand($sql)->queryAll();
            $this->renderRows($rows);
        }

        public function renderRows($rows)
        {
            $lastActorName = null;
            echo '<ol>';
            foreach($rows as $row) {
                $actorName = $row['first_name'].' '.$row['last_name'];
                if($actorName != $lastActorName) {
                    if($lastActorName !== null) {
                        echo '</ol>';
                        echo '</li>';
                    }
                    $lastActorName = $actorName;
                    echo '<li>';
                    echo $actorName;
                    echo '<ol>';
                }
                echo '<li>';
                echo $row['title'];
                echo '</li>';
            }
            echo '</ol>';
        }
    }

Here, we have three actions corresponding to three different methods of getting data from a database. After running the preceding db/ar, db/queryBuilder, and db/sql actions, you should get a tree showing 200 actors and the 1,000 films they have acted in, as shown in the following screenshot. At the bottom there are statistics that give information about the memory usage and execution time. Absolute numbers can be different if you run this code, but the difference between the methods used should be about the same:

    Method          Memory usage (MB)   Execution time (s)
    Active Record   19.74               1.14109
    Query builder   17.98               0.35732
    SQL (DAO)       17.74               0.35038

How it works...

Let's review the preceding code. The actionAr action method gets model instances by using the Active Record approach.
We start with the Actor model generated with Gii to get all the actors, and specify 'with' => 'films' to get the corresponding films in a single query (eager loading) through the relation, which Gii builds for us from the InnoDB table foreign keys. We then simply iterate over all the actors, and for each actor over each film. Then for each item, we print its name.

The actionQueryBuilder function uses the query builder. First, we create a query command for the current DB connection with Yii::app()->db->createCommand(). We then add query parts one by one with from, join, and leftJoin. These methods escape values, tables, and field names automatically. The queryAll function returns an array of raw database rows. Each row is also an array indexed by result field names. We pass the result to renderRows, which renders it.

With actionSql, we do the same, except that we pass the SQL directly instead of adding its parts one by one. It's worth mentioning that we should escape parameter values manually with Yii::app()->db->quoteValue before using them in the query string.

The renderRows function renders the raw rows returned by the query builder and DAO. The DAO raw row requires you to add more checks and, generally, it feels unnatural compared to rendering an Active Record result.

As we can see, all these methods give the same result in the end, but they all have different performance, syntax, and extra features. We will now do a comparison and figure out when to use each method:

Active Record:
- Syntax: Will do the SQL for you; Gii will generate models and relations for you. Works with models, completely OO-style, with a very clean API. Produces an array of properly nested models as the result.
- Performance: Higher memory usage and execution time compared to SQL and the query builder.
- Extra features: Quotes values and names automatically. Behaviors. Before/after hooks. Validation.
- Best for: Prototyping selects. Update, delete, and create actions for single models (the model gives a huge benefit when used with forms).

Query builder:
- Syntax: Clean API, suitable for building queries on the fly. Produces raw data arrays as the result.
- Performance: Okay.
- Extra features: Quotes values and names automatically.
- Best for: Working with large amounts of data, building queries on the fly.

SQL (DAO):
- Syntax: Good for complex SQL. Values and keywords must be quoted manually. Not very suitable for building queries on the fly. Produces raw data arrays as results.
- Performance: Okay.
- Extra features: None.
- Best for: Complex queries you want to write in pure SQL while having the maximum possible performance.

There's more...

In order to learn more about working with databases in Yii, refer to the following resources:

http://www.yiiframework.com/doc/guide/en/database.dao
http://www.yiiframework.com/doc/guide/en/database.query-builder
http://www.yiiframework.com/doc/guide/en/database.ar

See also

The Using CDbCriteria recipe

Defining and using multiple DB connections

Multiple database connections are not used very often for new standalone web applications. However, when you are building an add-on application for an existing system, you will most probably need another database connection. From this recipe you will learn how to define multiple DB connections and use them with DAO, the query builder, and Active Record models.

Getting ready

1. Create a new application by using yiic webapp as described in the official guide at http://www.yiiframework.com/doc/guide/en/quickstart.first-app
2. Create two MySQL databases named db1 and db2.
3. Create a table named post in db1 as follows:

    DROP TABLE IF EXISTS `post`;
    CREATE TABLE IF NOT EXISTS `post` (
      `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
      `title` VARCHAR(255) NOT NULL,
      `text` TEXT NOT NULL,
      PRIMARY KEY (`id`)
    );

4. Create a table named comment in db2 as follows:

    DROP TABLE IF EXISTS `comment`;
    CREATE TABLE IF NOT EXISTS `comment` (
      `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
      `text` TEXT NOT NULL,
      `postId` INT(10) UNSIGNED NOT NULL,
      PRIMARY KEY (`id`)
    );

How to do it...

1. We will start with configuring the DB connections. Open protected/config/main.php and define a primary connection as described in the official guide:

    'db' => array(
        'connectionString' => 'mysql:host=localhost;dbname=db1',
        'emulatePrepare' => true,
        'username' => 'root',
        'password' => '',
        'charset' => 'utf8',
    ),

2. Copy it, rename the db component to db2, and change the connection string accordingly. Also, you need to add the class name as follows:

    'db2' => array(
        'class' => 'CDbConnection',
        'connectionString' => 'mysql:host=localhost;dbname=db2',
        'emulatePrepare' => true,
        'username' => 'root',
        'password' => '',
        'charset' => 'utf8',
    ),

That is it. Now you have two database connections, and you can use them with DAO and the query builder as follows:

    $db1Rows = Yii::app()->db->createCommand($sql)->queryAll();
    $db2Rows = Yii::app()->db2->createCommand($sql)->queryAll();

Now, if we need to use Active Record models, we first need to create Post and Comment models with Gii. Starting from Yii version 1.1.11, you can just select an appropriate connection for each model. Now you can use the Comment model as usual. Create protected/controllers/DbtestController.php as follows:

    <?php
    class DbtestController extends CController
    {
        public function actionIndex()
        {
            $post = new Post();
            $post->title = "Post #".rand(1, 1000);
            $post->text = "text";
            $post->save();

            echo '<h1>Posts</h1>';
            $posts = Post::model()->findAll();
            foreach($posts as $post) {
                echo $post->title."<br />";
            }

            $comment = new Comment();
            $comment->postId = $post->id;
            $comment->text = "comment #".rand(1, 1000);
            $comment->save();

            echo '<h1>Comments</h1>';
            $comments = Comment::model()->findAll();
            foreach($comments as $comment) {
                echo $comment->text."<br />";
            }
        }
    }

Run dbtest/index multiple times and you should see records added to both databases, as shown in the following screenshot.

How it works...

In Yii you can add and configure your own components through the configuration file. For non-standard components, such as db2, you have to specify the component class. Similarly, you can add db3, db4, or any other component, for example, facebookApi. The remaining array key/value pairs are assigned to the component's public properties respectively.

There's more...

Depending on the RDBMS used, there are additional things we can do to make it easier to use multiple databases.

Cross-database relations

If you are using MySQL, it is possible to create cross-database relations for your models.
In order to do this, you should prefix the Comment model's table name with the database name as follows:

    class Comment extends CActiveRecord
    {
        // ...
        public function tableName()
        {
            return 'db2.comment';
        }
        // ...
    }

Now, if you have a comments relation defined in the Post model's relations method, you can use the following code:

    $posts = Post::model()->with('comments')->findAll();

Further reading

For further information, refer to the following URL: http://www.yiiframework.com/doc/api/CActiveRecord

See also

The Getting data from a database recipe

Using scopes to get models for different languages

Internationalizing your application is not an easy task. You need to translate interfaces, translate messages, format dates properly, and so on. Yii helps you to do this by giving you access to the Common Locale Data Repository (CLDR) data of Unicode and providing translation and formatting tools. When it comes to applications with data in multiple languages, you have to find your own way. From this recipe, you will learn a possible way to get a handy model function that will help to get blog posts for different languages.

Getting ready

1. Create a new application by using yiic webapp as described in the official guide at http://www.yiiframework.com/doc/guide/en/quickstart.first-app
2. Set up the database connection and create a table named post as follows:

    DROP TABLE IF EXISTS `post`;
    CREATE TABLE IF NOT EXISTS `post` (
      `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
      `lang` VARCHAR(5) NOT NULL DEFAULT 'en',
      `title` VARCHAR(255) NOT NULL,
      `text` TEXT NOT NULL,
      PRIMARY KEY (`id`)
    );
    INSERT INTO `post`(`id`,`lang`,`title`,`text`) VALUES
    (1,'en_us','Yii news','Text in English'),
    (2,'de','Yii Nachrichten','Text in Deutsch');

3. Generate a Post model using Gii.

How to do it...

1. Add the following methods to protected/models/Post.php:

    class Post extends CActiveRecord
    {
        public function defaultScope()
        {
            return array(
                'condition' => "lang=:lang",
                'params' => array(':lang' => Yii::app()->language),
            );
        }

        public function lang($lang)
        {
            $this->getDbCriteria()->mergeWith(array(
                'condition' => "lang=:lang",
                'params' => array(':lang' => $lang),
            ));
            return $this;
        }
    }

2. That is it. Now, we can use our model. Create protected/controllers/DbtestController.php as follows:

    <?php
    class DbtestController extends CController
    {
        public function actionIndex()
        {
            // Get posts written in the default application language
            $posts = Post::model()->findAll();
            echo '<h1>Default language</h1>';
            foreach($posts as $post) {
                echo '<h2>'.$post->title.'</h2>';
                echo $post->text;
            }

            // Get posts written in German
            $posts = Post::model()->lang('de')->findAll();
            echo '<h1>German</h1>';
            foreach($posts as $post) {
                echo '<h2>'.$post->title.'</h2>';
                echo $post->text;
            }
        }
    }

3. Now, run dbtest/index and you should get an output similar to the one shown in the following screenshot.

How it works...

We have used Yii's Active Record scopes in the preceding code. The defaultScope function returns the default condition or criteria that will be applied to all the Post model's query methods. As we need to specify the language explicitly, we create a scope named lang, which accepts the language name. With $this->getDbCriteria(), we get the model's criteria in its current state and then merge it with the new condition. As the condition is exactly the same as in defaultScope, except for the parameter value, it overrides the default scope. In order to support chained calls, lang returns the model instance itself.

There's more...
For further information, refer to the following URLs:

http://www.yiiframework.com/doc/guide/en/database.ar
http://www.yiiframework.com/doc/api/CDbCriteria/

See also

The Getting data from a database recipe
The Using CDbCriteria recipe

Processing model fields with AR event-like methods

The Active Record implementation in Yii is very powerful and has many features. One of these features is event-like methods, which you can use to preprocess model fields before putting them into the database or getting them from the database, as well as to delete data related to the model, and so on. In this recipe, we will linkify all URLs in the post text, and we will list all existing Active Record event-like methods.

Getting ready

1. Create a new application by using yiic webapp as described in the official guide at http://www.yiiframework.com/doc/guide/en/quickstart.first-app
2. Set up a database connection and create a table named post as follows:

    DROP TABLE IF EXISTS `post`;
    CREATE TABLE IF NOT EXISTS `post` (
      `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
      `title` VARCHAR(255) NOT NULL,
      `text` TEXT NOT NULL,
      PRIMARY KEY (`id`)
    );

3. Generate the Post model using Gii.

How to do it...

1. Add the following method to protected/models/Post.php:

    protected function beforeSave()
    {
        $this->text = preg_replace('~((?:https?|ftps?)://.*?)( |$)~iu',
            '<a href="\1">\1</a>\2', $this->text);
        return parent::beforeSave();
    }

2. That is it. Now, try saving a post containing a link. Create protected/controllers/TestController.php as follows:

    <?php
    class TestController extends CController
    {
        function actionIndex()
        {
            $post = new Post();
            $post->title = 'links test';
            $post->text = 'test http://www.yiiframework.com/ test';
            $post->save();
            print_r($post->text);
        }
    }

3. Run test/index. You should get the following:

How it works...

The beforeSave method is implemented in the CActiveRecord class and executed just before saving a model. By using a regular expression, we replace everything that looks like a URL with a link that uses this URL, and we call the parent implementation so that real events are raised properly. In order to prevent saving, you can return false.

There's more...

There are more event-like methods available, as shown in the following table:

- afterConstruct: Called after a model instance is created by the new operator.
- beforeDelete/afterDelete: Called before/after deleting a record.
- beforeFind/afterFind: Invoked before/after each record is instantiated by a find method.
- beforeSave/afterSave: Invoked before/after successfully saving a record.
- beforeValidate/afterValidate: Invoked before/after validation ends.

Further reading

In order to learn more about using event-like methods in Yii, you can refer to the following URLs:

http://www.yiiframework.com/doc/api/CActiveRecord/
http://www.yiiframework.com/doc/api/CModel

See also

The Using Yii events recipe
The Highlighting code with Yii recipe
The Automating timestamps recipe
The Setting up an author automatically recipe

Why MyBatis

Packt
10 Jul 2013
8 min read
Eliminates a lot of JDBC boilerplate code

Java has a Java DataBase Connectivity (JDBC) API to work with relational databases. But JDBC is a very low-level API, and we need to write a lot of code to perform database operations. Let us examine how we can implement simple insert and select operations on a STUDENTS table using plain JDBC. Assume that the STUDENTS table has STUD_ID, NAME, EMAIL, and DOB columns. The corresponding Student JavaBean is as follows:

    package com.mybatis3.domain;

    import java.util.Date;

    public class Student {
        private Integer studId;
        private String name;
        private String email;
        private Date dob;
        // setters and getters
    }

The following StudentService.java program implements the SELECT and INSERT operations on the STUDENTS table using JDBC:

    public Student findStudentById(int studId) {
        Student student = null;
        Connection conn = null;
        try {
            // obtain connection
            conn = getDatabaseConnection();
            String sql = "SELECT * FROM STUDENTS WHERE STUD_ID=?";
            // create PreparedStatement
            PreparedStatement pstmt = conn.prepareStatement(sql);
            // set input parameters
            pstmt.setInt(1, studId);
            ResultSet rs = pstmt.executeQuery();
            // fetch results from database and populate into Java objects
            if (rs.next()) {
                student = new Student();
                student.setStudId(rs.getInt("stud_id"));
                student.setName(rs.getString("name"));
                student.setEmail(rs.getString("email"));
                student.setDob(rs.getDate("dob"));
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        } finally {
            // close connection
            if (conn != null) {
                try {
                    conn.close();
                } catch (SQLException e) { }
            }
        }
        return student;
    }

    public void createStudent(Student student) {
        Connection conn = null;
        try {
            // obtain connection
            conn = getDatabaseConnection();
            String sql = "INSERT INTO STUDENTS(STUD_ID,NAME,EMAIL,DOB) VALUES(?,?,?,?)";
            // create a PreparedStatement
            PreparedStatement pstmt = conn.prepareStatement(sql);
            // set input parameters
            pstmt.setInt(1, student.getStudId());
            pstmt.setString(2, student.getName());
            pstmt.setString(3, student.getEmail());
            pstmt.setDate(4, new java.sql.Date(student.getDob().getTime()));
            pstmt.executeUpdate();
        } catch (SQLException e) {
            throw new RuntimeException(e);
        } finally {
            // close connection
            if (conn != null) {
                try {
                    conn.close();
                } catch (SQLException e) { }
            }
        }
    }

    protected Connection getDatabaseConnection() throws SQLException {
        try {
            Class.forName("com.mysql.jdbc.Driver");
            return DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "admin");
        } catch (SQLException e) {
            throw e;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

There is a lot of duplicate code in each of the preceding methods, for creating a connection, creating a statement, setting input parameters, and closing the resources, such as the connection, statement, and result set. MyBatis abstracts all these common tasks so that the developer can focus on the really important aspects, such as preparing the SQL statement that needs to be executed and passing the input data as Java objects. In addition to this, MyBatis automates the process of setting the query parameters from the input Java object's properties and populates the Java objects with the SQL query results as well. Now let us see how we can implement the preceding methods using MyBatis:

1. Configure the queries in a SQL Mapper config file, say StudentMapper.xml:
    <select id="findStudentById" parameterType="int" resultType="Student">
        SELECT STUD_ID AS studId, NAME, EMAIL, DOB
        FROM STUDENTS WHERE STUD_ID=#{Id}
    </select>
    <insert id="insertStudent" parameterType="Student">
        INSERT INTO STUDENTS(STUD_ID,NAME,EMAIL,DOB)
        VALUES(#{studId},#{name},#{email},#{dob})
    </insert>

2. Create a StudentMapper interface:

    public interface StudentMapper {
        Student findStudentById(Integer id);
        void insertStudent(Student student);
    }

3. In Java code, you can invoke these statements as follows:

    SqlSession session = getSqlSessionFactory().openSession();
    StudentMapper mapper = session.getMapper(StudentMapper.class);
    // Select Student by Id
    Student student = mapper.findStudentById(1);
    // To insert a Student record
    mapper.insertStudent(student);

That's it! You don't need to create the Connection and PreparedStatement, extract and set parameters, and close the connection by yourself for every database operation. Just configure the database connection properties and SQL statements, and MyBatis will take care of all the ground work. Don't worry about what SqlSessionFactory, SqlSession, and Mapper XML files are. Along with these, MyBatis provides many other features that simplify the implementation of persistence logic:

- It supports the mapping of complex SQL result set data to nested object graph structures
- It supports the mapping of one-to-one and one-to-many results to Java objects
- It supports building dynamic SQL queries based on the input data

Low learning curve

One of the primary reasons for MyBatis' popularity is that it is very simple to learn and use because it depends on your knowledge of Java and SQL. If developers are familiar with Java and SQL, they will find it fairly easy to get started with MyBatis.

Works well with legacy databases

Sometimes we may need to work with legacy databases that are not in a normalized form. It is possible, but difficult, to work with these kinds of legacy databases with fully-fledged ORM frameworks such as Hibernate, because they attempt to statically map Java objects to database tables. MyBatis works by mapping query results to Java objects; this makes it easy for MyBatis to work with legacy databases. You can create Java domain objects following the object-oriented model, execute queries against the legacy schema, and map the results to those objects.

Embraces SQL

Fully-fledged ORM frameworks such as Hibernate encourage working with entity objects and generate SQL queries under the hood. Because of this SQL generation, we may not be able to take advantage of database-specific features. Hibernate allows executing native SQL, but that might defeat the promise of database-independent persistence. The MyBatis framework embraces SQL instead of hiding it from developers. As MyBatis won't generate any SQL and developers are responsible for preparing the queries, you can take advantage of database-specific features and prepare optimized SQL queries. Also, working with stored procedures is supported by MyBatis.

Supports integration with Spring and Guice frameworks

MyBatis provides out-of-the-box integration support for the popular dependency injection frameworks Spring and Guice; this further simplifies working with MyBatis.

Supports integration with third-party cache libraries

MyBatis has inbuilt support for caching SELECT query results within the scope of SqlSession-level ResultSets. In addition to this, MyBatis also provides integration support for various third-party cache libraries, such as EHCache, OSCache, and Hazelcast.
Better performance

Performance is one of the key factors for the success of any software application. There are lots of things to consider for better performance, but for many applications, the persistence layer is key to overall system performance:

- MyBatis supports database connection pooling, which eliminates the cost of creating a database connection on demand for every request.
- MyBatis has an in-built cache mechanism that caches the results of SQL queries at the SqlSession level. That is, if you invoke the same mapped select query, MyBatis returns the cached result instead of querying the database again.
- MyBatis doesn't use proxying heavily and hence yields better performance compared to other ORM frameworks that use proxies extensively.

There are no one-size-fits-all solutions in software development. Each application has a different set of requirements, and we should choose our tools and frameworks based on application needs. In the previous section, we have seen various advantages of using MyBatis. But there will be cases where MyBatis may not be the ideal or best solution. If your application is driven by an object model and you want to generate SQL dynamically, MyBatis may not be a good fit for you. Also, if you want to have a transitive persistence mechanism (saving the parent object should persist associated child objects as well) for your application, Hibernate will be better suited for it.

Installing and configuring MyBatis

We are assuming that JDK 1.6+ and a MySQL 5 database server have been installed on your system. The installation process of JDK and MySQL is outside the scope of this article. At the time of writing this article, the latest version of MyBatis is MyBatis 3.2.2. Even though it is not mandatory to use IDEs, such as Eclipse, NetBeans IDE, or IntelliJ IDEA, for coding, they greatly simplify development with features such as handy autocompletion, refactoring, and debugging. You can use any of your favorite IDEs for this purpose. This section explains how to develop a simple Java project using MyBatis:

- By creating a STUDENTS table and inserting sample data
- By creating a Java project and adding mybatis-3.2.2.jar to the classpath
- By creating the mybatis-config.xml and StudentMapper.xml configuration files
- By creating the MyBatisSqlSessionFactory singleton class
- By creating the StudentMapper interface and the StudentService classes
- By creating a JUnit test for testing StudentService

Summary

In this article, we discussed MyBatis and the advantages of using MyBatis instead of plain JDBC for database access.

Optimizing Performance

Packt
02 Jul 2013
14 min read
Improving relevance and Quality Score

AdWords rewards advertisers who choose relevant keywords and write compelling ads with good Quality Scores. The better your Quality Scores, the less you'll need to pay for each click, resulting in more profit for you. This ecosystem evolved to benefit users, Google, and advertisers. If the ads on Google were irrelevant and of poor quality, users would get frustrated and not click on them, and Google would lose revenue. From an advertiser's perspective, when users click on irrelevant ads, they tend to leave your website, costing you money and not contributing to your bottom line. AdWords was designed to encourage high-quality ads, and as an advertiser you'll reap many benefits from optimizing them to improve relevance.

Getting ready

First, check your Quality Scores to identify low-quality keywords to focus on:

1. Go to the Campaigns tab.
2. Click on the Keywords tab.
3. Go to Columns and choose Customize columns.
4. From the Attributes section, choose Qual. score.
5. Click on Apply and you will see an extra column with your Quality Scores.
6. In your Keywords tab, sort the Qual. score column to review low Quality Score keywords.

Generally, a Quality Score of 1 to 3 is considered low, 4 to 6 is average with room for improvement, 7 to 9 is good, and 10 is considered great. Another way you can identify low-quality keywords is with filters. Create a keyword filter to see all keywords that are below a certain Quality Score. Download this report to have an easy-to-refer-to summary of all the keywords you'll need to focus on.

How to do it...

To improve your Quality Scores, follow these 10 tips:

1. Start with low Quality Score keywords that get the most impressions. This is where you'll have the biggest impact.
2. Re-organize your keywords into more tightly themed ad groups. If a keyword has a low Quality Score, try moving it to its own ad group with more specific ad text and its own negative keywords.
3. Your broad match keywords may be getting expanded to irrelevant variations. Try changing them to a more specific match type.
4. Add negative keywords to eliminate irrelevant impressions and increase your CTR. For example, add "free" as a negative keyword to eliminate someone looking for free products and services online.
5. Run a search terms report to see what queries are triggering clicks and get new negative keyword ideas.
6. Some of your low-quality keywords may not be relevant to your website. If a keyword has a very low Quality Score and rarely shows, it could be negatively impacting the rest of your account. Consider deleting it.
7. Write new ads for your low Quality Score keywords, placing each keyword in your ad text, ideally in your headline. Test multiple ad versions to see which one resonates better with your customers. Experiment with different calls-to-action, promotions, and ways to describe the unique benefits of your products and services.
8. Pause the lower performing ads in each ad group, if you are testing multiple variations, to ensure that ads getting a better CTR show more often.
9. Try implementing dynamic keyword insertion to have AdWords automatically insert your keywords into the ad titles or description lines.
10. Choose more specific landing pages. Your landing page should be relevant to your keywords and contain your keywords on the page. If it does not, consider creating new landing pages for your most important keywords.

How it works...
Quality Score is a measure of relevance and is calculated by taking into account the following factors:

- Your keyword's CTR: Your CTR is like an online voting system; people in the search auction vote on how relevant your ads are with their clicks.
- Your display URL's CTR: Your display URL's past CTR affects your Quality Scores.
- How relevant your keywords are: Some keywords you choose will be more relevant to your business than others. If you sell snowboards but would like to run on a keyword like "snow", a generic term that's not as relevant to your business, you will receive a much lower Quality Score. Pick specific keywords that clearly describe your products and stay away from general keywords that could apply to many different businesses.
- The relevance of your ads to your keywords: Your ads need to include your keywords in the ad text. If you have too many keywords for them all to be reflected in your ad copy, create additional, smaller ad groups. When a searched keyword is included in an ad text, that term is highlighted by Google in your ad, helping it stand out even more on the Google search results page.
- Landing page quality: The keywords you choose should be included in your ad text and further mirrored on your landing page. In addition to your landing page being relevant to your keywords, it also needs to be transparent and easy to navigate.
- Historical account performance: Advertisers who continue to choose poor-quality keywords will receive low Quality Scores when adding new keywords. This system helps Google discourage advertisers who continue to choose irrelevant keywords and encourage advertisers who create relevant, quality keywords and ads.
- Performance in the regions you are targeting: The regions you target via your campaign settings page will affect your Quality Scores.
- Performance on the devices you are targeting: You may get different Quality Scores on mobile and tablet devices if your keywords perform differently depending on the device.

Quality Score is dynamic and is calculated every time a search triggers your ad. In order to achieve better Quality Scores, you'll need to focus on tying together all of the various elements that comprise Quality Score. Increasing relevance helps you achieve a better ad rank and pay less for each click. The Quality Score algorithm is designed to reward relevancy and encourage advertisers to create high-quality accounts, which will in turn help you achieve better ROI with AdWords.

There's more...

The more general your keywords are, the more difficult it will be to obtain a high Quality Score for them, even after following all of the recommended AdWords best practices. In such cases, you'll need to weigh whether the lower Quality Score is worth the traffic and conversions you get from these keywords. Keep in mind that if you continue to choose low-quality keywords, this will hurt your overall account performance.

Improving ad rank

Your ad position is going to heavily impact visibility and traffic, with the top-ranked ads receiving the most clicks. Obviously, the more competitive your keywords are, the more costly it will be to have your ads show in the #1 spot. However, there are specific short- and long-term strategies that will help you obtain the best possible ad rank.
Getting ready

First, isolate the keywords that are not ranked optimally:

- Identify keywords that are not showing on the first page of Google's search results.
- If you have a specific ad position in mind, use filters in your Keywords tab to see which keywords are not meeting this criterion.
- Quickly diagnose your keywords to figure out whether they are showing or are restricted by Quality Scores and bids. On your Keywords tab, click on Keyword details and select Diagnose keywords.

How to do it...

To improve your ad rank, you can:

- Increase your bid
- Improve your Quality Score

Increasing your bids is the easy, short-term fix. However, continuing to increase how much you spend on each click when your ad rank slips is not going to be profitable in the long run. The long-term strategy for improving ad position is to raise your Quality Scores. To improve Quality Score, start with the following:

- Refine your campaign structure, breaking out related keywords into their own ad groups, which will help you write more relevant ads.
- Refine ads with more compelling ad copy, using keywords in the ad text.
- Pause lower-CTR ads if you are running multiple ad variations.
- Add negative keywords to weed out impressions that are not relevant and are weighing down your CTR.

How it works...

Your ad rank determines your ad position, or where your ads show in relation to other advertisers. The ad rank formula consists of your Quality Score and your bid:

Ad Rank = Quality Score x Max CPC

For example (with invented numbers), a keyword with a Quality Score of 8 and a $1.00 maximum bid has an ad rank of 8.0, outranking a competitor who bids $1.50 with a Quality Score of 4 (ad rank 6.0) while paying less per click.

Ad rank is calculated each time your ad enters the ad auction. This means that for each new query, your ads could appear in a different position.

There's more...

The higher your Quality Score, the less you'll need to bid to maintain your ad rank. This strategy helps AdWords ensure high-quality ads on Google.com and encourages advertisers to optimize their accounts.

Changing keyword match types

Keyword match types control who sees your ads and how the keywords you have chosen are expanded to match other relevant queries. Using too many of your keywords in the most restrictive match types can limit your traffic, while using too many broad keywords can generate some or a lot of irrelevant clicks.

Getting ready

Determine which keywords you might want to change match types for. Here are a couple of common edits advertisers make:

- Broad match keywords with low Quality Scores and no conversions: change to phrase or exact match to restrict variations.
- Exact match keywords with no impressions: change to a more general match type to broaden reach.

How to do it...

To change a single keyword's match type:

1. Go to the Campaigns tab.
2. Click on the Keywords tab, or click on a specific campaign and ad group first.
3. In your keyword table, click on the keyword you'd like to edit. Before you can proceed, you might need to agree to the system warning by clicking on Yes, I understand. The system warns you that if you edit a keyword, it will be deleted and treated as a new keyword in AdWords. You can check the Don't show this message again checkbox so you don't have to see this warning each time you edit a keyword.
4. Next, you'll be able to choose a different match type from the drop-down menu. In this screenshot, we are choosing to change a broad keyword to a more specific match type.
5. Click on Save.

To change match types for multiple keywords:

1. From your Keywords tab, check all of the keywords you'd like to edit.
2. From the Edit drop-down menu, choose Change match type.
3. Choose what you'd like to change your match type from and to.
4. Since changing a match type deletes the old keyword and creates a new one, you have the option to create duplicate versions of the keywords you have selected and add them in the new match types. To use that option, check Duplicate keywords and change match type in duplicates.
5. You can preview your changes before they go live by clicking on Preview changes.
6. Click on Make changes.

How it works...

Changing a keyword's match type deletes the old keyword and creates a brand new keyword in your account. It also resets the keyword's history to 0, but performance data will still be available for all deleted keywords.

Scheduling ads to run during key days and times

Many advertisers choose to run AdWords campaigns only during hours when they have customer support available. If you have a limited budget, you might want to focus your ad budget on the days and times your customers are most likely to be looking for you.

Getting ready

Determine if ad scheduling is necessary and appropriate for your business:

- Advertisers that may benefit from this include businesses that operate primarily during specific hours, for example, a website with customer support available to take calls during business hours only, or a pizza delivery service that only delivers in the evenings.
- Review performance by day and hour of day, keeping in mind that you will see fewer clicks and impressions during less busy times, so you have to focus on conversion rates and CPA instead. Some advertisers get great conversion rates during off-peak hours, late at night and in the early mornings, when fewer advertisers are competing in the ad auction.
- Keep in mind how your customers interact with you. If you rely on calls and only have customer support during specific hours, make sure your ads are focused on the times when you have the proper support available.

How to do it...

To enable ad scheduling:

1. Go to the Campaigns tab.
2. Click on the specific campaign you'd like to edit.
3. Go to the Settings tab.
4. Select Ad schedule.
5. Click on Edit ad schedule.
6. Click on + Create custom schedule.
7. From the drop-down menu, choose to create a schedule for all days, Monday through Friday, or specific days of the week, and then set your hours.
8. Click on + Add to add additional parameters.
9. Click on Save.

How it works...

Ad scheduling helps you control when your ads appear to potential customers. Ad scheduling is set at the campaign level, which means that it applies to all keywords and ads within a single campaign. By default, AdWords campaigns are set to run all days of the week and all hours of the day.

There's more...

When you set up ad scheduling, keep in mind your account's time zone. You can find out your time zone by going to My Account | Preferences. AdWords will also reference your time zone as you create a custom schedule for each campaign. You cannot change your time zone.

Expanding your keyword list

Expanding your keywords will be one of your main strategies to increase clicks as well as conversions. Just as markets evolve and search patterns change, your keywords also need to be updated in order not to become stagnant. Here we will discuss several tools you can use to build up and refresh your keyword list.

Getting ready

Review your website and compare your list of products and services to your AdWords account:

- Are your current keywords covering all of the categories you specialize in?
- Are there other ways to describe some of your key offerings?
- Who are your main competitors, and are they doing PPC?

How to do it...

To expand your keyword list, try one of the following strategies.
Automated keyword suggestions

To see automated keyword ideas relevant to your website, follow these steps:

1. Click on the Campaigns tab.
2. Go into a specific campaign and ad group.
3. Click on + Add keywords above your ad group's current keyword summary. AdWords will suggest new sample keywords, based on a scan of your website, grouped into related categories.
4. Click to expand each category and review the suggested keywords. If you like a keyword, click on Add to move it to the Add keywords box. Do not simply add all of the automated suggestions, as not all of them will be specific enough. You as a business owner know your audience best and should pick and choose only the keywords that are the most relevant. Make sure that you are not adding keywords that may already be present in your other campaigns or ad groups.
5. Click on Save after adding all of the relevant keywords.

Search terms report

Review your search terms report regularly and add any relevant keywords that resulted in clicks and conversions. Click on Add as keyword after viewing your search terms to add them to your account.

Competitor keywords

Use websites such as spyfu.com to see which keywords your competitors' ads are appearing on and to download their keyword lists. Enter a competitor's URL into the search box to uncover profitable keywords you missed. You can download a competitor's full keyword list, sort and filter it, or export it to an AdWords-friendly format. The tool can even organize a domain's keywords into targeted ad groups, so you have less manual work to do.

Google's keyword tool

In addition to entering your own domain into Google's keyword tool, try typing in a competitor's website and see what keywords are being recommended.

How it works...

Adding new relevant keywords to your AdWords account will help drive more impressions and clicks. With new and unique keywords, you can capitalize on previously untapped opportunities to drive new leads and sales.

Hubs

Packt
28 Jun 2013
8 min read
Moving up one level

While PersistentConnection seems very easy to work with, it is the lowest level in SignalR. It does provide the perfect abstraction for keeping a connection open between a client and a server, but that's just about all it provides. Working with different operations is not far from how you would deal with things in a regular socket connection, where you basically have to parse whatever is coming from a client and figure out what operation is being asked for based on the input. SignalR provides a higher level of abstraction that removes this need, so you can write your server-side code in a more intuitive manner. In SignalR, this higher level of abstraction is called a Hub.

Basically, a Hub represents an abstraction that allows you to write classes with methods that take different parameters, as you would with any API in your application, and then makes it completely transparent on the client, at least for JavaScript. This resembles a concept called Remote Procedure Call (RPC), with many incarnations of it out there.

For our chat application at this stage, we basically just want to be able to send a message from a client to the server and have it send the message to all of the other clients connected. To do this, we will now move away from the PersistentConnection and introduce a new class called Hub using the following steps:

1. First, start off by deleting the ChatConnection class from your Web project.
2. Now we want to add a Hub implementation instead. Right-click on the SignalRChat project and select Add | New Item. In the dialog, choose Class and give it the name Chat.cs. This is the class that will represent our Hub.
3. Make it inherit from Hub:

    public class Chat : Hub

4. Add the necessary import statement at the top of the file:

    using Microsoft.AspNet.SignalR.Hubs;

5. In the class we will add a simple method that the clients will call to send a message. We call the method Send and it takes one parameter, a string which contains the message being sent by the client:

    public void Send(string message) { }

6. From the base class Hub, we get a few things that we can use. For now we'll be using the Clients property to broadcast to all other clients connected to the Hub. On the Clients property, you'll find an All property which is dynamic; on it we can call anything, and the client just has to subscribe to the method we call, if the client is interested.

It is possible to change the name of the Hub so that it is not the same as the class name. An attribute called HubName() can be placed in front of the class to give it a new name. The attribute takes one parameter: the name you want for your Hub. Similarly, for methods inside your Hub, you can use an attribute called HubMethodName() to give a method a different name.

7. The next thing we need to do is to go into the Global.asax.cs file and make some changes. Firstly, we remove the .MapConnection(…) line and replace it with a .MapHubs() line. This will make all Hubs in your application automatically accessible from a default URL. All Hubs in the application will be mapped to /signalr/<name of hub>; more concretely, the path will be http://<your-site>:port/signalr/<name of hub>. We're going with the defaults for now. That should cover the needs of the server-side code.

Moving into the JavaScript/HTML part of things, SignalR comes with a JavaScript proxy generator that can generate JavaScript proxies from your Hubs mapped using .MapHubs().
Moving into the JavaScript/HTML part of things, SignalR comes with a JavaScript proxy generator that can generate JavaScript proxies from your Hubs mapped using .MapHubs(). This is also subject to the same default URL, but will follow the configuration given to .MapHubs(). We will need to include a script reference in the HTML code right after the line that references the SignalR JavaScript file. We add the following:

    <script src="/signalr/hubs" type="text/javascript"></script>

This will include the generated proxies for our JavaScript client. What this means is that whatever is exposed on a Hub gets generated for us, and we can start using it straight away. Before we get started with the concrete implementation for our web client, we can remove all of the custom code we wrote for PersistentConnection altogether.

We then want to get to our proxy and work with it. It sits on the connection object that SignalR adds to jQuery. So, for us, that means an object called chat will be there. On the chat object sit two important properties: one representing the client functions that get invoked when the server "calls" something on the client, and a second one representing the server and all of the functionality that we can call from the client.

Let's start by hooking up the client and its methods. Earlier, in the Hub sitting on the server, we implemented a call to addMessage() with the message. The matching handler can be added to the client property of the chat Hub instance. Basically, whenever the server calls that method, our client counterpart will be called.

Now what we need to do is start the Hub and print to the chat window when we are connected:

    $.connection.hub.start().done(function() {
        $("#chatWindow").val("Connected\n");
    });

Then we need to hook up the click event on the button and call the server to send messages. Again, we use the server property sitting on the chat hub instance in the client, which corresponds to a method on the Hub:

    $("#sendButton").click(function() {
        chat.server.send($("#messageTextBox").val());
        $("#messageTextBox").val("");
    });

You should now have something that looks as follows:

You may have noticed that the send function on the client is in camelCase while the server-side C# code has it in PascalCase. SignalR automatically translates between the two case types. In general, camelCase is the preferred and most broadly used casing style in JavaScript, while Pascal case is the most used in C#. You should now have a full sample in HTML/JavaScript that looks like the following screenshot:
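Since the addMessage handler itself appears only as a screenshot in the original, here is a sketch of what the complete client-side wiring could look like; the body of addMessage is an assumption (appending to the chatWindow textarea used elsewhere in the sample):

    // Proxy generated by /signalr/hubs for our Chat hub.
    var chat = $.connection.chat;

    // Invoked whenever the server calls Clients.All.addMessage(...).
    chat.client.addMessage = function (message) {
        var chatWindow = $("#chatWindow");
        chatWindow.val(chatWindow.val() + message + "\n");
    };

    $.connection.hub.start().done(function () {
        $("#chatWindow").val("Connected\n");
    });

    $("#sendButton").click(function () {
        chat.server.send($("#messageTextBox").val());
        $("#messageTextBox").val("");
    });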
Running it should produce the same result as before, with the exception of the .NET terminal client, which also needs alterations. In fact, let's just get rid of the code inside Program.cs and start over. The client API is a bit rougher in C#; this comes from the more statically typed nature of C#. Sure, it is possible—technically—to get pretty close to what has been done in JavaScript, but it hasn't been a focal point for the SignalR team.

Basically, we need a different connection than the PersistentConnection class; we'll be needing a HubConnection class. From the HubConnection class we can create a proxy for the chat Hub. As with JavaScript, we can hook up client-side methods that get invoked when the server calls any client, although, as mentioned, not as elegantly as in JavaScript. On the chat Hub instance, we get a method called On(), which can be used to specify a client-side method corresponding to the call from the server. So we set addMessage to point to a method which, in our case, is for now just an inline lambda expression.

Now we need, as with PersistentConnection, to start the connection and wait until it's connected:

    hubConnection.Start().Wait();

Now we can get user input and send it off to the server. Again, as with client methods called from the server, we have a slightly different approach than with JavaScript; we call the Invoke method, giving it the name of the method to call on the server and any arguments. The Invoke() method takes a params argument, so you can specify any number of arguments, which will then be sent to the server. The finished result should look something like the following screenshot, and now works in full correspondence with the JavaScript chat. (A consolidated sketch of this console client is given after the resource list below.)

Summary

Exposing our functionality through Hubs makes it easier to consume on the client, at least for JavaScript-based clients, due to the proxy generation. It basically brings the Hub to the client as if it were implemented on the client. With the Hub you also get the ability to call the client from the server in a more natural manner. One of the things often important for applications is the ability to filter out messages so you only get messages relevant for your context.

Resources for Article:

Further resources on this subject: Working with Microsoft Dynamics AX and .NET: Part 1 [Article] Working with Microsoft Dynamics AX and .NET: Part 2 [Article] Deploying .NET-based Applications on to Microsoft Windows CE Enabled Smart Devices [Article]
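As promised above, here is a consolidated sketch of the .NET console client described in this article. It assumes the SignalR 1.x .NET client package (Microsoft.AspNet.SignalR.Client); the hub name and the addMessage subscription match the server code, while the URL is a placeholder:

    using System;
    using Microsoft.AspNet.SignalR.Client.Hubs;

    class Program
    {
        static void Main(string[] args)
        {
            // Point the connection at the site hosting the Hubs.
            var hubConnection = new HubConnection("http://localhost:8080/");

            // Create a proxy for the Chat hub.
            var chat = hubConnection.CreateHubProxy("Chat");

            // Client-side counterpart of addMessage, as an inline lambda.
            chat.On<string>("addMessage", message => Console.WriteLine(message));

            // Start the connection and wait until it's connected.
            hubConnection.Start().Wait();

            // Read lines from the console and invoke Send on the Hub.
            string line;
            while ((line = Console.ReadLine()) != null)
            {
                chat.Invoke("Send", line);
            }
        }
    }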

Python Libraries for Geospatial Development

Packt
17 Jun 2013
14 min read
(For more resources related to this topic, see here.)

Reading and writing geospatial data

While you could in theory write your own parser to read a particular geospatial data format, it is much easier to use an existing Python library to do this. We will look at two popular libraries for reading and writing geospatial data: GDAL and OGR.

GDAL/OGR

Unfortunately, the naming of these two libraries is rather confusing. The Geospatial Data Abstraction Library (GDAL) was originally just a library for working with raster geospatial data, while the separate OGR library was intended to work with vector data. However, the two libraries are now partially merged, and are generally downloaded and installed together under the combined name of "GDAL". To avoid confusion, we will call this combined library GDAL/OGR and use "GDAL" to refer to just the raster translation library.

A default installation of GDAL supports reading 116 different raster file formats, and writing to 58 different formats. OGR by default supports reading 56 different vector file formats, and writing to 30 formats. This makes GDAL/OGR one of the most powerful geospatial data translators available, and certainly the most useful freely-available library for reading and writing geospatial data.

GDAL design

GDAL uses the following data model for describing raster geospatial data. Let's take a look at the various parts of this model:

A dataset holds all the raster data, in the form of a collection of raster "bands", along with information that is common to all these bands. A dataset normally represents the contents of a single file.

A raster band represents a band, channel, or layer within the image. For example, RGB image data would normally have separate bands for the red, green, and blue components of the image.

The raster size specifies the overall width and height of the image, in pixels.

The georeferencing transform converts from (x, y) raster coordinates into georeferenced coordinates—that is, coordinates on the surface of the earth. There are two types of georeferencing transforms supported by GDAL: affine transformations and ground control points. An affine transformation is a mathematical formula allowing operations such as offsetting, scaling, and shearing to be applied to the raster data. More than one of these operations can be applied at once; this allows you to perform sophisticated transforms such as rotations. Affine transformations are sometimes referred to as linear transformations. Ground Control Points (GCPs) relate one or more positions within the raster to their equivalent georeferenced coordinates, as shown in the following figure. Note that GDAL does not translate coordinates using GCPs—that is left up to the application, and generally involves complex mathematical functions to perform the transformation.

The coordinate system describes the georeferenced coordinates produced by the georeferencing transform. The coordinate system includes the projection and datum, as well as the units and scale used by the raster data.

The metadata contains additional information about the dataset as a whole.

Each raster band contains the following (among other things):

The band raster size: This is the size (number of pixels across and number of lines high) for the data within the band. This may be the same as the raster size for the overall dataset, in which case the dataset is at full resolution, or the band's data may need to be scaled to match the dataset.

Some band metadata providing extra information specific to this band.
A color table describing how pixel values are translated into colors.

The raster data itself.

GDAL provides a number of drivers which allow you to read (and sometimes write) various types of raster geospatial data. When reading a file, GDAL selects a suitable driver automatically based on the type of data; when writing, you first select the driver and then tell the driver to create the new dataset you want to write to.

GDAL example code

A Digital Elevation Model (DEM) file contains height values. In the following example program, we use GDAL to calculate the average of the height values contained in a sample DEM file. In this case, we use a DEM file downloaded from the GLOBE elevation dataset:

    from osgeo import gdal, gdalconst
    import struct

    dataset = gdal.Open("data/e10g")
    band = dataset.GetRasterBand(1)
    fmt = "<" + ("h" * band.XSize)

    totHeight = 0
    for y in range(band.YSize):
        scanline = band.ReadRaster(0, y, band.XSize, 1,
                                   band.XSize, 1, band.DataType)
        values = struct.unpack(fmt, scanline)
        for value in values:
            if value == -500:
                # Special height value for the sea -> ignore.
                continue
            totHeight = totHeight + value

    average = totHeight / (band.XSize * band.YSize)
    print "Average height =", average

As you can see, this program obtains the single raster band from the DEM file, and then reads through it one scanline at a time. We then use the struct standard Python library module to read the individual height values out of the scanline. Because the GLOBE dataset uses a special height value of -500 to represent the ocean, we exclude these values from our calculations. Finally, we use the remaining height values to calculate the average height, in meters, over the entire DEM data file.

OGR design

OGR uses the following model for working with vector-based geospatial data. Let's take a look at this design in more detail:

The data source represents the file you are working with—though it doesn't have to be a file. It could just as easily be a URL or some other source of data.

The data source has one or more layers, representing sets of related data. For example, a single data source representing a country may contain a "terrain" layer, a "contour lines" layer, a "roads" layer, and a "city boundaries" layer. Other data sources may consist of just one layer.

Each layer has a spatial reference and a list of features. The spatial reference specifies the projection and datum used by the layer's data.

A feature corresponds to some significant element within the layer. For example, a feature might represent a state, a city, a road, an island, and so on. Each feature has a list of attributes and a geometry.

The attributes provide additional meta-information about the feature. For example, an attribute might provide the name for a city's feature, its population, or the feature's unique ID used to retrieve additional information about the feature from an external database.

Finally, the geometry describes the physical shape or location of the feature. Geometries are recursive data structures that can themselves contain sub-geometries—for example, a "country" feature might consist of a geometry that encompasses several islands, each represented by a sub-geometry within the main "country" geometry.

The geometry design within OGR is based on the Open Geospatial Consortium's "Simple Features" model for representing geospatial geometries. For more information, see http://www.opengeospatial.org/standards/sfa.
Like GDAL, OGR also provides a number of drivers which allow you to read (and sometimes write) various types of vector-based geospatial data. When reading a file, OGR selects a suitable driver automatically; when writing, you first select the driver and then tell the driver to create the new data source to write to.

OGR example code

The following example program uses OGR to read through the contents of a shapefile, printing out the value of the NAME attribute for each feature along with the geometry type:

    from osgeo import ogr

    shapefile = ogr.Open("TM_WORLD_BORDERS-0.3.shp")
    layer = shapefile.GetLayer(0)

    for i in range(layer.GetFeatureCount()):
        feature = layer.GetFeature(i)
        name = feature.GetField("NAME")
        geometry = feature.GetGeometryRef()
        print i, name, geometry.GetGeometryName()

Documentation

GDAL and OGR are well documented, but with a catch for Python programmers. The GDAL/OGR library and associated command-line tools are all written in C and C++. Bindings are available which allow access from a variety of other languages, including Python, but the documentation is all written for the C++ version of the libraries. This can make reading the documentation rather challenging—not only are all the method signatures written in C++, but the Python bindings have changed many of the method and class names to make them more "pythonic".

Fortunately, the Python libraries are largely self-documenting, thanks to all the docstrings embedded in the Python bindings themselves. This means you can explore the documentation using tools such as Python's built-in pydoc utility, which can be run from the command line like this:

    % pydoc -g osgeo

This will open up a GUI window allowing you to read the documentation using a web browser. Alternatively, if you want to find out about a single method or class, you can use Python's built-in help() command from the Python command line, like this:

    >>> import osgeo.ogr
    >>> help(osgeo.ogr.DataSource.CopyLayer)

Not all the methods are documented, so you may need to refer to the C++ docs on the GDAL website for more information, and some of the docstrings are copied directly from the C++ documentation—but in general the documentation for GDAL/OGR is excellent, and should allow you to quickly come up to speed using this library.

Availability

GDAL/OGR runs on modern Unix machines, including Linux and Mac OS X, as well as most versions of Microsoft Windows. The main website for GDAL can be found at http://gdal.org, and the main website for OGR is at http://gdal.org/ogr. To download GDAL/OGR, follow the Downloads link on the main GDAL website.

Windows users may find the FWTools package useful, as it provides a wide range of geospatial software for win32 machines, including GDAL/OGR and its Python bindings. FWTools can be found at http://fwtools.maptools.org. For those running Mac OS X, prebuilt binaries can be obtained from http://www.kyngchaos.com/software/frameworks.

Make sure that you install GDAL Version 1.9 or later, as you will need this version to work through the examples in this book. Being an open source package, the complete source code for GDAL/OGR is available from the website, so you can compile it yourself. Most people, however, will simply want to use a prebuilt binary version.

Dealing with projections

One of the challenges of working with geospatial data is that geodetic locations (points on the Earth's surface) are mapped into a two-dimensional Cartesian plane using a cartographic projection.
Whenever you have some geospatial data, you need to know which projection that data uses. You also need to know the datum (model of the Earth's shape) assumed by the data. A common challenge when dealing with geospatial data is that you have to convert data from one projection/datum to another. Fortunately, there is a Python library, pyproj, which makes this task easy.

pyproj

pyproj is a Python "wrapper" around another library called PROJ.4. "PROJ.4" is an abbreviation for Version 4 of the PROJ library. PROJ was originally written by the US Geological Survey for dealing with map projections, and has been widely used in geospatial software for many years. The pyproj library makes it possible to access the functionality of PROJ.4 from within your Python programs.

Design

pyproj consists of just two classes: Proj and Geod. Proj converts from longitude and latitude values to native map (x, y) coordinates, and vice versa. Geod performs various Great Circle distance and angle calculations. Both are built on top of the PROJ.4 library. Let's take a closer look at these two classes.

Proj

Proj is a cartographic transformation class, allowing you to convert geographic coordinates (that is, latitude and longitude values) into cartographic coordinates (x, y values, by default in meters) and vice versa. When you create a new Proj instance, you specify the projection, datum, and other values used to describe how the projection is to be done. For example, to use the Transverse Mercator projection and the WGS84 ellipsoid, you would do the following:

    projection = pyproj.Proj(proj='tmerc', ellps='WGS84')

Once you have created a Proj instance, you can use it to convert a latitude and longitude to an (x, y) coordinate using the given projection. You can also use it to do an inverse projection—that is, converting from an (x, y) coordinate back into a latitude and longitude value again.

The helpful transform() function can be used to directly convert coordinates from one projection to another. You simply provide the starting coordinates, the Proj object that describes the starting coordinates' projection, and the desired ending projection. This can be very useful when converting coordinates, either singly or en masse.

Geod

Geod is a geodetic computation class, which allows you to perform various Great Circle calculations. We looked at Great Circle calculations earlier, when considering how to accurately calculate the distance between two points on the Earth's surface. The Geod class, however, can do more than this:

The fwd() method takes a starting point, an azimuth (angular direction), and a distance, and returns the ending point and the back azimuth (the angle from the end point back to the start point again).

The inv() method takes two coordinates and returns the forward and back azimuth as well as the distance between them.

The npts() method calculates the coordinates of a number of points spaced equidistantly along a geodesic line running from the start to the end point.

When you create a new Geod object, you specify the ellipsoid to use when performing the geodetic calculations. The ellipsoid can be selected from a number of predefined ellipsoids, or you can enter the parameters for the ellipsoid (equatorial radius, polar radius, and so on) directly.

Example code

The following example starts with a location specified using UTM zone 17 coordinates.
Using two Proj objects to define the UTM zone 17 and lat/long projections, it translates this location's coordinates into latitude and longitude values:

    import pyproj

    UTM_X = 565718.5235
    UTM_Y = 3980998.9244

    srcProj = pyproj.Proj(proj="utm", zone="17", ellps="clrk66", units="m")
    dstProj = pyproj.Proj(proj="longlat", ellps="WGS84", datum="WGS84")

    long, lat = pyproj.transform(srcProj, dstProj, UTM_X, UTM_Y)

    print "UTM zone 17 coordinate (%0.4f, %0.4f) = %0.4f, %0.4f" % (UTM_X, UTM_Y, lat, long)

Continuing on with this example, let's take the calculated lat/long values and, using a Geod object, calculate another point 10 kilometers northeast of that location:

    angle = 315    # 315 degrees = northeast.
    distance = 10000

    geod = pyproj.Geod(ellps="WGS84")
    long2, lat2, invAngle = geod.fwd(long, lat, angle, distance)

    print "%0.4f, %0.4f is 10km northeast of %0.4f, %0.4f" % (lat2, long2, lat, long)

Documentation

The documentation available on the pyproj website, and in the docs directory provided with the source code, is excellent as far as it goes. It describes how to use the various classes and methods, what they do and what parameters are required. However, the documentation is rather sparse when it comes to the parameters used when creating a new Proj object. As the documentation says:

A Proj class instance is initialized with proj map projection control parameter key/value pairs. The key/value pairs can either be passed in a dictionary, or as keyword arguments, or as a proj4 string (compatible with the proj command).

The documentation does provide a link to a website listing a number of standard map projections and their associated parameters, but understanding what these parameters mean generally requires you to delve into the PROJ documentation itself. The documentation for PROJ is dense and confusing, even more so because the main manual is written for PROJ Version 3, with addendums for later versions. Attempting to make sense of all this can be quite challenging.

Fortunately, in most cases you won't need to refer to the PROJ documentation at all. When working with geospatial data using GDAL or OGR, you can easily extract the projection as a "proj4 string" which can be passed directly to the Proj initializer. If you want to hardwire the projection, you can generally choose a projection and ellipsoid using the proj="..." and ellps="..." parameters, respectively. If you want to do more than this, though, you will need to refer to the PROJ documentation for more details. To find out more about PROJ, and to read the original documentation, you can find everything you need at: http://trac.osgeo.org/proj
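To illustrate that last point about proj4 strings, here is a short sketch (not from the book) that reads the projection from the world borders shapefile used earlier and hands it straight to pyproj:

    from osgeo import ogr
    import pyproj

    shapefile = ogr.Open("TM_WORLD_BORDERS-0.3.shp")
    layer = shapefile.GetLayer(0)

    # Ask OGR for the layer's projection as a PROJ.4 string...
    proj4_string = layer.GetSpatialRef().ExportToProj4()

    # ...and pass it directly to pyproj without decoding it ourselves.
    projection = pyproj.Proj(proj4_string)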

Creating a pop-up menu

Packt
14 Jun 2013
7 min read
(For more resources related to this topic, see here.)

How to do it...

Open the application model (Application.e4xmi) and go to Application | Windows | Trimmed Window | Controls | Perspective Stack | Perspective | Controls | PartSashContainer | Part (Code Snippets). Expand the Code Snippet part and right-click on the Menus node. Select Add child | Popup Menu. Set the ID of the pop-up menu to codesnippetapp.snippetlist.popupmenu.

Right-click on the newly added pop-up menu and select Add Child | DirectMenuItem. Set Label of the menu item as New Snippet. Click on the Class URI link. This opens the New Handler wizard. Click on the Browse button next to the Package textbox and select codesnippetapp.handlers from the list of packages displayed. Set Name as NewSnippetMenuHandler and click on the Finish button. The new class file is opened in the editor. Go back to the application model. Refer to the following screenshot:

Right-click on the Popup Menu node and add another pop-up menu item with the Delete label and the DeleteSnippetMenuHandler class.

Now we need to register this pop-up menu with the TableViewer class in the Code Snippets part. Open the class SnippetListView (you can find this class in the codesnippetapp.views package). We will have to register the pop-up menu using Menu Service. Add the EMenuService argument to the postConstruct method:

    @PostConstruct
    public void postConstruct(Composite parent, IEclipseContext ctx, EMenuService menuService)

Append the following code to the postConstruct method:

    menuService.registerContextMenu(snippetsList.getTable(), "codesnippetapp.snippetlist.popupmenu");

Run the application. Right-click in the TableViewer on the left-hand side. You should see a pop-up menu with two options: New Snippet and Delete.

How it works...

To add a pop-up menu, you first need to create a menu in the application model for the part in which you want to display the menu. In this recipe, we added a menu to the Code Snippets part. Then, you add menu items. In this recipe, we added two DirectMenuItems. For the main menu bar in the task of adding menu and toolbar buttons, we added HandledMenuItem, because we wanted to share the handler for the menu between the toolbar button and the menu item. However, in this case, we need only one implementation of the options in the pop-up menu, so we created DirectMenuItem. But, if you want to add keyboard shortcuts for the menu options, then you may want to create HandledMenuItem instead of DirectMenuItem. For each menu item, you set a class URI that is a handler class for the menu item.

The next step is to register this pop-up menu with a UI control. In our application, we want to associate this menu with the TableViewer class that displays a list of snippets. To register a menu with any UI control, you need to get an instance of EMenuService. We obtained this instance in the postConstruct method of SnippetListView using DI—we added the EMenuService argument to the postConstruct method. Then, we used registerContextMenu of EMenuService to associate the pop-up menu with the TableViewer class. registerContextMenu takes the UI control instance and the menu ID as arguments.

There's more...

The Delete option in our pop-up menu makes sense only when you click on a snippet. So, when you right-click on an area of the TableViewer that does not have any snippet at that location, the Delete option should not be displayed, only the New Snippet option. This can be done using core expressions.
You can find more information about core expressions at http://wiki.eclipse.org/Platform_Expression_Framework, and http://wiki.eclipse.org/Command_Core_Expressions.

We will use a core expression to decide if the Delete menu option should be displayed. We will add a mouse listener to the TableViewer class. If the mouse was clicked on a snippet, then we will add SnippetData to IEclipseContext with the snippet_at_mouse_click key. If there is no snippet at the location, then we will remove this key from IEclipseContext. Then, we will add a core expression to check if the snippet_at_mouse_click variable is of type codesnippetapp.data.SnippetData. We will then associate this core expression with the Delete menu item in the application model.

Adding a mouse listener to the TableViewer class

Create a static field in the SnippetListView class:

    private static String SNIPPET_AT_MOUSE_CLICK = "snippet_at_mouse_click";

Make the ctx argument of the postConstruct method final. Append the following code in the postConstruct method:

    //Add mouse listener to check if there is a snippet at mouse click
    snippetsList.getTable().addMouseListener(new MouseAdapter() {
        @Override
        public void mouseDown(MouseEvent e) {
            if (e.button == 1) //Ignore if left mouse button
                return;
            //Get snippet at the location of mouse click
            TableItem itemAtClick = snippetsList.getTable().getItem(new Point(e.x, e.y));
            if (itemAtClick != null) {
                //Add selected snippet to the context
                ctx.set(SNIPPET_AT_MOUSE_CLICK, itemAtClick.getData());
            } else {
                //No snippet at the mouse click. Remove the variable
                ctx.remove(SNIPPET_AT_MOUSE_CLICK);
            }
        }
    });

Creating the core expression

Carry out the following steps:

Open plugin.xml and go to the Dependencies tab. Add org.eclipse.core.expressions as a required plugin. Go to the Extensions tab. Add the org.eclipse.core.expressions.definitions extension. This will add a new definition. Change the ID of the definition to CodeSnippetApp.delete.snippet.expression. Right-click on the definition and select New | With. Change the name of the variable to snippet_at_mouse_click. This is the same variable name we set in the SnippetListView class. Right-click on the With node, and select the New | instanceof option. Set the value to codesnippetapp.data.SnippetData. This core expression will be true when the type of (instanceof) the snippet_at_mouse_click variable is codesnippetapp.data.SnippetData.

Click on plugin.xml and verify that the core expression definition is as follows:

    <extension point="org.eclipse.core.expressions.definitions">
        <definition id="CodeSnippetApp.delete.snippet.expression">
            <with variable="snippet_at_mouse_click">
                <instanceof value="codesnippetapp.data.SnippetData">
                </instanceof>
            </with>
        </definition>
    </extension>

Setting the core expression for the Menu Item

Open the application model (Application.e4xmi) and go to DirectMenuItem for the Delete pop-up menu. Right-click on the menu item and select Add child | VisibleWhen Core Expression. This will add a Core Expression child node. Click on the Core Expression node and then on the Find button next to the Expression Id textbox and select CodeSnippetApp.delete.snippet.expression from the list. This is the ID of the core expression definition we added in plugin.xml.

Run the application. When you right-click on the Snippets List view, which does not have any snippet at this point, you should see only the New Snippet menu option.

Summary

In this task, we created a pop-up menu that is displayed when you right-click in the snippets list.
If no snippet is selected at a location where you right-click, then it displays a pop-up menu with a single option to add a snippet. If there is a snippet at the location, then we display a menu that has options to delete the snippet and add a snippet. Resources for Article : Further resources on this subject: Installing Alfresco Software Development Kit (SDK) [Article] JBoss AS plug-in and the Eclipse Web Tools Platform [Article] Deployment of Reports with BIRT [Article]
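The recipe never shows the bodies of the two handler classes. As a rough sketch of what DeleteSnippetMenuHandler could look like, the injected context lookup mirrors the mouse listener above; the actual snippet-removal call is application-specific and appears only as a hypothetical comment:

    import org.eclipse.e4.core.contexts.IEclipseContext;
    import org.eclipse.e4.core.di.annotations.Execute;

    public class DeleteSnippetMenuHandler {

        @Execute
        public void execute(IEclipseContext ctx) {
            // The snippet placed into the context by the mouse listener.
            Object snippet = ctx.get("snippet_at_mouse_click");
            if (snippet != null) {
                // Hypothetical application call; remove the snippet
                // from the model and refresh the TableViewer here.
            }
        }
    }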

Top features you'll want to know about

Packt
12 Jun 2013
10 min read
(For more resources related to this topic, see here.)

1 – Track changes and production revisions (for Adobe Story Plus only)

It is important to keep track of any changes you or someone else may make to a document. It's easy to save over the previous version with the new one, but what if you want to compare the previous and current versions to one another? You are able to track any and all revisions through this feature. Called revision styles, all revisions become associated with a unique style for easier identification.

Track changes

Before moving to revisions, we need to know how to insert and track changes made to a document. This is how it is done:

When in the AUTHORING view, in the document, go to the Review tab in the top tool bar. Check Start Tracking Changes to enable it, and uncheck it to disable:

When it is checked, any new content you add will be in red text and highlighted:

There is a speech bubble on the right-hand side of the addition, which allows the person making the change to add a comment. Click on the icon to open the comment window:

When you place the cursor over the inserted change, a new bubble will appear telling you who made the change and when. On the far right-hand side, you can either accept or reject the change:

Production revisions

You have to be in the Authoring view in order to make a revision. Production revisions highlight certain pages where changes have been made. The script becomes locked and all changes are highlighted in the revision style you choose. On the title page, a note is inserted in the bottom-right corner giving the date of the last revision. This is also done in the footer of every page where there is a change. The color changes and borders will not be exported in a PDF.

Before starting a revision, make sure that you have done the following: act on all tracked changes in your document by accepting or rejecting them, and disable track changes once you have accepted or rejected all of them.

Now, after completing the preceding steps, follow these steps: Select Production | Start Revision. In the Active Revision drop-down, choose a revision style. This style will be used for the markups in the revision. Make sure that you haven't already used the chosen style for a previous revision document. Click Start Revision.

Creating a revision style

Follow these steps to create your revision style: Select Production | Manage Revisions. Click on the + icon. Enter a name for the style. The following options can be tailored according to your needs:

Revision Color: Used to choose a color from the color menu. This color will then be applied to all the revised text and the border of the individual pages that contain the revisions. The border color will not be displayed in a printed or exported document.

Mark: The default mark is displayed on the right of the revised content. You can change this mark by choosing any symbol of your liking.

Date: The revision date.

Revision Text Style: The chosen formatting option is used to display revised text.

Click Done and your new style will be available from now on.

Deleting or modifying existing revisions

Let's take a look at how we can delete or modify already existing revisions: Select the style that you want to delete or modify.
You can do either of the following: Click on the - sign to delete the style. To modify, simply edit its values and click Done.

Display options for revisions

Adobe Story also provides some display options for revisions; here's how we can set them up: Select Production | Manage Revisions. In Viewing Options, the following display options can be personalized according to your needs:

Show Markup For: The options are Select All or Active. This will let you choose whether you want to have all the markups shown for all revisions or just the active ones.

Mark Position: The mark you set in Revision Style is set to the right-hand side by default; you can also change its position.

Show Date In Script Header and Footer: If you do not want to display the date, disable this option.

Locking or unlocking scene numbers

When you lock scene numbers, you prevent the renumbering of existing scenes whenever a new scene is added during production revisions. When you do insert a new scene, Adobe Story will apply a number based on the scene preceding it. For example, if you add a new scene in between scenes 4 and 5, it will be numbered 4A. Here's how we can lock and unlock scene numbers: Select Production | Manage Scene Numbers. Select the Keep Existing Scene Number option to lock all current scenes. To unlock, deselect Keep Existing Scene Number.

Omitting or unomitting scenes

Adobe Story allows you to remove a scene without affecting the scene numbers remaining in the script. The word OMITTED will appear at the location of the scene you've chosen to omit. You can, at a later date, unomit the scene if you choose, and recover the content. To omit a scene, simply place your cursor on the scene and then select Production | Omit Scene. To unomit a scene, place your cursor on the omitted scene and then select Production | Unomit Scene.

Printing production revisions

If you want to print your revisions, it is easy to do so; just follow these steps: Select File | Print. Choose any one of the following options: Entire Script, All Changed Pages, Revision. To print in color, select the Print Revised Text In Color option.

Identifying the total number of revised pages

Here's how we can identify the total number of revised pages: Select Production | Manage Revisions. In Viewing Options, select All and click Done.

2 – Tagging

Along with the advent of the "cloud" concept, tagging individual words in content has become something of a norm in today's online society. Adobe Story has incorporated a similar system. With tagging, you can tag words and phrases in your scripts automatically, or manually by using the Tagging Panel option. For example, "boom" can be tagged as "sound effect".

Tagging panel

To open the panel, you must first be in the AUTHORING view. Select View | Tagging Panel. The panel will open on the right-hand side of the document. To add tags to the panel, enter the name of the tag in the field next to the Create button. To delete a tag from the tagging panel, select the tag and then click on the Delete this Tag link.

Tagging automatically

You must be in the online mode for the Autotagging feature to work. It will not work in the offline mode. The Autotagging feature is only available for English scripts. This is how it's done: Select File | Tagging | Start Autotagging. Or select it from the drop-down menu option in the Tagging panel. Once you enable Autotagging, the script will be locked.
You will have to wait until the process has completed before being able to edit the document; the following screenshot shows the message being displayed:

Tagging manually

Select View | Tagging Panel. Choose the word or phrase you would like to tag. If what you're choosing has already been tagged, it will be appended to the tag list for the word or phrase. Select a tag from Taglist in the Tagging panel. Do either of the following:

Select the Show In Bold option if you want the tagged words or phrases to be displayed in bold.

Select the Show Color option if you would prefer Story to apply the selected color to the tag (you can choose a color for each tag with a color palette on the right-hand side of the tag in the Taglist panel).

Finding words or phrases by their specific tag

Follow these steps to search for words or phrases with a specific tag: Disable visibility for all tags. Enable visibility for the tag that you want to search. To do this, simply click on the eye icon on the left-hand side of the tagged word. Use the arrow icons in the Tagging panel in order to navigate through the tags in the script. Only the visible tags will be shown.

Viewing tags associated with a word or phrase

To view tags associated with a word or a phrase, you can do either of the following:

Select the word/phrase. The tags associated with the word/phrase will be highlighted in the Tagging panel. Scroll through the panel in order to view the tags associated with it.

Move your mouse over the word/phrase. The information will be displayed in the tool tip. Hold Ctrl (Cmd on Mac) and double-click to view the associated tags.

Removing tags

Over the word you wish to edit, hold Ctrl (Cmd on Mac) and double-click to bring up the Applied Tags panel. Click on the Remove This Tag icon for the chosen tag. Click Close. To remove all the tags, select File | Tagging | Remove All Tags. To remove all the manual tags, select File | Tagging | Remove All Manual Tags. To remove all the auto tags, select File | Tagging | Remove Auto Tags.

3 – Application for iOS-based devices

Adobe Story has an application for iOS-based devices. This application is currently available only in English. It allows you to read and review Adobe Story scripts and documents. It does not support AV (Audio Visual) scripts, Multicolumn scripts, and TV scripts as of yet.

Logging in

Before you start, make sure you have registered yourself with Adobe Story using the web or desktop application. Use the same combination of e-mail address and password used on the full application with the iOS version. Accept the TOU before attempting to log in. If you want to log out, select Account and then select Log Out.

Viewing documents, scene outline, and scenes

The ten most recently read files will be displayed upon logging in to the Adobe Story application. To view all the documents, click Categories. To view the scene outline, select the script in the Recent Files or Categories view. To view the contents of a scene, select the scene in the scene outline. Use the arrow icons to move among the scenes. To view Notifications, in the Recent Files view, select Notifications. A list of notifications is displayed. Highlighted notifications are new ones.

Reviewing scripts

As long as you have author, co-author, or reviewer permissions, you will be able to review a script. Open the script and navigate to the scene. Do one of the following: Double-click to select the content that you want to comment on. Click on Comment, or on the Add Comment button.
To comment on content that has already been commented on, enter your comment in the Write New Comment textbox. To navigate comments, use the arrow icons. Click Post.

Viewing or deleting comments

In the scene containing the comments, select Comments. The comment list is displayed. The paragraph containing the comment is highlighted when you select a comment in the list. Select Delete after clicking on the desired comment.

Summary

In this article we learned about three of Adobe Story's key features: track changes and production revisions, tagging, and the Adobe Story application for iOS devices. There is a whole lot more to learn as far as the features in Adobe Story are concerned.

Resources for Article:

Further resources on this subject: Integrating Scala, Groovy, and Flex Development with Apache Maven [Article] Exporting SAP BusinessObjects Dashboards into Different Environments [Article] An Introduction to Flash Builder 4-Network Monitor [Article]

Top features you need to know about

Packt
03 Jun 2013
3 min read
(For more resources related to this topic, see here.)

1 – Minimap

The minimap is an innovative feature of Sublime Text 2 that gives you a bird's-eye view of the document you are editing. Always present at the right-hand side of the editor, it allows you to quickly look at a live, updated, zoomed-out version of your current document. While the text will rarely be distinguishable, it allows for a topographical view of your document structure.

The minimap feature is also very useful for navigating a large document, as it can behave similarly to a scroll bar. When clicked on, the minimap can be used to scroll the document to a different portion. However, should you find yourself not needing the minimap, or needing the screen real estate it inhabits, it can easily be hidden by using the Menu bar to select View | Hide Minimap.

2 – Multiple cursors

Another way Sublime Text 2 differentiates itself from the crowded text editor market is by including functionality that allows the user to edit a document in multiple places at the same time. This can be very useful when making an identical change in multiple places. It is especially useful when the change that needs to occur cannot be easily accomplished with find and replace.

By pressing command + left-click on OS X, or Ctrl + left-click on other platforms, an additional cursor will be placed at the location of the click. Each additional cursor will mirror the original cursor. The following screenshot shows a demo of this functionality. First, I created cursors on each of my three lines of text. Then I proceeded to type test without quotes:

Now, as shown in the following screenshot, anything typed will be typed identically on the three lines where the cursors are placed. In this case I typed a space followed by the word test. This addition was simultaneous and I only had to make the change once, after creating the additional cursors.

To return to a single cursor, simply press Esc or left-click anywhere on the document.

Summary

This article covered a few features of Sublime Text 2, including the minimap and multiple cursors.

Resources for Article:

Further resources on this subject: Building a Flex Type-Ahead Text Input [Article] Introduction to Data Binding [Article] Working with Binding data and UI elements in Silverlight 4 [Article]

Building WinRT components to be consumed from any language (Become an expert)

Packt
30 May 2013
5 min read
(For more resources related to this topic, see here.)

Getting ready

Please refer to the WinRTCalculator project for the full working code to create a WinRT component and consume it in JavaScript.

How to do it...

Perform the following steps to create a WinRT component and consume it in JavaScript:

Launch Visual Studio 2012 and create a new project. Expand Visual C++ from the left pane and then select the node for Windows Store apps. Select the Windows Runtime component and then name the project WinRTCalculator.

Open Class1.h and add the following method declarations:

    double ComputeAddition(double num1, double num2);
    double ComputeSubstraction(double num1, double num2);
    double ComputeMultiplication(double num1, double num2);
    double ComputeDivision(double num1, double num2);

Open Class1.cpp and add the following method implementations:

    double Class1::ComputeAddition(double num1, double num2)
    {
        return num1 + num2;
    }

    double Class1::ComputeSubstraction(double num1, double num2)
    {
        if (num1 > num2)
            return num1 - num2;
        else
            return num2 - num1;
    }

    double Class1::ComputeMultiplication(double num1, double num2)
    {
        return num1 * num2;
    }

    double Class1::ComputeDivision(double num1, double num2)
    {
        if (num2 != 0)
        {
            return num1 / num2;
        }
        else
            return 0;
    }

Now save the project and build it.

Now we need to create a JavaScript project where the preceding WinRTCalculator component will be consumed. To create the JavaScript project, follow these steps: Right-click on Solution Explorer and go to Add | New Project. Expand JavaScript from the left pane, and choose Blank App. Name the project ConsumeWinRTCalculator. Right-click on ConsumeWinRTCalculator and set it as Startup Project.

Add a project reference to WinRTCalculator, as follows: Right-click on the ConsumeWinRTCalculator project and choose Add Reference. Go to Solution | Projects from the left pane of the Reference Manager dialog box. Select WinRTCalculator from the center pane and then click on the OK button.

Open the default.html file and add the following HTML code in the body:

    <p>Calculator from javascript</p>
    <div id="inputDiv">
        <br /><br />
        <span id="inputNum1Div">Input Number - 1 : </span>
        <input id="num1" />
        <br /><br />
        <span id="inputNum2Div">Input Number - 2 : </span>
        <input id="num2" />
        <br /><br />
        <p id="status"></p>
    </div>
    <br /><br />
    <div id="addButtonDiv">
        <button id="addButton" onclick="AdditionButton_Click()">Addition of Two Numbers</button>
    </div>
    <div id="addResultDiv">
        <p id="addResult"></p>
    </div>
    <br /><br />
    <div id="subButtonDiv">
        <button id="subButton" onclick="SubsctractionButton_Click()">Subtraction of two numbers</button>
    </div>
    <div id="subResultDiv">
        <p id="subResult"></p>
    </div>
    <br /><br />
    <div id="mulButtonDiv">
        <button id="mulButton" onclick="MultiplicationButton_Click()">Multiplication of two numbers</button>
    </div>
    <div id="mulResultDiv">
        <p id="mulResult"></p>
    </div>
    <br /><br />
    <div id="divButtonDiv">
        <button id="divButton" onclick="DivisionButton_Click()">Division of two numbers</button>
    </div>
    <div id="divResultDiv">
        <p id="divResult"></p>
    </div>

Open the default.css style file from 5725OT_08_Code\WinRTCalculator\ConsumeWinRTCalculator\css\default.css and copy-paste the styles to your default.css style file.

Add JavaScript event handlers that will call the WinRTCalculator component DLL.
Add the following code at the end of the default.js file:

    var nativeObject = new WinRTCalculator.Class1();

    function AdditionButton_Click() {
        var num1 = document.getElementById('num1').value;
        var num2 = document.getElementById('num2').value;
        if (num1 == '' || num2 == '') {
            document.getElementById('status').innerHTML = 'Enter input numbers to continue';
        } else {
            var result = nativeObject.computeAddition(num1, num2);
            document.getElementById('status').innerHTML = '';
            document.getElementById('addResult').innerHTML = result;
        }
    }

    function SubsctractionButton_Click() {
        var num1 = document.getElementById('num1').value;
        var num2 = document.getElementById('num2').value;
        if (num1 == '' || num2 == '') {
            document.getElementById('status').innerHTML = 'Enter input numbers to continue';
        } else {
            var result = nativeObject.computeSubstraction(num1, num2);
            document.getElementById('status').innerHTML = '';
            document.getElementById('subResult').innerHTML = result;
        }
    }

    function MultiplicationButton_Click() {
        var num1 = document.getElementById('num1').value;
        var num2 = document.getElementById('num2').value;
        if (num1 == '' || num2 == '') {
            document.getElementById('status').innerHTML = 'Enter input numbers to continue';
        } else {
            var result = nativeObject.computeMultiplication(num1, num2);
            document.getElementById('status').innerHTML = '';
            document.getElementById('mulResult').innerHTML = result;
        }
    }

    // Handler for the Division button referenced in the markup above;
    // not in the original listing, added here following the same pattern.
    function DivisionButton_Click() {
        var num1 = document.getElementById('num1').value;
        var num2 = document.getElementById('num2').value;
        if (num1 == '' || num2 == '') {
            document.getElementById('status').innerHTML = 'Enter input numbers to continue';
        } else {
            var result = nativeObject.computeDivision(num1, num2);
            document.getElementById('status').innerHTML = '';
            document.getElementById('divResult').innerHTML = result;
        }
    }

Now press the F5 key to run the application. Enter the two numbers and click on the Addition of Two Numbers button, or on any of the shown buttons, to display the computation.

How it works...

The Class1.h and Class1.cpp files have a public ref class. It's an activatable class that JavaScript can create by using a new expression. JavaScript activates the C++ class Class1 and then calls its methods, and the returned values are populated into the HTML divs.

There's more...

While debugging a JavaScript project that has a reference to a WinRT component DLL, the debugger is set to enable stepping either through the script or through the component's native code. To change this setting, right-click on the JavaScript project and go to Properties | Debugging | Debugger Type. If a C++ Windows Runtime component project is removed from a solution, the corresponding project reference in the JavaScript project must also be removed manually.

Summary

In this article, we learned how to create a WinRT component and call it from JavaScript.

Resources for Article:

Further resources on this subject: Installation and basic features of EnterpriseDB [Article] Editing DataGrids with Popup Windows in Flex [Article] Monitoring Windows with Zabbix 1.8 [Article]

Deploying HTML5 Applications with GNOME

Packt
28 May 2013
10 min read
(For more resources related to this topic, see here.)

Before we start

Most of the discussions in this article require a moderate knowledge of HTML5, JSON, and common client-side JavaScript programming. One particular exercise uses jQuery and jQuery Mobile to show how a real HTML5 application will be implemented.

Embedding WebKit

What we need to learn first is how to embed a WebKit layout engine inside our GTK+ application. Embedding WebKit means we can use HTML and CSS as our user interface instead of GTK+ or Clutter.

Time for action – embedding WebKit

With WebKitGTK+, this is a very easy task to do; just follow these steps:

Create an empty Vala project without GtkBuilder and no license. Name it hello-webkit.

Modify configure.ac to include WebKitGTK+ in the project. Find the following line of code in the file:

    PKG_CHECK_MODULES(HELLO_WEBKIT, [gtk+-3.0])

Remove the previous line and replace it with the following one:

    PKG_CHECK_MODULES(HELLO_WEBKIT, [gtk+-3.0 webkitgtk-3.0])

Modify Makefile.am inside the src folder to include WebKitGTK+ in the Vala compilation pipeline. Find the following line of code in the file:

    hello_webkit_VALAFLAGS = --pkg gtk+-3.0

Remove it and replace it completely with the following lines:

    hello_webkit_VALAFLAGS = --vapidir . --pkg gtk+-3.0 --pkg webkit-1.0 --pkg libsoup-2.4

Fill the hello_webkit.vala file inside the src folder with the following lines:

    using GLib;
    using Gtk;
    using WebKit;

    public class Main : WebView {
        public Main () {
            load_html_string ("<h1>Hello</h1>", "/");
        }

        static int main (string[] args) {
            Gtk.init (ref args);
            var webView = new Main ();
            var window = new Gtk.Window ();
            window.add (webView);
            window.show_all ();
            Gtk.main ();
            return 0;
        }
    }

Copy the accompanying webkit-1.0.vapi file into the src folder. We need to do this, unfortunately, because the webkit-1.0.vapi file distributed with many distributions is still using GTK+ Version 2.

Run it; you will see a window with the message Hello, as shown in the following screenshot:

What just happened?

What we need to do first is to include WebKit in our namespace, so we can use all the functions and classes from it:

    using WebKit;

Our class is derived from the WebView widget. It is an important widget in WebKit, which is capable of showing a web page. Showing it means not only parsing and displaying the DOM properly, but also being able to run the scripts and handle the styles referred to by the document. The derivation declaration is put in the class declaration as shown next:

    public class Main : WebView

In our constructor, we only load a string and parse it as an HTML document. The string is Hello, styled with a level 1 heading. After the execution of the following line, WebKit will parse and display the presentation of the HTML5 code inside its body:

    public Main () {
        load_html_string ("<h1>Hello</h1>", "/");
    }

In our main function, what we need to do is create a window to put our WebView widget into. After adding the widget, we need to call the show_all() function in order to display both the window and the widget:

    static int main (string[] args) {
        Gtk.init (ref args);
        var webView = new Main ();
        var window = new Gtk.Window ();
        window.add (webView);

The window content now only has a WebView widget as its sole displaying widget. At this point, we no longer use GTK+ to show our UI; it is all written in HTML5.

Runtime with JavaScriptCore

An HTML5 application is, most of the time, accompanied by client-side scripts written in JavaScript and a set of styling definitions written in CSS3.
WebKit already provides the feature of running client-side JavaScript (running the script inside the web page) with a component called JavaScriptCore, so we don't need to worry about that. But how about the connection with the GNOME platform? How do we make the client-side script access the GNOME objects?

One approach is that we can expose our objects, which are written in Vala, so that they can be used by the client-side JavaScript. This is where we will utilize JavaScriptCore. We can think of this as a frontend and backend architecture pattern. All of the business process code that touches GNOME will reside in the backend. It is all written in Vala and run by the main process. On the opposite side, the frontend, the code is written in JavaScript and HTML5, and is run and displayed by WebKit internally. The frontend is what the user sees, while the backend is what is going on behind the scenes.

Consider the following diagram of our application. The backend part is grouped inside a grey bordered box and run in the main process. The frontend is outside the box and is run and displayed by WebKit. From the diagram, we can see that the frontend creates an object and calls a function in the created object. The object we create is not defined on the client side, but is actually created at the backend. We ask JavaScriptCore to act as a bridge connecting the object created at the backend so that it is accessible by the frontend code. To do this, we wrap the backend objects with JavaScriptCore class and function definitions. For each object we want to make available to the frontend, we need to create a mapping on the JavaScriptCore side. In the following diagram, we first map the MyClass object, then the helloFromVala function, then the intFromVala, and so on:

Time for action – calling the Vala object from the frontend

Now let's try to create simple client-side JavaScript code and call an object defined at the backend:

Create an empty Vala project, without GtkBuilder and no license. Name it hello-jscore.

Modify configure.ac to include WebKitGTK+ exactly like our previous experiment.

Modify Makefile.am inside the src folder to include WebKitGTK+ and JSCore in the Vala compilation pipeline. Find the following line of code in the file:

    hello_jscore_VALAFLAGS = --pkg gtk+-3.0

Remove it and replace it completely with the following lines:
    hello_jscore_VALAFLAGS = --vapidir . --pkg gtk+-3.0 --pkg webkit-1.0 --pkg libsoup-2.4 --pkg javascriptcore

Fill the hello_jscore.vala file inside the src folder with the following lines of code:

    using GLib;
    using Gtk;
    using WebKit;
    using JSCore;

    public class Main : WebView {
        public Main () {
            load_html_string ("<h1>Hello</h1>" +
                "<script>alert(HelloJSCore.hello())</script>", "/");
            window_object_cleared.connect ((frame, context) => {
                setup_js_class ((JSCore.GlobalContext) context);
            });
        }

        public static JSCore.Value helloFromVala (Context ctx,
                JSCore.Object function,
                JSCore.Object thisObject,
                JSCore.Value[] arguments,
                out JSCore.Value exception) {
            exception = null;
            var text = new String.with_utf8_c_string ("Hello from JSCore");
            return new JSCore.Value.string (ctx, text);
        }

        static const JSCore.StaticFunction[] js_funcs = {
            { "hello", helloFromVala, PropertyAttribute.ReadOnly },
            { null, null, 0 }
        };

        static const ClassDefinition js_class = {
            0,                      // version
            ClassAttribute.None,    // attribute
            "HelloJSCore",          // className
            null,                   // parentClass
            null,                   // static values
            js_funcs,               // static functions
            null,                   // initialize
            null,                   // finalize
            null,                   // hasProperty
            null,                   // getProperty
            null,                   // setProperty
            null,                   // deleteProperty
            null,                   // getPropertyNames
            null,                   // callAsFunction
            null,                   // callAsConstructor
            null,                   // hasInstance
            null                    // convertToType
        };

        void setup_js_class (GlobalContext context) {
            var theClass = new Class (js_class);
            var theObject = new JSCore.Object (context, theClass, context);
            var theGlobal = context.get_global_object ();
            var id = new String.with_utf8_c_string ("HelloJSCore");
            theGlobal.set_property (context, id, theObject, PropertyAttribute.None, null);
        }

        static int main (string[] args) {
            Gtk.init (ref args);
            var webView = new Main ();
            var window = new Gtk.Window ();
            window.add (webView);
            window.show_all ();
            Gtk.main ();
            return 0;
        }
    }

Copy the accompanying webkit-1.0.vapi and javascriptcore.vapi files into the src folder. The javascriptcore.vapi file is needed because some distributions do not have this .vapi file in their repositories.

Run the application. The following output will be displayed:

What just happened?

The first thing we do is include the WebKit and JavaScriptCore namespaces. Note, in the following code snippet, that the JavaScriptCore namespace is abbreviated as JSCore:

    using WebKit;
    using JSCore;

In the Main function, we load HTML content into the WebView widget. We display a level 1 heading and then call the alert function. The alert function displays a string returned by the hello function inside the HelloJSCore class, as shown in the following code:

    public Main () {
        load_html_string ("<h1>Hello</h1>" +
            "<script>alert(HelloJSCore.hello())</script>", "/");

In the preceding code snippet, we can see that the client-side JavaScript code is as follows:

    alert(HelloJSCore.hello())

And we can also see that we call the hello function from the HelloJSCore class as a static function. It means that we don't instantiate a HelloJSCore object before calling the hello function.

In WebView, we initialize the class defined in the Vala code when we get the window_object_cleared signal. This signal is emitted whenever a page is cleared. The initialization is done in setup_js_class, and this is also where we pass the JSCore global context in. The global context is where JSCore keeps the global variables and functions. It is accessible from all code:

    window_object_cleared.connect ((frame, context) => {
        setup_js_class ((JSCore.GlobalContext) context);
    });

The following snippet of code contains the function which we want to expose to the client-side JavaScript.
The function just returns a "Hello from JSCore" string message:

public static JSCore.Value helloFromVala (Context ctx,
    JSCore.Object function,
    JSCore.Object thisObject,
    JSCore.Value[] arguments,
    out JSCore.Value exception) {
    exception = null;
    var text = new String.with_utf8_c_string ("Hello from JSCore");
    return new JSCore.Value.string (ctx, text);
}

Then we need to add the boilerplate code that exposes the function and other members of the class. The first part of the code is the static function index. This is the mapping between the exposed function and the name of the function defined in the wrapper. In the following example, we map the hello function, which can be used on the client side, to the helloFromVala function defined in the code. The index is then terminated with null to mark the end of the array:

static const JSCore.StaticFunction[] js_funcs = {
    { "hello", helloFromVala, PropertyAttribute.ReadOnly },
    { null, null, 0 }
};

The next part of the code is the class definition. This is a structure that we have to fill in so that JSCore knows about the class. All of the fields are filled with null, except for those we want to make use of. In this example, we only use the static functions field for the hello function, so we fill that field with js_funcs, which we defined in the preceding code snippet:

static const ClassDefinition js_class = {
    0,                   // version
    ClassAttribute.None, // attribute
    "HelloJSCore",       // className
    null,                // parentClass
    null,                // static values
    js_funcs,            // static functions
    null,                // initialize
    null,                // finalize
    null,                // hasProperty
    null,                // getProperty
    null,                // setProperty
    null,                // deleteProperty
    null,                // getPropertyNames
    null,                // callAsFunction
    null,                // callAsConstructor
    null,                // hasInstance
    null                 // convertToType
};

After that, in the setup_js_class function, we set up the class to be made available in the JSCore global context. First, we create a JSCore.Class with the class definition structure we filled in previously. Then, we create an object of the class in the global context. Last but not least, we assign the object a string identifier, which is HelloJSCore. After executing the following code, we will be able to refer to HelloJSCore on the client side:

void setup_js_class (GlobalContext context) {
    var theClass = new Class (js_class);
    var theObject = new JSCore.Object (context, theClass, context);
    var theGlobal = context.get_global_object ();
    var id = new String.with_utf8_c_string ("HelloJSCore");
    theGlobal.set_property (context, id, theObject,
        PropertyAttribute.None, null);
}

Developing a Web Project for JasperReports

Packt
27 May 2013
11 min read
(For more resources related to this topic, see here.)

Setting the environment

First, we need to install the required software, Oracle Enterprise Pack for Eclipse 12c, from http://www.oracle.com/technetwork/middleware/ias/downloads/wls-main-097127.html using Installers with Oracle WebLogic Server, Oracle Coherence and Oracle Enterprise Pack for Eclipse, and download Oracle Database 11g Express Edition from http://www.oracle.com/technetwork/products/express-edition/overview/index.html. Setting the environment requires the following tasks:

- Creating database tables
- Configuring a data source in WebLogic Server 12c
- Copying the JasperReports required JAR files to the server classpath

First, create a database table, which shall be the data source for creating the reports, with the following SQL script. If a database table has already been created, that table may be used for this article too.

CREATE TABLE OE.Catalog(CatalogId INTEGER PRIMARY KEY, Journal VARCHAR(25), Publisher VARCHAR(25), Edition VARCHAR(25), Title Varchar(45), Author Varchar(25));
INSERT INTO OE.Catalog VALUES('1', 'Oracle Magazine', 'Oracle Publishing', 'Nov-Dec 2004', 'Database Resource Manager', 'Kimberly Floss');
INSERT INTO OE.Catalog VALUES('2', 'Oracle Magazine', 'Oracle Publishing', 'Nov-Dec 2004', 'From ADF UIX to JSF', 'Jonas Jacobi');
INSERT INTO OE.Catalog VALUES('3', 'Oracle Magazine', 'Oracle Publishing', 'March-April 2005', 'Starting with Oracle ADF ', 'Steve Muench');

Next, configure a data source in WebLogic Server with the JNDI name jdbc/OracleDS.

Next, we need to download some JasperReports JAR files, including dependencies. Download the JAR/ZIP files listed below and extract the zip/tar.gz archives to a directory, c:/jasperreports for example.

JAR/ZIP: Download URL
- jasperreports-4.7.0.jar: http://sourceforge.net/projects/jasperreports/files/jasperreports/JasperReports%204.7.0/
- itext-2.1.0: http://mirrors.ibiblio.org/pub/mirrors/maven2/com/lowagie/itext/2.1.0/itext-2.1.0.jar
- commons-beanutils-1.8.3-bin.zip: http://commons.apache.org/beanutils/download_beanutils.cgi
- commons-digester-2.1.jar: http://commons.apache.org/digester/download_digester.cgi
- commons-logging-1.1.1-bin: http://commons.apache.org/logging/download_logging.cgi
- poi-bin-3.8-20120326 zip or tar.gz: http://poi.apache.org/download.html#POI-3.8

All the JasperReports libraries are open source. We shall be using the following JAR files to create a JasperReports report:

JAR File: Description
- commons-beanutils-1.8.3.jar: JavaBeans utility classes
- commons-beanutils-bean-collections-1.8.3.jar: Collections framework extension classes
- commons-beanutils-core-1.8.3.jar: JavaBeans utility core classes
- commons-digester-2.1.jar: Classes for processing XML documents
- commons-logging-1.1.1.jar: Logging classes
- iText-2.1.0.jar: PDF library
- jasperreports-4.7.0.jar: JasperReports API
- poi-3.8-20120326.jar, poi-excelant-3.8-20120326.jar, poi-ooxml-3.8-20120326.jar, poi-ooxml-schemas-3.8-20120326.jar, poi-scratchpad-3.8-20120326.jar: Apache Jakarta POI classes and dependencies
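The JNDI name matters because the report-generating code will look up the data source at runtime. As a minimal sketch (the class name and structure here are illustrative, not from the article), server-side code running in WebLogic might obtain a connection from jdbc/OracleDS as follows:

// Hypothetical helper: looks up the WebLogic data source by its JNDI name.
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CatalogDataSource {
    public static Connection getConnection() throws Exception {
        // jdbc/OracleDS is the JNDI name configured in WebLogic Server above
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/OracleDS");
        // Connections obtained this way come from the server's pool;
        // close them when done to return them to the pool
        return ds.getConnection();
    }
}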
Add the JasperReports required JAR files to the CLASSPATH variable in the user_projects\domains\base_domain\bin\startWebLogic.bat script:

set SAVE_CLASSPATH=%CLASSPATH%;C:\jasperreports\commons-beanutils-1.8.3\commons-beanutils-1.8.3.jar;C:\jasperreports\commons-beanutils-1.8.3\commons-beanutils-bean-collections-1.8.3.jar;C:\jasperreports\commons-beanutils-1.8.3\commons-beanutils-core-1.8.3.jar;C:\jasperreports\commons-digester-2.1.jar;C:\jasperreports\commons-logging-1.1.1\commons-logging-1.1.1.jar;C:\jasperreports\itext-2.1.0.jar;C:\jasperreports\jasperreports-4.7.0.jar;C:\jasperreports\poi-3.8\poi-3.8-20120326.jar;C:\jasperreports\poi-3.8\poi-scratchpad-3.8-20120326.jar;C:\jasperreports\poi-3.8\poi-ooxml-3.8-20120326.jar;C:\jasperreports\poi-3.8.jar;C:\jasperreports\poi-3.8\poi-excelant-3.8-20120326.jar;C:\jasperreports\poi-3.8\poi-ooxml-schemas-3.8-20120326.jar

Creating a Dynamic Web project in Eclipse

First, we need to create a web project for generating JasperReports reports:

1. Select File | New | Other. In the New wizard, select Web | Dynamic Web Project.
2. In the Dynamic Web Project configuration, specify a Project name (PDFExcelReports, for example) and select the Target Runtime as Oracle WebLogic Server 11g R1 (10.3.5). Click on Next.
3. Select the default Java settings, that is, the Default output folder as build/classes, and then click on Next.
4. In Web Module, specify Context Root as PDFExcelReports and Content Directory as WebContent. Click on Finish. A web project for PDFExcelReports gets generated.
5. Right-click on the project node in Project Explorer and select Project Properties. In Properties, select Project Facets. The Dynamic Web Module project facet should be selected by default, as shown in the following screenshot:

Next, create a User Library for the JasperReports JAR files and dependencies:

1. Select Java Build Path in Properties. Click on Add Library.
2. In Add Library, select User Library and click on Next.
3. In User Library, click on User Libraries. In User Libraries, click on New.
4. In New User Library, specify a User library name (JasperReports) and click on OK. A new user library gets added to User Libraries.
5. Click on Add JARs to add JAR files to the library. The following screenshot shows the JasperReports JARs that are added:

Creating the configuration file

We require a JasperReports configuration file for generating reports. JasperReports XML configuration files are based on the jasperreport.dtd DTD, with a root element of jasperReport. We shall specify the JasperReports report design in an XML configuration file, which we have called config.xml. Create an XML file config.xml in the WebContent folder by selecting XML | XML File in the New wizard.

Some of the other elements (with commonly used subelements and attributes) in a JasperReports configuration XML file are listed in the following table:

- jasperReport: The root element. Sub-elements: reportFont, parameter, queryString, field, variable, group, title, pageHeader, columnHeader, detail, columnFooter, pageFooter. Attributes: name, columnCount, pageWidth, pageHeight, orientation, columnWidth, columnSpacing, leftMargin, rightMargin, topMargin, bottomMargin.
- reportFont: Report-level font definitions. Attributes: name, isDefault, fontName, size, isBold, isItalic, isUnderline, isStrikeThrough, pdfFontName, pdfEncoding, isPdfEmbedded.
- parameter: Object references used in generating a report. Referenced with $P{name}. Sub-elements: parameterDescription, defaultValueExpression. Attributes: name, class.
- queryString: Specifies the SQL query for retrieving data from a database.
- field: Database table columns included in the report. Referenced with $F{name}. Sub-element: fieldDescription. Attributes: name, class.
- variable: A variable used in the report XML file. Referenced with $V{name}. Sub-elements: variableExpression, initialValueExpression. Attributes: name, class.
- title: The report title. Sub-element: band.
- pageHeader: The page header. Sub-element: band.
- columnHeader: Specifies the different columns in the report generated. Sub-element: band.
- detail: Specifies the column values. Sub-element: band.
- columnFooter: The column footer. Sub-element: band.

A report section is represented with the band element. A band element includes staticText and textElement elements. A staticText element is used to add static text to a report (for example, column headers) and a textElement element is used to add dynamically generated text to a report (for example, column values retrieved from a database table). We won't be using all or even most of these elements and attributes.

Specify the page width with the pageWidth attribute in the root element jasperReport. Specify the report fonts using the reportFont element. The reportFont elements specify the ARIAL_NORMAL, ARIAL_BOLD, and ARIAL_ITALIC fonts used in the report. Specify a ReportTitle parameter using the parameter element. The queryString of the example JasperReports configuration XML file specifies the SQL query to retrieve the data for the report:

<queryString><![CDATA[SELECT CatalogId, Journal, Publisher, Edition, Title, Author FROM OE.Catalog]]></queryString>

The PDF report has the columns CatalogId, Journal, Publisher, Edition, Title, and Author. Specify a report band for the report title. The ReportTitle parameter is invoked using the $P{ReportTitle} expression. Specify a column header using the columnHeader element. Specify static text with the staticText element. Specify the report detail with the detail element. A column text field is defined using the textField element. The dynamic value of a text field is defined using the textFieldExpression element:

<textField>
  <reportElement x="0" y="0" width="100" height="20"/>
  <textFieldExpression class="java.lang.String"><![CDATA[$F{CatalogId}]]></textFieldExpression>
</textField>

Specify a page footer with the pageFooter element. Report parameters are defined using $P{}, report fields using $F{}, and report variables using $V{}. The config.xml file is listed as follows:

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE jasperReport PUBLIC "-//JasperReports//DTD Report Design//EN" "http://jasperreports.sourceforge.net/dtds/jasperreport.dtd">
<jasperReport name="PDFReport" pageWidth="975">

The following code snippet specifies the report fonts:

<reportFont name="Arial_Normal" isDefault="true" fontName="Arial" size="15" isBold="false" isItalic="false" isUnderline="false" isStrikeThrough="false" pdfFontName="Helvetica" pdfEncoding="Cp1252" isPdfEmbedded="false"/>
<reportFont name="Arial_Bold" isDefault="false" fontName="Arial" size="15" isBold="true" isItalic="false" isUnderline="false" isStrikeThrough="false" pdfFontName="Helvetica-Bold" pdfEncoding="Cp1252" isPdfEmbedded="false"/>
<reportFont name="Arial_Italic" isDefault="false" fontName="Arial" size="12" isBold="false" isItalic="true" isUnderline="false" isStrikeThrough="false" pdfFontName="Helvetica-Oblique" pdfEncoding="Cp1252" isPdfEmbedded="false"/>

The following code snippet specifies the parameter for the report title, the SQL query to generate the report with, and the report fields. The resultset from the SQL query gets bound to the fields.
<parameter name="ReportTitle" class="java.lang.String"/>
<queryString><![CDATA[SELECT CatalogId, Journal, Publisher, Edition, Title, Author FROM Catalog]]></queryString>
<field name="CatalogId" class="java.lang.String"/>
<field name="Journal" class="java.lang.String"/>
<field name="Publisher" class="java.lang.String"/>
<field name="Edition" class="java.lang.String"/>
<field name="Title" class="java.lang.String"/>
<field name="Author" class="java.lang.String"/>

Add the report title to the report as follows:

<title>
  <band height="50">
    <textField>
      <reportElement x="350" y="0" width="200" height="50"/>
      <textFieldExpression class="java.lang.String">$P{ReportTitle}</textFieldExpression>
    </textField>
  </band>
</title>
<pageHeader>
  <band>
  </band>
</pageHeader>

Add the column's header as follows:

<columnHeader>
  <band height="20">
    <staticText>
      <reportElement x="0" y="0" width="100" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[CATALOG ID]]></text>
    </staticText>
    <staticText>
      <reportElement x="125" y="0" width="100" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[JOURNAL]]></text>
    </staticText>
    <staticText>
      <reportElement x="250" y="0" width="150" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[PUBLISHER]]></text>
    </staticText>
    <staticText>
      <reportElement x="425" y="0" width="100" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[EDITION]]></text>
    </staticText>
    <staticText>
      <reportElement x="550" y="0" width="200" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[TITLE]]></text>
    </staticText>
    <staticText>
      <reportElement x="775" y="0" width="200" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[AUTHOR]]></text>
    </staticText>
  </band>
</columnHeader>

The following code snippet shows how to add the report detail, which consists of values retrieved using the SQL query from the Oracle database:

<detail>
  <band height="20">
    <textField>
      <reportElement x="0" y="0" width="100" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{CatalogId}]]></textFieldExpression>
    </textField>
    <textField pattern="0.00">
      <reportElement x="125" y="0" width="100" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Journal}]]></textFieldExpression>
    </textField>
    <textField pattern="0.00">
      <reportElement x="250" y="0" width="150" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Publisher}]]></textFieldExpression>
    </textField>
    <textField>
      <reportElement x="425" y="0" width="100" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Edition}]]></textFieldExpression>
    </textField>
    <textField pattern="0.00">
      <reportElement x="550" y="0" width="200" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Title}]]></textFieldExpression>
    </textField>
    <textField>
      <reportElement x="775" y="0" width="200" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Author}]]></textFieldExpression>
    </textField>
  </band>
</detail>

Add the column and page footer, including the page number, as follows:

<columnFooter>
  <band>
  </band>
</columnFooter>
<pageFooter>
  <band height="15">
    <staticText>
      <reportElement x="0" y="0" width="40" height="15"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Italic"/>
      </textElement>
      <text><![CDATA[Page #]]></text>
    </staticText>
    <textField>
      <reportElement x="40" y="0" width="100" height="15"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Italic"/>
      </textElement>
      <textFieldExpression class="java.lang.Integer"><![CDATA[$V{PAGE_NUMBER}]]></textFieldExpression>
    </textField>
  </band>
</pageFooter>
<summary>
  <band>
  </band>
</summary>
</jasperReport>

We need to create a JAR file for the config.xml file and add the JAR file to the WebLogic Server domain's lib directory. Create a JAR file using the following command from the directory containing config.xml:

>jar cf config.jar config.xml

Add the config.jar file to the user_projects\domains\base_domain\lib directory, which is in the classpath of the server.


Quick start

Packt
22 May 2013
8 min read
(For more resources related to this topic, see here.)

Common issues in Google Map Maker

Before we get started, it's worth taking into consideration some of the known issues with Google Map Maker:

- The Map Maker interface does not usually come fully translated into all languages at the same time. UI translations are usually rolled out gradually and are part of another community-driven effort. This project is accessible at http://www.google.com/transconsole/giyl/chooseProject. Note that some languages (Urdu, for example) are still not available in the Map Maker UI despite being completely translated.
- Map Maker has not been verified for compatibility with Internet Explorer 7 and earlier versions of IE.

Google Map Maker is accessed by visiting the URL http://www.google.com/mapmaker. To access and get started with Map Maker, you must have a Google Account in order to start making and submitting edits. A Google Account is a unified sign-in system that provides access to a variety of free Google consumer products such as Gmail, Google Groups, Google Maps, Google Wallet, AdWords, AdSense, and so on. Think of a Google Account as a single Google sign-in, made up of an e-mail address (any e-mail address; it does not have to be a Gmail address) and a password of your choice, that gives you access to all the Google products under your own profile. Create your Google Account by visiting https://accounts.google.com/SignUp if you would like to use another e-mail address. If you already have a Gmail account, please sign in from the left pane when you visit http://www.google.com/mapmaker, as shown here:

The Map Maker interface during the first visit

The Map Maker interface

The Google Map Maker interface is simple, intuitive, and easy to use. It has standard graphical icons that help you navigate around the tools and functionalities. Let us take a closer look at it.

A first-time login to Map Maker starts by displaying a tutorial that quickly takes you through the key features of Google Map Maker. You can navigate your way through the quick tutorial by going back and forth using the respective forward and back arrows. You can close the quick tutorial and get started with making edits right away by clicking on the X icon on top. Don't worry, you can always access the tutorial later, as explained later in this book.

The Map Maker UI

Let us take a detailed look at the Map Maker interface. I have tried to subdivide it based on the main functionalities and purposes of the tools. Key tools/sections that you need to know are highlighted and clearly labeled as well. I have named them based strictly on their functionality, and this is by no means the conventional way of doing so. Let us dive into the tools and see what each section serves:

Search

The search area allows you to search for and fly to places you want to see in Map Maker in an instant. It works just like Google Search, except that it returns a map zoomed to the area/business you queried. Try it. Type the name of your city and hit Enter. This comes in handy when Map Maker does not default to your current location on loading, as it should, or when you want to make edits and/or reviews in some other area you are familiar with, or just to view and visit places. Take a look at the following search query:

Review area

This is the area that displays your own recent edits, as well as edits happening within your neighborhood that you created or that are based on your location.
You can switch between the tabs based on the functionality that you want; the different tabs are explained as follows:

Everything: This tab is like a channel stream or timeline. It shows the recent activities in terms of new edits, reviews, or comments by you and other mappers within the neighborhood view of the map, that is, the current location of the map that is in view. See the following example:

An Everything view

To Review: This area highlights only the edits whose reviews are pending.

Recently Published: Streams recent edits that have been approved and published. You can, however, still contest these edits or correct them if they are incorrect.

Filter by Category: Just next to the Recently Published tab, you will find a three-dot tab that allows you to expand this section. This is the filter section, and it gives you the power to filter the actions, places, and edits you would like to work with by category. For instance, you may just be interested in (re)viewing road and line features, or the chronological order of the edits being made in the locale.

Filter by Category

Map view area

This is the area where the map loads, allowing you to perform the operations and edits that you want. The map view usually defaults to your current location when you visit http://www.google.com/mapmaker.

Map controls

These tools allow you to control the view of the map. They allow you to pan, zoom, and view Street View for supported cities. Let's take a look at how each of the tools comes in handy:

Map controls

Edit control

This is the area that allows you to make new edits in Map Maker and correct existing ones as well. You can create new point, line, and polygon features by exploring the Add New tab. Note that the tools will change according to the main tool selected. You can also edit existing point, line, polygon, and direction features by exploring the Edit tab. We will take a deep dive into this section a little later in this book.

Personal/User area

I call this the personal area because it allows you to personalize your Map Maker through custom settings and by adding labs (experimental features that are still under testing and development). Labs allow you to extend the normal functionality of Map Maker. This section also allows you to share your edits, directions, and maps with your friends by generating a unique URL for them. You can also create and make changes to your Map Maker profile, access Help and the discussion forums, report a bug, and submit feedback to the Google Map Maker team using these tools.

Personal user area

View

The View section allows you to switch between the different layers of Google Map Maker: Satellite and Map. In the Map view, you only get to view the map details created by users, whereas in the Satellite view, you can see the map elements overlaying the satellite imagery provided to Google by various satellite imagery providers and partners. Satellite is the best layer to use when making edits, as it allows you to draw/trace over the satellite imagery to create features, a process called digitization in cartography terms. It is actually the backbone of this community-driven project. Users have to align everything, from point features to line features and polygon features, with the satellite imagery for better accuracy; otherwise their edits may be denied or delayed in the reviewing process. You can add more layers, such as photos, which will display edits/features alongside the uploaded photos, among other things.
To switch between and add layers, simply click on a layer and the map view will be populated with the layer(s) of your selection.

Different views in Map Maker

Contributors

The Contributors segment displays all the contributors who have made a substantial number of edits in the area of the map view. It displays the contributors' preferred nicknames (set during the signing-up stage). If you click on any nickname, it takes you to their respective Map Maker profile, showing their edits and the badges they have earned.

Scale

This section shows us the display scale of the map as we zoom in and out.

Summary

This article explained in detail how we can use the different features of Map Maker to our benefit. It also explained the different interfaces used in Google Map Maker.

Resources for Article:

Further resources on this subject:
- Moodle 2.0 Multimedia: Working with 2D and 3D Maps [Article]
- Google Earth, Google Maps and Your Photos: a Tutorial [Article]
- Google Earth, Google Maps and Your Photos: a Tutorial Part II [Article]

Building a bar graph cityscape

Packt
16 May 2013
4 min read
(For more resources related to this topic, see here.)

Building a bar graph cityscape (Intermediate)

In this article, we will take the standard bar graph and explore how we can manipulate it into something completely different. By default, the bar graph is a horizontal bar, as follows:

In this article, we are going to paint it black, turn it on its side, and make it into a building.

Getting ready

To start, I've created a simple wallpaper that looks a little bit like a cityscape at sunset.

How to do it...

1. Create a new skin folder and name it sunset. This time, create another folder inside it called memory, and create a memory.ini skin file within it.
2. Write the following code into the memory.ini file:

[Rainmeter]
Name=Memory Plaza
Version=1
Update=1000
Author=You!

[Variables]
width=80
height=200

[Measure]
Measure=PhysicalMemory

[Meter]
MeasureName=Measure
Meter=BAR
X=0
Y=0
BarColor=0,0,0,255
BarOrientation=Vertical
W=#width#
H=#height#

3. Save the file, then load up the new skin. This is what I got:
4. Click-and-drag the black graph and move it so that it sits directly on the horizon. This is what you should end up with:

So, now you have successfully built a vertical bar graph that measures your memory use and added it to your cityscape! I will call this building Memory Plaza. Memory Plaza will grow as more of your memory is used up by programs running in Windows. Make sure it doesn't get too tall!

How it works...

There are several interesting things that happened in this recipe:

- We created a memory measurement to see how much memory is being used
- We created a bar meter and connected it to the memory measurement
- We turned the bar meter on its side and customized it to look just like the buildings

We created a memory measure in the following code block:

[Measure]
Measure=PhysicalMemory

In this block, we created a new measurement named Measure. There are other types of memory measurements we can use, but we have just gone with PhysicalMemory. The full documentation on memory measures is available at http://docs.rainmeter.net/manual/measures/memory. We then created the bar meter in the following code block:

[Meter]
MeasureName=Measure
Meter=BAR
X=0
Y=0
BarColor=0,0,0,255
BarOrientation=Vertical
W=#width#
H=#height#

The full documentation on the bar meter is available at http://docs.rainmeter.net/manual/meters/bar. You should be able to guess what is going on with the MeasureName, Meter, X, and Y fields. The BarColor field provides the color of the bar. We've made it black to match the color of the horizon. As we want the bar to grow upwards like a building, we have set BarOrientation to Vertical. The last two fields are new:

W=#width#
H=#height#

If you were to guess that the W field represents width and the H field represents height, then you would be correct. The values in the hashes, like #width#, are variables. Variables are like containers for values whose contents we can change. When we want to take the values out to use, we use the variable names within Rainmeter by wrapping them in hashes. If you look higher up the code block, you will find that we declared the variables like so:

[Variables]
width=80
height=200

The obvious benefit of this is that you only have to write down the values for width and height once.

There's more...

Right now, it is not obvious where the maximum readings are. Why not draw a crane, or building scaffolding, to mark the highest position to which the buildings can grow?
Summary

This article helped you to create a live cityscape with buildings that grow or shrink depending on the resources that your Windows operating system is consuming, such as memory usage.

Resources for Article:

Further resources on this subject:
- User Interface Design in ICEfaces 1.8: Part 1 [Article]
- Enlighten your desktop with Elive [Article]
- User Interface Design in ICEfaces 1.8: Part 2 [Article]


Quick Start into Selenium Tests

Packt
16 May 2013
4 min read
(For more resources related to this topic, see here.)

Step 1 – Recording and adding commands in a test

In this section, we will show you how to record a test on a demo e-commerce application. We will test the product search feature of the application using the following steps:

1. Launch the Firefox browser.
2. Open the website for testing in the Firefox browser. For this example, we will use http://demo.magentocommerce.com/.
3. Open Selenium IDE from the Tools menu.
4. Selenium IDE sets the recording mode on by default. If the (record) button in the top-right corner is not pressed, click it to start recording.
5. Now switch back to the Firefox browser window, type Nokia in the search textbox, and click on the Search button as shown:
6. Check if the link Nokia 2610 Phone is present in the search results. We can do that by selecting the link, opening the context menu (right-click), and selecting Show All Available Commands | assertElementPresent link=Nokia 2610 Phone.
7. Next, we will click on the Nokia 2610 Phone link to open the product page and check if the Nokia 2610 Phone text is displayed on the product page. To do this, select the Nokia 2610 Phone text, open the context menu (right-click), and select Show All Available Commands | assertTextPresent Nokia 2610 Phone:
8. Go back to Selenium IDE. All the previous steps are recorded by Selenium IDE in the Command-Target-Value format, as shown in the following screenshot. Stop the recording session by clicking on the Recording button:

Step 2 – Saving the recorded test

Before we play back the recorded test, let's save it in Selenium IDE:

1. Select File | Save Test Case from the Selenium IDE main menu:
2. In the Save As dialog box, enter the test case name as SearchTest.html and click on the Save button. The test will be saved with the name SearchTest.

Step 3 – Saving the test suite

In Selenium IDE, we can group multiple tests in a test suite. Let's create a test suite; Selenium IDE will automatically add SearchTest to this suite:

1. Select File | Save Test Suite from the Selenium IDE main menu.
2. In the Save As dialog box, enter the test suite name as SearchFeatureTests.html and click on the Save button.

You can create and record more than one test case in a test suite.

Step 4 – Running the recorded test

Selenium IDE provides multiple ways to execute the tests:

Option 1 – running a single test case

Select the test which you want to execute from the test suite pane and click on the (play current test case) button. Selenium IDE will start the playback of the test, and you can see the steps that we recorded earlier being played back automatically in the browser window. At the end of execution, Selenium IDE will display the results as per the following screenshot:

Option 2 – running all tests from a test suite

If you have multiple tests in a test suite, you can use the (play the entire test suite) button to play all the test cases. After the test is executed in Selenium IDE, you can see the results in the Log tab. All the steps which complete successfully will be highlighted in green and checks in dark green. If there are any failures in the test, those will be highlighted in red. This is how Selenium IDE helps you test your web application.

Summary

We learned how to record a test, save a test case, enhance a test by adding commands, and run a test with Selenium IDE. This article also got you started with programming with Selenium WebDriver; a sketch of the same test in WebDriver's Java API follows.
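The following is a minimal sketch, not taken from the article, of how the recorded search test might be rewritten against Selenium WebDriver's Java API. The element locators (the search box name "q" and the link text) are assumptions about the demo store's markup, so adjust them to match the actual page:

// Hypothetical WebDriver version of the recorded SearchTest.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SearchTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://demo.magentocommerce.com/");

            // Type "Nokia" into the search box and submit the search.
            // By.name("q") is an assumed locator, not one recorded by the IDE.
            driver.findElement(By.name("q")).sendKeys("Nokia");
            driver.findElement(By.name("q")).submit();

            // Equivalent of assertElementPresent link=Nokia 2610 Phone:
            // findElement throws NoSuchElementException if the link is absent.
            driver.findElement(By.linkText("Nokia 2610 Phone")).click();

            // Equivalent of assertTextPresent on the product page.
            if (!driver.getPageSource().contains("Nokia 2610 Phone")) {
                throw new AssertionError("Product page text not found");
            }
            System.out.println("Search test passed");
        } finally {
            driver.quit();
        }
    }
}

Unlike Selenium IDE's recorded HTML tables, a WebDriver test such as this is compiled and run like any other Java program, which is what makes it suitable for integration with JUnit or a build server.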
Resources for Article:

Further resources on this subject:
- Python Testing: Installing the Robot Framework [Article]
- First Steps with Selenium RC [Article]
- User Extensions and Add-ons in Selenium 1.0 Testing Tools [Article]