How-To Tutorials - Programming

1081 Articles

.NET 4.5 Extension Methods on IQueryable

Packt
14 May 2013
4 min read
Getting ready

Refer to the IQueryableExtensions.cs file in the ExtensionMethods.Library project for the extension methods. The models are located in Models/PagedList.cs and Models/IPagedList.cs. These methods are used in the IQueryableExtensionTests.cs file in the ExtensionMethods.Tests project.

How to do it...

The following code snippet shows a general use of extension methods on IQueryables:

public static User ByUserId(this IQueryable<User> query, int userId)
{
    return query.First(u => u.UserId == userId);
}

The following code snippet is a paged list class for pagination of data:

public class PagedList<T> : List<T>, IPagedList
{
    public PagedList(IQueryable<T> source, int index, int pageSize)
    {
        this.TotalCount = source.Count();
        this.PageSize = pageSize;
        this.PageIndex = index;
        this.AddRange(source.Skip(index * pageSize).Take(pageSize).ToList());
    }

    public PagedList(List<T> source, int index, int pageSize)
    {
        this.TotalCount = source.Count();
        this.PageSize = pageSize;
        this.PageIndex = index;
        this.AddRange(source.Skip(index * pageSize).Take(pageSize).ToList());
    }

    public int TotalCount { get; set; }
    public int PageIndex { get; set; }
    public int PageSize { get; set; }

    public bool IsPreviousPage
    {
        get { return (PageIndex > 0); }
    }

    public bool IsNextPage
    {
        get { return (PageIndex * PageSize) <= TotalCount; }
    }
}

The following code snippet is the extension method that executes and converts the query to the PagedList object:

public static PagedList<T> ToPagedList<T>(this IQueryable<T> source, int index, int pageSize)
{
    return new PagedList<T>(source, index, pageSize);
}

The following code snippet shows how we use these extension methods:

[TestMethod]
public void UserByIdReturnsCorrectUser()
{
    var query = new List<User>
    {
        new User { UserId = 1 },
        new User { UserId = 2 }
    }.AsQueryable();
    var user = query.ByUserId(1);
    Assert.AreEqual(1, user.UserId);
}

[TestMethod]
public void PagedList_Contains_Correct_Number_Of_Elements()
{
    var query = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }.AsQueryable();
    var pagedList = query.ToPagedList(0, 5);
    Assert.AreEqual(5, pagedList.Count);
    Assert.AreEqual(10, pagedList.TotalCount);
}

How it works...

The first code snippet, ByUserId, is the most commonly used type of extension method for IQueryable types. An alternative to this method is to use the repository pattern and add a method of getting a user by the Id. But sometimes we will expose the query to lower levels of the app, such as the service layer, where we might need to use this feature at multiple places, hence refactoring that logic into an extension method makes perfect sense. This extension method evaluates and executes the query immediately due to requesting a single value using the First() method:

query.First(u => u.UserId == userId);

The second code snippet gives us a PagedList model, which becomes a valuable class when working with grids or pagination. The constructor accepts an IQueryable or IList and converts that data into a paged list. Take note of the line in which we evaluate the source by calling ToList(). This line executes the query on the provider:

this.AddRange(source.Skip(index * pageSize).Take(pageSize).ToList());

In the code snippets using these extension methods, we have created a list and cast it to an IQueryable type. This is purely for the purpose of demonstration. In a real application, the query would be coming from a LINQ to SQL or Entities context, which is in charge of executing the query against a database.
We need to be careful about how extension methods on IQueryable are written. A poorly written query will result in unexpected behavior, such as premature query execution. If the extension method is simply building up the query (using method chaining), ensure that the query is not evaluated inside the method. If the query is evaluated and executed before the method finishes, any other use of the query outside of the extension method will result in operating on the data in memory.

Summary

In this article, you have learned a few tricks and caveats when extending IQueryable.

Resources for Article:

Further resources on this subject:

Working With ASP.NET DataList Control [Article]
NHibernate 3.0: Using LINQ Specifications in the data access layer [Article]
LINQ to Objects [Article]


Working with Home Page Components and Custom Links

Packt
10 May 2013
6 min read
(For more resources related to this topic, see here.) Creating a Personal Setup link using the standard Custom Links on the sidebar All users need to change their personal settings, from time to time, in the Salesforce CRM application. They may, for example, wish to edit their user information, change their password, or you may need them to grant login access to administrators, plus many other reasons. Accessing the Personal Setup area is done by users clicking on their name, looking for the Setup link in the drop-down list, clicking on the Setup link, and then finally clicking on the Personal Setup link in the sidebar. All this takes time and can often be a challenge for less-experienced users of the application. By providing a direct shortcut link in the sidebar, all users will be able to access their Personal Setup area with a single click, and save their time and efforts. How to do it... Carry out the following steps to create a Personal Setup link in the sidebar: Navigate to the home page components' setup page by going to Your Name | Setup | Customize | Home | Home Page Components. Locate the Custom Links row within the Standard Components section. Click on Edit. Within the Custom Links page you can enter a maximum of 15 links. Enter Personal Setup in the 1. Bookmark field. Enter /ui/setup/Setup?setupid=PersonalSetup in the corresponding URL field, as shown in the following screenshot: Click on Save. We now need to add the standard Custom Links component to a home page layout (if it has not been already added). Navigate to the home page components setup page by going to Your Name | Setup | Customize | Home | Home Page Layouts. Determine which home page layout to place the component on and click on Edit. Here we are editing the home page layout named DE Default, as shown in the following screenshot: We will be presented with the Step 1. Select the components to show page. Check the Custom Links checkbox in the Select Narrow Components to Show section, as shown in the following screenshot: Click on Next. Move Custom Links to the top position in the Narrow (Left) Column using the Arrange the component on your home page section, as shown in the following screenshot: Click on Save. How it works... The link appears in the sidebar within the standard Custom Links section, as shown in the following screenshot: When the link is clicked, the user is immediately presented with their Personal Setup page. There's more... Clicking on the link displays the Personal Setup page in the same window and is useful when there is no requirement for the link to open up in a new browser window. The following screenshot shows the result of clicking on the Personal Setup Custom Link: See also The Using Custom Links to open Training in a new window from the sidebar recipe in this article. Using Custom Links to open Training in a new window from the sidebar In the Salesforce CRM application, there are various options for help and training. Accessing the training area is done by the users by clicking on the Help link at the top of the page (which then opens in a new browser window). Users then need to look for the Training tab within the new page and then click on the tab. All this takes a little time and can often be a challenge for less-experienced users of the application. By providing a direct shortcut link in the sidebar, all users will be able to open Training automatically in a new window with a single click, thus saving time and effort. How to do it... 
Carry out the following steps to create a link in the sidebar to open Training in a new window: Navigate to the Custom Links home page by going to Your Name | Setup | Customize | Home | Custom Links. Click on New. Enter the label of the Custom Link in the Label field. Here, type the text Training. Accept the default name of the Custom Link in the Name field, Training. Leave the Protected Component checkbox unchecked. The Protected Component option is used by developers to mark the Custom Link as protected in managed packages. This then allows the developer to delete the link in any future releases of the managed package without worrying about causing package installations to fail. Enter the following description in the Description field: This a link to Salesforce Training. Choose the Display in new window option from the Behavior picklist. Choose the URL option from the Content Source picklist. Enter /help/doc/user_ed.jsp?loc=training into the source section as shown in the following screenshot: Ensure the selection Unicode (UTF-8) is set in the Link Encoding picklist. Click on Save. We now need to create a custom home page component to house this custom link. The alert displayed in the following screenshot reminds us of that: Click on OK. Now navigate to the home page components setup page by going to Your Name | Setup | Customize | Home | Home Page Components. Click on New. Click on Next (on the Understanding Custom Components splash screen, if shown). The Next button is found on the Understanding Custom Components splash screen (this page is only shown if the Don't show this page again checkbox has not previously been checked) as in the following screenshot: Here, we will be presented with the Step 1. New Custom Components page. Enter the name of the Custom Component in the Name field. Enter the text Custom Links (in New Window). Select the Links option from the Type options list as shown in the following screenshot: Click on Next. Now add the Training link to the list of Custom Links to show as shown in the following screenshot: Click on Save We have created our Training link's custom home page component but we are not finished yet. We now need to add the custom home page component to a home page layout. Navigate to the home page components setup page by going to Your Name | Setup | Customize | Home | Home Page Layouts. Determine which home page layout to place the component on and click on Edit. Here we are editing the home page layout named DE Default, as shown in the following screenshot: We will be presented with the Step 1. Select the components to show page. Check the Custom Links (in New Window) checkbox in the Select Narrow Components to Show section as shown in the following screenshot: Click on Next. Move Custom Links (in New Window) to the top position in Narrow (Left) Column using the Arrange the component on your home page. section, as shown in the following screenshot: Click on Save. How it works... Clicking on the Training link opens a new smaller browser window with the Salesforce Training page directly accessed and loaded alongside the main Salesforce CRM application windows. Users can switch back to the main application when they want and simply close the Training window when they are finished viewing it. You can see what this looks like in the following screenshot: See also The Creating a Personal Setup link using the standard Custom Links on the sidebar recipe in this article.


Using Debug Perspective – setting breakpoints

Packt
07 May 2013
6 min read
(For more resources related to this topic, see here.) In this article we will learn why breakpoints are important, how to set them up, and how to navigate through the code using the Step Into, Step Over, and Step Return breakpoint manipulation options. Let's practice setting up a breakpoint on our sample program. How to do it... I am sure you are aware that in Java, the first method that is executed is the main() method. Our Employee class has its own main() method, so let's set up a breakpoint in it. If you are on Windows and If your project has more than one class, and the main() method is located in a different class from the one you are about to debug, go to the icon and select Debug | Debug Configurations.... Select the class you are about to debug in the left menu and set the Main class option to the class that has the main() method. To set up a breakpoint Go to the main() method of the Employee class. Find andrew.setNumber(123) (on line 110). Right-click on the line number (if you don't see the line number, click on the left margin of the Employee tab and choose Show Line Numbers as shown in the following screenshot). Got to line 110, right-click on the margin again, and click on Toggle Breakpoint. If your breakpoint is successfully set up, you will see the icon right near the line number of the method you wanted to set the breakpoint to. (In our case right near line number 110.) Another way to set a breakpoint is to double-click on the gray field near the line number. To unset the breakpoint, just double-click on it again. Now let's run the debugger by clicking on the icon. If you are asked to switch to Debug Perspective, click Yes. After running the debugger you should see that your program has highlighted the line with the breakpoint, as shown in the following screenshot: This means that the program has stopped executing before line 110 and is waiting for your actions. Before we proceed with exploring how Step Into/ Step Over works, let's take a look at the Variables view. Right now there are two lines: args that represents arguments passed to the main() method and andrew. Expand andrew and you will see the current state of all the class variables as shown in the following screenshot: We will keep on watching this view as it will change when we go through our class. Let's learn how to navigate through the debugger. We navigate with the help of the Step Into/Step Over buttons. Let's see how it works. At this point we are standing at line 110 and there are four options that we can choose. We can Step Into, Step Over, Go to Next breakpoint, or Go Back. Step Into means that we go inside the method and explore its functionality. Let's try it right now. In the top menu there are two buttons: . The first button is responsible for Step Into (F5) and the second one for Step Over (F6). Click on the Step Into (F5) icon. Now you are taken to the setNumber() method. If you take a look at the Variables view, there are two lines now: this, which relates to the class variables, and _number, which is a variable of the local method. Your position is at the if statement evaluating if the number is between certain limits (at line 46). Click on the Step Into (F5) icon again. As there is no method to Step Into, the pointer moves one line down; now it's on number = _number. This line assigns local variables to the class variable. Click on the Step Into (F5) icon again. Right now the pointer should be at line 49. Now let's look at the Variables view again. 
If you expand this, you will see that the number is highlighted in yellow. This is because the variables have changed their values. See the following screenshot:   This highlighting is very useful, as it allows you to see what variables have changed their values right after it has happened. Before we use Step Into one last time in this exercise, let's also take a look at the Debug view. Sometimes, when the project is big, it is very hard to determine what is the hierarchical structure of the classes and methods, which you are in at the moment. In our case the Debug view shows that we are in the setNumber(int) method on line 49, and if we look below this line, we see that we came here from the main() method on line 110. See the following screenshot: If at any time you want to return back to the calling method (in our case it is the main() method) use Step Return (F7) . Click on the Step Into (F5) icon. As we have reached the end of the setNumber() method, Step Into makes it return back to the main() method, and the pointer points to the next line (line 111 in our case). Also, the Debug view no longer shows setNumber() in its hierarchy. Now, let's consider the situation when you don't want to step into the method but you want to go directly to the next method. In this situation we use Step Over (F6). Note that when you use Step Over, the method is still called but the debugger does not walk you through it. Let's step over the setAge() method. Clicking on Step Over (F6), you might see that you were not prompted inside the setAge() method, but directly to line 112. Take a look at the Variables view. You will see that age is set to 28 (that is what the setAge() method does), despite you not walking through this method. The last two controls that I want to mention in this recipe are Resume (F8) , which allows you to continue running your application until it next faces the breakpoint or the end of the program, and Terminate (Ctrl + F2), which stops the debugging process. Summary This article helped you to learn what a breakpoint is. You also learned how to set breakpoints and how to navigate through the debugger using Step Into, Step Over, and Step Return breakpoint manipulation options. Resources for Article : Further resources on this subject: Android Application Testing: Getting Started [Article] JBoss RichFaces 3.3 Supplemental Installation [Article] JBoss AS plug-in and the Eclipse Web Tools Platform [Article]


Validating and Using the Model Data

Packt
07 May 2013
14 min read
(For more resources related to this topic, see here.) Declarative validation It's easy to set up declarative validation for an entity object to validate the data that is passed through the metadata file. Declarative validation is the validation added for an attribute or an entity object to fulfill a particular business validation. It is called declarative validation because we don't write any code to achieve the validation as all the business validations are achieved declaratively. The entity object holds the business rules that are defined to fulfill specific business needs such as a range check for an attribute value or to check if the attribute value provided by the user is a valid value from the list defined. The rules are incorporated to maintain a standard way to validate the data. Knowing the lifecycle of an entity object It is important to know the lifecycle of an entity object before knowing the validation that is applied to an entity object. The following diagram depicts the lifecycle of an entity: When a new row is created using an entity object, the status of the entity is set to NEW. When an entity is initialized with some values, the status is changed from NEW to INITIALIZED. At this time, the entity is marked invalid or dirty; this means that the state of the entity is changed from the value that was previously checked with the database value. The status of an entity is changed to UNMODIFIED, and the entity is marked valid after applying validation rules and committing to the database. When the value of an unmodified entity is changed, the status is changed to MODIFIED and the entity is marked dirty again. The modified entity again goes to an UNMODIFIED state when it is saved to the database. When an entity is removed from the database, the status is changed to DELETED. When the value is committed, the status changes to DEAD. Types of validation Validation rules are applied to an entity to make sure that only valid values are committed to the database and to prevent any invalid data from getting saved to the database. In ADF, we use validation rules for the entity object to make sure the row is valid all the time. There are three types of validation rules that can be set for the entity objects; they are as follows: Entity-level validation Attribute-level validation Transaction-level validation Entity-level validation As we know, an entity represents a row in the database table. Entity-level validation is the business rule that is added to the database row. For example, the validation rule that has to be applied to a row is termed as entity-level validation. There are two unique declarative validators that will be available only for entity-level validation—Collection and UniqueKey. The following diagram explains that entity-level validations are applied on a single row in the EMP table. The validated row is highlighted in bold. Attribute-level validation Attribute-level validations are applied to attributes. Business logic mostly involves specific validations to compare different attribute values or to restrict the attributes to a specific range. These kinds of validations are done in attribute-level validation. Some of the declarative validators available in ADF are Compare, Length, and Range. The Precision and Mandatory attribute validations are added, by default, to the attributes from the column definition in the underlying database table. We can only set the display message for the validation. 
The following diagram explains that the validation is happening on the attributes in the second row: There can be any number of validations defined on a single attribute or on multiple attributes in an entity. In the diagram, Empno has a validation that is different from the validation defined for Ename. Validation for the Job attribute is different from that for the Sal attribute. Similarly, we can define validations for attributes in the entity object. Transaction-level validation Transaction-level validations are done after all entity-level validations are completed. If you want to add any kind of validation at the end of the process, you can defer the validation to the transaction level to ensure that the validation is performed only once. Built-in declarative validators ADF Business Components includes some built-in validators to support and apply validations for entity objects. The following screenshot explains how a declarative validation will show up in the Overview tab: The Business Rules section for the EmpEO.xml file will list all the validations for the EmpEO entity. In the previous screenshot, we will see that the there are no entity-level validators defined and some of the attribute-level validations are listed in the Attributes folder. Collection validator A Collection validator is available only for entity-level validation. To perform operations such as average, min, max, count, and sum for the collection of rows, we use the collection validator. Collection validators are compared to the GROUP BY operation in an SQL query with a validation. The aggregate functions, such as count, sum, min, and max are added to validate the entity row. The validator is operated against the literal value, expression, query result, and so on. You must have the association accessor to add a collection validation. Time for action – adding a collection validator for the DeptEO file Now, we will add a Collection validator to DeptEO.xml for adding a count validation rule. Imagine a business rule that says that the number of employees added to department number 10 should be more than five. In this case, you will have a count operation for the employees added to department number 10 and show a message if the count is less than 5 for a particular department. We will break this action into the following three parts: Adding a declarative validation: In this case, the number of employees added to the department should be greater than five Specifying the execution rule: In our case, the execution of this validation should be fired only for department number 10 Displaying the error message: We have to show an error message to the user stating that the number of employees added to the department is less than five Adding the validation Following are the steps to add the validation: Go to the Business Rules section of DeptEO.xml. You will find the Business Rules section in the Overview tab. Select Entity Validators and click on the + button. You may right-click on the Entity Validators folder and then select New Validator to add a validator. Select Collection as Rule Type and move on to the Rule Definition tab. In this section, select Count for the Operation field; Accessor is the association accessor that gets added through a composition association relationship. Only the composition association accessor will be listed in the Accessor drop-down menu. Select the accessor for EmpEO listed in the dropdown, with Empno as the value for Attribute. 
In order to create a composition association accessor, you will have to create an association between DeptEO.xml and EmpEO.xml based on the Deptno attribute with cardinality of 1 to *. The Composition Association option has to be selected to enable a composition relationship between the two entities. The value of the Operator option should be selected as Greater Than. Compare with will be a literal value, which is 5 that can be entered in the Enter Literal Value section below. Specifying the execution rule Following are the steps to specify the execution: Now to set the execution rule, we will move to the Validation Execution tab. In the Conditional Execution section, add Deptno = '10' as the value for Conditional Execution Expression. In the Triggering Attribute section, select the Execute only if one of the Selected Attributes has been changed checkbox. Move the Empno attribute to the Selected Attributes list. This will make sure that the validation is fired only if the Empno attribute is changed: Displaying the error message Following are the steps to display the error message: Go to the Failure Handling section and select the Error option for Validation Failure Severity. In the Failure Message section, enter the following text: Please enter more than 5 Employees You can add the message stored in a resource bundle to Failure Message by clicking on the magnifying glass icon. What just happened? We have added a collection validation for our EmpEO.xml object. Every time a new employee is added to the department, the validation rule fires as we have selected Empno as our triggering attribute. The rule is also validated against the condition that we have provided to check if the department number is 10. If the department number is 10, the count for that department is calculated. When the user is ready to commit the data to the database, the rule is validated to check if the count is greater than 5. If the number of employees added is less than 5, the error message is displayed to the user. When we add a collection validator, the EmpEO.xml file gets updated with appropriate entries. The following entries get added for the aforementioned validation in the EmpEO.xml file: <validation:CollectionValidationBean Name="EmpEO_Rule_0" ResId= "com.empdirectory.model.entity.EmpEO_Rule_0" OnAttribute="Empno" OperandType="LITERAL" Inverse="false" CompareType="GREATERTHAN" CompareValue="5" Operation="count"> <validation:OnCondition> <![CDATA[Deptno = '10']]> </validation:OnCondition> </validation:CollectionValidationBean> <ResourceBundle> <PropertiesBundle PropertiesFile= "com.empdirectory.model.ModelBundle"/> </ResourceBundle> The error message that is added in the Failure Handling section is automatically added to the resource bundle. The Compare validator The Compare validator is used to compare the current attribute value with other values. The attribute value can be compared against the literal value, query result, expression, view object attribute, and so on. The operators supported are equal, not-equal, less-than, greater-than, less-than or equal to, and greater-than or equal to. The Key Exists validator This validator is used to check if the key value exists for an entity object. The key value can be a primary key, foreign key, or an alternate key. The Key Exists validator is used to find the key from the entity cache, and if the key is not found, the value is determined from the database. Because of this reason, the Key Exists validator is considered to give better performance. 
For example, when an employee is assigned to department deptNo 50, you may want to make sure that deptNo 50 already exists in the DEPT table.

The Length validator

This validator is used to check the string length of an attribute value. The comparison is based on the character or byte length.

The List validator

This validator is used to create a validation for the attribute against a list. The operators included in this validation are In and NotIn. These two operators help the validation rule check whether an attribute value is in a list.

The Method validator

Sometimes, we would like to add our own validation with some extra logic coded in our Java class file. For this purpose, ADF provides a declarative validator to map the validation rule against a method in the entity implementation class. The implementation class is generated in the Java section of the entity object. We need to create and select a method to handle method validation. The method is named validateXXX(), and the returned value will be of the Boolean type.

The Range validator

This validator is used to add a rule to validate a range for the attribute value. The operators included are Between and NotBetween. The range will have a minimum and maximum value that can be entered for the attribute.

The Regular Expression validator

For example, let us consider that we have a validation rule to check if the e-mail ID provided by the user is in the correct format. For e-mail validation, we have some common rules such as the following: the e-mail ID should start with a string followed by the @ character, the e-mail ID's last character cannot be the dot (.) character, and two @ characters are not allowed within an e-mail ID. For this purpose, ADF provides a declarative Regular Expression validator. We can use a regex pattern to check the value of the attribute. The e-mail address and the US phone number patterns are provided by default:

Email: [A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}
Phone Number (US): [0-9]{3}-?[0-9]{3}-?[0-9]{4}

You should select the required pattern and then click on the Use Pattern button to use it. Matches and NotMatches are the two operators that are included with this validator.

The Script validator

If we want to include an expression to validate the business rule, the Script validator is the best choice. ADF supports Groovy expressions to provide Script validation for an attribute.

The UniqueKey validator

This validator is available for use only for entity-level validation. To check for uniqueness in the record, we use this validator. If we have a primary key defined for the entity object, the Uniqueness Check Definition section will list the primary keys defined to check for uniqueness, as shown in the following screenshot. If we have to perform a uniqueness check against any attribute other than the primary key attributes, we will have to create an alternate key for the entity object.

Time for action – creating an alternate key for DeptEO

Currently, the DeptEO.xml file has Deptno as the primary key. We would like to add a business validation that states that there should be no way to create a duplicate of a department name that is already available. The following steps show how to create an alternate key:

1. Go to the General section of the DeptEO.xml file and expand the Alternate Keys section. Alternate keys are keys that are not part of the primary key.
2. Click on the little + icon to add a new alternate key.
3. Move the Dname attribute from the Available list to the Selected list and click on the OK button.
What just happened? We have created an alternate key against the Dname attribute to prepare for a unique check validation for the department name. When the alternate key is added to an entity object, we will see the AltKey attribute listed in the Alternate Key section of the General tab. In the DeptEO.xml file, you will find the following code that gets added for the alternate key definition: <Key Name="AltKey" AltKey="true"> <DesignTime> <Attr Name="_isUnique" Value="true"/> <Attr Name="_DBObjectName" Value="HR.DEPT"/> </DesignTime> <AttrArray Name="Attributes"> <Item Value= "com.empdirectory.model.entity.DeptEO.Dname"/> </AttrArray> </Key> Have a go hero – compare the attributes For the first time, we have learned about the validations in ADF. So it's time for you to create your own validation for the EmpEO and DeptEO entity objects. Add validations for the following business scenarios: Continue with the creation of the uniqueness check for the department name in the DeptEO.xml file. The salary of the employees should not be greater than 1000. Display the following message if otherwise: Please enter Salary less than 1000. Display the message invalid date if the employee's hire date is after 10-10-2001. The length of the characters entered for Dname of DeptEO.xml should not be greater than 10. The location of a department can only be NEWYORK, CALIFORNIA, or CHICAGO. The department name should always be entered in uppercase. If the user enters a value in lowercase, display a message. The salary of an employee with the MANAGER job role should be between 800 and 1000. Display an error message if the value is not in this range. The employee name should always start with an uppercase letter and should end with any character other than special characters such as :, ;, and _. After creating all the validations, check the code and tags generated in the entity's XML file for each of the aforementioned validations.


Creating a Lazarus Component

Packt
22 Apr 2013
14 min read
Creating a new component package

We are going to create a custom logging component and add it to the Misc tab of the component palette. To do this, we first need to create a new package and add our component to that package, along with any other required resources, such as an icon for the component. To create a new package, do the following:

1. Select Package from the main menu.
2. Select New Package... from the submenu.
3. In the Save dialog that appears, select a directory and create a new directory called MyComponents.
4. Select the MyComponents directory.
5. Enter MyComponents as the filename and press the Save button.

Now you have a new package that is ready to have components added to it. Follow these steps:

1. On the Package dialog window, click on the add (+) button.
2. Select the New Component tab.
3. Select TComponent as Ancestor Type.
4. Set New class name to TMessageLog.
5. Set Palette Page to Misc.
6. Leave all the other settings as they are. You should now have something similar to the following screenshot. If so, click on the Create New Component button:

You should see messagelog.pas listed under the Files node in the Package dialog window. Let's open this file and see what the auto-generated code contains. Double-click on the file or choose Open file from the More menu in the Package dialog.

Do not name your component the same as the package. This will cause you problems when you compile the package later. If you were to do this, the .pas file would be overwritten, because the compile procedure creates a .pas file for the package automatically.

The code in the Source Editor window is given as follows:

unit TMessageLog;

{$mode objfpc}{$H+}

interface

uses
  Classes, SysUtils, LResources, Forms, Controls, Graphics, Dialogs, StdCtrls;

type
  TMessageLog = class(TComponent)
  private
    { Private declarations }
  protected
    { Protected declarations }
  public
    { Public declarations }
  published
    { Published declarations }
  end;

procedure Register;

implementation

procedure Register;
begin
  RegisterComponents('Misc', [TMessageLog]);
end;

end.

What should stand out in the auto-generated code is the global procedure RegisterComponents. RegisterComponents is contained in the Classes unit. The procedure registers the component (or components, if you create more than one in the unit) to the component page that is passed to it as the first parameter of the procedure.

Since everything is in order, we can now compile the package and install the component:

1. Click the Compile button on the toolbar.
2. Once the compile procedure has completed, select Install, which is located in the menu under the Use button.
3. You will be presented with a dialog telling you that Lazarus needs to be rebuilt. Click on the Yes button, as shown in the following screenshot:

The Lazarus rebuilding process will take some time. When it is complete, Lazarus will need to be restarted. If this does not happen automatically, then restart Lazarus yourself. On restarting Lazarus, select the Misc tab on the component palette. You should see the new component as the last component on the tab, as shown in the following screenshot:

You have now successfully created and installed a new component. You can now create a new application and add this component to a Lazarus form. The component in its current state does not perform any action. Let us now look at adding properties and events to the component that will be accessible in the Object Inspector window at design time.
Adding properties

Properties of a component that you would like to have visible in the Object Inspector window must be declared as published. Properties are attributes that determine an object's status and behavior. A property is a name that is mapped to read and write methods or that accesses data directly. This means that when you read or write a property, you are accessing a field or calling a method of the object.

For example, let us add a FileName property to TMessageLog, which is the name of the file that messages will be written to. The actual field of the object that will store this data will be named fFileName.

To the TMessageLog private declaration section, add:

fFileName: String;

To the TMessageLog published declaration section, add:

property FileName: String read fFileName write fFileName;

With these changes, when the package is compiled and installed, the property FileName will be visible in the Object Inspector window when a TMessageLog component is added to a form in a project. You can do this now if you would like to verify this.

Adding events

Any interaction that a user has with a component, such as clicking it, generates an event. Events are also generated by the system in response to a method call or a change in a component's property, or when a different component's property changes; for example, setting the focus on one component causes the component currently in focus to lose it, which triggers an event call.

Event handlers are methods of the form containing the component; this technique is referred to as delegation. You will notice that when you double-click on a component's event in the Object Inspector, it creates a new procedure of the form. Events are properties, and methods are assigned to event properties just as values are assigned to normal properties. Because events are properties and use delegation, multiple events can share the same event handler.

The simplest way to create an event is to define a method of the type TNotifyEvent. For example, if we want to add an OnChange event to TMessageLog, we could add the following code:

...
private
  FOnChange: TNotifyEvent;
...
public
  property OnChange: TNotifyEvent read FOnChange write FOnChange;
...
end;

When you double-click on the OnChange event in the Object Inspector, the following method stub would be created in the form containing the TMessageLog component:

procedure TForm.MessageLogChange(Sender: TObject);
begin
end;

Some events, such as OnChange or OnFocus, are called on the change of value of a component's property or the firing of another event. Traditionally, in this case, a method with the prefix Do and the suffix of the On event is called. So, in the case of our OnChange event, it would be called from the DoChange method (as called by some other method). Let us assume that, when a filename is set for the TMessageLog component, the procedure SetFileName is called, and that calls DoChange. The code would look as follows:

procedure SetFileName(name: string);
begin
  FFileName := name;
  // fire the event
  DoChange;
end;

procedure DoChange;
begin
  if Assigned(FOnChange) then
    FOnChange(Self);
end;

The DoChange procedure checks to see if anything has been assigned to the FOnChange field. If it is assigned, then it executes what is assigned to it. What this means is that if you double-click on the OnChange event in the Object Inspector, it assigns the method name you enter to FOnChange, and this is the method that is called by DoChange.
Events with more parameters

You probably noticed that the OnChange event only had one parameter, Sender, which is of the type TObject. Most of the time this is adequate, but there may be times when we want to send other parameters into an event. In those cases, TNotifyEvent is not an adequate type, and we will need to define a new type. The new type will need to be a method pointer type, which is similar to a procedural type but has the keyword of object at the end of the declaration.

In the case of TMessageLog, we may need to perform some action before or after a message is written to the file. To do this, we will need to declare two method pointers, TBeforeWriteMsgEvent and TAfterWriteMsgEvent, both of which will be triggered in another method named WriteMessage. The modification of our code will look as follows:

type
  TBeforeWriteMsgEvent = procedure(var Msg: String; var OKToWrite: Boolean) of Object;
  TAfterWriteMsgEvent = procedure(Msg: String) of Object;

  TMessageLog = class(TComponent)
  ...
  public
    function WriteMessage(Msg: String): Boolean;
  ...
  published
    property OnBeforeWriteMsg: TBeforeWriteMsgEvent read fBeforeWriteMsg write fBeforeWriteMsg;
    property OnAfterWriteMsg: TAfterWriteMsgEvent read fAfterWriteMsg write fAfterWriteMsg;
  end;

implementation

function TMessageLog.WriteMessage(Msg: String): Boolean;
var
  OKToWrite: Boolean;
begin
  Result := FALSE;
  OKToWrite := TRUE;
  if Assigned(fBeforeWriteMsg) then
    fBeforeWriteMsg(Msg, OKToWrite);
  if OKToWrite then
  begin
    try
      AssignFile(fLogFile, fFileName);
      if FileExists(fFileName) then
        Append(fLogFile)
      else
        ReWrite(fLogFile);
      WriteLn(fLogFile, DateTimeToStr(Now()) + ' - ' + Msg);
      if Assigned(fAfterWriteMsg) then
        fAfterWriteMsg(Msg);
      Result := TRUE;
      CloseFile(fLogFile);
    except
      MessageDlg('Cannot write to log file, ' + fFileName + '!', mtError, [mbOK], 0);
      CloseFile(fLogFile);
    end; // try...except
  end; // if
end; // WriteMessage

While examining the function WriteMessage, we see that, before the Msg parameter is written to the file, the fBeforeWriteMsg field is checked to see if anything is assigned to it, and, if so, the method assigned to that field is called with the parameters Msg and OKToWrite. The method pointer TBeforeWriteMsgEvent declares both of these parameters as var types, so if any changes are made to them in the handler, the changes will be returned to the WriteMessage function. If the Msg parameter is successfully written to the file, the fAfterWriteMsg field is checked and, if assigned, executed. The file is then closed and the function's result is set to True. If the Msg parameter value cannot be written to the file, an error dialog is shown, the file is closed, and the function's result is set to False.

With the changes that we have made to the TMessageLog unit, we now have a functional component. You can now save the changes, recompile, reinstall the package, and try out the new component by creating a small application using the TMessageLog component.
Property editors

Property editors are custom dialogs for editing special properties of a component. The standard property types, such as strings, images, or enumerated types, have default property editors, but special property types may require you to write custom property editors. Custom property editors must extend from the class TPropertyEditor or one of its descendant classes. Property editors must be registered in the Register procedure using the function RegisterPropertyEditor from the unit PropEdits.

An example of a property editor class declaration is given as follows:

TPropertyEditor = class
public
  function AutoFill: Boolean; Virtual;
  procedure Edit; Virtual; // double-clicking the property value to activate
  procedure ShowValue; Virtual; // control-clicking the property value to activate
  function GetAttributes: TPropertyAttributes; Virtual;
  function GetEditLimit: Integer; Virtual;
  function GetName: ShortString; Virtual;
  function GetHint(HintType: TPropEditHint; x, y: integer): String; Virtual;
  function GetDefaultValue: AnsiString; Virtual;
  function SubPropertiesNeedsUpdate: Boolean; Virtual;
  function IsDefaultValue: Boolean; Virtual;
  function IsNotDefaultValue: Boolean; Virtual;
  procedure GetProperties(Proc: TGetPropEditProc); Virtual;
  procedure GetValues(Proc: TGetStrProc); Virtual;
  procedure SetValue(const NewValue: AnsiString); Virtual;
  procedure UpdateSubProperties; Virtual;
end;

Having a class as a property of a component is a good example of a property that would need a custom property editor. Because a class has many fields with different formats, it is not possible for Lazarus to have the Object Inspector make these fields available for editing without a property editor created for a class property, as with standard type properties. For such properties, Lazarus shows the property name in parentheses followed by a button with an ellipsis (...) that activates the property editor. This functionality is handled by the standard property editor called TClassPropertyEditor, which can then be inherited to create a custom property editor, as given in the following code:

TClassPropertyEditor = class(TPropertyEditor)
public
  constructor Create(Hook: TPropertyEditorHook; APropCount: Integer); Override;
  function GetAttributes: TPropertyAttributes; Override;
  procedure GetProperties(Proc: TGetPropEditProc); Override;
  function GetValue: AnsiString; Override;
  property SubPropsTypeFilter: TTypeKinds Read FSubPropsTypeFilter Write SetSubPropsTypeFilter Default tkAny;
end;

Using the preceding class as a base class, all you need to do to complete a property editor is add a dialog in the Edit method as follows:

TMyPropertyEditor = class(TClassPropertyEditor)
public
  procedure Edit; Override;
  function GetAttributes: TPropertyAttributes; Override;
end;

procedure TMyPropertyEditor.Edit;
var
  MyDialog: TCommonDialog;
begin
  MyDialog := TCommonDialog.Create(NIL);
  try
    ...
    // Here you can set attributes of the dialog
    MyDialog.Options := MyDialog.Options + [fdShowHelp];
    ...
  finally
    MyDialog.Free;
  end;
end;

Component editors

Component editors control the behavior of a component when it is double-clicked or right-clicked in the form designer. Classes that define a component editor must descend from TComponentEditor or one of its descendant classes. The class should be registered in the Register procedure using the function RegisterComponentEditor. Most of the methods of TComponentEditor are inherited from its ancestor TBaseComponentEditor, and, if you are going to write a component editor, you need to be aware of this class and its methods.
Declaration of TBaseComponentEditor is as follows:

TBaseComponentEditor = class
protected
public
  constructor Create(AComponent: TComponent;
    ADesigner: TComponentEditorDesigner); Virtual;
  procedure Edit; Virtual; Abstract;
  procedure ExecuteVerb(Index: Integer); Virtual; Abstract;
  function GetVerb(Index: Integer): String; Virtual; Abstract;
  function GetVerbCount: Integer; Virtual; Abstract;
  procedure PrepareItem(Index: Integer; const AnItem: TMenuItem); Virtual; Abstract;
  procedure Copy; Virtual; Abstract;
  function IsInInlined: Boolean; Virtual; Abstract;
  function GetComponent: TComponent; Virtual; Abstract;
  function GetDesigner: TComponentEditorDesigner; Virtual; Abstract;
  function GetHook(out Hook: TPropertyEditorHook): Boolean; Virtual; Abstract;
  procedure Modified; Virtual; Abstract;
end;

Let us look at some of the more important methods of the class. The Edit method is called on the double-clicking of a component in the form designer. GetVerbCount and GetVerb are called to build the context menu that is invoked by right-clicking on the component. A verb is a menu item: GetVerb returns the name of the menu item, and GetVerbCount gets the total number of items on the context menu. The PrepareItem method is called for each menu item after the menu is created, and it allows the menu item to be customized, such as adding a submenu or hiding the item by setting its visibility to False. ExecuteVerb executes the menu item. The Copy method is called when the component is copied to the clipboard.

A good example of a component editor is the TCheckListBox component editor. It is a descendant of TComponentEditor, so not all the methods of TBaseComponentEditor need to be implemented; TComponentEditor provides empty implementations for most methods and sets defaults for others. Using this, the methods that are needed for the TCheckListBoxComponentEditor component are overridden. An example of the TCheckListBoxComponentEditor code is given as follows:

TCheckListBoxComponentEditor = class(TComponentEditor)
protected
  procedure DoShowEditor;
public
  procedure ExecuteVerb(Index: Integer); override;
  function GetVerb(Index: Integer): String; override;
  function GetVerbCount: Integer; override;
end;

procedure TCheckGroupComponentEditor.DoShowEditor;
var
  Dlg: TCheckGroupEditorDlg;
begin
  Dlg := TCheckGroupEditorDlg.Create(NIL);
  try
    // .. shortened
    Dlg.ShowModal;
    // .. shortened
  finally
    Dlg.Free;
  end;
end;

procedure TCheckGroupComponentEditor.ExecuteVerb(Index: Integer);
begin
  case Index of
    0: DoShowEditor;
  end;
end;

function TCheckGroupComponentEditor.GetVerb(Index: Integer): String;
begin
  Result := 'CheckBox Editor...';
end;

function TCheckGroupComponentEditor.GetVerbCount: Integer;
begin
  Result := 1;
end;

Summary

In this article, we learned how to create a new Lazarus package and add a new component to it using the New Package dialog window, creating our own custom component, TMessageLog. We also learned about compiling and installing a new component into the IDE, which requires Lazarus to rebuild itself. Moreover, we discussed component properties. Then, we became acquainted with events, which are triggered by any interaction that a user has with a component, such as clicking it, or by a system response, which could be caused by a change in any component of a form that affects another component. We studied that events are properties, and they are handled through a technique called delegation.
We discovered that the simplest way to create an event is to define it using the type TNotifyEvent; if you need to send more parameters to an event than the single Sender parameter provided by TNotifyEvent, then you need to declare a method pointer type. We learned that property editors are custom dialogs for editing special properties of a component that aren't of a standard type, such as string or integer, and that they must extend from TPropertyEditor. Then, we discussed component editors, which control the behavior of a component when it is right-clicked or double-clicked in the form designer, and that a component editor must descend from TComponentEditor or a descendant class of it. Finally, we looked at an example of a component editor for the TCheckListBox.

Resources for Article:

Further resources on this subject:

User Extensions and Add-ons in Selenium 1.0 Testing Tools [Article]
10 Minute Guide to the Enterprise Service Bus and the NetBeans SOA Pack [Article]
Support for Developers of Spring Web Flow 2 [Article]


Querying and Selecting Data

Packt
17 Apr 2013
13 min read
Constructing proper attribute query syntax

The construction of proper attribute queries is critical to your success in creating geoprocessing scripts that query data from feature classes and tables. All attribute queries that you execute against feature classes and tables will need to have the correct SQL syntax and also follow various rules depending upon the datatype that you execute the queries against.

Getting ready

Creating the syntax for attribute queries is one of the most difficult and time-consuming tasks that you'll need to master when creating Python scripts that incorporate the use of the Select by Attributes tool. These queries are basically SQL statements along with a few idiosyncrasies that you'll need to master. If you already have a good understanding of creating queries in ArcMap, or perhaps experience with creating SQL statements in other programming languages, then this will be a little easier for you. In addition to creating valid SQL statements, you also need to be aware of some specific Python syntax requirements and some datatype differences that will result in a slightly altered formatting of your statements for some datatypes. In this recipe, you'll learn how to construct valid query syntax and understand the nuances of how different datatypes alter the syntax, as well as some Python-specific constructs.

How to do it…

Initially, we're going to take a look at how queries are constructed in ArcMap, so that you can get a feel for how they are structured.

1. In ArcMap, open C:\ArcpyBook\Ch8\Crime_Ch8.mxd.
2. Right-click on the Burglaries in 2009 layer and select Open Attribute Table. You should see an attribute table similar to the following screenshot. We're going to be querying the SVCAREA field:
3. With the attribute table open, select the Table Options button and then Select by Attributes to display a dialog box that will allow you to construct an attribute query. Notice the Select * FROM Burglary WHERE: statement on the query dialog box (shown in the following screenshot). This is a basic SQL statement that will return all the columns from the attribute table for Burglary that meet the condition we define through the query builder. The asterisk (*) simply indicates that all fields will be returned:
4. Make sure that Create a new selection is the selected item in the Method dropdown list. This will create a new selection set.
5. Double-click on SVCAREA from the list of fields to add the field to the SQL statement builder, as follows:
6. Click on the = button.
7. Click on the Get Unique Values button.
8. From the list of values generated, double-click on 'North' to complete the SQL statement, as shown in the following screenshot:
9. Click on the Apply button to execute the query. This should select 7520 records. Many people mistakenly assume that you can simply take a query that has been generated in this fashion and paste it into a Python script. That is not the case. There are some important differences that we'll cover next.
10. Close the Select by Attributes window and the Burglaries in 2009 table.
11. Clear the selected feature set by clicking on Selection | Clear Selected Features.
12. Open the Python window and add the code to import arcpy:
import arcpy

13. Create a new variable to hold the query and add the exact same statement that you created earlier:

qry = "SVCAREA" = 'North'

14. Press Enter on your keyboard and you should see an error message similar to the following:

Runtime error SyntaxError: can't assign to literal (<string>, line 1)

Python interprets SVCAREA and North as strings, but the equal to sign between the two is not part of the string used to set the qry variable. There are several things we need to do to generate a syntactically correct statement for the Python interpreter. One important thing has already been taken care of, though. Each field name used in a query needs to be surrounded by double quotes. In this case, SVCAREA is the only field used in the query and it has already been enclosed by double quotes. This will always be the case when you're working with shapefiles, file geodatabases, or ArcSDE geodatabases. Here is where it gets a little confusing though. If you're working with data from a personal geodatabase, the field names will need to be enclosed by square brackets instead of double quotes, as shown in the following code example. This can certainly lead to confusion for script developers.

qry = [SVCAREA] = 'North'

Now, we need to deal with the single quotes surrounding 'North'. When querying data from fields that have a text datatype, the string being evaluated must be enclosed by quotes. If you examine the original query, you'll notice that we have in fact already enclosed the word North with quotes, so everything should be fine, right? Unfortunately, it's not that simple with Python. Quotes, along with a number of other characters, must be escaped with a backslash followed by the character being escaped. In this case, the escape sequence would be \'. Alter your query syntax to incorporate the escape sequence:

qry = "SVCAREA" = \'North\'

Finally, the entire query statement should be enclosed with quotes:

qry = '"SVCAREA" = \'North\''

In addition to the = sign, which tests for equality, there are a number of additional operators that you can use with strings and numeric data, including not equal (<>), greater than (>), greater than or equal to (>=), less than (<), and less than or equal to (<=). Wildcard characters, including % and _, can also be used for shapefiles, file geodatabases, and ArcSDE geodatabases. The % character represents any number of characters. The LIKE operator is often used with wildcard characters to perform partial string matching. For example, the following query would find all records with a service area that begins with N and has any number of characters after:

qry = '"SVCAREA" LIKE \'N%\''

The underscore character (_) can be used to represent a single character. For personal geodatabases, the asterisk (*) is used to represent a wildcard character for any number of characters, while (?) represents a single character. You can also query for the absence of data, also known as NULL values. A NULL value is often mistaken for a value of zero, but that is not the case. NULL values indicate the absence of data, which is different from a value of zero. Null operators include IS NULL and IS NOT NULL. The following code example will find all records where the SVCAREA field contains no data:

qry = '"SVCAREA" IS NULL'

The final topic that we'll cover in this section is the set of operators used for combining expressions where multiple query conditions need to be met. The AND operator requires that both query conditions be met for the query result to be true, resulting in selected records.
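Putting these rules together, the following short sketch shows how a handful of such query strings could be assembled in Python before being handed to a selection tool. The SVCAREA field comes from the Burglary layer used above; the POPULATION field in the last, compound query is purely hypothetical and only illustrates combining conditions with AND:

# Equality test: field name in double quotes, text value in escaped single quotes
qry_equal = '"SVCAREA" = \'North\''

# Partial string matching with LIKE and the % wildcard
qry_like = '"SVCAREA" LIKE \'N%\''

# Testing for the absence of data
qry_null = '"SVCAREA" IS NULL'

# Combining two conditions with AND (POPULATION is a hypothetical numeric field)
qry_and = '"SVCAREA" = \'North\' AND "POPULATION" >= 10000'

print qry_and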
The OR operator requires that at least one of the conditions be met.

How it works…
The creation of syntactically correct queries is one of the most challenging aspects of programming ArcGIS with Python. However, once you understand some basic rules, it gets a little easier. In this section, we'll summarize these rules. One of the more important things to keep in mind is that field names must be enclosed with double quotes for all datasets, with the exception of personal geodatabases, which require square brackets surrounding field names. There is also an AddFieldDelimiters() function that you can use to add the correct delimiter to a field based on the datasource supplied as a parameter to the function. The syntax for this function is as follows:

AddFieldDelimiters(dataSource, field)

Additionally, most people, especially those new to programming with Python, struggle with the issue of adding single quotes to string values being evaluated by the query. In Python, quotes have to be escaped with a single backslash followed by the quote. Using this escape sequence will ensure that Python does in fact see that as a quote rather than the end of the string. Finally, take some time to familiarize yourself with the wildcard characters. For datasets other than personal geodatabases, you'll use the (%) character for multiple characters and an underscore (_) character for a single character. If you're using a personal geodatabase, the (*) character is used to match multiple characters and the (?) character is used to match a single character. Obviously, the syntax differences between personal geodatabases and all other types of datasets can lead to some confusion.

Creating feature layers and table views
Feature layers and table views serve as intermediate datasets held in memory for use specifically with tools such as Select by Location and Select by Attributes. Although these temporary datasets can be saved, they are not needed in most cases.

Getting ready
Feature classes are physical representations of geographic data and are stored as files (shapefiles, personal geodatabases, and file geodatabases) or within a geodatabase. ESRI defines a feature class as "a collection of features that shares a common geometry (point, line, or polygon), attribute table, and spatial reference." Feature classes can contain default and user-defined fields. Default fields include the SHAPE and OBJECTID fields. These fields are maintained and updated automatically by ArcGIS. The SHAPE field holds the geometric representation of a geographic feature, while the OBJECTID field holds a unique identifier for each feature. Additional default fields will also exist depending on the type of feature class. A line feature class will have a SHAPE_LENGTH field. A polygon feature class will have both a SHAPE_LENGTH and a SHAPE_AREA field. Optional fields are created by end users of ArcGIS and are not automatically updated by GIS. These contain attribute information about the features. These fields can also be updated by your scripts. Tables are physically represented as standalone DBF tables or within a geodatabase. Both tables and feature classes contain attribute information. However, a table contains only attribute information. There isn't a SHAPE field associated with a table, and they may or may not contain an OBJECTID field. Standalone Python scripts that use the Select by Attributes or Select by Location tool require that you create an intermediate dataset rather than using feature classes or tables.
These intermediate datasets are temporary in nature and are called Feature Layers or Table Views. Unlike feature classes and tables, these temporary datasets do not represent actual files on disk or within a geodatabase. Instead, they are "in memory" representations of feature classes and tables. These datasets are active only while a Python script is running. They are removed from memory after the tool has executed. However, if the script is run from within ArcGIS as a script tool, then the temporary layer can be saved either by right-clicking on the layer in the table of contents and selecting Save As Layer File or simply by saving the map document file. Feature layers and table views must be created as a separate step in your Python scripts, before you can call the Select by Attributes or Select by Location tools. The Make Feature Layer tool generates the "in-memory" representation of a feature class, which can then be used to create queries and selection sets, as well as to join tables. After this step has been completed, you can use the Select by Attributes or Select by Location tool. Similarly, the Make Table View tool is used to create an "in-memory" representation of a table. The function of this tool is the same as Make Feature Layer. Both the Make Feature Layer and Make Table View tools require an input dataset, an output layer name, and an optional query expression, which can be used to limit the features or rows that are a part of the output layer. In addition, both tools can be found in the Data Management Tools toolbox. The syntax for using the Make Feature Layer tool is as follows:

arcpy.MakeFeatureLayer_management(<input feature layer>, <output layer name>, {where clause})

The syntax for using the Make Table View tool is as follows:

arcpy.MakeTableView_management(<input table>, <output table name>, {where clause})

In this recipe, you will learn how to use the Make Feature Layer and Make Table View tools. These tasks will be done inside ArcGIS, so that you can see the in-memory copy of the layer that is created.

How to do it…
Follow these steps to learn how to use the Make Feature Layer and Make Table View tools:
Open C:\ArcpyBook\Ch8\Crime_Ch8.mxd in ArcMap.
Open the Python window.
Import the arcpy module:
import arcpy
Set the workspace:
arcpy.env.workspace = "c:/ArcpyBook/data/CityOfSanAntonio.gdb"
Start a try block:
try:
Make an in-memory copy of the Burglary feature class using the Make Feature Layer tool. Make sure you indent this line of code:
flayer = arcpy.MakeFeatureLayer_management("Burglary","Burglary_Layer")
Add an except block and a line of code to print an error message in the event of a problem:
except:
print "An error occurred during creation"
The entire script should appear as follows:

import arcpy
arcpy.env.workspace = "c:/ArcpyBook/data/CityOfSanAntonio.gdb"
try:
    flayer = arcpy.MakeFeatureLayer_management("Burglary","Burglary_Layer")
except:
    print "An error occurred during creation"

Save the script to C:\ArcpyBook\Ch8\CreateFeatureLayer.py.
Run the script. The new Burglary_Layer file will be added to the ArcMap table of contents:
The Make Table View tool functionality is equivalent to the Make Feature Layer tool. The difference is that it works against standalone tables instead of feature classes.
Remove the following line of code:

flayer = arcpy.MakeFeatureLayer_management("Burglary","Burglary_Layer")

Add the following line of code in its place:

tView = arcpy.MakeTableView_management("Crime2009Table", "Crime2009TView")

Run the script to see the table view added to the ArcMap table of contents.

How it works...
The Make Feature Layer and Make Table View tools create in-memory representations of feature classes and tables respectively. Both the Select by Attributes and Select by Location tools require that these temporary, in-memory structures be passed in as parameters when called from a Python script. Both tools also require that you pass in a name for the temporary structures.

There's more...
You can also apply a query to either the Make Feature Layer or Make Table View tools to restrict the records returned in the feature layer or table view. This is done through the addition of a where clause when calling either of the tools from your script. This query is much the same as if you'd set a definition query on the layer through Layer Properties | Definition Query. The syntax for adding a query is as follows:

MakeFeatureLayer(in_features, out_layer, where_clause)
MakeTableView(in_table, out_view, where_clause)
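As a quick illustration of the There's more... note above, the following is a minimal sketch that combines a query string with the Make Feature Layer and Select Layer By Attribute tools. It reuses the Burglary feature class and the SVCAREA field from the recipes in this article; treat the workspace path and the 'North' value as assumptions to adapt to your own data:

import arcpy

arcpy.env.workspace = "c:/ArcpyBook/data/CityOfSanAntonio.gdb"
try:
    # AddFieldDelimiters wraps the field name in the correct delimiter
    # for the data source (double quotes for a file geodatabase)
    field = arcpy.AddFieldDelimiters(arcpy.env.workspace, "SVCAREA")
    qry = field + " = 'North'"
    # Create the in-memory layer; the same qry string could instead be
    # passed as the third argument to restrict the layer itself
    flayer = arcpy.MakeFeatureLayer_management("Burglary", "Burglary_Layer")
    # Apply the query with Select Layer By Attribute
    arcpy.SelectLayerByAttribute_management("Burglary_Layer", "NEW_SELECTION", qry)
    print "Selected records: " + str(arcpy.GetCount_management("Burglary_Layer"))
except:
    print "An error occurred during selection"

Because the query string is built with double quotes on the outside, the single quotes around North do not need to be escaped here; the escaping rules described earlier apply when the outer quotes are also single quotes.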

Testing your App

Packt
12 Apr 2013
15 min read
(For more resources related to this topic, see here.)

Types of testing
Testing can happen on many different levels. From the code level to integration and even testing individual functions of the user-facing implementation of an enterprise application, there are numerous tools and techniques to test your application. In particular, we will cover the following:
Unit testing
Functional testing
Browser testing

Black box versus white box testing
Testing is often talked about within the context of black box versus white box testing. This is a useful metaphor in understanding testing at different levels. With black box testing, you look at your application as a black box knowing nothing of its internals—typically from the perspective of a user of the system. You simply execute functionality of the application and test whether the expected outcomes match the actual outcomes. White box differs from black box testing in that you know the internals of the application upfront and can thus pinpoint failures directly and test for specific conditions. In this case, you simply feed data into specific parts of the system and test whether the expected output matches the actual output.

Unit testing
The first level of testing is at the code level. When you are testing specific and individual units of code on whether they meet their stated goals, you are unit testing. Unit testing is often talked about in conjunction with test-driven development, the practice of writing unit tests first and then writing the minimal amount of code necessary to pass those tests. Having a suite of unit tests against your code and employing test-driven processes—when done right—can keep your code focused and help to ensure the stability of your enterprise application. Typically, unit tests are set up in a separate folder in your codebase. Each test case is composed of the following parts:
Setup to build the test conditions under which the code or module is being tested
An instantiation and invocation of the code or module being tested
A verification of the results returned

Setting up your unit test
You usually start by setting up your test data. For example, if you are testing a piece of code that requires an authenticated account, you might consider creating a set of test users of your enterprise application. It is advisable that your test data be coupled with your test so that your tests are not dependent on your system being in a specific state.

Invoking your target
Once you have set up your test data and the conditions in which the code you are testing needs to run, you are ready to invoke it. This can be as simple as invoking a method. Mocking is a very important concept to understand when unit testing. Consider a set of unit tests for a business logic module that has a dependency on some external application programming interface (API). Now imagine if the API goes down. The tests would fail. While it is nice to get an indication that the API you are dependent upon is having issues, a failing unit test because of this is misleading because the goal of the unit test is to test the business logic rather than external resources on which you are dependent. This is where mock objects come into the picture. Mock objects are stubs that replicate the interface of a resource. They are set up to always return the same data the external resource would under normal conditions. This way you are isolating your test to just the unit of code you are testing. Mocking employs a pattern called dependency injection or inversion of control.
Sure, the code you are testing may be dependent on an external resource. Yet how will you swap in a mock resource? Code that is easy to unit test allows you to pass in or "inject" these dependencies when invoking it. Dependency injection is a design pattern where code that is dependent on an external resource has that dependency passed into it, thereby decoupling your code from that dependency. The following code snippet is difficult to test since the dependency is encapsulated into the function being tested. We are at an impasse.

var doSomething = function() {
    var api = getApi();
    // A bunch of code
    api.call();
}
var testOfDoSomething = function() {
    var mockApi = getMockApi();
    // What do I do now???
}

The following new code snippet uses dependency injection to circumvent the problem by instantiating the dependency and passing it into the function being tested:

var doSomething = function(api) {
    // A bunch of code
    api.call();
}
var testOfDoSomething = function() {
    var mockApi = getMockApi();
    doSomething(mockApi);
}

In general, this is good practice not just for unit testing but for keeping your code clean and easy to manage. Instantiating a dependency once and injecting it where it is needed makes it easier to change that dependency if the need occurs. There are many mocking frameworks available including JsMockito (http://jsmockito.org/) for JavaScript and Mockery (https://github.com/padraic/mockery) for PHP.

Verifying the results
Once you have invoked the code being tested, you need to capture the results and verify them. Verification comes in the form of assertions. Every unit testing framework comes with its own set of assertion methods, but the concept is the same: take a result and test it against an expectation. You can assert whether two things are equal. You can assert whether two things are not equal. You can assert whether a result is a valid number or a string. You can assert whether one value is greater than another. The general idea is that you are testing actual data against your hypothesis. Assertions usually bubble up to the framework's reporting module and are manifested as a list of passed or failed tests.

Frameworks and tools
A bevy of tools have arisen in the past few years that aid in unit testing of JavaScript. What follows is a brief survey of notable frameworks and tools used to unit test JavaScript code.

JsTestDriver
JsTestDriver is a framework built at Google for unit testing. It has a server that runs on multiple browsers on a machine and will allow you to execute test cases in the Eclipse IDE. This screenshot shows the results of JsTestDriver. When run, it executes all tests configured to run and displays the results. More information about JsTestDriver can be found at http://code.google.com/p/js-test-driver/.

QUnit
QUnit is a JavaScript unit testing framework created by John Resig of jQuery fame. To use it, you need to create only a test harness web page and include the QUnit library as a script reference. There is even a hosted version of the library. Once included, you need only invoke the test method, passing in a function and a set of assertions. It will then generate a nice report. Although QUnit has no dependencies and can test standard JavaScript code, it is oriented around jQuery. More information about QUnit can be found at http://qunitjs.com/.

Sinon.JS
Often coupled with QUnit, Sinon.JS introduces the concept of spying wherein it records function calls, the arguments passed in, the return value, and even the value of the this object.
You can also create fake objects such as fake servers and fake timers to make sure your code tests in isolation and your tests run as quickly as possible. This is particularly useful when you need to make fake AJAX requests. More information about Sinon.JS can be found at http://sinonjs.org/. Jasmine Jasmine is a testing framework based on the concept of behavior-driven development. Much akin to test-driven development, it extends it by infusing domain-driven design principles and seeks to frame unit tests back to user-oriented behavior and business value. Jasmine as well as other behavior-driven design based frameworks build test cases—called specs—using as much English as possible so that when a report is generated, it reads more naturally than a conventional unit test report. More information about Jasmine can be found at http://pivotal.github.com/jasmine/. Functional testing Selenium has become the name in website functional testing. Its browser automation capabilities allow you to record test cases in your favorite web browser and run them across multiple browsers. When you have this, you can automate your browser tests, integrate them with your build and continuous integration server, and run them simultaneously to get quicker results when you need them. Selenium includes the Selenium IDE, a utility for recording and running Selenium scripts. Built as a Firefox add-on, it allows you to create Selenium test cases by loading and clicking on web pages in Firefox. You can easily record what you do in the browser and replay it. You can then add tests to determine whether actual behavior matches expected behavior. It is very useful for quickly creating simple test cases for a web application. Information on installing it can be found at http://seleniumhq.org/docs/02_selenium_ide.html. The following screenshot shows the Selenium IDE. Click on the red circle graphic on the right-hand side to set it to record, and then browse to http://google.com in the browser window and search for "html5". Click on the red circle graphic to stop recording. You can then add assertions to test whether certain properties of the page match expectations. In this case, we are asserting that the text of the first link in the search results is for the Wikipedia page for HTML5. When we run our test, we see that it passes (of course, if the search results for "html5" on Google change, then this particular test will fail). Selenium includes WebDriver, an API that allows you to drive a browser natively either locally or remotely. Coupled with its automation capabilities, WebDriver can run tests against browsers on multiple remote machines to achieve greater scale. For our MovieNow application, we will set up functional testing by using the following components: The Selenium standalone server The php-webdriver connector from Facebook PHPUnit The Selenium standalone server The Selenium standalone server routes requests to the HTML5 application. It needs to be started for the tests to run. It can be deployed anywhere, but by default it is accessed at http://localhost:4444/wd/hub. You can download the latest version of the standalone server at http://code.google.com/p/selenium/downloads/list or you can fire up the version included in the sample code under the test/lib folder. To start the server, execute the following line via the command line (you will need to have Java installed on your machine): java -jar lib/selenium-server-standalone-#.jar Here, # indicates the version number. 
You should see something akin to the following: At this point, it is listening for connections. You will see log messages here as you run your tests. Keep this window open.

The php-webdriver connector from Facebook
The php-webdriver connector serves as a library for WebDriver in PHP. It gives you the ability to make and inspect web requests using drivers for all the major web browsers as well as HtmlUnit. Thus it allows you to create test cases against any web browser. You can download it at https://github.com/facebook/php-webdriver. We have included the files in the webdriver folder.

PHPUnit
PHPUnit is a unit testing framework that provides the constructs necessary for running our tests. It has the plumbing necessary for building and validating test cases. Any unit testing framework will work with Selenium; we have chosen PHPUnit since it is lightweight and works well with PHP. You can download and install PHPUnit any number of ways (you can go to http://www.phpunit.de/manual/current/en/installation.html for more information on installing it). We have included the phpunit.phar file in the test/lib folder for your convenience. You can simply run it by executing the following via the command line:

php lib/phpunit.phar <your test suite>.php

To begin, we will add some PHP files to the test folder. The first file is webtest.php. Create this file and add the following code:

<?php
require_once "webdriver/__init__.php";

class WebTest extends PHPUnit_Framework_TestCase {
    protected $_session;
    protected $_web_driver;

    public function __construct() {
        parent::__construct();
        $this->_web_driver = new WebDriver();
        $this->_session = $this->_web_driver->session('firefox');
    }

    public function __destruct() {
        $this->_session->close();
        unset($this->_session);
    }
}
?>

The WebTest class integrates WebDriver into PHPUnit via the php-webdriver connector. This will serve as the base class for all of our test cases. As you can see, it starts with the following:

require_once "webdriver/__init__.php";

This is a reference to __init__.php in the php-webdriver files. This brings in all the classes needed for WebDriver. In the constructor, WebTest initializes the driver and session objects used in all test cases. In the destructor, it cleans up its connections. Now that we have everything set up, we can create our first functional test. Add a file called generictest.php to the test folder. We will import WebTest and extend that class as follows:

<?php
require_once "webtest.php";

class GenericTest extends WebTest {
}
?>

Inside of the GenericTest class, add the following test case:

public function testForData() {
    $this->_session->open('http://localhost/html5-book/Chapter%2010/');
    sleep(5); // Wait for AJAX data to load
    $result = $this->_session->element("id", "movies-near-me")->text();
    // May need to change settings to always allow sharing of location
    $this->assertGreaterThan(0, strlen($result));
}

We will open a connection to our application (feel free to change the URL to wherever you are running your HTML5 application), wait 5 seconds for the initial AJAX data to load, and then test whether the movies-near-me div is populated with data. To run this test, go to the command line and execute the following lines:

chmod +x lib/phpunit.phar
php lib/phpunit.phar generictest.php

You should see the following: This indicates that the test passed. Congratulations! Now let us see it fail.
Add the following test case:

public function testForTitle() {
    $this->_session->open('http://localhost/html5-book/Chapter%2010/');
    $result = $this->_session->title();
    $this->assertEquals('Some Title', $result);
}

Rerun PHPUnit and you should see something akin to the following: As you can see, it was expecting 'Some Title' but actually found 'MovieNow'. Now that we have gotten you started, we will let you create your own tests. Refer to http://www.phpunit.de/manual/3.7/en/index.html for guidance on the different assertions you can make using PHPUnit. More information about Selenium can be found at http://seleniumhq.org/.

Browser testing
Testing HTML5 enterprise applications must involve actually looking at the application on different web browsers. Thankfully, many web browsers are offered on multiple platforms. Google Chrome, Mozilla Firefox, and Opera all have versions that will install easily on Windows, Mac OSX, and flavors of Linux such as Ubuntu. Safari has versions for Windows and Mac OSX, and there are ways to install it on Linux with some tweaking. Nevertheless, Internet Explorer can only run on Windows. One way to work around this limitation is to install virtualization software. Virtualization allows you to run an entire operating system virtually within a host operating system. It allows you to run Windows applications on Mac OSX or Linux applications on Windows. There are a number of notable virtualization packages including VirtualBox, VMWare Fusion, Parallels, and Virtual PC. Although Virtual PC runs only on Windows, Microsoft does offer a set of prepackaged virtual hard drives that include specific versions of Internet Explorer for testing purposes. See the following URL for details: http://www.microsoft.com/en-us/download/details.aspx?id=11575. Another common way to test for compatibility is to use web-based browser virtualization. There are a number of services such as BrowserStack (http://www.browserstack.com/), CrossBrowserTesting (http://crossbrowsertesting.com/), and Sauce Labs (https://saucelabs.com/) that offer a service whereby you can enter a URL and see it rendered in an assortment of web browsers and platforms (including mobile) virtually through the web. Many of them even work through a proxy to allow you to view, test, and debug web applications running on your local machine.

Continuous integration
With any testing solution, it is important to create and deploy your builds and run your tests in an automated fashion. Continuous integration solutions like Hudson, Jenkins, CruiseControl, and TeamCity allow you to accomplish this. They merge code from multiple developers, and run a number of automated functions from deploying modules to running tests. They can be invoked to run on a scheduled basis or can be triggered by events such as a commit of code to a code repository via a post-commit hook.

Summary
We covered several types of testing in this article including unit testing, functional testing, and browser testing. For each type of testing, there are many tools to help you make sure that your enterprise application runs in a stable way, most of which we covered bar a few. Because every minute change to your application code has the potential to destabilize it, we must assume that every change does. To ensure that your enterprise applications remain stable and with minimal defects, having a testing strategy in place with a rich suite of tests—from unit to functional—combined with a continuous integration server running those tests is essential.
One must, of course, weigh the investment in time for writing and executing tests against the time needed for writing production code, but the savings in long-term maintenance costs can make that investment worthwhile. Resources for Article : Further resources on this subject: Building HTML5 Pages from Scratch [Article] Blocking versus Non blocking scripts [Article] HTML5: Generic Containers [Article]

SciPy for Computational Geometry

Packt
11 Apr 2013
8 min read
(For more resources related to this topic, see here.)

>>> data = scipy.stats.randint.rvs(0.4,10,size=(10,2))
>>> triangulation = scipy.spatial.Delaunay(data)

Any Delaunay class has the basic search attributes such as points (to obtain the set of points in the triangulation), vertices (that offers the indices of vertices forming simplices in the triangulation), and neighbors (for the indices of neighbor simplices for each simplex—with the convention that "-1" indicates no neighbor for simplices at the boundary). More advanced attributes, for example convex_hull, indicate the indices of the vertices that form the convex hull of the given points. If we desire to search for the simplices that share a given vertex, we may do so with the vertex_to_simplex method. If, instead, we desire to locate the simplices that contain any given point in the space, we do so with the find_simplex method. At this stage we would like to point out the intimate relationship between triangulations and Voronoi diagrams, and offer a simple coding exercise. Let us start by choosing first a random set of points, and obtaining the corresponding triangulation.

>>> locations = scipy.stats.randint.rvs(0,511,size=(2,8))
>>> triangulation = scipy.spatial.Delaunay(locations.T)

We may use the matplotlib.pyplot routine triplot to obtain a graphical representation of this triangulation. We first need to obtain the set of computed simplices. Delaunay offers us this set, but by means of the indices of the vertices instead of their coordinates. We thus need to map these indices to actual points before feeding the set of simplices to the triplot routine:

>>> assign_vertex = lambda index: triangulation.points[index]
>>> triangle_set = map(assign_vertex, triangulation.vertices)
>>> matplotlib.pyplot.triplot(locations[1], locations[0],
...     triangles=triangle_set, color='r')

We will now obtain the edge map of the Voronoi diagram in a similar fashion as we did before, and plot it below the triangulation (since the former needs to be plotted with either a pcolormesh or imshow command). Note how the triangulation and the corresponding Voronoi diagrams are dual of each other; each edge in the triangulation (red) is perpendicular to an edge in the Voronoi diagram (white). How should we use this observation to code an actual Voronoi diagram for a cloud of points? The actual Voronoi diagram is the set of vertices and edges that composes it, rather than a binary image containing an approximation to the edges as we have computed. Let us finish this article with two applications to scientific computing that use these techniques extensively, in combination with routines from other SciPy modules.

Structural model of oxides
In this example we will cover the extraction of the structural model of a molecule of a bronze-type Niobium oxide, from HAADF-STEM micrographs. The following diagram shows an HAADF-STEM micrograph of a bronze-type Niobium oxide (taken from http://www.microscopy.ethz.ch/BFDF-STEM.htm, courtesy of ETH Zurich):

For pedagogical purposes, we took the following approach to solving this problem:
Segmentation of the atoms by thresholding and morphological operations.
Connected component labeling to extract each single atom for posterior examination.
Computation of the centers of mass of each label identified as an atom. This presents us with a lattice of points in the plane that shows a first insight into the structural model of the oxide.
Computation of the Voronoi diagram of the previous lattice of points.
The combination of this information with the output of the previous step will lead us to a decent (approximation of the actual) structural model of our sample. Let us proceed in this direction. Once retrieved, our HAADF-STEM images will be stored as big matrices with float32 precision. For this project, it is enough to retrieve some tools from the scipy.ndimage module, and some procedures from the matplotlib library. The preamble then looks like the following code:

import numpy
import scipy
from scipy.ndimage import *
from scipy.misc import imfilter
import matplotlib.pyplot as plt

The image is loaded with the imread(filename) command. This stores the image as a numpy.array with dtype = float32. Notice that the maxima and minima are 1.0 and 0.0, respectively. Other interesting information about the image can be retrieved:

img = imread('/Users/blanco/Desktop/NbW-STEM.png')
print "Image dtype: %s"%(img.dtype)
print "Image size: %6d"%(img.size)
print "Image shape: %3dx%3d"%(img.shape[0],img.shape[1])
print "Max value %1.2f at pixel %6d"%(img.max(),img.argmax())
print "Min value %1.2f at pixel %6d"%(img.min(),img.argmin())
print "Variance: %1.5f\nStandard deviation: %1.5f"%(img.var(),img.std())

This outputs the following information:

Image dtype: float32
Image size: 87025
Image shape: 295x295
Max value 1.00 at pixel 75440
Min value 0.00 at pixel 5703
Variance: 0.02580
Standard deviation: 0.16062

We perform thresholding by imposing an inequality in the array holding the data. The output is a Boolean array where True (white) indicates that the inequality is fulfilled, and False (black) otherwise. We may perform at this point several thresholding operations and visualize them to obtain the best threshold for segmentation purposes. The following images show several examples (different thresholdings applied to the oxide image): By visual inspection of several different thresholds, we choose 0.62 as one that gives us a good map showing what we need for segmentation. We need to get rid of "outliers", though; small particles that might fulfill the given threshold but are small enough not to be considered as an actual atom. Therefore, in the next step we perform a morphological operation of opening to get rid of those small particles. We decided that anything smaller than a square of size 2 x 2 is to be eliminated from the output of thresholding:

BWatoms = (img > 0.62)
BWatoms = binary_opening(BWatoms, structure=numpy.ones((2,2)))

We are ready for segmentation, which will be performed with the label routine from the scipy.ndimage module. It collects one slice per segmented atom, and offers the number of slices computed. We need to indicate the connectivity type. For example, in the following toy example, do we want to consider that situation as two atoms or one atom? It depends; we would rather have it now as two different connected components, but for some other applications we might consider that they are one. The way we indicate the connectivity to the label routine is by means of a structuring element that defines feature connections. For example, if our criterion for connectivity between two pixels is that they are in adjacent edges, then the structuring element looks like the image shown on the left-hand side from the images shown next. If our criterion for connectivity between two pixels is that they are also allowed to share a corner, then the structuring element looks like the image on the right-hand side.
For each pixel we impose the chosen structuring element and count the intersections; if there are no intersections, then the two pixels are not connected. Otherwise, they belong to the same connected component. We need to make sure that atoms that are too close in a diagonal direction are counted as two, rather than one, so we chose the structuring element on the left. The script then reads as follows:

structuring_element = [[0,1,0],[1,1,1],[0,1,0]]
segmentation, segments = label(BWatoms, structuring_element)

The segmentation object contains a list of slices, each of them with a Boolean matrix containing each of the found atoms of the oxide. We may obtain for each slice a great deal of useful information. For example, the coordinates of the centers of mass of each atom can be retrieved with the following commands:

coords = center_of_mass(img, segmentation, range(1,segments+1))
xcoords = numpy.array([x[1] for x in coords])
ycoords = numpy.array([x[0] for x in coords])

Note that, because of the way matrices are stored in memory, there is a transposition of the x and y coordinates of the locations of the pixels. We need to take it into account. Notice the overlap of the computed lattice of points over the original image (the left-hand side image from the two images shown next). We may obtain it with the following commands:

>>> plt.imshow(img); plt.gray(); plt.axis('off')
>>> plt.plot(xcoords, ycoords, 'b.')

We have successfully found the centers of mass for most atoms, although there are still about a dozen regions where we are not too satisfied with the result. It is time to fine-tune by the simple method of changing the values of some variables; play with the threshold, with the structuring element, with different morphological operations, and so on. We can even add all the obtained information for a wide range of those variables, and filter out outliers. An example with optimized segmentation is shown as follows (look at the right-hand side image): For the purposes of this exposition, we are happy to keep it simple and continue working with the set of coordinates that we have already computed. We will now offer an approximation to the lattice of the oxide, computed as the edge map of the Voronoi diagram of the lattice.

L1, L2 = distance_transform_edt(segmentation==0, return_distances=False, return_indices=True)
Voronoi = segmentation[L1, L2]
Voronoi_edges = imfilter(Voronoi, 'find_edges')
Voronoi_edges = (Voronoi_edges > 0)

Let us overlay the result of Voronoi_edges with the locations of the found atoms:

>>> plt.imshow(Voronoi_edges); plt.axis('off'); plt.gray()
>>> plt.plot(xcoords, ycoords, 'r.', markersize=2.0)

This gives the following output, which represents the structural model we were searching for:
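If you would rather compute the actual vertex-and-edge Voronoi diagram instead of the raster edge map built above, newer SciPy releases expose a scipy.spatial.Voronoi class. The following is a minimal sketch under the assumption that such a version (0.12 or later) is available; it reuses the xcoords and ycoords arrays computed earlier in the recipe:

import numpy
import scipy.spatial
import matplotlib.pyplot as plt

# Stack the centers of mass into an (N, 2) array of (x, y) points
points = numpy.column_stack((xcoords, ycoords))

# vor.vertices and vor.ridge_vertices hold the true vertices and edges
# of the diagram, rather than a binary image approximating the edges
vor = scipy.spatial.Voronoi(points)

# Plot each finite ridge; an index of -1 marks a ridge extending to infinity
for ridge in vor.ridge_vertices:
    if -1 not in ridge:
        segment = vor.vertices[ridge]
        plt.plot(segment[:, 0], segment[:, 1], 'k-')
plt.plot(points[:, 0], points[:, 1], 'r.', markersize=2.0)
plt.show()

This keeps the duality noted earlier explicit: the ridges plotted here are exactly the Voronoi edges whose raster approximation we obtained from the distance transform.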

Getting Started with ZeroMQ

Packt
04 Apr 2013
5 min read
(For more resources related to this topic, see here.) The message queue A message queue, or technically a FIFO (First In First Out) queue is a fundamental and well-studied data structure. There are different queue implementations such as priority queues or double-ended queues that have different features, but the general idea is that the data is added in a queue and fetched when the data or the caller is ready. Imagine we are using a basic in-memory queue. In case of an issue, such as power outage or a hardware failure, the entire queue could be lost. Hence, another program that expects to receive a message will not receive any messages. However, adopting a message queue guarantees that messages will be delivered to the destination no matter what happens. Message queuing enables asynchronous communication between loosely-coupled components and also provides solid queuing consistency. In case of insufficient resources, which prevent you from immediately processing the data that is sent, you can queue them up in the message queue server that would store the data until the destination is ready to accept the messages. Message queuing has an important role in large-scaled distributed systems and enables asynchronous communication. Let's have a quick overview on the difference between synchronous and asynchronous systems. In ordinary synchronous systems, tasks are processed one at a time. A task is not processed until the task in-process is finished. This is the simplest way to get the job done. Synchronous system We could also implement this system with threads. In this case threads process each task in parallel. Threaded synchronous system In the threading model, threads are managed by the operating system itself on a single processor or multiple processors/cores. Asynchronous Input/Output (AIO) allows a program to continue its execution while processing input/output requests. AIO is mandatory in real-time applications. By using AIO, we could map several tasks to a single thread. Asynchronous system The traditional way of programming is to start a process and wait for it to complete. The downside of this approach is that it blocks the execution of the program while there is a task in progress. However, AIO has a different approach. In AIO, a task that does not depend on the process can still continue. You may wonder why you would use message queue instead of handling all processes with a single-threaded queue approach or multi-threaded queue approach. Let's consider a scenario where you have a web application similar to Google Images in which you let users type some URLs. Once they submit the form, your application fetches all the images from the given URLs. However: If you use a single-threaded queue, your application would not be able to process all the given URLs if there are too many users If you use a multi-threaded queue approach, your application would be vulnerable to a distributed denial of service attack (DDoS) You would lose all the given URLs in case of a hardware failure In this scenario, you know that you need to add the given URLs into a queue and process them. So, you would need a message queuing system. Introduction to ZeroMQ Until now we have covered what a message queue is, which brings us to the purpose of this article, that is, ZeroMQ. The community identifies ZeroMQ as "sockets on steroids". The formal definition of ZeroMQ is it is a messaging library that helps developers to design distributed and concurrent applications. 
The first thing we need to know about ZeroMQ is that it is not a traditional message queuing system, such as ActiveMQ, WebSphereMQ, or RabbitMQ. ZeroMQ is different. It gives us the tools to build our own message queuing system. It is a library. It runs on different architectures from ARM to Itanium, and has support for more than 20 programming languages. Simplicity ZeroMQ is simple. We can do some asynchronous I/O operations and ZeroMQ could queue the message in an I/O thread. ZeroMQ I/O threads are asynchronous when handling network traffic, so it can do the rest of the job for us. If you have worked on sockets before, you will know that it is quite painful to work on. However, ZeroMQ makes it easy to work on sockets. Performance ZeroMQ is fast. The website Second Life managed to get 13.4 microseconds end-to-end latencies and up to 4,100,000 messages per second. ZeroMQ can use multicast transport protocol, which is an efficient method to transmit data to multiple destinations. The brokerless design Unlike other traditional message queuing systems, ZeroMQ is brokerless. In traditional message queuing systems, there is a central message server (broker) in the middle of the network and every node is connected to this central node, and each node communicates with other nodes via the central broker. They do not directly communicate with each other. However, ZeroMQ is brokerless. In a brokerless design, applications can directly communicate with each other without any broker in the middle. ZeroMQ does not store messages on disk. Please do not even think about it. However, it is possible to use a local swap file to store messages if you set zmq.SWAP. Summary This article explained what a message queuing system is, discussed the importance of message queuing, and introduced ZeroMQ to the reader. Resources for Article : Further resources on this subject: RESTful Web Service Implementation with RESTEasy [Article] BizTalk Server: Standard Message Exchange Patterns and Types of Service [Article] AJAX Chat Implementation: Part 1 [Article]  
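Although this article is conceptual, a tiny example helps show what "sockets on steroids" means in practice. The following is a minimal request-reply sketch using the Python binding (pyzmq); it assumes pyzmq is installed, and the port number and message strings are illustrative only:

# server.py - replies to every request it receives
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")

while True:
    message = socket.recv()
    print "Received: %s" % message
    socket.send("World")

# client.py - sends a request and waits for the reply
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")

socket.send("Hello")
print "Reply: %s" % socket.recv()

The REP socket queues incoming requests and answers them one at a time, while the REQ socket blocks until the reply arrives; no broker sits between the two processes, which is the brokerless design described above.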

Collaborative Work with SVN and Git

Packt
19 Mar 2013
11 min read
(For more resources related to this topic, see here.) Working with SVN At first we will take a look at the SVN perspective.The SVN perspective provides a group of views that help us to work with a Subversion server. You can open this perspective by using the Perspective menu in the top-right of the Aptana Studio window. The important and most frequently used views related to SVN, which we will take a look at, are the SVN Repositories view, the Team | History view, and the SVN | Console view. These views are categorized as the views selection into the SVN and Team folder, as shown in the following screenshot:     The SVN Repositories view allows you to add new repositories and manage all available repositories. Additionally, you have the option to create new tags or branches of the Repository. These views belong to the SVN views, as shown in the following screenshot: The History view allows you to get an overview about the project and revisions history. This view is used by SVN and Git projects; for this reason the view is stored in the Team views group. The History view can be opened by the menu under Window | Show View | History. Here you can see all the revisions with their comments and data creation. Furthermore, you can get a view into all revisions of a file and you also have, the ability to compare all revisions. The following screenshot shows the History view: Within the SVN Console view, you will find the output from all the SVN actions that are executed by Aptana Studio. Therefore, if you have an SVN conflict or something else, you can take a look at this Console view's output and you might locate the problem a bit faster. The SVN Console view was automatically integrated in the Aptana Studio Console, while the SVN plugin was installed. So if you need the SVN Console view, just open the general Console view from Window | Show view | Console. If the Console view is open, just use the View menu to select the Console type, which in this case is the SVN Console entry. The following screenshot shows the Console view and how you can select SVN Console: However, before we can start work with SVN, we have to add the related SVN Repository. Time for action – adding an SVN Repository Open the SVN perspective by using the Perspective menu in the top-right corner of the Aptana Studio window. Now, you should see the SVN Repositories view on the left-hand side of the Aptana Studio window. If it does not open automatically, open it by selecting the view from the navigation Window | Show view | SVN Repositories. In order to add a new SVN Repository, click on the small SVN icon with the plus sign at the top of the SVN Repositories view. You will now have to enter the address of the Subversion server in the pop up that appears, for example, svn://219.199.99.99/svn_codeSnippets. After you have clicked on the Finish button, Aptana Studio tries to reach the Subversion server in order to complete the process of adding a new Repository. If the Subversion server was reached and the SVN Repository is password protected, you will have to enter the access data for reading the SVN data. If you don't have the required access data available currently, you can abort the process and Aptana Studio will ask you whether you want to keep the location. If you click on NO, the newly added SVN Repository will be deleted, but if you click on YES, the location will remain. This allows you to retrieve the required access data later, enter them, and begin to work with the SVN Repository. 
Regardless of whether you keep the location or enter the required access data, the new SVN Repository will be listed in the SVN Repository view. What just happened? We have added a new SVN Repository into Aptana Studio. The new Repository is now listed in our SVN Repositories view and we can check this out from there, or create new tags or branches. Checking out an SVN Repository After we have seen how to add a new SVN Repository to Aptana Studio, we also want to know how we can check this Repository in order to work with the contained source code. You can do this, like many other things are done in Aptana Studio, in different ways. We will take a look at how we can do this directly from the SVN Repositories view, because every time we add a new Repository to Aptana Studio, we will also want to check it and use it as a project. Time for action – checking out an SVN Repository Open the SVN Repositories view. Expand the SVN Repository that you wish to check out. We do this because we want to check out the trunk directory from the Repository, not the tags and branches directory. Now, right-click on the trunk directory and select the Check Out... entry. Aptana Studio will now read the properties of the SVN Repository directly from the Subversion server. When all the required properties are received, the following window will appear on your screen: First of all, we select the Check out as a project in the workspace option and enter the name of the new SVN project. After this, we select the revision that we want to check out. This is usually the head revision. This means that you want to check out the last committed one—called the head revision. But you can check out any revision number you want from the past. If this is so, just deselect the Check out HEAD revision checkbox and enter the number of the revision that you want to check out. In the last section, we select the Fully recursive option within the Depth drop-down list and uncheck the Ignore externals checkbox, but select the Allow unversioned obstructions checkbox. After you have selected these settings, click on the Next button. Finally, you can select the location where the project should be created. Normally, this is the current workspace, but sometimes the location is different from the workspace. Maybe you have a web server installed and want to place the source code directly into the web root, in order to run the web application directly on your local machine. Finally, whether you select a different location for the project or not, you have to click on the Finish button to finalize the "Check out" into a new project. What just happened? We have checked out an SVN Repository from the SVN Repositories view. In addition to that, we have seen how we can also check out the Repository source code into another location other than the workspace. Finally, you should now have a ready SVN project where you can start working. File states If you're now changing some lines within a source code file, the Project Explorer view and the App Explorer view change the files' icon, so that you see a small white star on the black background. This means the file has changed since the last commit/update. There are some more small icons, which give you information about the related files and directories. Let's take a closer look at the Label Decorations tab as shown in the following screenshot: Now, we will discuss the symbols in the order shown in the previous screenshot: The small rising arrow shows you that the file or directory is an external one. 
The small yellow cylinder shows you that the file or directory is already under version control. The red X shows you that this file or directory is marked for deletion. The next time you commit your changes, the file will be deleted. The small blue cylinder shows you that the file or directory is switched. These are files or directories that belong to a different working copy other than their local parent directory. The small blue plus symbol shows you that this already versioned file or directory needs to be added to the repository. These could be files or directories you may have renamed or moved to a different directory. The small cornered square shows you that these files have a conflict with the repository. The small white star on the black background shows you that these files or directories have been changed since the last commit. If the file's or directory's icon has no small symbol, it means the file is ignored by the SVN Repository. The small white hook on the black background shows you that this file or directory is locked. The small red stop sign shows you that this file or directory is read-only. The small yellow cylinder shows you that this file or directory is already under version control and unchanged since the last commit. The small question mark shows you that this new file or directory isn't currently under version control. If you didn't find your icons in this list, or your icons look different, no problem. Just navigate to Window | Preferences and select the Label Decorations entry under Team | SVN within the tree. Here you will find all of the icons which are used. Committing an SVN Repository If you have finished extending your web application with some new features, you can now commit these changes so that the changes are stored in the Repository, and other developers can also update their working copies and get the new features. But how can you simply commit the changed files? Unlike a Git Repository, SVN allows you to commit changes in a tree from the Repository. By using Git, you can only commit changes in the complete Repository at once. But for now, we want to commit our SVN Repository changes, therefore just follow the steps mentioned in the following Time for action – updating and committing an SVN Repository section. Time for action – updating and committing an SVN Repository The first step, before performing a commit, is to perform an update on your working copy. Therefore, we will start by doing this, Aptana Studio reads all new revisions from the Subversion server and merges them with your local working copy. In order to do this update, right-click on your project root and select Team | Update to HEAD. When your working copy is up to date, navigate to the App Explorer view or the Project Explorer view and right-click on the files or directories that you want to commit, and then select the Commit... entry in the Team option. If you select a few directories or the whole project, the Commit window lists only those files within the selection that have changed since the last commit. So, you are able to select just the files and directories that you want to commit. Compose the selected files and directories as you need, and enter a comment in the top of the window. Why do you have to enter a comment while committing a change? Because, by committing the SVN Repository, it automatically saves the date, time, and your username; with this data the revision history stores information about who has changed which file at what time. 
In addition to that comes the commenting part. The comment should describe what kind of changes were made and what is their purpose. To finalize the commit, you just have to click on the OK button and the commit process will start. As described previously, you can see the output from all your SVN processes within the SVN Console view. In the following screenshot you can see the result of our commit process: What just happened? We have updated our working copy in order to commit our changes. Now the other developers can update their working copies too and can then work with your extensions. It should be noted again that it's recommended to perform an update before every commit. You can perform an update in a single file tree node. You don't have to update your whole project every time, a single node can also be committed. Updating an SVN Repository Additionally, similar to the SVN check out, you have the option to update your working copy not only to the Head revision, but also to a special revision number. In order to do this, right-click on the project root within the Project Explorer view and select the Update to Head... option or the Update to Version... option under the Team tab. After selecting one of these entries, Aptana Studio determines all the new files and files to be updated, downloads them from the Repository, and merges them with your local working copy. Now you should have all the source code from your current project. But, how can you identify which parts of a file are new or have been changed? No problem! Aptana Studio allows you not only to compare two different local files, you can also compare files from different revisions in your Repository. Refer to the following Time for action section to understand how this works:

Parallel Dimensions – Branching with Git

Packt
12 Mar 2013
12 min read
(For more resources related to this topic, see here.) What is branching Branching in Git is a function that is used to launch a separate, similar copy of the present workspace for different usage requirements. In other words branching means diverging from whatever you have been doing to a new lane where you can continue working on something else without disturbing your main line of work. Let's understand it better with the help of the following example Suppose you are maintaining a checklist of some process for a department in your company, and having been impressed with how well it's structured, your superior requests you to share the checklist with another department after making some small changes specific to the department. How will you handle this situation? An obvious way without a version control system is to save another copy of your file and make changes to the new one to fit the other department's needs. With a version control system and your current level of knowledge, perhaps you'd clone the repository and make changes to the cloned one, right? Looking forward, there might be requirements/situations where you want to incorporate the changes that you have made to one of the copies with another one. For example, if you have discovered a typo in one copy, it's likely to be there in the other copy because both share the same source. Another thought – as your department evolves, you might realize that the customized version of the checklist that you created for the other department fits your department better than what you used to have earlier, so you want to integrate all changes made for the other department into your checklist and have a unified one. This is the basic concept of a branch – a line of development which exists independent of another line both sharing a common history/source, which when needed can be integrated. Yes, a branch always begins life as a copy of something and from there begins a life of its own. Almost all VCS have some form of support for such diverged workflows. But it's Git's speed and ease of execution that beats them all. This is the main reason why people refer to branching in Git as its killer feature. Why do you need a branch To understand the why part, let's think about another situation where you are working in a team where different people contribute to different pieces existing in your project. Your entire team recently launched phase one of your project and is working towards phase two. Unfortunately, a bug that was not identified by the quality control department in the earlier phases of testing the product pops up after the release of phase one (yeah, been there, faced that!). All of a sudden your priority shifts to fixing the bug first, thereby dropping whatever you've been doing for phase two and quickly doing a hot fix for the identified bug in phase one. But switching context derails your line of work; a thought like that might prove very costly sometimes. To handle these kind of situations you have the branching concept (refer to the next section for visuals), which allows you to work on multiple things without stepping on each other's toes. There might be multiple branches inside a repository but there's only one active branch, which is also called current branch. By default, since the inception of the repository, the branch named master is the active one and is the only branch unless and until changed explicitly. 
Naming conventions There are a bunch of naming conventions that Git enforces on its branch names; here's a list of frequently made mistakes: A branch name cannot contain the following: A space or a white space character Special characters such as colon (:), question mark (?), tilde (~), caret (^), asterisk (*), and open bracket ([) Forward slash (/) can be used to denote a hierarchical name, but the branch name cannot end with a slash For example, my/name is allowed but myname/ is not allowed, and myname\ will wait for inputs to be concatenated Strings followed by a forward slash cannot begin with a dot (.) For example, my/.name is not valid Names cannot contain two continuous dots (..) anywhere When do you need a branch With Git, There are no hard and fast rules on when you can/need to create a branch. You can have your own technical, managerial, or even organizational reasons to do so. Following are a few to give you an idea: A branch in development of software applications is often used for self learning/ experimental purposes where the developer needs to try a piece of logic on the code without disturbing the actual released version of the application Situations like having a separate branch of source code for each customer who requires a separate set of improvements to your present package And the classic one – few people in the team might be working on the bug fixes of the released version, whereas the others might be working on the next phase/release For few workflows, you can even have separate branches for people providing their inputs, which are finally integrated to produce a release candidate Following are flow diagrams for few workflows to help us understand the utilization of branching: Branching for a bug fix can have a structure as shown the following diagram:     This explains that when you are working on P2 and find a bug in P1, you need not drop your work, but switch to P1, fix it, and return back to P2. Branching for each promotion is as shown in the following diagram:     This explains how the same set of files can be managed across different phases/ promotions. Here, P1 from development has been sent to the testing team (a branch called testing will be given to the testing team) and the bugs found are reported and fixed in the development branch (v1.1 and v1.2) and merged with the testing branch. This is then branched as production or release, which end users can access. Branching for each component development is as shown in the following diagram:     Here every development task/component build is a new independent branch, which when completed is merged into the main development branch. Practice makes perfect: branching with Git I'm sure you have got a good idea about what, why, and when you can use branches when dealing with a Git repository. Let's fortify the understanding by creating a few use cases. Scenario Suppose you are the training organizer in your organization and are responsible for conducting trainings as and when needed. You are preparing a list of people who you think might need business communication skills training based on their previous records. As a first step, you need to send an e-mail to the nominations and check their availability on the specified date, and then get approval from their respective managers to allot the resource. Having experience in doing this, you are aware that the names picked by you from the records for training can have changes even at the last minute based on situations within the team. 
So you want to send out the initial list for each team and then proceed with your work while the list gets finalized.

Time for action – creating branches in GUI mode

Whenever you want to create a new branch using Git Gui, execute the following steps:

Open Git Gui for the specified repository.
Select the Create option from the Branch menu (or use the shortcut keys Ctrl + N), which will give you a dialog box as follows:
In the Name field, enter a branch name, leave the remaining fields at their defaults for now, and then click on the Create button.

What just happened?

We have learned to create a branch using Git Gui. Now let's go through the same process in CLI mode and perform the relevant actions there.

Time for action – creating branches in CLI mode

Create a directory called BCT on your desktop. BCT is the acronym for Business Communication Training. Create a text file inside the BCT directory and name it participants.txt. Now open the participants.txt file and paste the following lines into it:

Finance team
Charles
Lisa
John
Stacy
Alexander

Save and close the file. Initialize the directory as a Git repository, add all the files, and make a commit as follows:

git init
git add .
git commit -m 'Initial list for finance team'

Now e-mail those people, followed by an e-mail to their managers, and wait for the finalized list. While they take their time to respond, you should go ahead and work on the next list, say for the marketing department. Create a new branch called marketing using the following syntax:

git checkout -b marketing

Now open the participants.txt file and start entering the names for the marketing department below the finance team list, as follows:

Marketing team
Collins
Linda
Patricia
Morgan

Before you finish finding the fifth member of the marketing team, you receive a finalized list from the finance department manager stating that he can afford only three people for the training, as the remaining two (Alexander and Stacy) need to take care of other critical tasks. Now you need to alter the finance list and fill in the last member of the marketing department. Before going back to the finance list and altering it, let's add the changes made for the marketing department and commit them:

git add .
git commit -m 'Unfinished list of marketing team'
git checkout master

Open the file and delete the names Alexander and Stacy; save, close, add the changes, and commit with the commit message Final list from Finance team:

git add .
git commit -m "Final list from Finance team"
git checkout marketing

Open the file and add the fifth name, Amanda, for the marketing team; save, add, and commit:

git add .
git commit -m "Initial list of marketing team"

Say the same names entered for marketing have been confirmed; now we need to merge these two lists, which can be done with the following command:

git merge master

You will get a merge conflict as shown in the following screenshot:

Open the participants.txt file, resolve the merge, then add the changes, and finally commit them.

What just happened?

Without any loss of thought or data, we have successfully adopted the changes to the first list, which came in while we were working on the second list, using the concept of branching, without one interfering with the other. As discussed, a branch begins its life as a copy of something else and then has a life of its own. Here, by performing git checkout -b branch_name we have created a new branch from the existing position.

Technically, the so-called existing position is termed the position of HEAD, and this type of lightweight branch, which we create locally, is called a topic branch. Another type of branch is the remote or remote-tracking branch, which tracks somebody else's work from some other repository. We were already exposed to this while learning the concept of cloning.

The command git checkout -b branch_name is equivalent to executing the following two commands:

git branch branch_name: Creates a new branch of the given name at the given position, but stays in the current branch
git checkout branch_name: Switches you to the specified branch from the current/active branch

When a branch is created using Git Gui, the checkout process is automatically taken care of, which results in you being in the newly created branch. The command git merge branch_name merges the current/active branch with the specified branch to incorporate its content. Note that even after the merge the branch will exist until it is deleted with the command git branch -d branch_name. In cases where you have created and played with a branch whose content you don't want to merge into any other branch and simply want to delete the entire branch, use -D instead of -d in the command mentioned earlier.

To view a list of the branches available in the repository, use the command git branch as shown in the following screenshot:

As shown in the screenshot, the branches available in our BCT repository right now are marketing and master, with master being the default branch when you create a repository. The branch with a star in front of it is the active branch. To ease the process of identifying the active branch, Git displays the active branch in brackets (branch_name) as indicated with an arrow.

By performing this exercise we have learned to create branches, add content to them, and merge them when needed. Now, to visually see how the history has shaped up, open gitk (by typing gitk in the command-line interface or by selecting Visualize All Branch History from the Repository menu of Git Gui) and view the top left corner. It will show a history like the one in the following screenshot:

Homework

Try to build a repository along the lines of the idea explained in the last flow diagram of the When do you need a branch section. Have one main line branch called development and five component development branches, which should be merged back in after the customizations are made to their sources.
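As a starting point for the homework, here is one possible command sketch; it assumes the main line branch is called development and the component branches are comp1 through comp5 (repeat the second block for each component):

git init
git checkout -b development
# add the base files, then commit them
git add .
git commit -m "Base version on development"

git checkout -b comp1 development
# customize the component, then
git add .
git commit -m "Component 1 changes"
git checkout development
git merge comp1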

Painting – Multi-finger Paint

Packt
08 Mar 2013
19 min read
(For more resources related to this topic, see here.) What is multi-touch? The genesis of multi-touch on Mac OS X was the ability to perform two finger scrolling on a trackpad. The technology was further refined on mobile touch screen devices such as the iPod Touch, iPhone, and iPad. And it has also matured on the Mac OS X platform to allow the use of multi-touch or magic trackpad combined with one or more fingers and a motion to interact with the computer. Gestures are intuitive and allow us to control what is on the screen with fluid motions. Some of the things that we can do using multi-touch are as follows: Two finger scrolling: This is done by placing two fingers on the trackpad and dragging in a line Tap or pinch to zoom : This is done by tapping once with a single finger, or by placing two fingers on the trackpad and dragging them closer to each other Swipe to navigate: This is done by placing one or more fingers on the trackpad and quickly dragging in any direction followed by lifting all the fingers Rotate : This is done by placing two fingers on the trackpad and turning them in a circular motion while keeping them on the trackpad But these gestures just touch the surface of what is possible with multi-touch hardware. The magic trackpad can detect and track all 10 of our fingers with ease. There are plenty of new things that can be done with multi-touch — we are just waiting for someone to invent them. Implementing a custom view Multi-touch events are sent to the NSView objects. So before we can invent that great new multi-touch thing, we first need to understand how to implement a custom view. Essentially, a custom view is a subclass of NSView that overrides some of the behavior of the NSView object. Primarily, it will override the drawRect: method and some of the event handling methods. Time for action — creating a GUI with a custom view By now we should be familiar with creating new Xcode projects so some of the steps here are very high level. Let's get started! Create a new Xcode project with Automatic Reference Counting enabled and these options enabled as follows: Option Value Product Name Multi-Finger Paint Company Identifier com.yourdomain Class Prefix Your initials After Xcode creates the new project, design an icon and drag it in to the App Icon field on the TARGET Summary. Remember to set the Organization in the Project Document section of the File inspector. Click on the filename MainMenu.xib in the project navigator. Select the Multi-Finger Paint window and in the Size inspector change its Width and Height to 700 and 600 respectively. Enable both the Minimum Size and Maximum Size Constraints values. From the Object Library , drag a custom view into the window. In the Size inspector , change the Width and Height of the custom view to 400 and 300 respectively. Center the window using the guides that appear. From the File menu, select New>, then select the File…option. Select the Mac OS X Cocoa Objective-C class template and click on the Next button. Name the class BTSFingerView and select subclass of NSView. It is very important that the subclass is NSView. If we make a mistake and select the wrong subclass, our App won't work. Click on the button titled Create to create the .h and .m files. Click on the filename BTSFingerView.m and look at it carefully. It should look similar to the following code: // // BTSFingerView.m // Multi-Finger Paint // // Created by rwiebe on 12-05-23. // Copyright (c) 2012 BurningThumb Software. All rights reserved. 
// #import "BTSFingerView.h" @implementation BTSFingerView - (id)initWithFrame:(NSRect)frame { self = [super initWithFrame:frame]; if (self) { // Initialization code here. } return self; } - (void)drawRect:(NSRect)dirtyRect { // Drawing code here. } @end By default, custom views do not receive events (keyboard, mouse, trackpad, and so on) but we need our custom view to receive events. To ensure our custom view will receive events, add the following code to the BTSFingerView.m file to accept the first responder: /* ** - (BOOL) acceptsFirstResponder ** ** Make sure the view will receive ** events. ** ** Input: none ** ** Output: YES to accept, NO to reject */ - (BOOL) acceptsFirstResponder { return YES; } And, still in the BTSFingerView.m file, modify the initWithFrame method to allow the view to accept touch events from the trackpad as follows: - (id)initWithFrame:(NSRect)frame { self = [super initWithFrame:frame]; if (self) { // Initialization code here. // Accept trackpad events [self setAcceptsTouchEvents: YES]; } return self; } Once we are sure our custom view will receive events, we can start the process of drawing its content. This is done in the drawRect: method. Add the following code to the drawRect: method to clear it with a transparent color and draw a focus ring if the view is first responder: /* ** - (void)drawRect:(NSRect)dirtyRect ** ** Draw the view content ** ** Input: dirtyRect - the rectangle to draw ** ** Output: none */ - (void)drawRect:(NSRect)dirtyRect { // Drawing code here. // Preserve the graphics content // so that other things we draw // don't get focus rings [NSGraphicsContext saveGraphicsState]; // color the background transparent [[NSColor clearColor] set]; // If this view has accepted first responder // it should draw the focus ring if ([[self window] firstResponder] == self) { NSSetFocusRingStyle(NSFocusRingAbove); } // Fill the view with fully transparent // color so that we can see through it // to whatever is below [[NSBezierPath bezierPathWithRect:[self bounds]] fill]; // Restore the graphics content // so that other things we draw // don't get focus rings [NSGraphicsContext restoreGraphicsState]; } Next, we need to go back into the .xib file, and select our custom view, and then select the Identity Inspector where we will see that in the section titled Custom Class, the Class field contains NSView as the class. Finally, to connect this object to our new custom view program code, we need to change the Class to BTSFingerView as shown in the following screenshot: What just happened? We created our Xcode project and implemented a custom NSView object that will receive events. When we run the project we notice that the focus ring is drawn so that we can be confident the view has accepted the firstResponder status. How to receive multi-touch events Because our custom view accepts first responder, the Mac OS will automatically send events to it. We can override the methods that process the events that we want to handle in our view. Specifically, we can override the following events and process them to handle multi-touch events in our custom view: - (void)touchesBeganWithEvent:(NSEvent *)event - (void)touchesMovedWithEvent:(NSEvent *)event - (void)touchesEndedWithEvent:(NSEvent *)event - (void)touchesCancelledWithEvent:(NSEvent *)event Time for action — drawing our fingers When the multi-touch or magic trackpad is touched, our custom view methods will be invoked and we will be able to draw the placement of our fingers on the trackpad in our custom view. 
In Xcode, click on the filename BTSFingerView.h in the project navigator and add the following highlighted property: // // BTSFingerView.h // Multi-Finger Paint // // Created by rwiebe on 12-05-23. // Copyright (c) 2012 BurningThumb Software. All rights reserved. // #import <Cocoa/Cocoa.h> @interface BTSFingerView : NSView // A reference to the object that will // store the currently active touches @property (strong) NSMutableDictionary *m_activeTouches; @end In Xcode, click on the file BTSFingerView.m in the project navigator and add the following program code to synthesize the property: // // BTSFingerView.m // Multi-Finger Paint // // Created by rwiebe on 12-05-23. // Copyright (c) 2012 BurningThumb Software. All rights reserved. // #import "BTSFingerView.h" @implementation BTSFingerView // Synthesize the object that will // store the currently active touches @synthesize m_activeTouches; Add the following code to the initWithFrame: method in the BTSFingerView.m file to create the dictionary object that will be used to store the active touch objects: - (id)initWithFrame:(NSRect)frame { self = [super initWithFrame:frame]; if (self) { // Initialization code here. // Create the mutable dictionary that // will hold the list of currently active // touch events m_activeTouches = [[NSMutableDictionary alloc] init]; } return self; } Add the following code to the BTSFingerView.m file to add BeganWith touch events to the dictionary of active touches: /** ** - (void)touchesBeganWithEvent:(NSEvent *)event ** ** Invoked when a finger touches the trackpad ** ** Input: event - the touch event ** ** Output: none */ - (void)touchesBeganWithEvent:(NSEvent *)event { // Get the set of began touches NSSet *l_touches = [event touchesMatchingPhase:NSTouchPhaseBegan inView:self]; // For each began touch, add the touch // to the active touches dictionary // using its identity as the key for (NSTouch *l_touch in l_touches) { [m_activeTouches setObject:l_touch forKey:l_touch. 
identity]; } // Redisplay the view [self setNeedsDisplay:YES]; } Add the following code to the BTSFingerView.m file to add moved touch events to the dictionary of active touches: /** ** - (void)touchesMovedWithEvent:(NSEvent *)event ** ** Invoked when a finger moves on the trackpad ** ** Input: event - the touch event ** ** Output: none */ - (void)touchesMovedWithEvent:(NSEvent *)event { // Get the set of move touches NSSet *l_touches = [event touchesMatchingPhase:NSTouchPhaseMoved inView:self]; // For each move touch, update the touch // in the active touches dictionary // using its identity as the key for (NSTouch *l_touch in l_touches) { // Update the touch only if it is found // in the active touches dictionary if ([m_activeTouches objectForKey:l_touch.identity]) { [m_activeTouches setObject:l_touch forKey:l_touch.identity]; } } // Redisplay the view [self setNeedsDisplay:YES]; } Add the following code to the BTSFingerView.m file to remove the touch from the dictionary of active touches when the touch ends: /** ** - (void)touchesEndedWithEvent:(NSEvent *)event ** ** Invoked when a finger lifts off the trackpad ** ** Input: event - the touch event ** ** Output: none */ - (void)touchesEndedWithEvent:(NSEvent *)event { // Get the set of ended touches NSSet *l_touches = [event touchesMatchingPhase:NSTouchPhaseEnded inView:self]; // For each ended touch, remove the touch // from the active touches dictionary // using its identity as the key for (NSTouch *l_touch in l_touches) { [m_activeTouches removeObjectForKey:l_touch.identity]; } // Redisplay the view [self setNeedsDisplay:YES]; } Add the following code to the BTSFingerView.m file to remove the touch from the dictionary of active touches when the touch is cancelled: /** ** - (void)touchesCancelledWithEvent:(NSEvent *)event ** ** Invoked when a touch is cancelled ** ** Input: event - the touch event ** ** Output: none */ - (void)touchesCancelledWithEvent:(NSEvent *)event { // Get the set of cancelled touches NSSet *l_touches = [event touchesMatchingPhase:NSTouchPhaseCancelled inView:self]; // For each cancelled touch, remove the touch // from the active touches dictionary // using its identity as the key for (NSTouch *l_touch in l_touches) { [m_activeTouches removeObjectForKey:l_touch.identity]; } // Redisplay the view [self setNeedsDisplay:YES]; } When we touch the trackpad we are going to draw a "finger cursor" in our custom view. We need to decide how big we want that cursor to be and the color that we want the cursor to be. Then we can add a series of #define to the file named BTSFingerView.h to define that value: // Define the size of the cursor that // will be drawn in the view for each // finger on the trackpad #define D_FINGER_CURSOR_SIZE 20 // Define the color values that will // be used for the finger cursor #define D_FINGER_CURSOR_RED 1.0 #define D_FINGER_CURSOR_GREEN 0.0 #define D_FINGER_CURSOR_BLUE 0.0 #define D_FINGER_CURSOR_ALPHA 0.5 Now we can add the program code to our drawRect: implementation that will draw the finger cursors in the custom view. 
// For each active touch for (NSTouch *l_touch in m_activeTouches.allValues) { // Create a rectangle reference to hold the // location of the cursor NSRect l_cursor; // Determine where the touch point NSPoint l_touchNP = [l_touch normalizedPosition]; // Calculate the pixel position of the touch point l_touchNP.x = l_touchNP.x * [self bounds].size.width; l_touchNP.y = l_touchNP.y * [self bounds].size.height; // Calculate the rectangle around the cursor l_cursor.origin.x = l_touchNP.x - (D_FINGER_CURSOR_SIZE / 2); l_cursor.origin.y = l_touchNP.y - (D_FINGER_CURSOR_SIZE / 2); l_cursor.size.width = D_FINGER_CURSOR_SIZE; l_cursor.size.height = D_FINGER_CURSOR_SIZE; // Set the color of the cursor [[NSColor colorWithDeviceRed: D_FINGER_CURSOR_RED green: D_FINGER_CURSOR_GREEN blue: D_FINGER_CURSOR_BLUE alpha: D_FINGER_CURSOR_ALPHA] set]; // Draw the cursor as a circle [[NSBezierPath bezierPathWithOvalInRect: l_cursor] fill]; } What just happened? We implemented the methods required to keep track of the touches and to draw the location of the touches in our custom view. If we run the App now, and move the mouse pointer over the view area, and then touch the trackpad, we will see red circles that track our fingers being drawn in the view as shown in the following screenshot: What is an NSBezierPath? A Bezier Path consists of straight and curved line segments that can be used to draw recognizable shapes. In our program code, we use Bezier Paths to draw a rectangle and a circle but a Bezier Path can be used to draw many other shapes. How to manage the mouse cursor One of the interesting things about the trackpad and the mouse is the association between a single finger touch and the movement of the mouse cursor. Essentially, Mac OS X treats a single finger movement as if it was a mouse movement. The problem with this is that when we move just a single finger on the trackpad, the mouse cursor will move away from our NSView causing it to lose focus so that when we lift our finger we need to move the mouse cursor back to our NSView to receive touch events. Time for action — detaching the mouse cursor from the mouse hardware The solution to this problem is to detach the mouse cursor from the mouse hardware (typically called capturing the mouse) whenever a touch event is active so that the cursor is not moved by touch events. In addition, since a "stuck" mouse cursor may be cause for concern to our App user, we can hide the mouse cursor when touches are active. In Xcode, click on the file named BTSFingerView.h in the project navigator and add the following flag to the interface: @interface BTSFingerView : NSView { // Define a flag so that touch methods can behave // differently depending on the visibility of // the mouse cursor BOOL m_cursorIsHidden; } In Xcode, click on the file named BTSFingerView.m in the project navigator. Add the following code to the beginning of the touchesBeganWithEvent: method to detach and hide the mouse cursor when a touch begins. We only want to do this one time so it is guarded by a BOOL flag and an if statement to make sure we don't do it for every touch that begins. 
- (void)touchesBeganWithEvent:(NSEvent *)event { // If the mouse cursor is not already hidden, if (NO == m_cursorIsHidden) { // Detach the mouse cursor from the mouse // hardware so that moving the mouse (or a // single finger) will not move the cursor CGAssociateMouseAndMouseCursorPosition(false); // Hide the mouse cursor [NSCursor hide]; // Remember that we detached and hid the // mouse cursor m_cursorIsHidden = YES; } Add the following code to the end of the touchesEndedWithEvent: method to attach and unhide the mouse cursor when all touches end. We use a BOOL flag to remember the state of the cursor so that the touchesBeganWithEvent: method will re-hide it when the next touch begins. // If there are no remaining active touches if (0 == [m_activeTouches count]) { // Attach the mouse cursor to the mouse // hardware so that moving the mouse (or a // single finger) will move the cursor CGAssociateMouseAndMouseCursorPosition(true); // Show the mouse cursor [NSCursor unhide]; // Remember that we attached and unhid the // mouse cursor so that the next touch that // begins will detach and hide it m_cursorIsHidden = NO; } // Redisplay the view [self setNeedsDisplay:YES]; } Add the following code to the end of the touchesCancelledWithEvent: method to attach and unhide the mouse cursor when all touches end. We use a BOOL flag to remember the state of the cursor so that the touchesBeganWithEvent: method will re-hide it when the next touch begins. // If there are no remaining active touches if (0 == [m_activeTouches count]) { // Attach the mouse cursor to the mouse // hardware so that moving the mouse (or a // single finger) will move the cursor CGAssociateMouseAndMouseCursorPosition(true); // Show the mouse cursor [NSCursor unhide]; // Remember that we attached and unhid the // mouse cursor so that the next touch that // begins will detach and hide it m_cursorIsHidden = NO; } // Redisplay the view [self setNeedsDisplay:YES]; } While we are looking at the movement of the mouse, we also notice that the focus ring for our custom view is being drawn regardless of whether or not the mouse cursor is over our view. Since touch events will only be sent to our view if the mouse cursor is over it, we want to change the program code so that the focus ring only appears when the mouse cursor is over the custom view. This is something we can do with another BOOL flag. Add the following code to the file to define a BOOL flag that will allow us to determine if the mouse cursor is over our custom view: // Define a flag so that view methods can behave // differently depending on the position of the // mouse cursor BOOL m_mouseIsInFingerView; In the file named BTSFingerView.m, add the following code to create a tracking rectangle that matches the bounds of our custom view. Once the tracking rectangle is active, the methods mouseEntered: and mouseExited: will be automatically invoked as the mouse cursor enters and exits our custom view. /** ** - (void)viewDidMoveToWindow ** ** Informs the receiver that it has been added to ** a new view hierarchy. 
** ** We need to make sure the view window is valid ** and when it is, we can add the tracking rect ** ** Once the tracking rect is added the mouseEntered: ** and mouseExited: events will be sent to our view ** */ - (void)viewDidMoveToWindow { // Is the views window valid if ([self window] != nil) { // Add a tracking rect such that the // mouseEntered; and mouseExited: methods // will be automatically invoked [self addTrackingRect:[self bounds] owner:self userData:NULL assumeInside:NO]; } } In the file named BTSFingerView.m, add the following code to implement the mouseEntered: and mouseExited: methods. In those methods, we set the BOOL flag so that the drawRect: method knows whether or not to draw the focus ring. /** ** - (void)mouseEntered: ** ** Informs the receiver that the mouse cursor ** entered a tracking rectangle ** ** Since we only have a single tracking rect ** we know the mouse is over our custom view ** */ - (void)mouseEntered:(NSEvent *)theEvent { // Set the flag so that other methods know // the mouse cursor is over our view m_mouseIsInFingerView = YES; // Redraw the view so that the focus ring // will appear [self setNeedsDisplay:YES]; } /** ** - (void)mouseExited: ** ** Informs the receiver that the mouse cursor ** exited a tracking rectangle ** ** Since we only have a single tracking rect ** we know the mouse is not over our custom view ** */ - (void)mouseExited:(NSEvent *)theEvent { // Set the flag so that other methods know // the mouse cursor is not over our view m_mouseIsInFingerView = NO; // Redraw the view so that the focus ring // will not appear [self setNeedsDisplay:YES]; } Finally, in the drawRect: method, change the program code that draws the focus ring to only do so if the mouse cursor is in the tracking rectangle: // If this view has accepted first responder // it should draw the focus ring but only if // the mouse cursor is over this view if ( ([[self window] firstResponder] == self) && (YES == m_mouseIsInFingerView) ) { NSSetFocusRingStyle(NSFocusRingAbove); } What just happened? We implemented the program code that will prevent the mouse cursor from moving out of our custom view when touch events are active. In doing so we noticed that our focus ring behavior could be improved. Therefore we added additional program code to ensure the focus ring is visible only when the mouse pointer is over our view. Performing 2D drawing in a custom view Mac OS X provides a number of ways to perform drawing. The methods provided range from very simple methods to very complex methods. For our multi-finger painting program we are going to use the core graphics APIs designed to draw a path. We are going to collect each stroke as a series of points and construct a path from those points so that we can draw the stroke. Each active touch event will have a corresponding active stroke object that needs to be drawn in our custom view. When a stroke is finished, and the App user lifts the finger, we are going to send the finished stroke to another custom view so that it is drawn only one time and not each time fingers move. The optimization of using the second view will ensure our finger tracking is not slowed down too much by drawing. Before we can begin drawing, we need to create two new objects that will be used to store individual points and strokes. The program code for these two objects is not shown but the objects are included in the Multi-Finger Paint Xcode project. 
The two objects are as follows: BTSPoint BTSStroke The BTSPoint object is a wrapper for an NSPoint structure. The NSPoint structure needs to be wrapped in an object so that it can be stored in an NSArray object. It has a single instance variable: NSPoint m_point; It implements the following methods which allows it to be initialized: return the point (x and y), return just the x value, or return just the y value. For more information on the object, we can read the source code file in the project: - (id) initWithNSPoint:(NSPoint)a_point; - (NSPoint) point; - (CGFloat)x; - (CGFloat)y; The BTSStroke object is a wrapper for an array of BTSPoint objects, a color, and a stroke width. It is used to store strokes that are drawn in our custom NSView. It has the following instance variables and properties: float m_red; float m_green; float m_blue; float m_alpha; float m_width; @property (strong) NSMutableArray *m_points; It implements the following methods which allows it to be initialized: a new point to be added, return the array of points, return any of the color components, and return the stroke width. For more information on the object, we can read the source code file in the project: - (id) initWithWidth:(float)a_width red:(float)a_red green:(float)a_green blue:(float)a_blue alpha:(float)a_alpha; - (void) addPoint:(BTSPoint *)a_point; - (NSMutableArray *) points; - (float)red; - (float)green; - (float)blue; - (float)alpha; - (float)width;
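The source for these two classes ships with the Xcode project rather than being listed here; as a reference, the following is a minimal sketch of what an implementation of the BTSPoint wrapper described above could look like (the actual project code may differ in detail):

// BTSPoint.h
#import <Foundation/Foundation.h>

@interface BTSPoint : NSObject
{
    // The wrapped point structure
    NSPoint m_point;
}

- (id) initWithNSPoint:(NSPoint)a_point;
- (NSPoint) point;
- (CGFloat) x;
- (CGFloat) y;

@end

// BTSPoint.m
#import "BTSPoint.h"

@implementation BTSPoint

- (id) initWithNSPoint:(NSPoint)a_point
{
    self = [super init];
    if (self) {
        // Keep a copy of the structure so it can be
        // stored in an NSArray via this wrapper object
        m_point = a_point;
    }
    return self;
}

- (NSPoint) point { return m_point; }
- (CGFloat) x { return m_point.x; }
- (CGFloat) y { return m_point.y; }

@end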

Getting Started with GeoServer

Packt
07 Mar 2013
8 min read
(For more resources related to this topic, see here.)

Installing Java

GeoServer is a Java application, so we need to ensure that you have Java installed and working properly on your machine; you don't need to know how to write Java™ to install or get started with GeoServer. There are two main packages of Java. Depending on what you are planning to do with Java, you may want to install a JDK (Java Development Kit) or a JRE (Java Runtime Environment). The former enables you to compile Java™ code, while the latter has all you need to run most Java applications. Starting from release 2.0, GeoServer does not need a full JDK installation and you can safely go with a JRE. GeoServer works well with Java 6; Java 7 is not yet deeply tested by the developers, so it should work but you may experience minor issues. Unless you have strong reasons to use Java 7, you should use JRE 6.

In the 90s, Java development was started by Sun Microsystems. Sun developed each new release until the company was acquired by Oracle Corporation. While Oracle did not change the Java license to a commercial one, there are licensing issues that prevent Oracle Java™ from being distributed in the Ubuntu repositories. On current Ubuntu releases, you will find OpenJDK already installed in the desktop edition; on the server edition, you need to choose it at setup. While there are a few users running GeoServer on OpenJDK with no issues, the developer community does not test it intensively, so you can expect some performance loss. Oracle Java™ should be your first choice unless you have specific issues with it. In the following steps, we will use the Oracle Java™ JRE.

If your installation machine is a new one, chances are that there is no Java runtime pre-installed. Let's check.

Time for action — checking the presence of Java on Windows

We will verify the presence of a JRE/JDK installation on Windows, using the following steps:

From the Start menu, select Control Panel. Then select Programs.
If your system has a JRE/JDK installed, you should see an icon with the Java logo as shown in the following screenshot. It is a shortcut to the Java control panel.
Open the Java control panel and select the Java tab. Here you will find the settings for the JRE. Press the Show Me button to see the installed release and the installation folder.

What just happened?

You checked for the presence of Java on your computer. In case you didn't find it, we are going to install it in the next section. (If you did find it, skip to the Installing Apache Tomcat section.)

Time for action — checking the presence of Java on Ubuntu

We will check for a JRE/JDK installation from the command line. Log in to your server and run this command:

~ $ sudo update-alternatives --config java

If there is no Java properly configured, you should see an output like the following:

update-alternatives: error: no alternatives for java.

In case there are one or more Java installations, the output will be similar to:

There is only one alternative in link group java: /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
Nothing to configure.

Or

There are 2 choices for the alternative java (providing /usr/bin/java).

Selection    Path                                       Priority   Status
------------------------------------------------------------
* 0          /usr/lib/jvm/java-6-openjdk/jre/bin/java   1061       auto mode
  1          /usr/lib/jvm/java-6-openjdk/jre/bin/java   1061       manual mode
  2          /usr/lib/jvm/java-6-sun/jre/bin/java       63         manual mode

Press enter to keep the current choice[*], or type selection number:

What just happened?

We checked whether Java is already present on the server, a requirement for our installation.
We had the opportunity to check if the installed release, in case we found it, is suitable for running GeoServer. Now we will go through the installation of JRE. Time for action — installing JRE on Windows We will install Oracle JRE 1.6. We are assuming that you didn't find any previous Java installation. Navigate to the Downloads tab at http://www.oracle.com/technetwork/java/javase/downloads/jre6u37-downloads-1859589.html. Select the installer for Windows 64-bit, that is, jre-6u37-windows-x64.exe, and save it in a convenient folder. Select the downloaded file and run it as an administrator; press the Yes button when asked from the User Account control. Go with the default settings and press the Install button. After it has been downloaded, you should see a window informing you about the success of installation. What just happened? We installed JRE on your Windows computer. The first requirement is now fulfilled and you can go over to the Tomcat installation. Time for action — installing JRE on Ubuntu We will install Oracle JRE 1.6. As mentioned previously, there is no Ubuntu package for Java 6; we are going to perform a manual installation. Visit the download area at http://www.oracle.com/technetwork/java/javase/downloads/jre6u37-downloads-1859589.html. Download the tar.gz archive, choosing the 32-bit or 64-bit archive, depending on the Ubuntu edition you are working with. You must accept the license agreement (reading it might be a nice idea) before you can select one of the tar.gz archives (be sure to avoid rpm archives as they are not for Debian-based Linux distribution). Save the archive to your home folder and extract it. ~ $ chmod a+x jre-6u37-linux-x64.bin ~ $ ./jre-6u37-linux-x64.bin The JRE 6 package is extracted into ./jre1.6.0_37 folder. Now move the JRE 6 directory to /opt and create a symbolic link to it in the default folder for libraries. ~ $ sudo mv ./jre1.6.0_37* /opt ~ $ sudo ln -s /opt/jre1.6.0_37 /usr/lib/jvm/ Let's check the installation: ~ $ /opt/jre1.6.0_37/bin/java -version java version "1.6.0_37" Java(TM) SE Runtime Environment (build 1.6.0_37-b06) Java HotSpot(TM) Client VM (build 20.12-b01, mixed mode) Although not strictly requested by the GeoServer installation, it is worth configuring the JRE as the primary Java alternative in your system: ~$ sudo update-alternatives --install /usr/bin/java java /usr/lib/ jvm/jre1.6.0_37/bin/java 0 Now you need to configure the Oracle JRE as default: ~ $ sudo update-alternatives --config java There are 2 choices for the alternative java (providing /usr/bin/ java). Selection Path Priority Status ------------------------------------------------------------ * 0 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java 1061 auto mode 1 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java 1061 manual mode 2 /usr/lib/jvm/jre1.6.0_37/bin/java 0 manual mode Press enter to keep the current choice[*], or type selection number: 2 update-alternatives: using /usr/lib/jvm/jre1.6.0_37/bin/java to provide /usr/bin/java (java) in manual mode. Clean your box by deleting the archive: ~$ rm jre-6u37-linux-x64.bin What just happened? We installed JRE. Now we can run a Java application on the JVM contained in the JRE. The JVM supports several different kinds of Java application; for example, a console-only application, an applet running in a browser, or a full desktop application. For GeoServer (a web application), we need another component on top of the JVM, that is, a servlet container. 
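Before moving on, it is worth a quick sanity check that the JRE we just configured is the one now found on the default path; assuming you selected the Oracle JRE in the previous step, running java without its full path should report the same release we saw earlier:

~ $ java -version
java version "1.6.0_37"
Java(TM) SE Runtime Environment (build 1.6.0_37-b06)
Java HotSpot(TM) Client VM (build 20.12-b01, mixed mode)

If the output still mentions OpenJDK, re-run sudo update-alternatives --config java and pick the Oracle entry.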
Installing Apache Tomcat Having correctly installed the JRE you can now pass on and install the servlet container. Servlet container, or web container, is the component server that interacts with the servlets. It is responsible for managing the lifecycle of servlets, mapping a URL to a particular servlet, and ensuring access security. It should implement Java servlet and JavaServer Pages technologies. As for JRE, you have a few choices here; a brief list is at http://en.wikipedia.org/wiki/Web_container. Apache Tomcat, GlassFish, and JBoss are most popular and are all available in an open source edition. You may wonder which one is the best choice for running GeoServer. In a production environment, usually the same container is shared among several web applications. You are not going to choose the container; the architects and system administrators made their choices and you have to conform to them. As a beginner, you have the opportunity of selecting it! Apache Tomcat should be your first choice as it is widely adopted in the Geoserver developer's community. If you run into any issues, the answer is probably waiting for you in the mailing list archive. We are going to install Apache Tomcat. It is an open source project of Apache foundation (http://tomcat.apache.org) and there are reasons for installing it such as it is widely used, well-documented, and relatively simple to configure. So let's start the Apache Tomcat installation.
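As a preview of the next steps, a typical manual installation on Ubuntu boils down to downloading a Tomcat binary archive, extracting it, and launching the startup script. The commands below are only a sketch: the version number is a placeholder, so pick an actual release from tomcat.apache.org, and the JRE_HOME path assumes the JRE location used earlier in this article:

~ $ wget http://archive.apache.org/dist/tomcat/tomcat-7/v7.0.x/bin/apache-tomcat-7.0.x.tar.gz
~ $ tar xzf apache-tomcat-7.0.x.tar.gz
~ $ sudo mv apache-tomcat-7.0.x /opt
~ $ export JRE_HOME=/opt/jre1.6.0_37
~ $ /opt/apache-tomcat-7.0.x/bin/startup.sh

Once Tomcat is running, pointing a browser at http://localhost:8080 should show its welcome page; GeoServer can later be deployed as a WAR file into the webapps folder.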

OSGi life cycle

Packt
07 Mar 2013
5 min read
(For more resources related to this topic, see here.)

OSGi applications are described as living entities; by this we mean that these applications appear to evolve as the lifecycles of their constituent bundles play out. The lifecycle layer facilitates this functionality. OSGi bundles are dynamically installed, resolved, started, updated, stopped, and uninstalled. The framework enforces the transitions between states; one cannot install a bundle and jump directly to the Active state without first passing through the resolved and starting states. The transitions between each state are illustrated in the following figure:

Installed

Bundles come into existence in an OSGi framework in the installed state. A bundle in this state cannot be started immediately; as the preceding diagram depicts, there is no direct transition from the installed state to the starting state. An installed bundle is also not active. There are three possible transitions: the bundle may become resolved, uninstalled, or refreshed.

Apache Karaf command

To install a bundle in Karaf, issue the osgi:install (bundle:install on Karaf 3.x) command, as follows:

karaf@root> osgi:install URLs

Having a bundle installed in the OSGi framework does not mean it is ready to be used; next we must resolve its dependencies.

Resolved

Entering the resolved state requires the framework to ensure that all the dependencies of a bundle have been met. Once its dependencies are ensured, the bundle is a candidate to be transitioned to the starting state. A resolved bundle may be refreshed, transitioning the bundle back to the installed state. A resolved bundle may also be transitioned to the uninstalled state. A resolved bundle is not active; however, it is ready to be activated.

Apache Karaf command

To resolve an installed bundle in Karaf, issue the osgi:resolve (bundle:resolve on Karaf 3.x) command, as follows:

karaf@root> osgi:resolve BundleID

Starting

A resolved bundle may be started. The starting state is transitory; the framework is initializing the resolved bundle into a running, active state. In fact, the transition from the starting to the active state is implicit.

Apache Karaf command

To start a resolved bundle in Karaf, issue the osgi:start (bundle:start on Karaf 3.x) command, as follows:

karaf@root> osgi:start BundleID

Active

The bundle is fully resolved and is providing and consuming services in the OSGi environment. To perform any more transitions on an active bundle, it must first be stopped.

Updating

Bundle updates occur when the framework is instructed to re-evaluate a bundle's dependencies; this action is synonymous with refreshing a bundle. When this action occurs, all of the wiring to and from the bundle is broken, so care must be taken before refreshing to avoid starting a bundle storm (one bundle refreshing causes a domino effect of other bundles refreshing).

Apache Karaf command

To update a bundle in Karaf, issue the osgi:update (bundle:update on Karaf 3.x) command, as follows:

karaf@root> osgi:update BundleID [location]

The location option allows you to update the bundle via its predefined update location or to specify a new location from which to fetch bundle updates.

Stopping

Stopping a bundle transitions it from the active state to the resolved state. The bundle can be restarted while it remains in the resolved state.

Apache Karaf command

To stop an active bundle in Karaf, issue the osgi:stop (bundle:stop on Karaf 3.x) command, as follows:

karaf@root> osgi:stop BundleID

Uninstalled

Uninstalling a bundle transitions an installed or resolved bundle out of the OSGi environment; however, the bundle is not immediately removed from the framework. Why is this? While the bundle is no longer available for use, references to the bundle may still exist and be used for introspection. To help you leverage these states in your bundles, the OSGi specification provides a hook into your bundle's state via the Activator interface.

Apache Karaf command

To uninstall a bundle in Karaf, issue the osgi:uninstall (bundle:uninstall on Karaf 3.x) command, as follows:

karaf@root> osgi:uninstall BundleID

BundleActivator

A bundle may optionally declare an Activator class implementing the org.osgi.framework.BundleActivator interface. This class must be referenced in the bundle manifest file via the Bundle-Activator header. Implementing the activator allows the bundle developer to specify actions to be performed upon starting or stopping a bundle. Generally, such operations include gaining access to or freeing resources, and registering and unregistering services. A minimal sketch of such an activator appears at the end of this article. The entry in manifest.mf will appear as follows:

Bundle-Activator: com.packt.osgi.starter.sample.Activator

When building with maven-bundle-plugin, the following configuration instruction is added:

<Bundle-Activator>
com.packt.osgi.starter.sample.Activator
</Bundle-Activator>

The process can be seen in the following screenshot:

Summary

This article covered the various states involved in the OSGi lifecycle. We also learnt about the transitions from one state to another.

Resources for Article:

Further resources on this subject:
Koha's Web Installer, Crontab, and Other Server Configurations [Article]
Using the OSGi Bundle Repository in OSGi and Apache Felix 3.0 [Article]
Getting Started with Bookshelf Project in Apache Felix [Article]
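As promised above, here is a minimal sketch of the com.packt.osgi.starter.sample.Activator class referenced by the Bundle-Activator header; the printed messages are illustrative only:

package com.packt.osgi.starter.sample;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {

    // Called by the framework while the bundle is in the starting state
    public void start(BundleContext context) throws Exception {
        System.out.println("Sample bundle starting");
        // acquire resources or register services here
    }

    // Called by the framework when the bundle is stopped
    public void stop(BundleContext context) throws Exception {
        System.out.println("Sample bundle stopping");
        // release resources or unregister services here
    }
}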

Asynchrony in Action

Packt
06 Mar 2013
17 min read
(For more resources related to this topic, see here.) Asynchrony When we talk about C# 5.0, the primary topic of conversation is the new asynchronous programming features. What does asynchrony mean? Well, it can mean a few different things, but in our context, it is simply the opposite of synchronous. When you break up execution of a program into asynchronous blocks, you gain the ability to execute them side-by-side, in parallel. As you can see in the following diagram, executing multiple ac-tions concurrently can bring various positive qualities to your programs: Parallel execution can bring performance improvements to the execution of a program. The best way to put this into context is by way of an example, an example that has been experienced all too often in the world of desktop software. Let's say you have an application that you are developing, and this software should fulfill the following requirements: When the user clicks on a button, initiate a call to a web service. Upon completion of the web service call, store the results into a database. Finally, bind the results and display them to the user. There are a number of problems with the naïve way of implementing this solution. The first is that many developers write code in such a way that the user interface will be completely unresponsive while we are waiting to receive the results of these web service calls. Then, once the results finally arrive, we continue to make the user wait while we store the results in a database, an operation that the user does not care about in this case. The primary vehicle for mitigating these kinds of problems in the past has been writing multithreaded code. This is of course nothing new, as multi-threaded hardware has been around for many years, along with software capabilities to take advantage of this hardware. Most of the programming languages did not provide a very good abstraction layer on top of this hardware, often letting (or requiring) you program directly against the hardware threads. Thankfully, Microsoft introduced a new library to simplify the task of writing highly concurrent programs, which is explained in the next section. Task Parallel Library The Task Parallel Library (TPL) was introduced in .NET 4.0 (along with C# 4.0). Firstly, it is a huge topic and could not have been examined properly in such a small space. Secondly, it is highly relevant to the new asynchrony features in C# 5.0, so much so that they are the literal foundation upon which the new features are built. So, in this section, we will cover the basics of the TPL, along with some of the background information about how and why it works. TPL introduces a new type, the Task type, which abstracts away the concept of something that must be done into an object. At first glance, you might think that this abstraction already exists in the form of the Thread class. While there are some similarities between Task and Thread, the implementations have quite different implications. With a Thread class, you can program directly against the lowest level of parallelism supported by the operating system, as shown in the following code: Thread thread = new Thread(new ThreadStart(() => { Thread.Sleep(1000); Console.WriteLine("Hello, from the Thread"); })); thread.Start(); Console.WriteLine("Hello, from the main thread"); thread.Join(); In the previous example, we create a new Thread class, which when started will sleep for a second and then write out the text Hello, from the Thread. 
After we call thread.Start(), the code on the main thread immediately continues and writes Hello, from the main thread. After a second, we see the text from the background thread printed to the screen. In one sense, this example of using the Thread class shows how easy it is to branch off the execution to a background thread, while allowing execution of the main thread to continue, unimpeded. However, the problem with using the Thread class as your "concurrency primitive" is that the class itself is an indication of the implementation, which is to say, an operating system thread will be created. As far as abstractions go, it is not really an abstraction at all; your code must both manage the lifecycle of the thread, while at the same time dealing with the task the thread is executing. If you have multiple tasks to execute, spawning multiple threads can be disastrous, because the operating system can only spawn a finite number of them. For performance intensive applications, a thread should be considered a heavyweight resource, which means you should avoid using too many of them, and keep them alive for as long as possible. As you might imagine, the designers of the .NET Framework did not simply leave you to program against this without any help. The early versions of the frameworks had a mechanism to deal with this in the form of the ThreadPool, which lets you queue up a unit of work, and have the thread pool manage the lifecycle of a pool of threads. When a thread becomes available, your work item is then executed. The following is a simple example of using the thread pool: int[] numbers = { 1, 2, 3, 4 }; foreach (var number in numbers) { ThreadPool.QueueUserWorkItem(new WaitCallback(o => { Thread.Sleep(500); string tabs = new String('t', (int)o); Console.WriteLine("{0}processing #{1}", tabs, o); }), number); } This sample simulates multiple tasks, which should be executed in parallel. We start with an array of numbers, and for each number we want to queue a work item that will sleep for half a second, and then write to the console. This works much better than trying to manage multiple threads yourself because the pool will take care of spawning more threads if there is more work. When the configured limit of concurrent threads is reached, it will hold work items until a thread becomes available to process it. This is all work that you would have done yourself if you were using threads directly. However, the thread pool is not without its complications. First, it offers no way of synchronizing on completion of the work item. If you want to be notified when a job is completed, you have to code the notification yourself, whether by raising an event, or using a thread synchronization primitive, such as ManualResetEvent. You also have to be careful not to queue too many work items, or you may run into system limitations with the size of the thread pool. With the TPL, we now have a concurrency primitive called Task. Consider the following code: Task task = Task.Factory.StartNew(() => { Thread.Sleep(1000); Console.WriteLine("Hello, from the Task"); }); Console.WriteLine("Hello, from the main thread"); task.Wait(); Upon first glance, the code looks very similar to the sample using Thread, but they are very different. One big difference is that with Task, you are not committing to an implementation. 
The TPL uses some very interesting algorithms behind the scenes to manage the workload and system resources, and in fact, allows you customize those algorithms through the use of custom schedulers and synchronization contexts. This allows you to control the parallel execution of your programs with a high degree of control. Dealing with multiple tasks, as we did with the thread pool, is also easier because each task has synchronization features built-in. To demonstrate how simple it is to quickly parallelize an arbitrary number of tasks, we start with the same array of integers, as shown in the previous thread pool example: int[] numbers = { 1, 2, 3, 4 }; Because Task can be thought of as a primitive type that represents an asynchronous task, we can think of it as data. This means that we can use things such as Linq to project the numbers array to a list of tasks as follows: var tasks = numbers.Select(number => Task.Factory.StartNew(() => { Thread.Sleep(500); string tabs = new String('t', number); Console.WriteLine("{0}processing #{1}", tabs, number); })); And finally, if we wanted to wait until all of the tasks were done before continuing on, we could easily do that by calling the following method: Task.WaitAll(tasks.ToArray()); Once the code reaches this method, it will wait until every task in the array completes before continuing on. This level of control is very convenient, especially when you consider that, in the past, you would have had to depend on a number of different synchronization techniques to achieve the very same result that was accomplished in just a few lines of TPL code. With the usage patterns that we have discussed so far, there is still a big disconnect between the process that spawns a task, and the child process. It is very easy to pass values into a background task, but the tricky part comes when you want to retrieve a value and then do something with it. Consider the following requirements: Make a network call to retrieve some data. Query the database for some configuration data. Process the results of the network data, along with the configuration data. The following diagram shows the logic: Both the network call and query to the database can be done in parallel. With what we have learned so far about tasks, this is not a problem. However, acting on the results of those tasks would be slightly more complex, if it were not for the fact that the TPL provides support for exactly that scenario. There is an additional kind of Task, which is especially useful in cases like this called Task<T>. This generic version of a task expects the running task to ultimately return a value, whenever it is finished. Clients of the task can access the value through the .Result property of the task. When you call that property, it will return immediately if the task is completed and the result is available. If the task is not done, however, it will block execution in the current thread until it is. Using this kind of task, which promises you a result, you can write your programs such that you can plan for and initiate the parallelism that is required, and handle the response in a very logical manner. 
Look at the following code:

var webTask = Task.Factory.StartNew(() =>
{
    WebClient client = new WebClient();
    return client.DownloadString("http://bing.com");
});

var dbTask = Task.Factory.StartNew(() =>
{
    // do a lengthy database query
    return new { WriteToConsole = true };
});

if (dbTask.Result.WriteToConsole)
{
    Console.WriteLine(webTask.Result);
}
else
{
    ProcessWebResult(webTask.Result);
}

In the previous example, we have two tasks, webTask and dbTask, which will execute at the same time. webTask simply downloads the HTML from http://bing.com. Accessing things over the Internet can be notoriously flaky because of the dynamic nature of the network, so you never know how long that is going to take. With dbTask, we are simulating access to a database that returns some stored settings. Although in this simple example we are just returning a static anonymous type, real database access would usually involve a different server over the network; again, this is an I/O-bound task, just like downloading something over the Internet.

Rather than waiting for both of them to complete, as we did with Task.WaitAll, we can simply access the .Result property of each task. If the task is done, the result is returned and execution continues; if not, the program simply waits until it is. This ability to write your code without having to manually deal with task synchronization is great, because the fewer concepts a programmer has to keep in their head, the more attention they can devote to the program.

If you are curious about where this concept of a task that returns a value comes from, you can look for resources pertaining to "futures" and "promises" at: http://en.wikipedia.org/wiki/Promise_%28programming%29

At the simplest level, this is a construct that "promises" to give you a result in the "future", which is exactly what Task<T> does.

Task composability

Having a proper abstraction for asynchronous tasks makes it easier to coordinate multiple asynchronous activities. Once the first task has been initiated, the TPL allows you to compose a number of tasks together into a cohesive whole using what are called continuations. Look at the following code:

Task<string> task = Task.Factory.StartNew(() =>
{
    WebClient client = new WebClient();
    return client.DownloadString("http://bing.com");
});

task.ContinueWith(webTask =>
{
    Console.WriteLine(webTask.Result);
});

Every task object has the .ContinueWith method, which lets you chain another task to it. This continuation task will begin execution once the first task is done. Unlike the previous example, where we relied on the .Result property to wait until the task was done (potentially holding up the main thread while it completed), the continuation will run asynchronously. This is a better approach for composing tasks because you can write tasks that will not block the UI thread, which results in very responsive applications.
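Because .ContinueWith itself returns a new Task, continuations can also be chained. The following is a small sketch, not from the original text, that extends the download example: the second step turns the downloaded HTML into its character count, and the third step prints it, with each step starting only after the previous one has finished:

Task<string> download = Task.Factory.StartNew(() =>
{
    WebClient client = new WebClient();
    return client.DownloadString("http://bing.com");
});

// Each ContinueWith produces a new task whose result feeds the next step.
Task<int> length = download.ContinueWith(t => t.Result.Length);

length.ContinueWith(t =>
{
    Console.WriteLine("Downloaded {0} characters", t.Result);
});

Note that reading t.Result inside a continuation does not block, because the continuation only runs once its antecedent task has already completed.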
Task composability does not stop at providing continuations, though; the TPL also provides for scenarios where a task must launch a number of subtasks, and you have the ability to control how the completion of those child tasks affects the parent task. In the following example, we will start a task, which will in turn launch a number of subtasks:

int[] numbers = { 1, 2, 3, 4, 5, 6 };

var mainTask = Task.Factory.StartNew(() =>
{
    // create a new child task for each number
    foreach (int num in numbers)
    {
        int n = num;
        Task.Factory.StartNew(() =>
        {
            Thread.SpinWait(1000);
            int multiplied = n * 2;
            Console.WriteLine("Child Task #{0}, result {1}", n, multiplied);
        });
    }
});
mainTask.Wait();
Console.WriteLine("done");

Each child task writes to the console, so that you can see how the child tasks behave along with the parent task. When you execute the previous program, it results in output similar to the following:

Child Task #1, result 2
Child Task #2, result 4
done
Child Task #3, result 6
Child Task #6, result 12
Child Task #5, result 10
Child Task #4, result 8

Notice how, even though you have called the .Wait() method on the outer task before writing done, the execution of the child tasks continues for a bit after the parent task has concluded. This is because, by default, child tasks are detached, which means their execution is not tied to the task that launched them.

An unrelated but important detail in the previous example is that we assigned the loop variable to an intermediary variable before using it in the task:

int n = num;
Task.Factory.StartNew(() =>
{
    int multiplied = n * 2;

This is related to the way closures work and is a common point of confusion when trying to "pass in" values in a loop. Because the closure captures the variable itself rather than copying its value in, the value the task sees can change every time the loop iterates, and you will not get the behavior you expect. An easy way to mitigate this is to copy the value into a local variable before using it in the lambda expression; that way, the closure captures a variable that does not change before it is used.

You do, however, have the option to mark a child task as attached, as follows:

Task.Factory.StartNew(
    () => DoSomething(),
    TaskCreationOptions.AttachedToParent);

The TaskCreationOptions enumeration has a number of different options. Specifically in this case, the ability to attach a task to its parent task means that the parent task will not complete until all child tasks are complete; a small sketch of this behavior follows the list of options below. Other options in TaskCreationOptions let you give hints and instructions to the task scheduler. From the documentation, the following are the descriptions of these options:

None: This specifies that the default behavior should be used.
PreferFairness: This is a hint to a TaskScheduler class to schedule a task in as fair a manner as possible, meaning that tasks scheduled sooner will be more likely to be run sooner, and tasks scheduled later will be more likely to be run later.
LongRunning: This specifies that a task will be a long-running, coarse-grained operation. It provides a hint to the TaskScheduler class that oversubscription may be warranted.
AttachedToParent: This specifies that a task is attached to a parent in the task hierarchy.
DenyChildAttach: This specifies that an exception of the type InvalidOperationException will be thrown if an attempt is made to attach a child task to the created task.
HideScheduler: This prevents the ambient scheduler from being seen as the current scheduler in the created task. This means that operations such as StartNew or ContinueWith that are performed in the created task will see Default as the current scheduler.

The best part about these options, and the way the TPL works, is that most of them are merely hints. So you can suggest that a task you are starting is long-running, or that you would prefer tasks scheduled sooner to run first, but that does not guarantee this will be the case. The framework takes responsibility for completing the tasks in the most efficient manner, so if you prefer fairness but a task is taking too long, it will start executing other tasks to make sure it keeps using the available resources optimally.
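To see the difference attaching makes, here is a minimal sketch, not from the original text, that repeats the earlier child-task example but passes TaskCreationOptions.AttachedToParent when creating each child. With the children attached, mainTask.Wait() does not return until every child has finished, so done is printed last:

int[] numbers = { 1, 2, 3, 4, 5, 6 };

var mainTask = Task.Factory.StartNew(() =>
{
    foreach (int num in numbers)
    {
        int n = num;
        // AttachedToParent ties this child's completion to mainTask
        Task.Factory.StartNew(() =>
        {
            Thread.SpinWait(1000);
            Console.WriteLine("Attached child #{0}, result {1}", n, n * 2);
        }, TaskCreationOptions.AttachedToParent);
    }
});

mainTask.Wait(); // now waits for mainTask and all attached children
Console.WriteLine("done");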
Error handling with tasks

Error handling in the world of tasks needs special consideration. Normally, when an exception is thrown, the CLR unwinds the stack frames looking for an appropriate try/catch handler that wants to handle the error; if the exception reaches the top of the stack, the application crashes. With asynchronous programs, though, there is not a single linear stack of execution, so when your code launches a task, it is not immediately obvious what will happen to an exception that is thrown inside the task. For example, look at the following code:

Task t = Task.Factory.StartNew(() =>
{
    throw new Exception("fail");
});

This exception will not bubble up as an unhandled exception, and your application will not crash if you leave it unhandled in your code. It was in fact handled, but by the task machinery. However, if you call the .Wait() method, the exception will bubble up to the calling thread at that point. This is shown in the following example:

try
{
    t.Wait();
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}

When you execute that, it prints the somewhat unhelpful message "One or more errors occurred", rather than the "fail" message actually contained in the exception. This is because unhandled exceptions that occur in tasks are wrapped in an AggregateException, which you can handle specifically when dealing with task exceptions. Look at the following code:

catch (AggregateException ex)
{
    foreach (var inner in ex.InnerExceptions)
    {
        Console.WriteLine(inner.Message);
    }
}

If you think about it, this makes sense: because tasks are composable with continuations and child tasks, a single aggregate exception is a great way to represent all of the errors raised by a task. If you would rather handle exceptions on a more granular level, you can also pass a special TaskContinuationOptions parameter as follows:

Task.Factory.StartNew(() =>
{
    throw new Exception("Fail");
}).ContinueWith(t =>
{
    // log the exception
    Console.WriteLine(t.Exception.ToString());
}, TaskContinuationOptions.OnlyOnFaulted);

This continuation task will only run if the task it was attached to faulted (for example, if there was an unhandled exception). Error handling is, of course, something that is often overlooked when developers write code, so it is important to be familiar with the various methods of handling exceptions in an asynchronous world.
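As one last tool for that toolbox, AggregateException offers a couple of helper methods worth knowing about: Flatten() collapses nested aggregate exceptions produced by child tasks, and Handle() lets you mark individual inner exceptions as handled while re-throwing any that you do not. The following is a small sketch, not from the original text; the task name risky and the choice of InvalidOperationException are purely illustrative:

var risky = Task.Factory.StartNew(() =>
{
    throw new InvalidOperationException("fail");
});

try
{
    risky.Wait();
}
catch (AggregateException ex)
{
    ex.Flatten().Handle(inner =>
    {
        if (inner is InvalidOperationException)
        {
            Console.WriteLine("Handled: {0}", inner.Message);
            return true;   // mark this inner exception as handled
        }
        return false;      // anything unhandled is re-thrown in a new AggregateException
    });
}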