
How-To Tutorials - Programming

1081 Articles

Enabling and configuring SNMP on Windows

Packt
14 Nov 2013
5 min read
This article by Justin M. Brant, the author of SolarWinds Server & Application Monitor: Deployment and Administration, covers enabling and configuring SNMP on Windows. (For more resources related to this topic, see here.)

Procedures in this article are not required pre-deployment, as it is possible to populate SolarWinds SAM with nodes after deployment; however, it is recommended. Even after deployment, you should still enable and configure advanced monitoring services on your vital nodes.

SolarWinds SAM uses three types of protocols to poll management data:

Simple Network Management Protocol (SNMP): This is the most common network management service protocol. To utilize it, SNMP must be enabled and an SNMP community string must be assigned on the server, device, or application. The community string is essentially a password that is sent between a node and SolarWinds SAM. Once the community string is set and assigned, the node is permitted to expose management data to SolarWinds SAM, in the form of variables. Currently, there are three versions of SNMP: v1, v2c, and v3. SolarWinds SAM uses SNMPv2c by default. To poll using SNMPv1, you must disable SNMPv2c on the device. Similarly, to poll using SNMPv3, you must configure your devices and SolarWinds SAM accordingly.

Windows Management Instrumentation (WMI): This has added functionality by incorporating Windows-specific communications and security features. WMI comes preinstalled on Windows by default but is not automatically enabled and configured. WMI is not exclusive to Windows server platforms; it comes installed on all modern Microsoft operating systems, and can also be used to poll desktop operating systems, such as Windows 7.

Internet Control Message Protocol (ICMP): This is the most basic of the three; it simply sends echo requests (pings) to a server or device for status, response time, and packet loss. SolarWinds SAM uses ICMP in conjunction with SNMP and WMI.
Nodes can be configured to poll with ICMP exclusively, but you then miss out on CPU, memory, and volume data. Some devices can only be polled with ICMP, although in most instances you will rarely use ICMP exclusively.

Trying to decide between SNMP and WMI? SNMP is more standardized and provides data that you may not be able to poll with WMI, such as interface information. In addition, polling a single WMI-enabled node uses roughly five times the resources required to poll the same node with SNMP.

This article will explain how to prepare for SolarWinds SAM deployment by enabling and configuring network management services and protocols on Windows servers.

In this article we will reference service accounts. A service account is an account created to hand off credentials to SolarWinds SAM. Service accounts are a best practice, primarily for security reasons, but also to ensure that user accounts do not become locked out.

Enabling and configuring SNMP on Windows

The procedures listed in this article will explain how to enable SNMP and then assign a community string, on Windows Server 2008 R2. All Windows server-related procedures in this book are performed on Windows Server 2008 R2. Procedures vary slightly in other supported versions.

Installing an SNMP service on Windows

This procedure explains how to install the SNMP service on Windows Server 2008 R2:

1. Log in to a Windows server.
2. Navigate to Start Menu | Control Panel | Administrative Tools | Server Manager. In order to see Administrative Tools in the Control Panel, you may need to select View by: Small Icons or Large Icons.
3. Select Features and click on Add Features.
4. Check SNMP Services, then click on Next and Install.
5. Click on Close.

Assigning an SNMP community string on Windows

This procedure explains how to assign a community string on Windows Server 2008 R2, and how to ensure that the SNMP service is configured to run automatically on startup:

1. Log in to a Windows server.
2. Navigate to Start Menu | Control Panel | Administrative Tools | Services.
3. Double-click on SNMP Service.
4. On the General tab, select Automatic under Startup type.
5. Select the Agent tab and ensure Physical, Applications, Internet, and End-to-end are all checked under the Service area. Optionally, enter a Contact person and system Location.
6. Select the Security tab and click on the Add button under Accepted community names.
7. Enter a Community Name and click on the Add button. For example, we used S4MS3rv3r. We recommend using something secure, as this is a password. Community String and Community Name mean the same thing. READ ONLY community rights will normally suffice. A detailed explanation of community rights can be found on the author's blog: http://justinmbrant.blogspot.com/
8. Next, tick the Accept SNMP packets from these hosts radio button.
9. Click on the Add button underneath the radio buttons and add the IP of the server you have designated as the SolarWinds SAM host.

Once you complete these steps, the SNMP Service Properties Security tab should look something like the following screenshot. Notice that we used 192.168.1.3, as that is the IP of the server where we plan to deploy SolarWinds SAM.

Summary

In this article, we learned about the different types of protocols used to poll management data. We also learned how to install SNMP and assign an SNMP community string on Windows.

Resources for Article:

Further resources on this subject:
The OpenFlow Controllers [Article]
The GNS3 orchestra [Article]
Network Monitoring Essentials [Article]
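As background to the community string described above: an SNMPv2c request carries the community string in clear text inside a small BER-encoded UDP message, which is why it should be treated like a password. The following plain-Java sketch frames a GET request for sysDescr.0 and shows exactly where the community string travels. It is illustrative only — the class and helper names are hypothetical, the encoder handles only the short BER forms this one message needs, and in practice a poller such as SolarWinds SAM builds these packets for you:

```java
import java.io.ByteArrayOutputStream;

// Sketch of the on-the-wire framing of an SNMPv2c GetRequest.
// Hypothetical names; short-form BER lengths only (payloads < 128 bytes).
public class SnmpGetSketch {

    // One BER tag-length-value triple.
    static byte[] tlv(int tag, byte[] payload) {
        byte[] out = new byte[payload.length + 2];
        out[0] = (byte) tag;
        out[1] = (byte) payload.length;
        System.arraycopy(payload, 0, out, 2, payload.length);
        return out;
    }

    static byte[] concat(byte[]... parts) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        for (byte[] p : parts) bos.write(p, 0, p.length);
        return bos.toByteArray();
    }

    // Encode a dotted OID such as 1.3.6.1.2.1.1.1.0 (sysDescr.0); assumes sub-ids < 128.
    static byte[] oid(String dotted) {
        String[] s = dotted.split("\\.");
        byte[] body = new byte[s.length - 1];
        body[0] = (byte) (Integer.parseInt(s[0]) * 40 + Integer.parseInt(s[1]));
        for (int i = 2; i < s.length; i++) body[i - 1] = (byte) Integer.parseInt(s[i]);
        return tlv(0x06, body);
    }

    static byte[] snmpGet(String community, String dottedOid) {
        byte[] varbind = tlv(0x30, concat(oid(dottedOid), tlv(0x05, new byte[0]))); // OID + NULL
        byte[] pdu = tlv(0xA0, concat(            // 0xA0 = GetRequest PDU
                tlv(0x02, new byte[]{1}),         // request-id
                tlv(0x02, new byte[]{0}),         // error-status
                tlv(0x02, new byte[]{0}),         // error-index
                tlv(0x30, varbind)));             // variable bindings
        return tlv(0x30, concat(
                tlv(0x02, new byte[]{1}),         // version 1 = SNMPv2c
                tlv(0x04, community.getBytes()),  // community string, in clear text
                pdu));
    }

    public static void main(String[] args) {
        byte[] msg = snmpGet("S4MS3rv3r", "1.3.6.1.2.1.1.1.0");
        System.out.println(msg.length + " bytes"); // 43 bytes for this OID and community
    }
}
```

Anyone sniffing the wire sees the community string in those bytes, which is why the article recommends restricting accepted hosts to the SolarWinds SAM server and treating the string as a secret.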


Package Management

Packt
14 Nov 2013
12 min read
(For more resources related to this topic, see here.)

Using NuGet with source control

Source control systems are an integral part of software development. As soon as there is more than one person working on the project, it becomes an invaluable tool for sharing source code. Even when we are on the project on our own, there is no better way of tracking versions and source code changes. The question arises: how should we put the installed packages into source control?

The first impulse would be to simply add the packages folder to the repository. Though this will work, it isn't the best possible approach. Packages can grow quite large and they can be obtained from elsewhere; therefore, we would only "pollute" the repository with redundant files. Many source control systems don't handle large binary files well. Even for those that don't have such problems, having packages in the repository doesn't add much value; it does noticeably increase the repository size, though.

Fortunately, NuGet offers a feature called Package Restore, which can be used to avoid adding packages to source control. Let's see how it works. The following sample will use Team Foundation Service (TFS) for source control. If you don't have an account yet, you can sign up for free at http://tfs.visualstudio.com. You need a Microsoft account for authentication. If you decide to use a different source control system instead, just skip all the steps dealing with TFS, replacing them with equivalent actions in your source control system of choice.

We'll start by creating a sample project:

1. Create a new Console Application project in Visual Studio.
2. Install the Json.NET NuGet package into the project.
3. Add the following code to the Main method so that the project won't compile without a valid reference to the Newtonsoft.Json.dll assembly:

var jsonString = @"{
    ""title"": ""NuGet 2 Essentials"",
    ""authors"": ""Damir Arh & Dejan Dakic"",
    ""publisher"": ""Packt Publishing""
}";
var parsedJson = Newtonsoft.Json.JsonConvert.DeserializeObject(jsonString);

4. Compile and run the project to make sure the code works.

It's time to create a source code repository. If you already have a repository, you can skip the following steps. Just make sure you have a repository ready and know how to connect to it before moving on. You will need Visual Studio 2012, or Visual Studio 2010 with Service Pack 1 and KB2662296 installed, to connect to TFS.

1. In a browser, navigate to https://<accountname>.visualstudio.com/ (replacing <accountname> with the name you used when signing up for TFS).
2. Click on New team project.
3. In the CREATE NEW TEAM PROJECT dialog box, enter the project name (for example, PackageRestore) and click on Create project, leaving the rest of the fields unchanged.
4. Click on Navigate to project once the creation process is complete.
5. On the project page, click on Open new instance of Visual Studio in the Activities menu on the right to connect Visual Studio to your TFS account. You can close this Visual Studio instance once the connection is established.

Now we're ready to add the project to the repository:

1. Return to Visual Studio, right-click on the solution node in the Solution Explorer window and click on the Add Solution to Source Control… menu item. Make sure you select Team Foundation Version Control as the source control system if a dialog box pops up and asks you to make a selection.
2. In the Connect to Team Foundation Server dialog box which opens next, select your TFS account (for example, accountname.visualstudio.com) from the drop-down list and check the created repository in the Team Projects list box.
3. Click on Connect to move to the next step and confirm the default settings by clicking on OK in the dialog box that follows.

We still need to select the right set of files to add to source control and check them in so that they will be available for others:

1. Open the Team Explorer window and click on the Source Control Explorer link inside it.
2. Look in the tree view on the left side of the Source Control Explorer window. You will notice that, apart from your project, the packages folder is also included. We need to exclude it, since we don't want to add packages to source control.
3. Right-click on the packages folder and select Undo Pending Changes… from the context menu.
4. Click on the Undo Changes button in the Undo Pending Changes dialog box that pops up.
5. Click on the Check In button in the toolbar and click on Check In again in the Pending Changes pane inside the Team Explorer window. Close the confirmation dialog box if it pops up. The packages folder should now be removed from the tree view.
6. Navigate to https://<accountname>.visualstudio.com/DefaultCollection/<PackageRestore>/_versionControl from your browser to check which files have been successfully checked in. Replace <accountname> and <PackageRestore> with the appropriate values for your case.

Let's retrieve the code, place it in a different location, and see how the packages are going to get restored:

1. With TFS, a new workspace needs to be created for that purpose in the Manage Workspaces dialog box, which can be accessed by clicking on Workspaces… from the Workspace drop-down list in the Source Control Explorer toolbar.
2. Click on Add… to add a new workspace. You need to specify both Source Control Folder (the solution folder, that is, $/<PackageRestore>/<SolutionName>) and the Local Folder where you want to put the files.
3. After adding the workspace, confirm the next dialog box to get the files from the repository to your selected local folder.
4. Check the contents of that folder after Visual Studio is done to see that there is no packages folder inside it.
5. Open the solution from the new folder in Visual Studio and build it. You should notice a NuGet Package Manager dialog box popping up, displaying the package restore progress and closing again once it is done.
6. The application should build successfully. You can run it to see that it works as expected. If you check the contents of the solution folder once again, you will see that the packages folder has been restored with all the required packages inside it.

Automatic Package Restore, which was described earlier, has been available only since NuGet 2.7. In earlier versions, only MSBuild-Integrated Package Restore was supported. In case your repository will be accessed by users still using NuGet 2.6 or older, it might be a better idea to use this instead; otherwise, package restore won't work for them. You can enable it by following these steps (if you do this in NuGet 2.7, Automatic Package Restore will get disabled):

1. Right-click on the solution node in the Solution Explorer window and click on the Enable NuGet Package Restore menu item.
2. Confirm the confirmation dialog box that pops up, briefly explaining what is going to happen. Another dialog box will pop up once the process is complete.
3. Notice that a .nuget folder containing three files has been added to the solution.

When adding a solution configured like this to source control, don't forget to include the .nuget folder as well. The packages folder, of course, still remains outside source control.

If you encounter a repository with MSBuild-Integrated Package Restore which was enabled with NuGet 2.6 or older, the restoring of packages before a build might fail with the following error:

Package restore is disabled by default. To give consent, open the Visual Studio Options dialog, click on Package Manager node and check 'Allow NuGet to download missing packages during build.' You can also give consent by setting the environment variable 'EnableNuGetPackageRestore' to 'true'.

This happens because the Allow NuGet to download missing packages during build setting was disabled by default in NuGet versions before 2.7. To fix the problem, navigate to Tools | Library Package Manager | Package Manager Settings to open the Options dialog box on the right node; then uncheck and recheck the mentioned setting and click on OK to explicitly set it. A more permanent solution is to either update NuGet.exe in the .nuget folder to the latest version or to switch to Automatic Package Restore instead, as described at http://bit.ly/NuGetAutoRestore.

Using NuGet on a build server

Automatic Package Restore only works within Visual Studio. If you try to build such a solution on a build server by using MSBuild only, it will fail if the packages are missing. To solve this problem, you should use the Command-Line Package Restore approach by executing the following command as a separate step before building the solution file:

C:\> NuGet.exe restore path\to\SolutionFile.sln

This will restore all of the packages in the solution, making sure that the build won't fail because of missing packages. Even if the solution is using MSBuild-Integrated Package Restore, this approach will still work, because all of the packages will already be available when it is invoked and it will just silently be skipped. The exact procedure for adding the extra step will depend on your existing build server configuration. You should either call it from within your build script or add it as an additional step to your build server configuration. In any case, you need to make sure you have installed NuGet 2.7 on your build server.

The NuGet Package Restore feature can be optimized even more on a build server by defining a common package repository for all solutions.
This way, each package will be downloaded only once, even if it is used in multiple solutions, saving both download time and storage space. To achieve this, save a NuGet.config file with the following content at the root folder containing all your solutions in its subfolders:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <add key="repositorypath" value="C:\path\to\repository" />
  </config>
</configuration>

You can have even more control over your repository location and other NuGet settings by taking advantage of the hierarchical or machine-wide NuGet.config files, as explained at http://bit.ly/NuGetConfig.

Using Package Manager Console

We have already used Package Manager Console twice to achieve something that couldn't have been done using the graphical user interface. It's time to take a closer look at it and the commands that are available. The Package Manager Console window is accessible either by navigating to Tools | Library Package Manager | Package Manager Console or by navigating to View | Other Windows | Package Manager Console.

The most important commands are used to install, update, and uninstall packages on a project. By default, they operate on the Default project selected from a drop-down list in the window's toolbar. The target project name can be specified using the -ProjectName parameter.

To get a list of all commands, type Get-Help NuGet in the console. To get more information about a command, type Get-Help CommandName in the console, replacing CommandName with the actual name of the command. You can also check the online PowerShell command reference at http://bit.ly/NuGetPsRef.
Let's take a look at a few examples:

To install the latest version of the Newtonsoft.Json package to the default project, type:
PM> Install-Package Newtonsoft.Json

To install version 5.0.1 of the Newtonsoft.Json package to the default project, type:
PM> Install-Package Newtonsoft.Json -Version 5.0.1

To install the latest version of the Newtonsoft.Json package to the Net40 project, type:
PM> Install-Package Newtonsoft.Json -ProjectName Net40

To update the Newtonsoft.Json package in all projects to its latest version, type:
PM> Update-Package Newtonsoft.Json

To update the Newtonsoft.Json package in all projects to version 5.0.3 (this will fail for projects with a newer version already installed), type:
PM> Update-Package Newtonsoft.Json -Version 5.0.3

To update the Newtonsoft.Json package in the Net40 project to the latest version, type:
PM> Update-Package Newtonsoft.Json -ProjectName Net40

To update all packages in all projects to the latest available version with the same major and minor version components, type:
PM> Update-Package -Safe

To uninstall the Newtonsoft.Json package from the default project, type:
PM> Uninstall-Package Newtonsoft.Json

To uninstall the Newtonsoft.Json package from the Net40 project, type:
PM> Uninstall-Package Newtonsoft.Json -ProjectName Net40

To list all packages in the online package source matching the Newtonsoft.Json search filter, type:
PM> Get-Package -ListAvailable -Filter Newtonsoft.Json

To list all installed packages having an update in the online package source, type:
PM> Get-Package -Updates

Installed packages can add their own commands. An example of such a package is EntityFramework. To get a list of all commands for a package, type Get-Help PackageName, replacing PackageName with the actual name of the package after it is installed, for example:
PM> Get-Help EntityFramework

Summary

This article has covered various NuGet features in detail. We started out with package versioning support and the package update process.
We then moved on to built-in support for different target platforms. A large part of the article was dedicated to the usage of NuGet in conjunction with source control systems. We have seen how to avoid adding packages to source control and still have them automatically restored when they are required during a build. We concluded the article with a quick overview of the console and the commands that give access to features not available through the graphical user interface.

This concludes our tour of NuGet from the package consumer's point of view. In the following article, we will take on the role of a package creator and look at the basics of creating and publishing our own NuGet package.

Resources for Article:

Further resources on this subject:
Lucene.NET: Optimizing and merging index segments [Article]
Creating your first collection (Simple) [Article]
Material nodes in Cycles [Article]
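A footnote on the Update-Package -Safe example shown above: the flag boils down to a simple version predicate — an update is allowed only when it keeps the installed package's major and minor components. A minimal plain-Java sketch of that rule (illustrative only, not NuGet's actual implementation, and ignoring pre-release suffixes):

```java
// Sketch of the "-Safe" update rule: candidate versions must keep the
// installed major.minor components. Hypothetical class/method names.
public class SafeUpdateSketch {
    static boolean isSafeUpdate(String installed, String candidate) {
        String[] a = installed.split("\\.");
        String[] b = candidate.split("\\.");
        // Only the patch (third) component may move under -Safe.
        return a[0].equals(b[0]) && a[1].equals(b[1]);
    }

    public static void main(String[] args) {
        System.out.println(isSafeUpdate("5.0.1", "5.0.8")); // true  (patch bump)
        System.out.println(isSafeUpdate("5.0.1", "5.1.0")); // false (minor bump)
    }
}
```

So with Json.NET 5.0.1 installed, Update-Package -Safe would move you to 5.0.8 but never to 5.1.0.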


The Business Layer (Java EE 7 First Look)

Packt
13 Nov 2013
7 min read
Enterprise JavaBeans 3.2

The Enterprise JavaBeans 3.2 specification was developed under JSR 345. This section just gives you an overview of the improvements in the API. The complete specification document (for more information) can be downloaded from http://jcp.org/aboutJava/communityprocess/final/jsr345/index.html.

The business layer of an application is the part of the application that is located between the presentation layer and the data access layer. The following diagram presents a simplified Java EE architecture. As you can see, the business layer acts as a bridge between the data access and the presentation layer. It implements the business logic of the application. To do so, it can use some specifications such as Bean Validation for data validation, CDI for context and dependency injection, interceptors to intercept processing, and so on. As this layer can be located anywhere in the network and is expected to serve more than one user, it needs a minimum of non-functional services such as security, transaction, concurrency, and remote access management. With EJBs, the Java EE platform gives developers the ability to implement this layer without worrying about the different non-functional services that are necessarily required.

In general, this specification does not introduce any major new feature. It continues the work started by the last version, making the implementation of certain features that became obsolete optional, and adding slight modifications to others.

Pruning some features

After the pruning process introduced by Java EE 6 with the goal of removing obsolete features, support for some features has been made optional in the Java EE 7 platform, and their description was moved to a separate document called EJB 3.2 Optional Features for Evaluation.
The features involved in this move are:

- EJB 2.1 and earlier Entity Bean Component Contract for Container-Managed Persistence
- EJB 2.1 and earlier Entity Bean Component Contract for Bean-Managed Persistence
- Client View of an EJB 2.1 and earlier Entity Bean
- EJB QL: Query Language for Container-Managed Persistence Query Methods
- JAX-RPC-based Web Service Endpoints
- JAX-RPC Web Service Client View

The latest improvements in EJB 3.2

Those who have had to use EJB 3.0 and EJB 3.1 will notice that EJB 3.2 has brought, in fact, only minor changes to the specification. However, some improvements cannot be overlooked, since they improve the testability of applications, simplify the development of session beans or message-driven beans, and improve control over the management of transactions and the passivation of stateful beans.

Session bean enhancement

A session bean is a type of EJB that allows us to implement business logic accessible to local, remote, or Web Service client views. There are three types of session beans: stateless for processing without state, stateful for processes that require the preservation of state between different method calls, and singleton for sharing a single instance of an object between different clients. The following code shows an example of a stateless session bean that saves an entity to the database:

@Stateless
public class ExampleOfSessionBean {
    @PersistenceContext
    EntityManager em;

    public void persistEntity(Object entity) {
        em.persist(entity);
    }
}

Talking about improvements to session beans, we first note two changes to stateful session beans: the ability to execute life-cycle callback interceptor methods in a user-defined transaction context, and the ability to manually disable passivation of stateful session beans.

It is possible to define a process that must be executed according to the life cycle of an EJB bean (post-construct, pre-destroy).
Due to the @TransactionAttribute annotation, you can perform processes related to the database during these phases and control how they impact your system. The following code retrieves an entity after being initialized and ensures that all changes made to the persistence context are sent to the database at the time of destruction of the bean. As you can see, the TransactionAttributeType of the init() method is NOT_SUPPORTED; this means that the retrieved entity will not be included in the persistence context and any changes made to it will not be saved to the database:

@Stateful
public class StatefulBeanNewFeatures {
    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    EntityManager em;

    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    @PostConstruct
    public void init() {
        entity = em.find(...);
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    @PreDestroy
    public void destroy() {
        em.flush();
    }
}

The following code demonstrates how to control passivation of a stateful bean. Usually, session beans are removed from memory to be stored on disk after a certain time of inactivity. This process requires the data to be serialized, but during serialization all transient variables are skipped and restored to the default value of their data type, which is null for objects, zero for int, and so on. To prevent the loss of this type of data, you can simply disable passivation of stateful session beans by passing the false value to the passivationCapable attribute of the @Stateful annotation:

@Stateful(passivationCapable = false)
public class StatefulBeanNewFeatures {
    //...
}

For the sake of simplicity, EJB 3.2 has relaxed the rules for defining the default local or remote business interface of a session bean. The following code shows how a simple interface can be considered as local or remote depending on the case:

//In this example, yellow and green are local interfaces
public interface yellow { ... }
public interface green { ... }
@Stateless
public class Color implements yellow, green { ... }

//In this example, yellow and green are local interfaces
public interface yellow { ... }
public interface green { ... }
@Local
@Stateless
public class Color implements yellow, green { ... }

//In this example, yellow and green are remote interfaces
public interface yellow { ... }
public interface green { ... }
@Remote
@Stateless
public class Color implements yellow, green { ... }

//In this example, only the yellow interface is exposed as a remote interface
@Remote
public interface yellow { ... }
public interface green { ... }
@Stateless
public class Color implements yellow, green { ... }

//In this example, only the yellow interface is exposed as a remote interface
public interface yellow { ... }
public interface green { ... }
@Remote(yellow.class)
@Stateless
public class Color implements yellow, green { ... }

EJB Lite improvements

Before EJB 3.1, the implementation of a Java EE application required the use of a full Java EE server with more than twenty specifications. This could be quite heavy for applications that only need some of those specifications (as if you were asked to take a hammer to kill a fly). To adapt Java EE to this situation, the JCP (Java Community Process) introduced the concept of profiles and EJB Lite. Specifically, EJB Lite is a subset of EJBs, grouping the essential capabilities for local transactional and secured processing. With this concept, it has become possible to unit test an EJB application without using a Java EE server, and it is also possible to use EJBs effectively in web applications or Java SE.

In addition to the features already present in EJB 3.1, the EJB 3.2 specification has added support for local asynchronous session bean invocations and a non-persistent EJB Timer Service. This enriches the embeddable EJBContainer and web profiles, and increases the number of features testable in an embeddable EJBContainer.
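The fire-and-forget timing that an asynchronous session bean invocation provides can be observed outside any container with plain JDK futures. This sketch (hypothetical class and method names, not part of the EJB API) measures the gap between the moment the call returns to the caller and the moment the work actually finishes:

```java
import java.util.concurrent.CompletableFuture;

// Plain-JDK illustration of asynchronous invocation timing: the call
// returns to the client almost immediately, while the work completes later.
public class AsyncCallSketch {
    static long[] measure() {
        long start = System.currentTimeMillis();
        CompletableFuture<Void> task = CompletableFuture.runAsync(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
        });
        long callReturned = System.currentTimeMillis() - start; // the call is already back
        task.join();                                            // now wait for completion
        long workFinished = System.currentTimeMillis() - start; // at least ~200 ms
        return new long[] { callReturned, workFinished };
    }

    public static void main(String[] args) {
        long[] t = measure();
        System.out.println("call returned after " + t[0]
                + " ms, work finished after " + t[1] + " ms");
    }
}
```

Inside a container, the @Asynchronous annotation shown in the next example gives a bean method the same behavior without any explicit thread handling.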
The following code shows an EJB packaged in a WAR archive that contains two methods. The asynchronousMethod() is an asynchronous method that allows you to compare the time gap between the end of a method call on the client side and the end of execution of the method on the server side. The nonPersistentEJBTimerService() method demonstrates how to define a non-persistent EJB Timer Service that will be executed every minute while the hour is one o'clock:

@Stateless
public class EjbLiteSessionBean {
    @Asynchronous
    public void asynchronousMethod() {
        try {
            System.out.println("EjbLiteSessionBean - start : " + new Date());
            Thread.sleep(1000 * 10);
            System.out.println("EjbLiteSessionBean - end : " + new Date());
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    @Schedule(persistent = false, minute = "*", hour = "1")
    public void nonPersistentEJBTimerService() {
        System.out.println("nonPersistentEJBTimerService method executed");
    }
}

Changes made to the TimerService API

The EJB 3.2 specification enhanced the TimerService API with a new method called getAllTimers(). This method gives you the ability to access all active timers in an EJB module.
The following code demonstrates how to create different types of timers, access their information, and cancel them; it makes use of the getAllTimers() method:

@Stateless
public class ChangesInTimerAPI implements ChangesInTimerAPILocal {
    @Resource
    TimerService timerService;

    public void createTimer() {
        //create a programmatic timer
        long initialDuration = 1000 * 5;
        long intervalDuration = 1000 * 60;
        String timerInfo = "PROGRAMMATIC TIMER";
        timerService.createTimer(initialDuration, intervalDuration, timerInfo);
    }

    @Timeout
    public void timerMethodForProgrammaticTimer() {
        System.out.println("ChangesInTimerAPI - programmatic timer : " + new Date());
    }

    @Schedule(info = "AUTOMATIC TIMER", hour = "*", minute = "*")
    public void automaticTimer() {
        System.out.println("ChangesInTimerAPI - automatic timer : " + new Date());
    }

    public void getListOfAllTimers() {
        Collection<Timer> alltimers = timerService.getAllTimers();
        for (Timer timer : alltimers) {
            System.out.println("The next time out : " + timer.getNextTimeout() + ", "
                + " timer info : " + timer.getInfo());
            timer.cancel();
        }
    }
}

In addition to this method, the specification has removed the restriction that required the use of javax.ejb.Timer and javax.ejb.TimerHandle references only inside a bean.
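As a closing note on the passivation behavior described earlier: the reset of transient fields to their type defaults is plain Java serialization semantics, and it can be demonstrated outside any container. In this sketch the class and field names are illustrative, not from the EJB specification — the round trip stands in for what a container does when it passivates and then activates a stateful bean:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Shows why passivation loses transient state: serialization skips
// transient fields, so they come back as their data type's default value.
public class TransientLossDemo {
    static class SessionState implements Serializable {
        String user = "alice";          // survives serialization
        transient int cachedTotal = 42; // reset to 0 after deserialization
    }

    static SessionState roundTrip(SessionState in) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(in); // roughly what the container does on passivation
        }
        try (ObjectInputStream oin =
                 new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
            return (SessionState) oin.readObject(); // and on activation
        }
    }

    public static void main(String[] args) throws Exception {
        SessionState restored = roundTrip(new SessionState());
        System.out.println(restored.user + " " + restored.cachedTotal); // alice 0
    }
}
```

Marking a bean with @Stateful(passivationCapable = false), as shown above, is the EJB 3.2 way of opting out of this round trip entirely.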


Adding Connectors in Bonita

Packt
11 Nov 2013
7 min read
(For more resources related to this topic, see here.)

Bonita connectors

Bonita connectors are used to set variables or some other parameters inside Bonita. They can also be used to start a process or execute a step. These connectors equip the user to connect with different parameters of the Bonita workflow. Other kinds of connectors are used to integrate with third-party tools.

Most of the Bonita connectors are related to the documents and comments at a particular step. Although these may be useful in some cases, in the majority of cases we will not find much use for them. The most useful ones are getting the users of a step, executing a step, starting a new process, and setting variables.

Click on any step on which you want to define the connector and click on Add.... Here, we will check the start an instance connector of Bonita. Give a name to this connector and click on Next. Here we have to fill in the name of the process that we want to invoke. We also have an option to specify different versions of the process. If we leave this blank, it will pick up the latest version. Next, we can specify the process variables that need to be copied from one pool to the other.

Start an instance connector in Bonita Studio

In the previous example, the process variables that we specify will be copied over to the target pool. We have to make sure that the target pool has the process variables mentioned in this connector. Make sure that you mention the name of the variable in the first column without the curly braces. If you select the names from the drop-down menu, make sure you remove the $ and the {} when filling in the name. The value field can be filled with the actual process variable.

We can also use the set variable connector to set a value to a variable, either a process variable or a step variable. Here, we have two parameters: one is the variable whose value we have to set, and the other is the actual value of the variable.
Note that this value may be a Groovy expression, too. Hence, it is similar to writing a Groovy script to assign a value to a variable.

Another type of connector is the one to start or finish a step. In this connector, all we have to do is mention the name of the step we want to start or stop. Similarly, there is another connector to execute a step. Executing will run all the start and end connectors of a particular step and then finish it. These connectors might be useful in cases where one step is waiting for another, and at the end of the current step we might execute that step or mark it finished.

We also have connectors to get users from the workflow. There are connectors to find out the initiator of a process and the step submitter. Another useful connector is the one to get a user based on the username. This returns the User class that Bonita uses to implement the functionality of a user in the workflow. Select the connector to get a user from a username. Enter the username and click on Next. Here, we get the output of the connector and can decide to save it in a particular pool or step variable.

Saving the connector output in a variable in Bonita

The User class has methods to retrieve data such as the e-mail, first name, last name, metadata, and password of the user.

The e-mail connector

We have a connector in the messaging group to send an e-mail. We might use this connector for a variety of purposes: to send information about the workflow to an external e-mail address, to send a notification to the person performing a task that he/she has pending items in his/her inbox, and so on. We have to configure the e-mail connector with various parameters.

In our TicketingWorkflow, let us send an e-mail to the person in whose name the tickets are booked. He/she enters his/her e-mail address in the Payment step of the workflow.
Hence, let us send an e-mail at the end of the Payment step to the e-mail address with which the tickets have been booked. For this, let us configure the e-mail connector:

1. Click on the Payment step of the workflow.
2. Click on the Connectors tab to add a connector. Select the connector as a medium to send an e-mail.
3. Name the connector SendEmail and make sure that it is attached to the finish event of the step.
4. In the next step, we are required to enter the configuration details of the SMTP server we will use for sending the e-mail. By default, it is set to the Gmail configuration with the host as smtp.gmail.com and the port as 465. Let us stick to the default option and send the e-mail from a Gmail hosted server.
5. Leave the Security options as they are, but enter your credentials in the Authentication section. Here, you should enter your full e-mail address, not just your username. You can also use your own domain e-mail address if it is hosted on a Gmail server.
6. Next, we define the parameters of the e-mail notification that has to be sent. After entering the From address as the ticketing admin address or some similar address, enter the To address as the variable in which we have saved the e-mail address: email.
7. In the title field, we have to specify the subject of the e-mail. We have already seen that we can use Java inside the Groovy editor. Here, we will have a look at a simple piece of Java code that is executed inside the editor. Enter the following code in the Groovy editor:

import java.text.SimpleDateFormat;
return "Flight ticket from " + from + " to " + to + " on " + new SimpleDateFormat("MM-dd-yyyy").format(departOn);

This puts an overview of the flight details in the subject of the e-mail. We know that the departOn variable is a Date object; for printing the date, we have to convert it into a String by using the SimpleDateFormat class.

Next, we have to write the actual e-mail that we will send to the customer.
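Outside of Bonita, the same subject-building logic can be checked in plain Java. In the sketch below, only the string concatenation and the SimpleDateFormat pattern come from the Groovy snippet above; the class name and the sample cities and date are hypothetical stand-ins for the workflow variables.

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;

public class SubjectDemo {

    // Builds the e-mail subject the same way the Groovy editor snippet does.
    static String subject(String from, String to, Date departOn) {
        return "Flight ticket from " + from + " to " + to + " on "
                + new SimpleDateFormat("MM-dd-yyyy").format(departOn);
    }

    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance();
        cal.set(2013, Calendar.OCTOBER, 29); // Calendar months are zero-based
        System.out.println(subject("Paris", "London", cal.getTime()));
        // prints: Flight ticket from Paris to London on 10-29-2013
    }
}
```

Because "MM-dd-yyyy" contains only numeric fields, the output does not depend on the default locale.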
Below the Title field, make sure that the e-mail body is set to HTML and not plain text. We can insert Groovy scripts in between the text, which will be substituted with the actual variable values when the e-mail is sent. Write the following in the body of the e-mail:

Hi ${passenger1},

Your ${from} to ${to} flight is confirmed. The flight details are given below:

Date: ${import java.text.SimpleDateFormat; return new SimpleDateFormat("MM-dd-yyyy").format(departOn);}
Departure: ${departure}
Arrival: ${arrival}
Duration: ${duration}
Price: ${price}

Travelers:
${passenger1}
${passenger2}
${passenger3}

Payment Details:
Card Holder - ${cardHolder}
Card Number - ${cardNumber}

Thank you for booking with TicketingWorkflow!

Configuring the e-mail connector

Clicking on Next will take you to the advanced options. Generally it is not really required to configure these options, and we can make do with the default settings.

Summary

This article looked at the various connector integration options available in Bonita Studio. It showed how connectors can be used to fetch data into the workflow and how to export data, too. We had a close look at the Bonita inbuilt connectors and the e-mail connector.

Resources for Article:

Further resources on this subject:

Oracle BPM Suite 11gR1: Creating a BPM Application [Article]
Managing Oracle Business Intelligence [Article]
Setting Up Oracle Order Management [Article]
Packt
31 Oct 2013
7 min read

Building Ladder Diagram programs (Simple)

(For more resources related to this topic, see here.)

There are several editions of RSLogix 5000 available today, which are similar to Microsoft Windows' home and professional versions. The more "basic" (less expensive) editions of RSLogix 5000 have many features disabled. For example, only the full and professional editions, which are more expensive, support the editing of Function Block Diagrams, Graphical Structured Text, and Sequential Function Charts. In my experience, Ladder Logic is the most commonly used language. Refer to http://www.rockwellautomation.com/rockwellsoftware/design/rslogix5000/orderinginfo.html for more on this.

Getting ready

You will need to have added the cards and tags from the previous recipes to complete this exercise.

How to do it...

Open Controller Organizer and expand the leaf Tasks | MainTask | MainProgram. Right-click on MainProgram and select New Routine as shown in the following screenshot.

Configure a new Ladder Logic program by setting the following values:

Name: VALVES
Description: Valve Control Program
Type: Ladder Diagram

For our newly created routine to be executed with each scan of the PLC, we will need to add a reference to it in MainRoutine, which is executed with each scan of the MainTask task. Double-click on our MainRoutine program to display the Ladder Logic contained within it.

Next, we will add a Jump To Subroutine (JSR) element that will add our newly created Ladder Diagram program to the main task and ensure that it is executed with each scan. Above the Ladder Diagram, there are tab buttons that organize Ladder Elements into Element Groups. Click on the left and right arrows on the left side of the Element Groups and find the group labeled Program Control.

After clicking on the Program Control element group, you will see the JSR element. Click on the JSR element to add it to the current Ladder Logic rung in MainRoutine.
Next, we will make some modifications to the JSR element so that it calls our newly added Ladder Diagram. Click on the Routine Name parameter of the JSR element and select the VALVES routine from the list as shown in the following screenshot.

There are optional parameters that we are not using as part of the JSR element, which can be removed. Select the Input Par parameter and then click on the Remove Parameter icon in the toolbar above the Ladder Diagram. This icon looks as shown in the following screenshot. Repeat this process for the other optional parameter: Return Par.

Now that we have ensured that our newly added Ladder Logic routine will be scanned, we can add the elements to it. Double-click on our VALVES routine in the Controller Organizer tab under the MainTask task.

Find the Timer/Counter element group and click on the TON (Timer On Delay) element to add it to our Ladder Diagram.

Now we will create the Timer object. Enter the name FC1001_TON in the Timer field. Right-click on the TIMER object tag name we just entered and select New "FC1001_TON" (or press Ctrl + W). In the New Tag form that appears, enter the description FAULT TIMER FOR FLOW CONTROL VALVE 1001 and click on OK to create the new TIMER tag.

Next, we will configure our TON element to count to five seconds (5,000 milliseconds). Double-click on the Preset parameter and enter the value 5000, which is in milliseconds.

Now, we will need to add the condition that will start the TIMER object. We will be adding a Less Than (LES) element from the Compare element group. Be sure to add the element to the same Ladder Logic rung as the Timer On Delay element. The LES element will compare the valve position with the valve set point and return true if the values do not match.
So set the two parameters of the LES element to the following:

FC1001_PV
FC1001_SP

Now, we will add a second Ladder Logic rung where a latched fault alarm is triggered after the TIMER reaches five seconds. Right-click under the first Ladder Logic rung and select Add Rung (or press Ctrl + R).

Find the Favorites element group and select the Examine On icon as shown in the following screenshot. Click on the ? above the Examine On element and select the TIMER object's Done property, FC1001_TON.DN, as shown in the following screenshot. Now, once the valve values are not equal and the TIMER has completed its count to five seconds, this Ladder Logic rung will be activated, as shown in the following screenshot.

Next, we will add an Output Latched element to this rung. With our new rung selected, click on the Output Latched element in the Favorites element group. Click on the ? above the Output Latched element and type in the name of a new base tag we are going to add: FC1001_FLT. Press Enter or click on the element to complete the text entry.

Right-click on FC1001_FLT and select New "FC1001_FLT" (or press Ctrl + W). Set the following values in the New Tag form that appears:

Description: FLOW CONTROL VALVE 1001 POSITION FAULT
Type: Base
Scope: FirstController
Data Type: BOOL

Click on OK to add the new tag. Our new tag will look like the following screenshot.

It is considered bad practice to latch a bit without having the code to unlatch it directly below. Create a new BOOL type tag called ALARM_RESET with the following properties:

Name: ALARM_RESET
Description: RESET ALARMS
Type: Base
Scope: FirstController
Data Type: BOOL

Click on OK to add the new tag. Then add the contact and OTU (Output Unlatch) element to unlatch the fault when the master alarm reset is triggered.

Finally, we will add a comment so that we can see what our Ladder Diagram is doing at a glance. Right-click in the area on the left of the first Ladder Logic rung (where the 0 is) and select Edit Rung Comment (Ctrl + D).
Enter the following helpful comment:

TRIGGER FAULT IF THE SETPOINT OF THE FLOW CONTROL VALVE 1001 IS NOT EQUAL TO THE VALVE POSITION

How it works...

We have created our first Ladder Logic Diagram and linked it to the MainTask task. Now, each time the task is scanned (executed), our Ladder Logic routine will be run from left to right and top to bottom.

There's more...

More information on Ladder Logic can be found in the Rockwell publication Logix5000 Controllers Ladder Diagram, available at http://literature.rockwellautomation.com/idc/groups/literature/documents/pm/1756-pm008_-en-p.pdf. Ladder Logic is the most commonly used programming language in RSLogix 5000. This recipe describes a few more helpful hints to get you started.

Understanding Ladder Rung statuses

Did you notice the vertical column of e characters on the left-hand side of your Ladder Logic rung? This indicates that an error is present in your Ladder Logic code. After making changes to your controller project, it is good practice to verify your project using the drop-down menu item Logic | Verify | Controller. Once Verify has been run, you will see the error pane appear with any errors that it has detected.

Element help

You can easily get detailed documentation on Ladder Logic elements, Function Block Diagram elements, Structured Text code, and other element types by selecting the object and pressing F1.

Copying and pasting Ladder Logic

Ladder Logic rungs and elements can be copied and pasted within your ladder routine. Simply select the rung or element you wish to copy and press Ctrl + C. Then, to paste the rung or element, select the location where you would like to paste it and press Ctrl + V.

Summary

This article took a first look at creating new routines using Ladder Diagrams. The reader was introduced to the concept of tasks and also learned how to link routines.
In this article, we learned how to navigate the ladder elements that are available, how to find help on each element, and how to create a simple alarm timer using Ladder Logic.

Resources for Article:

Further resources on this subject:

DirectX graphics diagnostic [Article]
Flash 10 Multiplayer Game: Game Interface Design [Article]
HTML5 Games Development: Using Local Storage to Store Game Data [Article]
Packt
30 Oct 2013
7 min read

Mocking static methods (Simple)

(For more resources related to this topic, see here.)

Getting ready

The use of static methods is usually considered a bad object-oriented programming practice, but if we end up in a project that uses a pattern such as active record (see http://en.wikipedia.org/wiki/Active_record_pattern), we will end up having a lot of static methods. In such situations, we will need to write some unit tests, and PowerMock can be quite handy. Start your favorite IDE (which we set up in the Getting and installing PowerMock (Simple) recipe), and let's fire away.

How to do it...

We will start where we left off. In the EmployeeService.java file, we need to implement the getEmployeeCount method; currently it throws an instance of UnsupportedOperationException. Let's implement the method in the EmployeeService class; the updated classes are as follows:

/**
 * This class is responsible for handling the CRUD
 * operations on the Employee objects.
 * @author Deep Shah
 */
public class EmployeeService {
    /**
     * This method is responsible for returning
     * the count of employees in the system.
     * It does it by calling the
     * static count method on the Employee class.
     * @return Total number of employees in the system.
     */
    public int getEmployeeCount() {
        return Employee.count();
    }
}

/**
 * This is a model class that will hold
 * properties specific to an employee in the system.
 * @author Deep Shah
 */
public class Employee {
    /**
     * The method that is responsible for returning the
     * count of employees in the system.
     * @return The total number of employees in the system.
     * Currently this method throws UnsupportedOperationException.
     */
    public static int count() {
        throw new UnsupportedOperationException();
    }
}

The getEmployeeCount method of EmployeeService calls the static method count of the Employee class, which in turn throws an instance of UnsupportedOperationException. To write a unit test for the getEmployeeCount method of EmployeeService, we will need to mock the static method count of the Employee class.
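To see why a tool like PowerMock is needed here at all, consider what happens without it: there is no seam through which the static call can be replaced, so any test simply reaches the real method. The following self-contained sketch (the nested classes are trimmed, hypothetical re-creations of the listings above, not the project's actual files) demonstrates the failure:

```java
public class StaticCallDemo {

    // Trimmed copy of the Employee class from the listing above.
    static class Employee {
        static int count() {
            throw new UnsupportedOperationException();
        }
    }

    // Trimmed copy of the EmployeeService class from the listing above.
    static class EmployeeService {
        int getEmployeeCount() {
            return Employee.count();
        }
    }

    public static void main(String[] args) {
        try {
            new EmployeeService().getEmployeeCount();
        } catch (UnsupportedOperationException e) {
            // Ordinary stubbing cannot intercept Employee.count();
            // this is the gap PowerMock's bytecode manipulation fills.
            System.out.println("static call reached the real implementation");
        }
    }
}
```

Because the call site is hard-wired to the class, no amount of subclassing or constructor injection helps, which is exactly the situation the recipe addresses.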
Let's create a file called EmployeeServiceTest.java in the test directory. This class is as follows:

/**
 * The class that holds all unit tests for
 * the EmployeeService class.
 * @author Deep Shah
 */
@RunWith(PowerMockRunner.class)
@PrepareForTest(Employee.class)
public class EmployeeServiceTest {
    @Test
    public void shouldReturnTheCountOfEmployeesUsingTheDomainClass() {
        PowerMockito.mockStatic(Employee.class);
        PowerMockito.when(Employee.count()).thenReturn(900);

        EmployeeService employeeService = new EmployeeService();
        Assert.assertEquals(900, employeeService.getEmployeeCount());
    }
}

If we run the preceding test, it passes. The important things to notice are the two annotations (@RunWith and @PrepareForTest) at the top of the class, and the call to the PowerMockito.mockStatic method.

The @RunWith(PowerMockRunner.class) statement tells JUnit to execute the test using PowerMockRunner. The @PrepareForTest(Employee.class) statement tells PowerMock to prepare the Employee class for tests; this annotation is required when we want to mock final classes or classes with final, private, static, or native methods. The PowerMockito.mockStatic(Employee.class) statement tells PowerMock that we want to mock all the static methods of the Employee class.

The next statements in the code are pretty standard, and we have looked at them earlier in the Saying Hello World! (Simple) recipe. We are basically setting up the static count method of the Employee class to return 900. Finally, we are asserting that when the getEmployeeCount method on the instance of EmployeeService is invoked, we do get 900 back.

Let's look at one more example of mocking a static method; but this time, let's mock a static method that returns void. We want to add another method to the EmployeeService class that will increment the salary of all employees (wouldn't we love to have such a method in reality?).
The updated code is as follows:

/**
 * This method is responsible for incrementing the salary
 * of all employees in the system by the given percentage.
 * It does this by calling the static giveIncrementOf method
 * on the Employee class.
 * @param percentage the percentage value by which
 * salaries will be increased
 * @return true if the increment was successful,
 * false if the increment failed because of some exception.
 */
public boolean giveIncrementToAllEmployeesOf(int percentage) {
    try {
        Employee.giveIncrementOf(percentage);
        return true;
    } catch (Exception e) {
        return false;
    }
}

The static method Employee.giveIncrementOf is as follows:

/**
 * The method that is responsible for incrementing
 * salaries of all employees by the given percentage.
 * @param percentage the percentage value by which
 * salaries will be increased.
 * Currently this method throws UnsupportedOperationException.
 */
public static void giveIncrementOf(int percentage) {
    throw new UnsupportedOperationException();
}

The earlier syntax will not work for mocking a void static method.
The test cases that mock this method look like the following:

@RunWith(PowerMockRunner.class)
@PrepareForTest(Employee.class)
public class EmployeeServiceTest {
    @Test
    public void shouldReturnTrueWhenIncrementOf10PercentageIsGivenSuccessfully() {
        PowerMockito.mockStatic(Employee.class);
        PowerMockito.doNothing().when(Employee.class);
        Employee.giveIncrementOf(10);

        EmployeeService employeeService = new EmployeeService();
        Assert.assertTrue(employeeService.giveIncrementToAllEmployeesOf(10));
    }

    @Test
    public void shouldReturnFalseWhenIncrementOf10PercentageIsNotGivenSuccessfully() {
        PowerMockito.mockStatic(Employee.class);
        PowerMockito.doThrow(new IllegalStateException()).when(Employee.class);
        Employee.giveIncrementOf(10);

        EmployeeService employeeService = new EmployeeService();
        Assert.assertFalse(employeeService.giveIncrementToAllEmployeesOf(10));
    }
}

Notice that we still need the two annotations @RunWith and @PrepareForTest, and we still need to inform PowerMock that we want to mock the static methods of the Employee class. Notice the syntax for PowerMockito.doNothing and PowerMockito.doThrow:

The PowerMockito.doNothing method tells PowerMock to literally do nothing when a certain method is called. The statement following the doNothing call sets up the mocked method; in this case it's the Employee.giveIncrementOf method. This essentially means that PowerMock will do nothing when the Employee.giveIncrementOf method is called.

The PowerMockito.doThrow method tells PowerMock to throw an exception when a certain method is called. The statement following the doThrow call tells PowerMock about the method that should throw the exception; in this case, it is again Employee.giveIncrementOf. Hence, when the Employee.giveIncrementOf method is called, PowerMock will throw an instance of IllegalStateException.

How it works...

PowerMock uses a custom class loader and bytecode manipulation to enable mocking of static methods.
It does this with the help of the @RunWith and @PrepareForTest annotations.

The rule of thumb is: whenever we want to mock a method that returns a non-void value, we should use the PowerMockito.when().thenReturn() syntax. It's the same syntax for instance methods as well as static methods. But for methods that return void, the preceding syntax cannot work; hence, we have to use PowerMockito.doNothing and PowerMockito.doThrow. For static methods, this syntax looks a bit like the record-playback style.

On a mocked instance created using PowerMock, we can choose to return canned values for only a few methods; PowerMock will provide default values for all the other methods. This means that if we did not provide any canned value for a method that returns an int value, PowerMock will mock such a method and return 0 (since 0 is the default value for the int data type) when it is invoked.

There's more...

The PowerMockito.doNothing and PowerMockito.doThrow syntax can be used on instance methods as well.

.doNothing and .doThrow on instance methods

The syntax for instance methods is simpler compared to the one used for static methods. Let's say we want to mock the instance method save on the Employee class. The save method returns void, hence we have to use the doNothing and doThrow syntax. The test code to achieve this is as follows:

/**
 * The class that holds all unit tests for
 * the Employee class.
 * @author Deep Shah
 */
public class EmployeeTest {
    @Test
    public void shouldNotDoAnythingIfEmployeeWasSaved() {
        Employee employee = PowerMockito.mock(Employee.class);
        PowerMockito.doNothing().when(employee).save();

        try {
            employee.save();
        } catch (Exception e) {
            Assert.fail("Should not have thrown an exception");
        }
    }

    @Test(expected = IllegalStateException.class)
    public void shouldThrowAnExceptionIfEmployeeWasNotSaved() {
        Employee employee = PowerMockito.mock(Employee.class);
        PowerMockito.doThrow(new IllegalStateException()).when(employee).save();
        employee.save();
    }
}

To inform PowerMock about the method to mock, we just have to invoke it on the return value of the when method. The line PowerMockito.doNothing().when(employee).save() essentially means: do nothing when the save method is invoked on the mocked Employee instance. Similarly, PowerMockito.doThrow(new IllegalStateException()).when(employee).save() means: throw IllegalStateException when the save method is invoked on the mocked Employee instance. Notice that the syntax is more fluent when we want to mock void instance methods.

Summary

In this article, we saw how easily we can mock static methods.

Resources for Article:

Further resources on this subject:

Important features of Mockito [Article]
Python Testing: Mock Objects [Article]
Easily Writing SQL Queries with Spring Python [Article]
Packt
29 Oct 2013
7 min read

Multiserver Installation

(For more resources related to this topic, see here.)

The prerequisites for Zimbra

Let us dive into the prerequisites for Zimbra:

- Zimbra supports only 64-bit LTS versions of Ubuntu, release 10.04 and above. If you would like to use a 32-bit version, you should use Ubuntu 8.04.x LTS with Zimbra 7.2.3.
- Having a clean and freshly installed system is preferred for Zimbra; it requires a dedicated system and there is no need to install components such as Apache and MySQL, since the Zimbra server contains all the components it needs. Note that installing Zimbra alongside another service (such as a web server) on the same server can cause operational issues.
- The dependencies (libperl5.14, libgmp3c2, build-essential, sqlite3, sysstat, and ntp) should be installed beforehand.
- Configure a fixed IP address on the server.
- Have a domain name and a well-configured DNS (A and MX entries) that points to the server.
- The system clocks should be synced on all servers.
- Configure the file /etc/resolv.conf on all servers to point at the server on which we installed bind (it can be installed on any Zimbra server or on a separate server). We will explain this point in detail later.

Preparing the environment

Before starting the Zimbra installation process, we should prepare the environment. In the first part of this section, we will see the different possible configurations; then, in the second part, we will present the assumptions needed to apply the chosen configuration.

Multiserver configuration examples

One of the greatest advantages of Zimbra is its scalability; we can deploy it for a small business with a few mail accounts as well as for a huge organization with thousands of mail accounts. There are many possible configuration options; the following are the most used:

- Small configuration: All Zimbra components are installed on only one server.
- Medium configuration: Here, LDAP and the message store are installed on one server and Zimbra MTA on a separate server.
Note here that we can use more Zimbra MTA servers so we can scale more easily for a large incoming or outgoing e-mail volume.

- Large configuration: In this case, LDAP will be installed on a dedicated server and we will have multiple mailbox and MTA servers, so we can scale more easily for a large number of users.
- Very large configuration: The difference between this configuration and the large one is the existence of an additional LDAP server, so we will have a master LDAP server and its replica.

We choose the medium configuration; so, we will install LDAP and the mailbox on one server and the MTA on another server. Install the different servers in the following order (for the medium configuration, steps 1 and 2 are combined into only one step):

1. First of all, install and configure the LDAP server.
2. Then, install and configure the Zimbra mailbox servers.
3. Finally, install the Zimbra MTA servers and finish the whole installation configuration.

New installations of Zimbra limit spam/ham training to the first installed MTA. If you uninstall or move this MTA, you should enable spam/ham training on another MTA, as one host should have this enabled to run zmtrainsa --cleanup. To do this, execute the following command:

zmlocalconfig -e zmtrainsa_cleanup_host=TRUE

Assumptions

In this article, we will use some specific information as input for the Zimbra installation process, which, in most cases, will be different for each user. Therefore, we will note some of the most recurring values in this section. Remember that you should specify your own values rather than using the arbitrary values that I have provided.
The following is the list of assumptions used:

- OS version: ubuntu-12.04.2-server-amd64
- Zimbra version: zcs-8.0.3_GA_5664.UBUNTU12_64.20130305090204
- MTA server name: mta
- MTA hostname: mta.zimbra-essentials.com
- Internet domain: zimbra-essentials.com
- MTA server IP address: 172.16.126.141
- MTA server IP subnet mask: 255.255.255.0
- MTA server IP gateway: 172.16.126.1
- Internal DNS server: 172.16.126.11
- External DNS server: 8.8.8.8
- MTA admin ID: abdelmonam
- MTA admin password: Z!mbra@dm1n
- Zimbra admin password: zimbrabook
- LDAP server name: ldap
- LDAP hostname: ldap.zimbra-essentials.com
- LDAP server IP address: 172.16.126.140
- LDAP server IP subnet mask: 255.255.255.0
- LDAP server IP gateway: 172.16.126.1
- Internal DNS server: 172.16.126.11
- External DNS server: 8.8.8.8
- LDAP admin ID: abdelmonam
- LDAP admin password: Z!mbra@dm1n

To be able to follow the steps described in the next sections, especially each time we need to perform a configuration, the reader should know how to use the vi editor. If not, you should develop your skills with the vi editor or use another editor instead. You can find good basic training for the vi editor at http://www.cs.colostate.edu/helpdocs/vi.html

System requirements

For the various system requirements, please refer to the following link: http://www.zimbra.com/docs/os/8.0.0/multi_server_install/wwhelp/wwhimpl/common/html/wwhelp.htm#href=ZCS_Multiserver_Open_8.0.System_Requirements_for_VMware_Zimbra_Collaboration_Server_8.0.html&single=true

If you are using another version of Zimbra, please check the correct requirements on the Zimbra website.

Ubuntu server installation

First of all, choose the appropriate language. Choose Install Ubuntu Server and then press Enter.
When the installation prompts you for a hostname, configure only a one-word hostname; in the Assumptions section, we chose ldap for the LDAP and mailstore server and mta for the MTA server. Don't give the fully qualified domain name (for example, mta.zimbra-essentials.com). On the next screen, which asks for the domain name, enter zimbra-essentials.com (without the hostname).

The hard disk setup is simple if you are using a single drive; however, in the case of a server, this is not the best way to do things. There are a lot of options for partitioning your drives. In our case, we just make a small partition (2x RAM) for swapping, and what remains will be used for the whole system. Others may recommend separate partitions for the mailstore, the system, and so on. Feel free to use the recommendation you want depending on your IT architecture; use your own judgment here or ask your IT manager.

After finishing the partitioning task, you will be asked to enter a username and password; you can choose whatever you want except admin and zimbra. When asked if you want to encrypt the home directory, select No and then press Enter. Press Enter to accept an empty entry for the HTTP proxy. Choose Install security updates automatically and then press Enter.

On the Software Selection screen, you must select the DNS Server and OpenSSH Server options for installation, and no others. This will allow remote administration (SSH) and set up bind9 for a split DNS. bind9 can be installed on only one server, which is what we've done in this article.

Select Yes and then press Enter to install the GRUB boot loader to the master boot record. The installation should now complete successfully.
Preparing Ubuntu for Zimbra installation

In order to prepare Ubuntu for the Zimbra installation, the following steps need to be performed:

1. Log in to the newly installed system and update and upgrade Ubuntu using the following commands:

sudo apt-get update
sudo apt-get upgrade

2. Install the dependencies as follows:

sudo apt-get install libperl5.14 libgmp3c2 build-essential sqlite3 sysstat ntp

3. Zimbra recommends (but does not require) disabling and removing AppArmor:

sudo /etc/init.d/apparmor stop
sudo /etc/init.d/apparmor teardown
sudo update-rc.d -f apparmor remove
sudo aptitude remove apparmor apparmor-utils

4. Set the static IP for your server. Open the network interfaces file using the following command:

sudo vi /etc/network/interfaces

Then replace the following line:

iface eth0 inet dhcp

with:

iface eth0 inet static
address 172.16.126.140
netmask 255.255.255.0
gateway 172.16.126.1
network 172.16.126.0
broadcast 172.16.126.255

5. Restart the networking service by typing in the following:

sudo /etc/init.d/networking restart

Sanity test! To verify that your network configuration is correct, type in ifconfig and ensure that the settings match. Then try to ping a working website (such as google.com) to check name resolution and connectivity. On each server, pay attention when you set the static IP address (172.16.126.140 for the LDAP server and 172.16.126.141 for the MTA server).

Summary

In this article, we learned the prerequisites for a Zimbra multiserver installation and prepared the environment for the installation of the Zimbra server in a multiserver environment.

Resources for Article:

Further resources on this subject:

Routing Rules in AsteriskNOW - The Calling Rules Tables [Article]
Users, Profiles, and Connections in Elgg [Article]
Integrating Zimbra Collaboration Suite with Microsoft Outlook [Article]


Miscellaneous Tips

Packt
29 Oct 2013
24 min read
Mission Briefing

The topics covered here include:

- Tracing Tkinter variables
- Widget traversal
- Validating user input
- Formatting widget data
- More on fonts
- Working with Unicode characters
- Tkinter class hierarchy
- Custom-made mixins
- Tips for code cleanup and program optimization
- Distributing the Tkinter application
- Limitations of Tkinter
- Tkinter alternatives
- Getting interactive help
- Tkinter in Python 3.x

Tracing Tkinter variables

When you specify a Tkinter variable as a textvariable for a widget (textvariable=myvar), the widget is updated automatically whenever the value of the variable changes. However, there may be times when, in addition to updating the widget, you need to do some extra processing when the variable is read or written (or modified). Tkinter provides a way to attach a callback method that is triggered every time the value of a variable is accessed; the callback thus acts as a variable observer. The callback method is named trace_variable(self, mode, callback), or simply trace(self, mode, callback).

The mode argument can take any one of the values 'r', 'w', or 'u', which stand for read, write, or undefined. Depending on the mode specification, the callback is triggered when the variable is read or written. The callback method gets three arguments by default, in order of position:

- the name of the Tkinter variable
- the index of the variable if the Tkinter variable is an array, else an empty string
- the access mode ('w', 'r', or 'u')

Note that the triggered callback function may also modify the value of the variable. This modification does not, however, trigger any additional callbacks.
Let's see a small example of variable tracing in Tkinter, where writing into an entry widget tied to a Tkinter variable triggers a callback function (refer to the 8.01 trace variable.py Python file available in the code bundle):

```python
from Tkinter import *

root = Tk()
myvar = StringVar()

def trace_when_myvar_written(var, indx, mode):
    print "Traced variable %s" % myvar.get()

myvar.trace_variable("w", trace_when_myvar_written)
Label(root, textvariable=myvar).pack(padx=5, pady=5)
Entry(root, textvariable=myvar).pack(padx=5, pady=5)
root.mainloop()
```

The description of the preceding code is as follows:

- The code creates a trace on the Tkinter variable myvar in the write ("w") mode.
- The trace is attached to a callback method named trace_when_myvar_written, which means that every time the value of myvar changes, the callback is triggered.
- Every time you type into the entry widget, the value of myvar is modified. Because we have set a trace on myvar, the callback is triggered, and in our example it simply prints the new value to the console.

The code creates a GUI window similar to the one shown here. It also produces console output in IDLE like the following once you start typing in the GUI window:

```
Traced variable T
Traced variable Tr
Traced variable Tra
Traced variable Trac
Traced variable Traci
Traced variable Tracin
Traced variable Tracing
```

The trace on a variable remains active until it is explicitly deleted. You can delete a trace using:

```python
trace_vdelete(self, mode, callbacktobedeleted)
```

The trace method returns the name of the callback method, which can be used to identify the callback that is to be deleted.
It is therefore vital to create widgets in the order we want the user to traverse them, or else the user will have a tough time navigating between the widgets using the keyboard. Different widgets are designed to respond differently to different keystrokes, so let's spend some time understanding the rules of traversing widgets with the keyboard.

Look at the code of the 8.02 widget traversal.py Python file to understand the keyboard traversal behavior of different widgets. Once you run the mentioned .py file, it shows a window like the following.

The code is simple: it adds an entry widget, a few buttons, a few radio buttons, a text widget, and a scale widget, and it demonstrates some of the most important keyboard traversal behaviors for these widgets. Here are some important points to note (refer to 8.02 widget traversal.py):

- The Tab key can be used to traverse forward, and Shift + Tab can be used to traverse backward.
- The text widget cannot be traversed using the Tab key, because a text widget can contain tab characters as content. Instead, the text widget can be traversed using Ctrl + Tab.
- Buttons can be pressed using the spacebar. Similarly, check buttons and radio buttons can be toggled using the spacebar.
- You can move up and down the items in a Listbox widget using the up and down arrow keys.
- The Scale widget responds to both the left/right and up/down keys. Likewise, the Scrollbar widget responds to the left/right or up/down keys, depending on its orientation.
- Most widgets (except Frame, Label, and Menu) get an outline by default when they have the focus, normally displayed as a thin black border around the widget. You can make the Frame and Label widgets show this outline as well by setting the highlightthickness option to a non-zero integer value.
- We change the color of the outline using highlightcolor='red' in our code.
- Frame, Label, and Menu widgets are not included in the tab navigation path by default. However, they can be included by setting the takefocus=1 option. Conversely, you can explicitly exclude a widget from the tab navigation path by setting takefocus=0.
- The Tab key traverses widgets in the order they were created. It visits a parent widget first (unless it is excluded using takefocus=0), followed by all its children widgets.
- You can use widget.focus_force() to force the input focus onto a widget.

Validating user input

Let's now discuss input data validation. Most of the applications we have developed in this article are point-and-click based (the drum machine, chess, and drawing applications), where validation of user input is not required. However, data validation is a must in programs like our phonebook application, where the user enters data that we store in a database. Ignoring user input validation can be dangerous in such applications, because the input can be misused for SQL injection.

In general, any application where a user can enter textual data is a good candidate for validating user input. In fact, it is almost considered a maxim not to trust user input. A wrong user input may be intentional or accidental. In either case, if you fail to validate or sanitize the data, you may cause unexpected errors in your program. In the worst case, user input can be used to inject harmful code that may be capable of crashing the program or wiping out an entire database.

Widgets such as Listbox, Combobox, and Radiobutton allow only a limited set of input options, and hence cannot normally be misused to enter wrong data. On the other hand, widgets such as the Entry widget, Spinbox widget, and Text widget allow a large range of possible user input, and hence need to be validated for correctness.
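Before moving on to Tkinter's validation options, note that the SQL injection risk mentioned above is usually addressed by combining input validation with parameterized queries on the database side. As a minimal sketch, assuming a hypothetical contacts table rather than the actual schema of this article's phonebook application, Python's built-in sqlite3 module would be used like this:

```python
import sqlite3

# In-memory database with a hypothetical contacts table.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE contacts (name TEXT, phone TEXT)')

def add_contact(name, phone):
    # The ? placeholders let the driver escape the values, so a
    # malicious name such as "x'); DROP TABLE contacts;--" is
    # stored as plain text instead of being executed as SQL.
    conn.execute('INSERT INTO contacts (name, phone) VALUES (?, ?)',
                 (name, phone))
    conn.commit()

add_contact('Alice', '555-0100')
add_contact("x'); DROP TABLE contacts;--", 'none')

rows = conn.execute('SELECT name FROM contacts').fetchall()
print(len(rows))  # 2 -- both rows stored safely; the table still exists
```

Parameterized queries do not replace input validation, but they remove the most dangerous failure mode of passing raw user text into SQL strings.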
To enable validation on a widget, you need to specify an additional option of the form validate='validationmode' on the widget. For example, to enable validation on an entry widget, you begin by specifying the validate option as follows:

```python
Entry(root, validate="all", validatecommand=vcmd)
```

The validation can occur in one of the following validation modes:

- none: This is the default mode; no validation occurs if validate is set to "none".
- focus: The validate command is called twice, once when the widget receives focus and once when the focus is lost.
- focusin: The validate command is called when the widget receives focus.
- focusout: The validate command is called when the widget loses focus.
- key: The validate command is called when the entry is edited.
- all: The validate command is called in all the above cases.

The code of the 8.03 validation mode demo.py file demonstrates all these validation modes by attaching them to a single validation method. Note how differently the Entry widgets respond to different events: some call the validation method on focus events, others call it as keystrokes are entered into the widget, and still others use a combination of focus and key events.

Although we set the validation mode to trigger the validate method, we need some sort of data to validate against our rules. This data is passed to the validate method using percent substitution.
For instance, we passed the mode as an argument to our validate method by performing a percent substitution on the validate command, as shown in the following:

```python
vcmd = (self.root.register(self.validate), '%V')
```

We followed by passing the value of v as an argument to our validate method:

```python
def validate(self, v)
```

In addition to %V, Tkinter recognizes the following percent substitutions:

- %d: The type of action that occurred on the widget: 1 for insert, 0 for delete, and -1 for focus, forced, or textvariable validation.
- %i: The index of the character string inserted or deleted, if any; otherwise -1.
- %P: The value of the entry if the edit is allowed. If you are configuring the Entry widget to have a new textvariable, this will be the value of that textvariable.
- %s: The current value of the entry, prior to editing.
- %S: The text string being inserted or deleted, if any; {} otherwise.
- %v: The type of validation currently set.
- %V: The type of validation that triggered the callback method (key, focusin, focusout, or forced).
- %W: The name of the Entry widget.

These substitutions provide us with the data we need to validate the input. Let's now pass all of this data through a dummy validate method that simply prints it, just to see what kind of data we can expect to receive for carrying out our validations (refer to the code of 8.04 percent substitutions demo.py).

Take particular note of the data returned by %P and %s, because they pertain to the actual data entered by the user in the Entry widget. In most cases, you will check one of these two values against your validation rules.

Now that we have a background on the rules of data validation, let's see two practical examples that demonstrate input validation.

Key Validation

Let's assume that we have a form that asks for a user's name. We want the user to input only alphabets or space characters in the name.
Thus, any number or special character is not allowed, as shown in the following screenshot of the widget.

This is clearly a case for the 'key' validation mode, because we want to check whether an entry is valid after every key press. The percent substitution that we need to check is %S, because it yields the text string being inserted into or deleted from the Entry widget. Accordingly, the code that validates the entry widget is as follows (refer to 8.05 key validation.py):

```python
import Tkinter as tk

class KeyValidationDemo():
    def __init__(self):
        root = tk.Tk()
        tk.Label(root, text='Enter your name').pack()
        vcmd = (root.register(self.validate_data), '%S')
        invcmd = (root.register(self.invalid_name), '%S')
        tk.Entry(root, validate="key", validatecommand=vcmd,
                 invalidcommand=invcmd).pack(pady=5, padx=5)
        self.errmsg = tk.Label(root, text='', fg='red')
        self.errmsg.pack()
        root.mainloop()

    def validate_data(self, S):
        self.errmsg.config(text='')
        return (S.isalpha() or S == ' ')  # always return True or False

    def invalid_name(self, S):
        self.errmsg.config(text='Invalid character %s\n'
                                'name can only have alphabets' % S)

app = KeyValidationDemo()
```

The description of the preceding code is as follows:

- We first register two options, validatecommand (vcmd) and invalidcommand (invcmd). In our example, validatecommand is registered to call the validate_data method, and the invalidcommand option is registered to call another method named invalid_name.
- The validatecommand option specifies a method to be evaluated that validates the input. The validation method must return a Boolean value, where True signifies that the entered data is valid and False signifies that it is invalid.
- If the validate method returns False (invalid data), no data is added to the Entry widget and the script registered for invalidcommand is evaluated. In our case, a False validation calls the invalid_name method.
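The per-keystroke rule used here can also be checked in isolation, since each call receives only the text being inserted. A small sketch (the standalone function name is ours, not from the book's code):

```python
def is_valid_keystroke(S):
    # Mirrors the key-validation rule: accept alphabetic
    # characters or a space, reject everything else.
    return S.isalpha() or S == ' '

print(is_valid_keystroke('a'))   # True
print(is_valid_keystroke(' '))   # True
print(is_valid_keystroke('5'))   # False
print(is_valid_keystroke('@'))   # False
```

Testing the rule as a plain function like this is a convenient way to pin down the behavior before wiring it into a widget callback.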
The invalidcommand method is generally responsible for displaying error messages or setting the focus back to the Entry widget.

Let's look at register(self, func, subst=None, needcleanup=1). The register method returns a newly created Tcl function. If this function is called, the Python function func is executed. If the optional function subst is provided, it is executed before func.

Focus Out Validation

The previous example demonstrated validation in 'key' mode. This means that the validation method was called after every key press to check whether the entry was valid. However, there are situations when you might want to check the entire string entered into the widget, rather than checking individual keystroke entries. For example, if an Entry widget accepts a valid e-mail address, we would ideally like to check validity after the user has entered the entire e-mail address, not after every keystroke. This qualifies as validation in 'focusout' mode.

Check out the code of 8.06 focus out validation.py for a demonstration of e-mail validation in the focusout mode:

```python
import Tkinter as tk
import re

class FocusOutValidationDemo():
    def __init__(self):
        self.master = tk.Tk()
        self.errormsg = tk.Label(text='', fg='red')
        self.errormsg.pack()
        tk.Label(text='Enter Email Address').pack()
        vcmd = (self.master.register(self.validate_email), '%P')
        invcmd = (self.master.register(self.invalid_email), '%P')
        self.emailentry = tk.Entry(self.master, validate="focusout",
                                   validatecommand=vcmd,
                                   invalidcommand=invcmd)
        self.emailentry.pack()
        tk.Button(self.master, text="Login").pack()
        tk.mainloop()

    def validate_email(self, P):
        self.errormsg.config(text='')
        x = re.match(r"[^@]+@[^@]+\.[^@]+", P)
        return (x is not None)  # True (valid email) / False (invalid email)

    def invalid_email(self, P):
        self.errormsg.config(text='Invalid Email Address')
        self.emailentry.focus_set()

app = FocusOutValidationDemo()
```

The code has a lot of similarities to the previous validation example. However, note the following differences:

- The validate mode is set to 'focusout', in contrast to the 'key' mode of the previous example. This means the validation is performed only when the Entry widget loses focus.
- This program uses the data provided by the %P percent substitution, in contrast to %S as used in the previous example. This is understandable, as %P provides the value entered in the Entry widget, while %S provides the value of the last keystroke.
- This program uses a regular expression to check whether the entered value corresponds to a valid e-mail format. Validation usually relies on regular expressions, and a full explanation of the topic is out of the scope of this project and article. For more information on the regular expression module, visit the following link: http://docs.python.org/2/library/re.html

This concludes our discussion of input validation in Tkinter. You should now be able to implement input validation to suit your custom needs.

Formatting widget data

Several kinds of input data, such as dates, times, phone numbers, credit card numbers, website URLs, and IP numbers, have an associated display format. For instance, a date is better represented in MM/DD/YYYY format.
Fortunately, it is easy to format the data into the required format as the user enters it in the widget (refer to 8.07 formatting entry widget to display date.py). The mentioned Python file automatically inserts forward slashes at the required places to display the user-entered date in the MM/DD/YYYY format:

```python
from Tkinter import *

class FormatEntryWidgetDemo:
    def __init__(self, root):
        Label(root, text='Date(MM/DD/YYYY)').pack()
        self.entereddata = StringVar()
        self.dateentrywidget = Entry(textvariable=self.entereddata)
        self.dateentrywidget.pack(padx=5, pady=5)
        self.dateentrywidget.focus_set()
        self.slashpositions = [2, 5]
        root.bind('<Key>', self.format_date_entry_widget)

    def format_date_entry_widget(self, event):
        entrylist = [c for c in self.entereddata.get() if c != '/']
        for pos in self.slashpositions:
            if len(entrylist) > pos:
                entrylist.insert(pos, '/')
        self.entereddata.set(''.join(entrylist))
        # Controlling the cursor
        cursorpos = self.dateentrywidget.index(INSERT)
        for pos in self.slashpositions:
            if cursorpos == (pos + 1):  # if cursor is on a slash
                cursorpos += 1
        if event.keysym not in ['BackSpace', 'Right', 'Left', 'Up', 'Down']:
            self.dateentrywidget.icursor(cursorpos)

root = Tk()
FormatEntryWidgetDemo(root)
root.mainloop()
```

The description of the preceding code is as follows:

- The Entry widget is bound to the key press event, so every new key press calls the related callback, the format_date_entry_widget method.
- First, the format_date_entry_widget method breaks the entered text down into an equivalent list named entrylist, ignoring any slash '/' symbol the user may have entered.
- It then iterates through the self.slashpositions list and inserts the slash symbol at all required positions in the entrylist argument. The net result is a list that has slashes inserted at all the right places.
- The next line converts this list into an equivalent string using join(), and then sets the value of our Entry widget to this string.
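The slash-insertion logic above can be isolated into a plain function and exercised without a GUI, which makes it easy to verify. A minimal sketch (the function name is ours, not from the book's code):

```python
def format_date(text, slashpositions=(2, 5)):
    # Drop any slashes the user typed, then re-insert them at the
    # fixed positions, exactly as the widget callback does.
    chars = [c for c in text if c != '/']
    for pos in slashpositions:
        if len(chars) > pos:
            chars.insert(pos, '/')
    return ''.join(chars)

print(format_date('12252013'))   # 12/25/2013
print(format_date('12/252013'))  # 12/25/2013 -- typed slashes are normalized
print(format_date('122'))        # 12/2 -- partial input is formatted as you type
```

Note that, like the widget callback, this formats but does not validate: format_date('99999999') happily returns '99/99/9999'.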
This ensures that the Entry widget text is formatted into the aforementioned date format. The remaining pieces of code simply control the cursor, ensuring that the cursor advances by one position whenever it encounters a slash symbol, and that key presses such as 'BackSpace', 'Right', 'Left', 'Up', and 'Down' are handled properly.

Note that this method does not validate the date value; the user may still enter an invalid date. The method defined here simply formats the input by adding a forward slash at the third and sixth positions. Adding date validation to this example is left as an exercise for you to complete.

This concludes our brief discussion on formatting data within widgets. You should now be able to create formatted widgets for a wide variety of input data that displays better in a given format.

More on fonts

Many Tkinter widgets let you specify custom font specifications, either at the time of widget creation or later using the configure() option. For most cases, the default fonts provide a standard look and feel. However, should you want to change font specifications, Tkinter lets you do so. There is one caveat, though: when you specify your own font, you need to make sure it looks good on all platforms where the program is intended to be deployed. A font might look good and match well on a particular platform but awful on another, so unless you know what you are doing, it is always advisable to stick to Tkinter's default fonts.

Most platforms have their own set of standard fonts used by the platform's native widgets. So, rather than trying to reinvent the wheel on what looks good or what is available on a given platform, Tkinter assigns these standard platform-specific fonts to its widgets, thus providing a native look and feel on every platform. Tkinter assigns nine such fonts to nine different names, which you can use in your programs.
The font names are as follows:

- TkDefaultFont
- TkTextFont
- TkFixedFont
- TkMenuFont
- TkHeadingFont
- TkCaptionFont
- TkSmallCaptionFont
- TkIconFont
- TkTooltipFont

Accordingly, you can use them in your programs in the following way:

```python
Label(text="Sale Up to 50% Off !", font="TkHeadingFont 20")
Label(text="**Conditions Apply", font="TkSmallCaptionFont 8")
```

Using this kind of font markup, you can be assured that your fonts will look native across all platforms.

Finer Control over Font

In addition to the above method of handling fonts, Tkinter provides a separate Font class implementation. The source code of this class is located at <Python27_installation_dir>\Lib\lib-tk\tkFont.py. To use this module, you need to import tkFont into your namespace (refer to 8.08 tkfont demo.py):

```python
from Tkinter import Tk, Label, Pack
import tkFont

root = Tk()
label = Label(root, text="Humpty Dumpty was pushed")
label.pack()
currentfont = tkFont.Font(font=label['font'])
print 'Actual : ' + str(currentfont.actual())
print 'Family : ' + currentfont.cget("family")
print 'Weight : ' + currentfont.cget("weight")
print 'Text width of Dumpty : %d' % currentfont.measure("Dumpty")
print 'Metrics: ' + str(currentfont.metrics())
currentfont.config(size=14)
label.config(font=currentfont)
print 'New Actual : ' + str(currentfont.actual())
root.mainloop()
```

The console output of this program is as follows:

```
Actual :{'family': 'Segoe UI', 'weight': 'normal', 'slant': 'roman', 'overstrike': 0, 'underline': 0, 'size': 9}
Family : Segoe UI
Weight : normal
Text width of Dumpty : 43
Metrics:{'fixed': 0, 'ascent': 12, 'descent': 3, 'linespace': 15}
```

As you can see, the tkFont module provides much finer-grained control over various aspects of fonts, which are otherwise inaccessible.

Font Selector

Now that we have seen the basic features available in the tkFont module, let's use it to implement a font selector.
The font selector would look like the one shown here. The code for the font selector is as follows (refer to 8.09 font selector.py):

```python
from Tkinter import *
import ttk
import tkFont

class FontSelectorDemo():
    def __init__(self):
        self.currentfont = tkFont.Font(font=('Times New Roman', 12))
        self.family = StringVar(value='Times New Roman')
        self.fontsize = StringVar(value='12')
        self.fontweight = StringVar(value=tkFont.NORMAL)
        self.slant = StringVar(value=tkFont.ROMAN)
        self.underlinevalue = BooleanVar(value=False)
        self.overstrikevalue = BooleanVar(value=False)
        self.gui_creator()
```

The description of the preceding code is as follows:

- We import Tkinter (for all widgets), ttk (for the Combobox widget), and tkFont for handling the font-related aspects of the program.
- We create a class named FontSelectorDemo and use its __init__ method to initialize all the attributes we intend to track in our program.
- Finally, the __init__ method calls another method named gui_creator(), which is responsible for creating all the GUI elements of the program.

Creating the GUI

The code represented here is a highly abridged version of the actual code (refer to 8.09 font selector.py).
Here, we removed all the code that creates basic widgets, such as Label and Checkbutton, in order to show only the font-related code:

```python
def gui_creator(self):
    # create the top labels - code removed
    fontList = ttk.Combobox(textvariable=self.family)
    fontList.bind('<<ComboboxSelected>>', self.on_value_change)
    allfonts = list(tkFont.families())
    allfonts.sort()
    fontList['values'] = allfonts
    # Font Sizes
    sizeList = ttk.Combobox(textvariable=self.fontsize)
    sizeList.bind('<<ComboboxSelected>>', self.on_value_change)
    allfontsizes = range(6, 70)
    sizeList['values'] = allfontsizes
    # add four checkbuttons to provide a choice of font styles
    # all checkbutton commands attached to self.on_value_change
    # create text widget
    sampletext = 'The quick brown fox jumps over the lazy dog'
    self.text.insert(INSERT, '%s\n%s' % (sampletext, sampletext.upper()),
                     'fontspecs')
    self.text.config(state=DISABLED)
```

The description of the preceding code is as follows:

- We have highlighted the code that creates two Combobox widgets: one for the Font Family selection and the other for the Font Size selection.
- We use tkFont.families() to fetch the list of all the fonts installed on the computer. This is converted into list format and sorted before it is inserted into the fontList Combobox widget.
- Similarly, we add a range of font sizes from 6 to 70 to the Font Size Combobox.
- We also add four Checkbutton widgets to keep track of the font styles bold, italic, underline, and overstrike. The code for these is not shown previously, because we have created similar check buttons in some of our previous programs.
- We then add a Text widget, insert a sample text into it, and, more importantly, tag that text with a tag named fontspecs.
- Finally, all our widgets have a command callback connecting back to a common method named on_value_change. This method is responsible for updating the display of the sample text whenever the value of any of the widgets changes.
Updating Sample Text

```python
def on_value_change(self, event=None):
    try:
        self.currentfont.config(
            family=self.family.get(),
            size=self.fontsize.get(),
            weight=self.fontweight.get(),
            slant=self.slant.get(),
            underline=self.underlinevalue.get(),
            overstrike=self.overstrikevalue.get())
        self.text.tag_config('fontspecs', font=self.currentfont)
    except ValueError:
        pass  # invalid entry - ignored for now; you could use a
              # tkMessageBox dialog to show an error
```

The description of the preceding code is as follows:

- This method is called whenever the state of any of the widgets changes.
- It simply fetches all the font data and configures our currentfont attribute with the updated font values.
- Finally, it updates the text content tagged as fontspecs with the values of the current font.

Working with Unicode characters

Computers only understand binary numbers. Therefore, everything you see on your computer, for example texts, images, audio, and video, needs to be expressed in terms of binary numbers. This is where encoding comes into play. An encoding is a set of standard rules that assign unique numeric values to each text character.

Python 2.x's default encoding is ASCII (American Standard Code for Information Interchange). ASCII is a 7-bit character encoding that can encode 2^7 (128) characters. Because ASCII encoding was developed in America, it encodes characters from the English alphabet, namely the numbers 0-9, the letters a-z and A-Z, some common punctuation symbols, some teletype machine control codes, and a blank space. It is here that Unicode encoding comes to our rescue.
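The 7-bit limit of ASCII becomes concrete when you encode characters to bytes: anything beyond the 128 ASCII code points needs a multi-byte encoding such as UTF-8. A small sketch, written so it behaves the same under Python 2 and Python 3:

```python
# -*- coding: utf-8 -*-
# ASCII characters occupy a single byte in UTF-8,
# while a Devanagari letter needs three bytes.
ascii_char = u'A'
hindi_char = u'\u092d'  # DEVANAGARI LETTER BHA

print(len(ascii_char.encode('utf-8')))  # 1
print(len(hindi_char.encode('utf-8')))  # 3

# Every code point survives a round trip through UTF-8.
assert hindi_char.encode('utf-8').decode('utf-8') == hindi_char
```

This is also why a plain ASCII source file cannot contain such characters literally: the byte values fall outside the 0-127 range that ASCII defines.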
The following are the key features of Unicode encoding:

- It is a way to represent text without bytes.
- It provides a unique code point for each character of every language.
- It defines more than a million code points, representing characters of all major scripts on earth.
- Within Unicode, there are several Unicode Transformation Formats (UTFs).
- UTF-8 is one of the most commonly used encodings, where the 8 means that 8-bit numbers are used in the encoding.
- Python also supports UTF-16 encoding, but it is less frequently used, and UTF-32 is not supported by Python 2.x.

Say you want to display a Hindi string on a Tkinter Label widget. You would intuitively try to run code like the following:

```python
from Tkinter import *
root = Tk()
Label(root, text=" भारतमेंआपकास्वागतहै ").pack()
root.mainloop()
```

If you try to run the previous code, you will get an error message as follows:

```
SyntaxError: Non-ASCII character '\xe0' in file 8.07.py on line 4, but no encoding declared; see http://www.Python.org/peps/pep-0263.html for details
```

This means that Python 2.x, by default, cannot handle non-ASCII characters. The Python standard library supports over 100 encodings, but if you are trying to use anything other than ASCII encoding, you have to declare the encoding explicitly. Fortunately, handling other encodings is very simple in Python. There are two ways in which you can deal with non-ASCII characters.

Declaring line encoding

The first way is to explicitly mark a string containing Unicode characters with the prefix u, as shown in the following code snippet (refer to 8.10 line encoding.py):

```python
from Tkinter import *
root = Tk()
Label(root, text=u"भारतमेंआपकास्वागतहै").pack()
root.mainloop()
```

When you try to run this program from IDLE, you get a warning message similar to the following one. Simply click on Ok to save this file as UTF-8, and run the program to display the Unicode label.

Summary

In this article, we discussed some vital aspects of GUI programming that form a common theme in many GUI programs.
Resources for Article:

Further resources on this subject:
- Getting Started with Spring Python [Article]
- Python Testing: Installing the Robot Framework [Article]
- Getting Up and Running with MySQL for Python [Article]


Getting Started with Code::Blocks

Packt
28 Oct 2013
7 min read
Why Code::Blocks?

Before we go on to learn more about Code::Blocks, let us understand why we should use it over other IDEs:

- It is a cross-platform Integrated Development Environment (IDE), supporting the Windows, Linux, and Mac operating systems.
- It fully supports the GCC compiler and the GNU debugger on all supported platforms, and supports numerous other compilers to various degrees on multiple platforms.
- It is scriptable and extendable, and comes with several plugins that extend its core functionality.
- It is light on resources and doesn't require a powerful computer to run.
- Finally, it is free and open source.

Installing Code::Blocks on Windows

Our primary focus in this article will be on the Windows platform; however, we'll touch upon other platforms wherever possible. Official Code::Blocks binaries are available from www.codeblocks.org. Perform the following steps for a successful installation of Code::Blocks:

1. For installation on the Windows platform, download the codeblocks-12.11mingw-setup.exe file from http://www.codeblocks.org/downloads/26 or from the sourceforge mirror http://sourceforge.net/projects/codeblocks/files/Binaries/12.11/Windows/codeblocks-12.11mingw-setup.exe/download and save it in a folder.
2. Double-click on this file and run it. You'll be presented with the following screen; click on the Next button to continue.
3. The license text will be presented. The Code::Blocks application is licensed under GNU GPLv3, and the Code::Blocks SDK is licensed under GNU LGPLv3. You can learn more about these licenses at https://www.gnu.org/licenses/licenses.html. Click on I Agree to accept the License Agreement.
4. The component selection page will be presented. You may choose any of the following options:
   - Default install: This is the default installation option, which installs Code::Blocks' core components and core plugins.
   - Contrib Plugins: Plugins are small programs that extend Code::Blocks' functionality. Select this option to install plugins contributed by several other developers.
   - C::B Share Config: This utility can copy all or parts of a configuration file.
   - MinGW Compiler Suite: This option installs GCC 4.7.1 for Windows.
5. Select Full Installation and click on the Next button to continue. The installer will now prompt you to select the installation directory.
6. You can install to the default installation directory, or choose a different Destination Folder, and then click on the Install button. The installer will now proceed with the installation.
7. Code::Blocks will prompt you to run it after the installation is completed. Click on the No button here, and then click on the Next button. The installation will now be completed.
8. Click on the Finish button to complete the installation. A shortcut will be created on the desktop.

This completes our Code::Blocks installation on Windows.

Installing Code::Blocks on Linux

Code::Blocks runs on numerous Linux distributions. In this section we'll learn about installing Code::Blocks on CentOS Linux. CentOS is a Linux distro based on Red Hat Enterprise Linux and is a freely available, enterprise-grade Linux distribution. Perform the following steps to install Code::Blocks on Linux:

1. Navigate to the Settings | Administration | Add/Remove Software menu option. Enter wxGTK in the Search box and hit the Enter key. As of writing, wxGTK-2.8.12 is the latest stable wxWidgets release available. Select it and click on the Apply button to install the wxGTK package via the package manager.
2. Download the packages for CentOS 6 from this URL: http://www.codeblocks.org/downloads/26.
4. Unpack the .tar.bz2 file by issuing the following command in a shell:

   tar xvjf codeblocks-12.11-1.el6.i686.tar.bz2

5. Right-click on the codeblocks-12.11-1.el6.i686.rpm file and choose the Open with Package Installer option.
6. In the window that is displayed, click on the Install button to begin the installation. You may be asked to enter the root password if you are installing from a user account; enter it and click on the Authenticate button. Code::Blocks will now be installed.
7. Repeat steps 4 to 6 to install the other rpm files.

We have now learned to install Code::Blocks on the Windows and Linux platforms and are ready for C++ development. Before doing that, we'll learn about the Code::Blocks user interface.

First run

On the Windows platform, navigate to the Start | All Programs | CodeBlocks | CodeBlocks menu option to launch Code::Blocks. Alternatively, you may double-click on the shortcut displayed on the desktop. On Linux, navigate to the Applications | Programming | Code::Blocks IDE menu option to run Code::Blocks.

Code::Blocks will now ask the user to select the default compiler. Code::Blocks supports several compilers and hence is able to detect the presence of other compilers. It should detect GNU GCC Compiler (which was bundled with the installer and has been installed); click on it to select it and then click on the Set as default button.

Do not worry about any items highlighted in red. Red lines indicate that Code::Blocks was unable to detect the presence of a particular compiler. Finally, click on the OK button to continue with the loading of Code::Blocks. After the loading is complete, the main Code::Blocks window will be shown.
Annotated portions highlight the different User Interface (UI) components. Let us understand more about each of them:

- Menu bar and toolbar: All Code::Blocks commands are available via the menu bar, while toolbars provide quick access to commonly used commands.
- Start page and code editors: The Start page is the default page when Code::Blocks is launched. It contains some useful links and the recent project and file history. Code editors are text containers to edit C++ (and other language) source files. These editors offer syntax highlighting, a feature that highlights keywords in different colors.
- Management pane: This window shows all open files (including source files, project files, and workspace files). This pane is also used by other plugins to provide additional functionality; for example, the FileManager plugin provides a Windows Explorer-like facility and the Code Completion plugin provides details of currently open source files.
- Log windows: Log messages from different tools, for example, the compiler, debugger, and document parser, are shown here. This component is also used by other plugins.
- Status bar: This component shows various pieces of status information about Code::Blocks, for example, the file path, file encoding, and line numbers.

Introduction to important toolbars

Toolbars provide easier access to different functions of Code::Blocks. Among the several toolbars, the following are the most important.

Main toolbar

The main toolbar holds core component commands. From left to right there are new file, open file, save, save all, undo, redo, cut, copy, paste, find, and replace buttons.

Compiler toolbar

The compiler toolbar holds commonly used compiler-related commands. From left to right there are build, run, build and run, rebuild, stop build, and build target buttons. Compilation of C++ source code is also called a build, and this terminology will be used throughout the article.
Debugger toolbar

The debugger toolbar holds commonly used debugger-related commands. From left to right there are debug/continue, run to cursor, next line, step into, step out, next instruction, step into instruction, break debugger, stop debugger, debugging windows, and various info buttons.

Summary

In this article we have learned to download and install Code::Blocks. We also learned about its different interface elements.

Resources for Article:

Further resources on this subject:
- OpenGL 4.0: Building a C++ Shader Program Class [Article]
- Application Development in Visual C++ - The Tetris Application [Article]
- Building UI with XAML for Windows 8 Using C [Article]

Scratching the Surface of Zend Framework 2

Packt
25 Oct 2013
11 min read
Bootstrap your app

There are two ways to bootstrap your ZF2 app. The default one is less flexible but handles the entire configuration; the manual one is really flexible, but you have to take care of everything yourself. The goal of the bootstrap is to provide the application, ZendMvcApplication, with all the components and dependencies needed to successfully handle a request. A Zend Framework 2 application relies on the following six components:

- Configuration array
- ServiceManager instance
- EventManager instance
- ModuleManager instance
- Request object
- Response object

As these are the pillars of a ZF2 application, we will take a look at how these components are configured to bootstrap the app. To begin with, we will see how the components interact from a high-level perspective, and then we will jump into the details of how each one works.

When a new request arrives at our application, ZF2 needs to set up the environment to be able to fulfill it. This process implies reading configuration files, creating the required objects and services, attaching them to the events that are going to be used, and finally creating the request object based on the request data. Once we have the request object, ZF2 will tell the router to do its job and will inspect the request object to determine who is responsible for processing the data. Once a controller and action have been identified as the ones in charge of the request, ZF2 dispatches them and gives the controller/action control of the program in order to execute the code that will interpret the request and do something with it. This can range from accepting an uploaded image to showing a sign-up form or changing data in an external database. When the controller processes the data, a view object is sometimes generated to encapsulate the data that we should send to the client who made the request, and a response object is created. After we have a response object, ZF2 sends it to the browser and the request ends.
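The request lifecycle just described can be sketched language-agnostically. The following Python sketch shows the route-dispatch-respond flow; all class names, routes, and data structures here are invented for illustration and are not Zend Framework's API:

```python
# Minimal sketch of the request lifecycle described above: route the
# request, dispatch a controller action, wrap the result in a response.
# All names here are invented for illustration -- this is not ZF2's API.

class Router:
    def __init__(self, routes):
        self.routes = routes  # maps a path to a (controller, action) pair

    def match(self, request):
        return self.routes[request["path"]]

class GreetingController:
    def index(self, request):
        return "Hello, " + request["query"].get("name", "world")

class Application:
    def __init__(self, router):
        self.router = router

    def run(self, request):
        controller, action = self.router.match(request)  # routing
        body = getattr(controller, action)(request)      # dispatch
        return {"status": 200, "body": body}             # response object

app = Application(Router({"/hello": (GreetingController(), "index")}))
response = app.run({"path": "/hello", "query": {"name": "ZF2"}})
print(response["body"])  # Hello, ZF2
```

The point of the sketch is only the shape of the flow: the framework owns the loop, and the controller/action only sees the request and returns data for the response.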
Now that we have seen a very simple overview of the lifecycle of a request, we will jump into the details of how each object works, the options available, and some examples of each one.

Configuration array

Let's dissect the first component of the list by taking a look at the index.php file:

chdir(dirname(__DIR__));

// Setup autoloading
require 'init_autoloader.php';

// Run the application!
ZendMvcApplication::init(require 'config/application.config.php')->run();

As you can see, we do only three things. First, we change the current folder for the convenience of making everything relative to the root folder. Then we require the autoloader file; we will examine this file later. Finally, we initialize a ZendMvcApplication object by passing a configuration file, and only then does the run method get called. The configuration file looks like the following code snippet:

return array(
    'modules' => array(
        'Application',
    ),
    'module_listener_options' => array(
        'config_glob_paths' => array(
            'config/autoload/{,*.}{global,local}.php',
        ),
        'module_paths' => array(
            './module',
            './vendor',
        ),
    ),
);

This file returns an array containing the configuration options for the application. Two options are used: modules and module_listener_options. As ZF2 uses a module organization approach, we should add the modules that we want to use in the application here. The second option is passed as configuration to the ModuleManager object. The config_glob_paths array is used when scanning folders in search of config files, and the module_paths array tells ModuleManager a set of paths where the modules reside.

ZF2 uses a module approach to organize files. A module can contain almost anything: simple PHP files, view scripts, images, CSS, JavaScript, and so on. This approach allows us to build reusable blocks of functionality, and we will adhere to it while developing our project.
PSR-0 and autoloaders

Before continuing with the key components, let's take a closer look at the init_autoloader.php file used by index.php. As stated in its first block comment, this file is more complicated than it's supposed to be. This is because ZF2 will try to set up different loading mechanisms and configurations.

if (file_exists('vendor/autoload.php')) {
    $loader = include 'vendor/autoload.php';
}

The first thing is to check whether there is an autoload.php file inside the vendor folder; if it's found, we load it. This is because the user might be using Composer, in which case Composer will provide a PSR-0 class loader. This will also register the namespaces defined by Composer on the loader.

PSR-0 is an autoloading standard proposed by the PHP Framework Interop Group (http://www.php-fig.org/) that describes the mandatory requirements for autoloader interoperability between frameworks. Zend Framework 2 is one of the projects that adheres to it.

if (getenv('ZF2_PATH')) {
    $zf2Path = getenv('ZF2_PATH');
} elseif (get_cfg_var('zf2_path')) {
    $zf2Path = get_cfg_var('zf2_path');
} elseif (is_dir('vendor/ZF2/library')) {
    $zf2Path = 'vendor/ZF2/library';
}

In the next section we try to get the path of the ZF2 files from different sources. We first try to get it from the environment; if that fails, from a directive value in the php.ini file; and finally, if the previous methods fail, the code checks whether a specific folder exists inside the vendor folder.

if ($zf2Path) {
    if (isset($loader)) {
        $loader->add('Zend', $zf2Path);
    } else {
        include $zf2Path . '/Zend/Loader/AutoloaderFactory.php';
        ZendLoaderAutoloaderFactory::factory(array(
            'ZendLoaderStandardAutoloader' => array(
                'autoregister_zf' => true
            )
        ));
    }
}

Finally, if the framework is found by any of these methods, then, based on the existence of the Composer autoloader, the code will either just add the Zend namespace to that loader or instantiate an internal autoloader, ZendLoaderAutoloader, and use it as a default. As you can see, there are multiple ways to set up the autoloading mechanism in ZF2; in the end what matters is which one you prefer, as all of them essentially behave the same.

ServiceManager

After all this code has executed, we arrive at the last section of the index.php file, where we actually instantiate the ZendMvcApplication object. As we said, there are two methods of creating an instance of ZendMvcApplication. In the default approach, we call the static init method of the class, passing an optional configuration as the first parameter. This method takes care of instantiating a new ServiceManager object, storing the configuration inside it, loading the modules specified in the configuration, and returning a configured ZendMvcApplication. ServiceManager is a service/object locator that implements the Service Locator design pattern; its responsibility is to retrieve other objects.

$serviceManager = new ServiceManager(
    new ServiceServiceManagerConfig($smConfig)
);
$serviceManager->setService('ApplicationConfig', $configuration);
$serviceManager->get('ModuleManager')->loadModules();
return $serviceManager->get('Application')->bootstrap();

As you can see, the init method calls the bootstrap() method of the ZendMvcApplication instance.

Service Locator is a design pattern used in software development to encapsulate the process of obtaining other objects. The concept is based on a central repository that stores the objects and also knows how to create them if required.

EventManager

This component is designed to provide multiple functionalities.
It can be used to implement simple observer patterns, to do aspect-oriented design, or even to create event-driven architectures. The basic operations you can perform with this component are attaching and detaching listeners to named events, triggering events, and interrupting the execution of listeners when an event is fired. Let's see a couple of examples of how to attach to an event and how to fire one:

// Registering an event listener
$events = new EventManager();
$events->attach(array('EVENT_NAME'), $callback);

// Triggering an event
$events->trigger('EVENT_NAME', $this, $params);

Inside the bootstrap method of ZendMvcApplication, we register the events of RouteListener, DispatchListener, and ViewManager. After that, the code instantiates a new custom event called MvcEvent that will be used as the target when firing events. Finally, this piece of code fires the bootstrap event.

ModuleManager

Zend Framework 2 introduces a completely redesigned ModuleManager, built with simplicity, flexibility, and reuse in mind. Modules can hold everything from PHP to images, CSS, library code, views, and so on. The responsibility of this component in the bootstrap process of an app is loading the available modules specified by the config file. This is accomplished by the following line of code, located in the init method of ZendMvcApplication:

$serviceManager->get('ModuleManager')->loadModules();

When executed, this line retrieves the list of modules from the config file and loads each module. Each module has to contain a file called Module.php with the initialization of the module's components, if needed. This allows the module manager to retrieve the configuration of the module. Let's see the usual content of this file:

namespace MyModule;

class Module
{
    public function getAutoloaderConfig()
    {
        return array(
            'ZendLoaderClassMapAutoloader' => array(
                __DIR__ . '/autoload_classmap.php',
            ),
            'ZendLoaderStandardAutoloader' => array(
                'namespaces' => array(
                    __NAMESPACE__ => __DIR__ . '/src/' . __NAMESPACE__,
                ),
            ),
        );
    }

    public function getConfig()
    {
        return include __DIR__ . '/config/module.config.php';
    }
}

As you can see, we define a method called getAutoloaderConfig() that provides the configuration for the autoloader to ModuleManager. The last method, getConfig(), is used to provide the configuration of the module to ModuleManager; for example, this will contain the routes handled by the module.

Request object

This object encapsulates all data related to a request and allows the developer to interact with the different parts of a request. It is used in the constructor of ZendMvcApplication and is also set inside MvcEvent so that it can be retrieved when certain events are fired.

Response object

This object encapsulates all the parts of an HTTP response and provides the developer with a fluent interface to set all the response data. It is used in the same way as the request object: it is instantiated in the constructor and added to MvcEvent so that it can be interacted with across all the events and classes.

The request object

As we said, the request object encapsulates all the data related to a request and provides the developer with a fluent API to access that data. Let's take a look at the details of the request object in order to understand how to use it and what it offers us:

use ZendHttpRequest;

$string = "GET /foo HTTP/1.1\r\n\r\nSome Content";
$request = Request::fromString($string);

$request->getMethod();
$request->getUri();
$request->getUriString();
$request->getVersion();
$request->getContent();

This example comes directly from the documentation and shows how a request object can be created from a string, and how to then access data related to the request using the methods provided.
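The idea behind a fromString()-style factory, hydrating a request object from a raw HTTP string, can be sketched in a few lines of Python. This is a simplified illustration only; the SimpleRequest class is invented for the example and does not mirror ZF2's actual parser:

```python
# Simplified sketch of building a request object from a raw HTTP string,
# in the spirit of Request::fromString() above (SimpleRequest is an
# invented name; this is not ZF2 code).

class SimpleRequest:
    def __init__(self, method, uri, version, content):
        self.method = method
        self.uri = uri
        self.version = version
        self.content = content

    @classmethod
    def from_string(cls, raw):
        # Headers and body are separated by a blank line (CRLF CRLF).
        head, _, content = raw.partition("\r\n\r\n")
        request_line = head.splitlines()[0]
        method, uri, http_version = request_line.split(" ")
        version = http_version.split("/")[1]  # "HTTP/1.1" -> "1.1"
        return cls(method, uri, version, content)

req = SimpleRequest.from_string("GET /foo HTTP/1.1\r\n\r\nSome Content")
print(req.method, req.uri, req.version, req.content)
# GET /foo 1.1 Some Content
```

A real parser would also split out headers and validate the request line, but the hydration pattern, one constructor call producing an object with typed accessors, is the same.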
So, every time we need to know something about the request, we access this object to get the data we need. If we check the code in ZendHttpPhpEnvironmentRequest.php, the first thing we notice is that the data is populated in the constructor using the superglobal arrays. All this data is processed and then stored inside the object so it can be exposed in a standard way through methods.

To manipulate the URI of the request you can get/set the data with three methods: two getters and one setter. The only difference between the getters is that one returns a plain string and the other returns an HttpUri object.

- getUri() and getUriString()
- setUri()

To retrieve the data passed in the request, there are a few specialized methods, depending on the data you want to get:

- getQuery()
- getPost()
- getFiles()
- getHeader() and getHeaders()

Regarding the request method, the object has a general way to know the method used, returning a string, plus nine specialized functions that test for specific methods based on RFC 2616, which defines the standard methods for an HTTP request:

- getMethod()
- isOptions()
- isGet()
- isHead()
- isPost()
- isPut()
- isDelete()
- isTrace()
- isConnect()
- isPatch()

Finally, two more methods are available in this object to test special requests, such as AJAX requests and requests made by a Flash object:

- isXmlHttpRequest()
- isFlashRequest()

Notice that the data stored in the superglobal arrays, when populated into the object, is converted from an Array to a Parameters object. The Parameters class lives in the Stdlib section of ZF2, a folder where common objects used across the framework can be found. It is an extension of ArrayObject and implements ParametersInterface, which brings ArrayAccess, Countable, Serializable, and Traversable functionality to the parameters stored inside the object. The goal of this object is to provide a common interface for accessing data stored in the superglobal arrays, expanding the ways you can interact with the data in an object-oriented approach.
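As a rough analogue of what Parameters provides, here is a small Python class exposing array-style access, counting, and iteration over request data. The class and its methods are invented for illustration and do not mirror ZF2's API exactly:

```python
# Illustrative Python analogue of ZF2's Parameters object: one wrapper
# giving array-style access (ArrayAccess), counting (Countable), and
# iteration (Traversable) over request data. Invented for illustration.

class Parameters:
    def __init__(self, data=None):
        self._data = dict(data or {})

    def __getitem__(self, key):          # array-style reads
        return self._data[key]

    def __setitem__(self, key, value):   # array-style writes
        self._data[key] = value

    def __len__(self):                   # countable
        return len(self._data)

    def __iter__(self):                  # traversable
        return iter(self._data)

    def get(self, key, default=None):    # convenient, non-throwing read
        return self._data.get(key, default)

query = Parameters({"page": "2", "sort": "date"})
print(query["page"], len(query), query.get("missing", "n/a"))
# 2 2 n/a
```

The value of the wrapper is uniformity: query, post, and file data all answer to the same small interface instead of being bare arrays with ad hoc access patterns.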

Image classification and feature extraction from images

Packt
25 Oct 2013
3 min read
(For more resources related to this topic, see here.)

Classifying images

Automated Remote Sensing (ARS) is rarely ever done in the visible spectrum. The most commonly available wavelengths outside of the visible spectrum are infrared and near-infrared. The following scene is a thermal image (band 10) from a fairly recent Landsat 8 flyover of the US Gulf Coast from New Orleans, Louisiana to Mobile, Alabama. Major natural features in the image are labeled so you can orient yourself.

Because every pixel in that image has a reflectance value, it is information. Python can "see" those values and pick out features the same way we intuitively do by grouping related pixel values. We can colorize pixels based on their relation to each other to simplify the image and view related features. This technique is called classification.

Classification can range from fairly simple groupings based only on some value distribution algorithm derived from the histogram, to complex methods involving training data sets and even machine learning and artificial intelligence. The simplest forms are called unsupervised classifications, whereas methods involving some sort of training data to guide the computer are called supervised. It should be noted that classification techniques are used across many fields, from medical doctors trying to spot cancerous cells in a patient's body scan, to casinos using facial-recognition software on security videos to automatically spot known con-artists at blackjack tables.

To introduce remote sensing classification we'll just use the histogram to group pixels with similar colors and intensities and see what we get. First you'll need to download the Landsat 8 scene here: http://geospatialpython.googlecode.com/files/thermal.zip

Instead of our histogram() function from previous examples, we'll use the version included with NumPy that allows you to easily specify a number of bins and returns two arrays with the frequency as well as the ranges of the bin values.
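To see concretely what the NumPy version returns, here is a tiny standalone example (the pixel values are invented for illustration):

```python
# A tiny demonstration of what numpy.histogram returns: an array of
# frequencies and an array of bin edges. With n bins there are n + 1
# edges, which is why a look-up table keyed to the edges needs one
# more entry than the bin count. The values below are made up.
import numpy as np

pixels = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90])
frequencies, edges = np.histogram(pixels, bins=5)

print(frequencies)  # two pixels fall into each of the five equal bins
print(edges)        # six edges delimit the five bins
```

In the classification code that follows, it is the second array (the bin edges) that serves as the class definitions.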
We'll use the second array with the ranges as our class definitions for the image. The lut or look-up table is an arbitrary color palette used to assign colors to classes. You can use any colors you want.

import gdalnumeric

# Input file name (thermal image)
src = "thermal.tif"
# Output file name
tgt = "classified.jpg"

# Load the image into numpy using gdal
srcArr = gdalnumeric.LoadFile(src)

# Split the histogram into 20 bins as our classes
classes = gdalnumeric.numpy.histogram(srcArr, bins=20)[1]

# Color look-up table (LUT) - must be len(classes)+1.
# Specified as R,G,B tuples
lut = [[255,0,0],[191,48,48],[166,0,0],[255,64,64],
    [255,115,115],[255,116,0],[191,113,48],[255,178,115],
    [0,153,153],[29,115,115],[0,99,99],[166,75,0],
    [0,204,0],[51,204,204],[255,150,64],[92,204,204],[38,153,38],
    [0,133,0],[57,230,57],[103,230,103],[184,138,0]]

# Starting value for classification
start = 1

# Set up the RGB color JPEG output image
rgb = gdalnumeric.numpy.zeros((3, srcArr.shape[0], srcArr.shape[1],),
    gdalnumeric.numpy.float32)

# Process all classes and assign colors
for i in range(len(classes)):
    mask = gdalnumeric.numpy.logical_and(start <= srcArr,
        srcArr <= classes[i])
    for j in range(len(lut[i])):
        rgb[j] = gdalnumeric.numpy.choose(mask, (rgb[j], lut[i][j]))
    start = classes[i]+1

# Save the image
gdalnumeric.SaveArray(rgb.astype(gdalnumeric.numpy.uint8), tgt,
    format="JPEG")

The following image is our classification output, which we just saved as a JPEG. We didn't specify the prototype argument when saving as an image, so it has no georeferencing information.

This result isn't bad for a very simple unsupervised classification. The islands and coastal flats show up as different shades of green. The clouds were isolated as shades of orange and dark blues. We did have some confusion inland where the land features were colored the same as the Gulf of Mexico. We could further refine this process by defining the class ranges manually instead of just using the histogram.
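As a sketch of that refinement, the class ranges can be defined by hand and applied with numpy.digitize instead of being derived from the histogram. The threshold values and the tiny input array below are invented for illustration:

```python
# Sketch of manually defined class ranges applied with np.digitize,
# instead of histogram-derived bins. The thresholds and the miniature
# "image" are invented values, not taken from the Landsat scene.
import numpy as np

srcArr = np.array([[5, 12, 40],
                   [22, 90, 55]])

# Hand-picked upper bounds separating three classes (e.g. water,
# land, cloud). np.digitize returns the class index of each pixel.
thresholds = [15, 60]
class_index = np.digitize(srcArr, thresholds)

print(class_index)
# [[0 0 1]
#  [1 2 1]]
```

Each pixel now carries a class index that can be mapped through a look-up table exactly as in the histogram-based version, but with boundaries chosen from domain knowledge rather than the value distribution.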

Using Media Files – playing audio files

Packt
24 Oct 2013
13 min read
(For more resources related to this topic, see here.)

Playing audio files

JUCE provides a sophisticated set of classes for dealing with audio. This includes: sound file reading and writing utilities, interfacing with the native audio hardware, audio data conversion functions, and a cross-platform framework for creating audio plugins for a range of well-known host applications. Covering all of these aspects is beyond the scope of this article, but the examples in this section will outline the principles of playing sound files and communicating with the audio hardware. In addition to showing the audio features of JUCE, in this section we will also create the GUI and autogenerate some other aspects of the code using the Introjucer application.

Creating a GUI to control audio file playback

1. Create a new GUI application Introjucer project of your choice, selecting the option to create a basic window.
2. In the Introjucer application, select the Config panel, and select Modules in the hierarchy. For this project we need the juce_audio_utils module (which contains a special Component class for configuring the audio device hardware); therefore, turn ON this module.
3. Even though we created a basic window and a basic component, we are going to create the GUI using the Introjucer application. Navigate to the Files panel and right-click (on the Mac, press control and click) on the Source folder in the hierarchy, and select Add New GUI Component… from the contextual menu. When asked, name the header MediaPlayer.h and click on Save.
4. In the Files hierarchy, select the MediaPlayer.cpp file. First select the Class panel and change the Class name from NewComponent to MediaPlayer.
5. We will need four buttons for this basic project: a button to open an audio file, a Play button, a Stop button, and an audio device settings button. Select the Subcomponents panel, and add four TextButton components to the editor by right-clicking to access the contextual menu.
6. Space the buttons equally near the top of the editor, and configure each button as outlined in the following table:

   Purpose          | member name    | name     | text              | background (normal)
   Open file        | openButton     | open     | Open...           | Default
   Play/pause file  | playButton     | play     | Play              | Green
   Stop playback    | stopButton     | stop     | Stop              | Red
   Configure audio  | settingsButton | settings | Audio Settings... | Default

7. For each button, access the mode pop-up menu for the width setting, and choose Subtracted from width of parent. This will keep the right-hand side of the buttons the same distance from the right-hand side of the window if the window is resized.

There are more customizations to be done in the Introjucer project, but for now, make sure that you have saved the MediaPlayer.h file, the MediaPlayer.cpp file, and the Introjucer project before you open your native IDE project. Make sure that you have saved all of these files in the Introjucer application; otherwise the files may not get correctly updated in the file system when the project is opened in the IDE.

In the IDE we need to replace the MainContentComponent class code to place a MediaPlayer object within it. Change the MainComponent.h file as follows:

#ifndef __MAINCOMPONENT_H__
#define __MAINCOMPONENT_H__

#include "../JuceLibraryCode/JuceHeader.h"
#include "MediaPlayer.h"

class MainContentComponent : public Component
{
public:
    MainContentComponent();
    void resized();

private:
    MediaPlayer player;
};

#endif

Then, change the MainComponent.cpp file to:

#include "MainComponent.h"

MainContentComponent::MainContentComponent()
{
    addAndMakeVisible (&player);
    setSize (player.getWidth(), player.getHeight());
}

void MainContentComponent::resized()
{
    player.setBounds (0, 0, getWidth(), getHeight());
}

Finally, make the window resizable in the Main.cpp file, and build and run the project to check that the window appears as expected.

Adding audio file playback support

Quit the application and return to the Introjucer project.
Select the MediaPlayer.cpp file in the Files panel hierarchy and select its Class panel. The Parent classes setting already contains public Component. We are going to be listening for state changes from two of our member objects that are ChangeBroadcaster objects. To do this, we need our MediaPlayer class to inherit from the ChangeListener class. Change the Parent classes setting such that it reads:

public Component, public ChangeListener

Save the MediaPlayer.h file, the MediaPlayer.cpp file, and the Introjucer project again, and open it in your IDE. Notice in the MediaPlayer.h file that the parent classes have been updated to reflect this change.

For convenience, we are going to add some enumerated constants to reflect the current playback state of our MediaPlayer object, and a function to centralize changes of this state (which will, in turn, update the state of various objects, such as the text displayed on the buttons). The ChangeListener class also has one pure virtual function, which we need to add. Add the following code to the [UserMethods] section of MediaPlayer.h:

//[UserMethods] -- You can add your own custom methods...
enum TransportState {
    Stopped,
    Starting,
    Playing,
    Pausing,
    Paused,
    Stopping
};

void changeState (TransportState newState);
void changeListenerCallback (ChangeBroadcaster* source);
//[/UserMethods]

We also need some additional member variables to support our audio playback. Add these to the [UserVariables] section:

//[UserVariables] -- You can add your own custom variables...
AudioDeviceManager deviceManager;
AudioFormatManager formatManager;
ScopedPointer<AudioFormatReaderSource> readerSource;
AudioTransportSource transportSource;
AudioSourcePlayer sourcePlayer;
TransportState state;
//[/UserVariables]

The AudioDeviceManager object will manage our interface between the application and the audio hardware. The AudioFormatManager object will assist in creating an object that will read and decode the audio data from an audio file.
This object will be stored in the ScopedPointer<AudioFormatReaderSource> object. The AudioTransportSource object will control the playback of the audio file and perform any sampling rate conversion that may be required (if the sampling rate of the audio file differs from the audio hardware sampling rate). The AudioSourcePlayer object will stream audio from the AudioTransportSource object to the AudioDeviceManager object. The state variable will store one of our enumerated constants to reflect the current playback state of our MediaPlayer object.

Now add some code to the MediaPlayer.cpp file. In the [Constructor] section of the constructor, add the following two lines:

playButton->setEnabled (false);
stopButton->setEnabled (false);

This sets the Play and Stop buttons to be disabled (and grayed out) initially. Later, we enable the Play button once a valid file is loaded, and change the state of each button and the text displayed on the buttons, depending on whether the file is currently playing or not. In this [Constructor] section you should also initialize the AudioFormatManager as follows:

formatManager.registerBasicFormats();

This allows the AudioFormatManager object to detect different audio file formats and create appropriate file reader objects. We also need to connect the AudioSourcePlayer, AudioTransportSource, and AudioDeviceManager objects together, and initialize the AudioDeviceManager object. To do this, add the following lines to the [Constructor] section:

sourcePlayer.setSource (&transportSource);
deviceManager.addAudioCallback (&sourcePlayer);
deviceManager.initialise (0, 2, nullptr, true);

The first line connects the AudioTransportSource object to the AudioSourcePlayer object. The second line connects the AudioSourcePlayer object to the AudioDeviceManager object. The final line initializes the AudioDeviceManager object with:

- The number of required audio input channels (0 in this case).
- The number of required audio output channels (2 in this case, for stereo output).
- An optional "saved state" for the AudioDeviceManager object (nullptr initializes from scratch).
- Whether to open the default device if the saved state fails to open. As we are not using a saved state, this argument is irrelevant, but it is useful to set it to true in any case.

The final three lines to add to the [Constructor] section configure our MediaPlayer object as a listener to the AudioDeviceManager and AudioTransportSource objects, and set the current state to Stopped:

deviceManager.addChangeListener (this);
transportSource.addChangeListener (this);
state = Stopped;

In the buttonClicked() function we need to add some code to the various sections. In the [UserButtonCode_openButton] section, add:

//[UserButtonCode_openButton] -- add your button handler...
FileChooser chooser ("Select a Wave file to play...",
                     File::nonexistent,
                     "*.wav");

if (chooser.browseForFileToOpen()) {
    File file (chooser.getResult());
    readerSource = new AudioFormatReaderSource
        (formatManager.createReaderFor (file), true);
    transportSource.setSource (readerSource);
    playButton->setEnabled (true);
}
//[/UserButtonCode_openButton]

When the openButton button is clicked, this will create a FileChooser object that allows the user to select a file using the native interface for the platform. The types of files that may be selected are limited using the wildcard *.wav, allowing only files with the .wav file extension. If the user actually selects a file (rather than cancels the operation), the code can call the FileChooser::getResult() function to retrieve a reference to the file that was selected. This file is then passed to the AudioFormatManager object to create a file reader object, which in turn is passed to create an AudioFormatReaderSource object that will manage and own this file reader object.
Finally, the AudioFormatReaderSource object is connected to the AudioTransportSource object and the Play button is enabled. The handlers for the playButton and stopButton objects will make a call to our changeState() function depending on the current transport state. We will define the changeState() function in a moment, where its purpose should become clear. In the [UserButtonCode_playButton] section, add the following code:

//[UserButtonCode_playButton] -- add your button handler...
if ((Stopped == state) || (Paused == state))
    changeState (Starting);
else if (Playing == state)
    changeState (Pausing);
//[/UserButtonCode_playButton]

This changes the state to Starting if the current state is either Stopped or Paused, and changes the state to Pausing if the current state is Playing. This is in order to have a button with combined play and pause functionality. In the [UserButtonCode_stopButton] section, add the following code:

//[UserButtonCode_stopButton] -- add your button handler...
if (Paused == state)
    changeState (Stopped);
else
    changeState (Stopping);
//[/UserButtonCode_stopButton]

This sets the state to Stopped if the current state is Paused, and sets it to Stopping in other cases. Again, we will add the changeState() function in a moment, where these state changes update various objects. In the [UserButtonCode_settingsButton] section add the following code:

//[UserButtonCode_settingsButton] -- add your button handler...
bool showMidiInputOptions = false;
bool showMidiOutputSelector = false;
bool showChannelsAsStereoPairs = true;
bool hideAdvancedOptions = false;
AudioDeviceSelectorComponent settings (deviceManager,
                                       0, 0, 1, 2,
                                       showMidiInputOptions,
                                       showMidiOutputSelector,
                                       showChannelsAsStereoPairs,
                                       hideAdvancedOptions);
settings.setSize (500, 400);
DialogWindow::showModalDialog (String ("Audio Settings"),
                               &settings,
                               TopLevelWindow::getTopLevelWindow (0),
                               Colours::white,
                               true);
//[/UserButtonCode_settingsButton]

This presents a useful interface to configure the audio device settings.
We need to add the changeListenerCallback() function to respond to changes in the AudioDeviceManager and AudioTransportSource objects. Add the following to the [MiscUserCode] section of the MediaPlayer.cpp file:

//[MiscUserCode] You can add your own definitions...
void MediaPlayer::changeListenerCallback (ChangeBroadcaster* src)
{
    if (&deviceManager == src) {
        AudioDeviceManager::AudioDeviceSetup setup;
        deviceManager.getAudioDeviceSetup (setup);

        if (setup.outputChannels.isZero())
            sourcePlayer.setSource (nullptr);
        else
            sourcePlayer.setSource (&transportSource);
    } else if (&transportSource == src) {
        if (transportSource.isPlaying()) {
            changeState (Playing);
        } else {
            if ((Stopping == state) || (Playing == state))
                changeState (Stopped);
            else if (Pausing == state)
                changeState (Paused);
        }
    }
}
//[/MiscUserCode]

If our MediaPlayer object receives a message that the AudioDeviceManager object changed in some way, we need to check that this change wasn't to disable all of the audio output channels, by obtaining the setup information from the device manager. If the number of output channels is zero, we disconnect our AudioSourcePlayer object from the AudioTransportSource object (otherwise our application may crash) by setting the source to nullptr. If the number of output channels becomes nonzero again, we reconnect these objects. If our AudioTransportSource object has changed, this is likely to be a change in its playback state. It is important to note the difference between requesting the transport to start or stop, and this change actually taking place. This is why we created the enumerated constants for all the other states (including transitional states). Again we issue calls to the changeState() function depending on the current value of our state variable and the state of the AudioTransportSource object.
Finally, add the important changeState() function to the [MiscUserCode] section of the MediaPlayer.cpp file that handles all of these state changes:

void MediaPlayer::changeState (TransportState newState)
{
    if (state != newState) {
        state = newState;

        switch (state) {
            case Stopped:
                playButton->setButtonText ("Play");
                stopButton->setButtonText ("Stop");
                stopButton->setEnabled (false);
                transportSource.setPosition (0.0);
                break;
            case Starting:
                transportSource.start();
                break;
            case Playing:
                playButton->setButtonText ("Pause");
                stopButton->setButtonText ("Stop");
                stopButton->setEnabled (true);
                break;
            case Pausing:
                transportSource.stop();
                break;
            case Paused:
                playButton->setButtonText ("Resume");
                stopButton->setButtonText ("Return to Zero");
                break;
            case Stopping:
                transportSource.stop();
                break;
        }
    }
}

After checking that the newState value is different from the current value of the state variable, we update the state variable with the new value. Then, we perform the appropriate actions for this particular point in the cycle of state changes. These are summarized as follows:

In the Stopped state, the buttons are configured with the Play and Stop labels, the Stop button is disabled, and the transport is positioned to the start of the audio file.

In the Starting state, the AudioTransportSource object is told to start.

Once the AudioTransportSource object has actually started playing, the system will be in the Playing state. Here we update the playButton button to display the text Pause, ensure the stopButton button displays the text Stop, and we enable the Stop button.

If the Pause button is clicked, the state becomes Pausing, and the transport is told to stop. Once the transport has actually stopped, the state changes to Paused, the playButton button is updated to display the text Resume and the stopButton button is updated to display Return to Zero.

If the Stop button is clicked, the state is changed to Stopping, and the transport is told to stop.
Once the transport has actually stopped, the state changes to Stopped (as described in the first point). If the Return to Zero button is clicked, the state is changed directly to Stopped (again, as previously described). When the audio file reaches the end of the file, the state is also changed to Stopped. Build and run the application. You should be able to select a .wav audio file after clicking the Open... button, play, pause, resume, and stop the audio file using the respective buttons, and configure the audio device using the Audio Settings… button. The audio settings window allows you to select the input and output device, the sample rate, and the hardware buffer size. It also provides a Test button that plays a tone through the selected output device. Summary This article has covered a few of the techniques for dealing with audio files in JUCE. The article has given only an introduction to get you started; there are many other options and alternative approaches, which may suit different circumstances. The JUCE documentation will take you through each of these and point you to related classes and functions. Resources for Article: Further resources on this subject: Quick start – media files and XBMC [Article] Audio Playback [Article] Automating the Audio Parameters – How it Works [Article]

Packt
18 Oct 2013
24 min read

Testing with Groovy

(For more resources related to this topic, see here.) This article is completely devoted to testing in Groovy. Testing is probably the most important activity that allows us to produce better software and make our users happier. The Java space has countless tools and frameworks that can be used for testing our software. In this article, we will direct our focus on some of those frameworks and how they can be integrated with Groovy. We will discuss not only unit testing techniques, but also integration and load testing strategies. Starting from the king of all testing frameworks, JUnit and its seamless Groovy integration, we move on to explore how to test:

SOAP and REST web services
Code that interacts with databases
The web application interface using Selenium

The article also covers Behavior Driven Development (BDD) with Spock, advanced web service testing using soapUI, and load testing using JMeter.

Unit testing Java code with Groovy

One of the ways developers start looking into the Groovy language and actually using it is by writing unit tests. Testing Java code with Groovy makes the tests less verbose and makes it easier for developers to clearly express the intent of each test method. Thanks to the Java/Groovy interoperability, it is possible to use any available testing or mocking framework from Groovy, but it's just simpler to use the integrated JUnit-based test framework that comes with Groovy. In this recipe, we are going to look at how to test Java code with Groovy.

Getting ready

This recipe requires a new Groovy project that we will use again in other recipes of this article. The project is built using Gradle and contains all the test cases required by each recipe. Let's create a new folder called groovy-test and add a build.gradle file to it.
The build file will be very simple:

apply plugin: 'groovy'
apply plugin: 'java'

repositories {
    mavenCentral()
    maven {
        url 'https://oss.sonatype.org' +
            '/content/repositories/snapshots'
    }
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.1.6'
    testCompile 'junit:junit:4.+'
}

The standard source folder structure has to be created for the project. You can use the following commands to achieve this:

mkdir -p src/main/groovy
mkdir -p src/test/groovy
mkdir -p src/main/java

Verify that the project builds without errors by typing the following command in your shell:

gradle clean build

How to do it...

To show you how to test Java code from Groovy, we need to have some Java code first! So, this recipe's first step is to create a simple Java class called StringUtil, which we will test in Groovy. The class is fairly trivial, and it exposes only one method that concatenates String passed in as a List:

package org.groovy.cookbook;

import java.util.List;

public class StringUtil {

    public String concat(List<String> strings, String separator) {
        StringBuilder sb = new StringBuilder();
        String sep = "";
        for (String s : strings) {
            sb.append(sep).append(s);
            sep = separator;
        }
        return sb.toString();
    }
}

Note that the class has a specific package, so don't forget to create the appropriate folder structure for the package when placing the class in the src/main/java folder. Run the build again to be sure that the code compiles. Now, add a new test case in the src/test/groovy folder:

package org.groovy.cookbook.javatesting

import org.groovy.cookbook.StringUtil

class JavaTest extends GroovyTestCase {

    def stringUtil = new StringUtil()

    void testConcatenation() {
        def result = stringUtil.concat(['Luke', 'John'], '-')
        assertToString('Luke-John', result)
    }

    void testConcatenationWithEmptyList() {
        def result = stringUtil.concat([], ',')
        assertEquals('', result)
    }
}

Again, pay attention to the package when creating the test case class and create the package folder structure.
Run the test by executing the following Gradle command from your shell:

gradle clean build

Gradle should complete successfully with the BUILD SUCCESSFUL message. Now, add a new test method and run the build again:

void testConcatenationWithNullShouldReturnNull() {
    def result = stringUtil.concat(null, ',')
    assertEquals('', result)
}

This time the build should fail:

3 tests completed, 1 failed
:test FAILED
FAILURE: Build failed with an exception.

Fix the test by adding a null check to the Java code as the first statement of the concat method:

if (strings == null) {
    return "";
}

Run the build again to verify that it is now successful.

How it works...

The test case shown at step 3 requires some comments. The class extends GroovyTestCase, a base test case class that facilitates the writing of unit tests by adding several helper methods to the classes extending it. When a test case extends GroovyTestCase, each test method name must start with test and the return type must be void. It is possible to use the JUnit 4.x @Test annotation but, in that case, you don't have to extend GroovyTestCase. The standard JUnit assertions (such as assertEquals and assertNull) are directly available in the test case (without explicit import), plus some additional assertion methods are added by the super class. The test case at step 3 uses assertToString to verify that a String matches the expected result. There are other assertions added by GroovyTestCase, such as assertArrayEquals, to check that two arrays contain the same values, or assertContains, to assert that an array contains a given element.

There's more...

The GroovyTestCase class also offers an elegant method to test for expected exceptions. Let's add the following rule to the concat method:

if (separator.length() != 1) {
    throw new IllegalArgumentException(
        "The separator must be one char long");
}

Place the separator length check just after the null check for the List.
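Assembled, with both guards in place, the concat method ends up looking roughly like this. This is a plain-Java sketch you can compile and run on its own; the class name SafeConcat is illustrative and not part of the recipe's project:

```java
import java.util.Collections;
import java.util.List;

public class SafeConcat {

    public static String concat(List<String> strings, String separator) {
        // Guard that makes testConcatenationWithNullShouldReturnNull pass
        if (strings == null) {
            return "";
        }
        // Guard that the shouldFail tests rely on
        if (separator.length() != 1) {
            throw new IllegalArgumentException(
                "The separator must be one char long");
        }
        StringBuilder sb = new StringBuilder();
        String sep = "";
        for (String s : strings) {
            sb.append(sep).append(s);
            sep = separator;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat(List.of("Luke", "John"), "-")); // prints Luke-John
        System.out.println(concat(Collections.emptyList(), ","));  // prints an empty line
        System.out.println(concat(null, ","));                     // prints an empty line
    }
}
```

Because the guards run before the concatenation loop, a null list short-circuits to an empty string and never reaches the separator check.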
Add the following new test method to the test case:

void testVerifyExceptionOnWrongSeparator() {
    shouldFail IllegalArgumentException, {
        stringUtil.concat(['a', 'b'], ',,')
    }
    shouldFail IllegalArgumentException, {
        stringUtil.concat(['c', 'd'], '')
    }
}

The shouldFail method takes a closure that is executed in the context of a try-catch block. We can also specify the expected exception in the shouldFail method. The shouldFail method makes testing for exceptions very elegant and simple to read.

See also

http://junit.org/
http://groovy.codehaus.org/api/groovy/util/GroovyTestCase.html

Testing SOAP web services

This recipe shows you how to use the JUnit 4 annotations instead of the JUnit 3 API offered by GroovyTestCase.

Getting ready

For this recipe, we are going to use a publicly available web service, the US holiday date web service hosted at http://www.holidaywebservice.com. The WSDL of the service can be found on the /Holidays/US/Dates/USHolidayDates.asmx?WSDL path. We have already encountered this service in the Issuing a SOAP request and parsing a response recipe in Article 8, Working with Web Services in Groovy. Each service operation simply returns the date of a given US holiday such as Easter or Christmas.

How to do it...

We start from the Gradle build that we created in the Unit testing Java code with Groovy recipe. Add the following dependency to the dependencies section of the build.gradle file:

testCompile 'com.github.groovy-wslite:groovy-wslite:0.8.0'

Let's create a new unit test for verifying one of the web service operations. As usual, the test case is created in the src/test/groovy/org/groovy/cookbook folder:

package org.groovy.cookbook.soap

import static org.junit.Assert.*
import org.junit.Test
import wslite.soap.*

class SoapTest {
    ...
}

Add a new test method to the body of the class:

@Test
void testMLKDay() {
    def baseUrl = 'http://www.holidaywebservice.com'
    def service = '/Holidays/US/Dates/USHolidayDates.asmx?WSDL'
    def client = new SOAPClient("${baseUrl}${service}")
    def baseNS = 'http://www.27seconds.com/Holidays/US/Dates/'
    def action = "${baseNS}GetMartinLutherKingDay"
    def response = client.send(SOAPAction: action) {
        body {
            GetMartinLutherKingDay(xmlns: baseNS) {
                year(2013)
            }
        }
    }
    assert response.GetMartinLutherKingDayResponse
                   .GetMartinLutherKingDayResult
                   .text()
                   .startsWith('2013-01-21')
}

Run the test with the following command:

gradle -Dtest.single=SoapTest clean test

How it works...

The test code creates a new SOAPClient with the URI of the target web service. The request is created using Groovy's MarkupBuilder. The body closure (and, if needed, also the header closure) is passed to the MarkupBuilder for the SOAP message creation. The assertion code gets the result from the response, which is automatically parsed by XMLSlurper, allowing easy access to elements of the response such as the header or the body elements. In the previous test, we simply check that the returned Martin Luther King day matches the expected one for the year 2013.

There's more...

If you require more control over the content of the SOAP request, the SOAPClient also supports sending the SOAP envelope as a String, such as in this example:

def response = client.send (
    """<?xml version='1.0' encoding='UTF-8'?>
    <soapenv:Envelope
        xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/'
        xmlns:dat='http://www.27seconds.com/Holidays/US/Dates/'>
      <soapenv:Header/>
      <soapenv:Body>
        <dat:GetMartinLutherKingDay>
          <dat:year>2013</dat:year>
        </dat:GetMartinLutherKingDay>
      </soapenv:Body>
    </soapenv:Envelope>
    """
)

Replace the call to the send method in step 3 with the one above and run your test again.

See also

https://github.com/jwagenleitner/groovy-wslite
http://www.holidaywebservice.com

Testing RESTful services

This recipe is very similar to the previous recipe, Testing SOAP web services, except that it shows how to test a RESTful service using Groovy and JUnit.

Getting ready

For this recipe, we are going to use a test framework aptly named Rest-Assured.
This framework is a simple DSL for testing and validating REST services returning either JSON or XML. Before we delve into the recipe, we need to start a simple REST service for testing purposes. We are going to use the Ratpack framework. The test REST service, which we will use, exposes three APIs to fetch, add, and delete books from a database using JSON as lingua franca. For the sake of brevity, the code for the setup of this recipe is available in the rest-test folder of the companion code for this article. The code contains the Ratpack server, the domain objects, the Gradle build, and the actual test case that we are going to analyze in the next section.

How to do it...

The test case takes care of starting the Ratpack server and executing the REST requests. Here is the RestTest class, located in the src/test/groovy/org/groovy/cookbook/rest folder:

package org.groovy.cookbook.rest

import static com.jayway.restassured.RestAssured.*
import static com.jayway.restassured.matcher.RestAssuredMatchers.*
import static org.hamcrest.Matchers.*
import static org.junit.Assert.*

import groovy.json.JsonBuilder
import org.groovy.cookbook.server.*
import org.junit.AfterClass
import org.junit.BeforeClass
import org.junit.Test

class RestTest {

    static server
    final static HOST = 'http://localhost:5050'

    @BeforeClass
    static void setUp() {
        server = App.init()
        server.startAndWait()
    }

    @AfterClass
    static void tearDown() {
        if (server.isRunning()) {
            server.stop()
        }
    }

    @Test
    void testGetBooks() {
        expect().
            body('author', hasItems('Ian Bogost', 'Nate Silver')).
        when().get("${HOST}/api/books")
    }

    @Test
    void testGetBook() {
        expect().
            body('author', is('Steven Levy')).
        when().get("${HOST}/api/books/5")
    }

    @Test
    void testPostBook() {
        def book = new Book()
        book.author = 'Haruki Murakami'
        book.date = '2012-05-14'
        book.title = 'Kafka on the shore'
        JsonBuilder jb = new JsonBuilder()
        jb.content = book
        given().
            content(jb.toString()).
        expect().
            body('id', is(6)).
        when().post("${HOST}/api/books/new")
    }

    @Test
    void testDeleteBook() {
        expect().statusCode(200).
        when().delete("${HOST}/api/books/1")
        expect().
            body('id', not(hasValue(1))).
        when().get("${HOST}/api/books")
    }
}

Build the code and execute the test from the command line by typing:

gradle clean test

How it works...

The JUnit test has a @BeforeClass annotated method, executed at the beginning of the unit test, that starts the Ratpack server and the associated REST services. The @AfterClass annotated method, on the contrary, shuts down the server when the test is over. The unit test has four test methods. The first one, testGetBooks, executes a GET request against the server and retrieves all the books. The rather readable DSL offered by the Rest-Assured framework should be easy to follow. The expect method starts building the response expectation returned from the get method. The actual assert of the test is implemented via a Hamcrest matcher (hence the static org.hamcrest.Matchers.* import in the test). The test is asserting that the body of the response contains two books whose authors are named Ian Bogost and Nate Silver. The get method hits the URL of the embedded Ratpack server, started at the beginning of the test. The testGetBook method is rather similar to the previous one, except that it uses the is matcher to assert the presence of an author on the returned JSON message. The testPostBook method tests that the creation of a new book is successful. First, a new book object is created and transformed into a JSON object using JsonBuilder. Instead of the expect method, we use the given method to prepare the POST request. The given method returns a RequestSpecification to which we assign the newly created book and finally invoke the post method to execute the operation on the server. As the new book gets the next available identifier in our book database, the test asserts that its id is 6.
The last test method (testDeleteBook) verifies that a book can be deleted. Again we use the expect method to prepare the response, but this time we verify that the returned HTTP status code is 200 upon deleting the book with the id 1. The same test also double-checks that fetching the full list of books no longer returns a book with an id equal to 1.

See also

https://code.google.com/p/rest-assured/
https://code.google.com/p/hamcrest/
https://github.com/ratpack/ratpack

Writing functional tests for web applications

If you are developing web applications, it is of utmost importance that you thoroughly test them before allowing user access. Web GUI testing can require long hours of very expensive human resources to repeatedly exercise the application against a varied list of input conditions. Selenium, a browser-based testing and automation framework, aims to solve these problems for software developers, and it has become the de facto standard for web interface integration and functional testing. Selenium is not just a single tool but a suite of software components, each catering to different testing needs of an organization. It has four main parts:

Selenium Integrated Development Environment (IDE): It is a Firefox add-on that you can only use for creating relatively simple test cases and test suites.

Selenium Remote Control (RC): It is also known as Selenium 1, and is the first Selenium tool that allowed users to use programming languages in creating complex tests.

WebDriver: It is the newest Selenium implementation that allows your test scripts to communicate directly with the browser, thereby controlling it from the OS level.

Selenium Grid: It is a tool that is used with Selenium RC to execute parallel tests across different browsers and operating systems.

Since 2008, Selenium RC and WebDriver have been merged into a single framework to form Selenium 2. Selenium 1 refers to Selenium RC.
This recipe will show you how to write a Selenium 2 based test using HtmlUnitDriver. HtmlUnitDriver is currently the fastest and most lightweight implementation of WebDriver. As the name suggests, it is based on HtmlUnit, a relatively old framework for testing web applications. The main disadvantage of using this driver instead of a WebDriver implementation that "drives" a real browser is its JavaScript support. None of the popular browsers use the JavaScript engine used by HtmlUnit (Rhino), so if you test JavaScript using HtmlUnit, the results may diverge considerably from those browsers. Still, WebDriver and HtmlUnit can be used for fast-paced testing against a web interface, leaving more JavaScript-intensive tests to other, longer-running WebDriver implementations that use specific browsers.

Getting ready

Due to the relative complexity of the setup required to demonstrate the steps of this recipe, it is recommended that the reader use the code that comes bundled with this recipe, located in the selenium-test folder of the code directory for this article. The source code, as with other recipes in this article, is built using Gradle and has a standard structure containing application code and test code. The web application under test is very simple. It is composed of two pages: a welcome page that looks similar to the following screenshot:

And a single field test form page:

The Ratpack framework is utilized to run the fictional web application and serve the HTML pages along with some JavaScript and CSS.

How to do it...

The following steps will describe the salient points of Selenium testing with Groovy. Let's open the build.gradle file.
We are interested in the dependencies required to execute the tests:

testCompile group: 'org.seleniumhq.selenium',
    name: 'selenium-htmlunit-driver', version: '2.32.0'
testCompile group: 'org.seleniumhq.selenium',
    name: 'selenium-support', version: '2.9.0'

Let's open the test case, SeleniumTest.groovy, located in the src/test/groovy/org/groovy/cookbook/selenium folder:

package org.groovy.cookbook.selenium

import static org.junit.Assert.*

import org.groovy.cookbook.server.*
import org.junit.AfterClass
import org.junit.BeforeClass
import org.junit.Test
import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import org.openqa.selenium.WebElement
import org.openqa.selenium.htmlunit.HtmlUnitDriver
import org.openqa.selenium.support.ui.ExpectedConditions
import org.openqa.selenium.support.ui.WebDriverWait

import com.google.common.base.Function

class SeleniumTest {

    static server
    static final HOST = 'http://localhost:5050'
    static HtmlUnitDriver driver

    @BeforeClass
    static void setUp() {
        server = App.init()
        server.startAndWait()
        driver = new HtmlUnitDriver(true)
    }

    @AfterClass
    static void tearDown() {
        if (server.isRunning()) {
            server.stop()
        }
    }

    @Test
    void testWelcomePage() {
        driver.get(HOST)
        assertEquals('welcome', driver.title)
    }

    @Test
    void testFormPost() {
        driver.get("${HOST}/form")
        assertEquals('test form', driver.title)
        WebElement input = driver.findElement(By.name('username'))
        input.sendKeys('test')
        input.submit()
        WebDriverWait wait = new WebDriverWait(driver, 4)
        wait.until ExpectedConditions.
            presenceOfElementLocated(By.className('hello'))
        assertEquals('oh hello,test', driver.title)
    }
}

How it works...

The test case initializes the Ratpack server and the HtmlUnit driver, passing true to the HtmlUnitDriver constructor. The boolean parameter indicates whether the driver should support JavaScript. The first test, testWelcomePage, simply verifies that the title of the website's welcome page is as expected.
The get method executes an HTTP GET request against the URL specified in the method, the Ratpack server in our test. The second test, testFormPost, involves the DOM manipulation of a form, its submission, and waiting for an answer from the server. The Selenium API should be fairly readable. For a start, the test checks that the page containing the form has the expected title. Then the element named username (a form field) is selected, populated, and finally submitted. This is how the HTML looks for the form field:

<input type="text" name="username" placeholder="Your username">

The test uses the findElement method to select the input field. The method expects a By object, which is essentially a mechanism to locate elements within a document. Elements can be identified by name, id, text link, tag name, CSS selector, or XPath expression. The form is submitted via AJAX. Here is part of the JavaScript activated by the form submission:

complete: function(xhr, status) {
    if (status === 'error' || !xhr.responseText) {
        alert('error')
    } else {
        document.title = xhr.responseText
        jQuery(e.target).
            replaceWith('<p class="hello">' + xhr.responseText + '</p>')
    }
}

After the form submission, the DOM of the page is manipulated to change the page title of the form page and replace the form DOM element with a message wrapped in a paragraph element. To verify that the DOM changes have been applied, the test uses the WebDriverWait class to wait until the DOM is actually modified and the element with the class hello appears on the page. The WebDriverWait is instantiated with a four second timeout. This recipe only scratches the surface of the Selenium 2 framework's capabilities, but it should get you started implementing your own integration and functional tests.
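Under the hood, WebDriverWait repeatedly evaluates a condition until it holds or a timeout elapses. Stripped of Selenium, the idea reduces to a small polling loop. The following plain-Java sketch (the names are illustrative, not the Selenium API) shows the mechanism the test relies on when waiting for the hello element:

```java
import java.util.function.BooleanSupplier;

public class WaitUntil {

    /** Polls the condition every 50 ms until it holds or the timeout elapses. */
    static boolean waitUntil(BooleanSupplier condition, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;            // condition met, like wait.until(...)
            }
            try {
                Thread.sleep(50);       // poll interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;           // treat interruption as a timeout
            }
        }
        return false;                   // timed out
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // The condition becomes true after ~200 ms, well inside the 4 s budget,
        // much like the hello element appearing after the AJAX round trip
        boolean ok = waitUntil(
                () -> System.currentTimeMillis() - start > 200, 4000);
        System.out.println(ok ? "element appeared" : "timed out");
    }
}
```

The real WebDriverWait adds pluggable conditions (ExpectedConditions), configurable polling intervals, and throws a TimeoutException instead of returning false, but the control flow is the same.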
See also

http://docs.seleniumhq.org/
http://htmlunit.sourceforge.net/

Writing behavior-driven tests with Groovy

Behavior Driven Development, or simply BDD, is a methodology in which QA, business analysts, and marketing people can get involved in defining the requirements of a process in a common language. It can be considered an extension of Test Driven Development, although it is not a replacement. The initial motivation for BDD stems from the perplexity of business people (analysts, domain experts, and so on) when dealing with "tests", as these seem too technical. The employment of the word "behaviors" in the conversation is a way to engage the whole team. BDD states that software tests should be specified in terms of the desired behavior of the unit. The behavior is expressed in a semi-formal format, borrowed from user story specifications, a format popularized by agile methodologies. For a deeper insight into the BDD rationale, it is highly recommended to read the original paper from Dan North available at http://dannorth.net/introducing-bdd/. Spock is one of the most widely used frameworks in the Groovy and Java ecosystem that allows the creation of BDD tests in a very intuitive language and facilitates some common tasks such as mocking and extensibility. What makes it stand out from the crowd is its beautiful and highly expressive specification language. Thanks to its JUnit runner, Spock is compatible with most IDEs, build tools, and continuous integration servers. In this recipe, we are going to look at how to implement both a unit test and a web integration test using Spock.

Getting ready

This recipe has a slightly different setup than most of the recipes in this book, as it resorts to an existing web application whose source code is freely available on the Internet: the Spring Petclinic application, provided by SpringSource as a demonstration of the latest Spring framework features and configurations.
The web application works as a pet hospital and most of the interactions are typical CRUD operations against certain entities (veterinarians, pets, pet owners). The Petclinic application is available in the groovy-test/spock/web folder of the companion code for this article. All of Petclinic's original tests have been converted to Spock. Additionally, we created a simple integration test that uses Spock and Selenium to showcase the possibilities offered by the integration of the two frameworks. As usual, the recipe uses Gradle to build the reference web application and the tests. The Petclinic web application can be started by launching the following Gradle command in your shell from the groovy-test/spock/web folder:

gradle tomcatRunWar

If the application starts without errors, the shell should eventually display the following message:

The Server is running at http://localhost:8080/petclinic

Take some time to familiarize yourself with the Petclinic application by directing your browser to http://localhost:8080/petclinic and browsing around the website.

How to do it...
The following steps will describe the key concepts for writing your own behavior-driven unit and integration tests. Let's start by taking a look at the dependencies required to implement a Spock-based test suite:

testCompile 'org.spockframework:spock-core:0.7-groovy-2.0'
testCompile group: 'org.seleniumhq.selenium',
    name: 'selenium-java', version: '2.16.1'
testCompile group: 'junit', name: 'junit', version: '4.10'
testCompile 'org.hamcrest:hamcrest-core:1.2'
testRuntime 'cglib:cglib-nodep:2.2'
testRuntime 'org.objenesis:objenesis:1.2'

This is what a BDD unit test for the application's business logic looks like:

package org.springframework.samples.petclinic.model

import spock.lang.*

class OwnerTest extends Specification {

    def "test pet and owner"() {
        given:
        def p = new Pet()
        def o = new Owner()

        when:
        p.setName("fido")

        then:
        o.getPet("fido") == null
        o.getPet("Fido") == null

        when:
        o.addPet(p)

        then:
        o.getPet("fido").equals(p)
        o.getPet("Fido").equals(p)
    }
}

The test is named OwnerTest.groovy, and it is available in the spock/web/src/test folder of the main groovy-test project that comes with this article.
The third test in this recipe mixes Spock and Selenium, the web testing framework already discussed in the Writing functional tests for web applications recipe:

package org.cookbook.groovy.spock

import static java.util.concurrent.TimeUnit.SECONDS

import org.openqa.selenium.By
import org.openqa.selenium.WebElement
import org.openqa.selenium.htmlunit.HtmlUnitDriver

import spock.lang.Shared
import spock.lang.Specification

class HomeSpecification extends Specification {

    static final HOME = 'http://localhost:9966/petclinic'

    @Shared
    def driver = new HtmlUnitDriver(true)

    def setup() {
        driver.manage().timeouts().implicitlyWait 10, SECONDS
    }

    def 'user enters home page'() {
        when:
        driver.get(HOME)

        then:
        driver.title == 'PetClinic :: ' +
            'a Spring Framework demonstration'
    }

    def 'user clicks on menus'() {
        when:
        driver.get(HOME)
        def vets = driver.findElement(By.linkText('Veterinarians'))
        vets.click()

        then:
        driver.currentUrl == 'http://localhost:9966/petclinic/vets.html'
    }
}

The test above is available in the spock/specs/src/test folder of the accompanying project.

How it works...

The first step of this recipe lays out the dependencies required to set up a Spock-based BDD test suite. Spock requires Java 5 or higher, and it's pretty picky with regard to the matching Groovy version to use. In the case of this recipe, as we are using Groovy 2.x, we set the dependency to the 0.7-groovy-2.0 version of the Spock framework. The full build.gradle file is located in the spock/specs folder of the recipe's code. The first test case demonstrated in the recipe is a direct conversion of a JUnit test written by Spring for the Petclinic application.
This is the original test written in Java:

public class OwnerTests {
    @Test
    public void testHasPet() {
        Owner owner = new Owner();
        Pet fido = new Pet();
        fido.setName("Fido");
        assertNull(owner.getPet("Fido"));
        assertNull(owner.getPet("fido"));
        owner.addPet(fido);
        assertEquals(fido, owner.getPet("Fido"));
        assertEquals(fido, owner.getPet("fido"));
    }
}

All we need to import in the Spock test is spock.lang.*, which contains the most important types for writing specifications. A Spock test extends spock.lang.Specification. The name of a specification normally relates to the system or system operation under test. In the Groovy test at step 2, we reused the original Java test name, but it would be better to rename it to something more meaningful for a specification, such as OwnerPetSpec. The Specification class exposes a number of practical methods for implementing specifications. Additionally, it tells JUnit to run the specification with Sputnik, Spock's own JUnit runner. Thanks to Sputnik, Spock specifications can be executed by all IDEs and build tools. Following the class definition, we have the feature method:

def 'test pet and owner'() { ... }

Feature methods lie at the core of a specification. They contain the description of the features (properties, aspects) that define the system under specification. Feature methods are conventionally named with String literals: it's a good idea to choose meaningful names for them. In the test above, we are verifying that, given a pet and an owner, the getPet method of the owner instance will return null until the pet is assigned to the owner, and that getPet will accept both "fido" and "Fido" when verifying ownership.
Conceptually, a feature method consists of four phases:

Set up the feature's fixture
Provide an input to the system under specification (stimulus)
Describe the response expected from the system
Clean up

Whereas the first and last phases are optional, the stimulus and response phases are always present and may occur more than once. Each phase is defined by blocks: blocks are delimited by a label and extend to the beginning of the next block or the end of the method. In the test at step 2, we can see three types of blocks:

In the given block, data gets initialized.
The when and then blocks always occur together. They describe a stimulus and the expected response. Whereas when blocks may contain arbitrary code, then blocks are restricted to conditions, exception checking, interactions, and variable definitions.

The first test case has two when/then pairs. A pet is assigned the name "fido", and the test verifies that calling getPet on an owner object only returns something if the pet is actually "owned" by the owner. The second test is slightly more complex because it employs the Selenium framework to execute a web integration test with a BDD flavor. The test is located in the groovy-test/spock/specs/src/test folder. You can launch it by typing gradle test from the groovy-test/spock/specs folder. The test takes care of starting the web container and running the application under test, Petclinic. The test starts by defining a shared Selenium driver marked with the @Shared annotation, which is visible to all the feature methods. The first feature method simply opens the Petclinic main page and checks that the title matches the specification. The second feature method uses the Selenium API to select a link, click on it, and verify that the link brings the user to the right page. The verification is performed against the currentUrl of the browser, which is expected to match the URL of the link we clicked on.
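For readers who want to see the same given/when/then phases outside Groovy, here is a hypothetical Python analogue of the feature method above. Pet and Owner are toy stand-ins for the Petclinic classes (not the real ones), and the phases are marked with comments instead of Spock blocks:

```python
# Toy stand-ins for the Petclinic Pet and Owner classes (illustrative only).
class Pet:
    def __init__(self):
        self.name = None

    def set_name(self, name):
        self.name = name


class Owner:
    def __init__(self):
        self._pets = {}

    def add_pet(self, pet):
        # store pets under a lowercased key so lookups are case-insensitive
        self._pets[pet.name.lower()] = pet

    def get_pet(self, name):
        return self._pets.get(name.lower())


def test_pet_and_owner():
    # given: a pet and an owner
    p, o = Pet(), Owner()
    # when: the pet is named but not yet assigned to the owner
    p.set_name("fido")
    # then: lookups return None regardless of case
    assert o.get_pet("fido") is None
    assert o.get_pet("Fido") is None
    # when: the pet is assigned to the owner
    o.add_pet(p)
    # then: both spellings resolve to the same pet
    assert o.get_pet("fido") is p
    assert o.get_pet("Fido") is p


test_pet_and_owner()
```

As in the Spock version, the stimulus/response pairs occur twice, and the fixture setup maps onto the given phase.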
See also

http://dannorth.net/introducing-bdd/
http://en.wikipedia.org/wiki/Behavior-driven_development
https://code.google.com/p/spock/
https://github.com/spring-projects/spring-petclinic/
Parse Objects and Queries

Packt
18 Oct 2013
In this article, we will learn how to work with Parse objects, along with writing queries to set and get data from Parse. Every application has a different and specific Application ID associated with the Client Key, which remains the same for all the applications of the same user. Parse is based on object-oriented principles: all operations on Parse are done in the form of objects. Parse saves your data in the form of the objects you send, and helps you fetch the data in the same format again. In this article, you will learn about objects and the operations that can be performed on Parse objects.

Parse objects

All the data in Parse is saved in the form of PFObject. When you fetch any data from Parse by firing a query, the result will be in the form of PFObject. The detailed concept of PFObject is explained in the following section.

PFObject

Data stored on Parse is in the form of objects, and it is built around PFObject. A PFObject can be defined as a set of key-value pairs (in dictionary format) of JSON data. Parse data is schemaless, which means that you don't need to specify ahead of time which keys exist on each PFObject. The Parse backend will take care of storing your data simply as a set of whatever key-value pairs you want. Let's say you are tracking the visited count of a user, along with the username and user ID, in your application. A single PFObject could contain the following data:

visitedCount: 1122, userName: "Jack Samuel", userId: 1232333332

Parse accepts only strings as keys. Values can be strings, numbers, Booleans, or even arrays and dictionaries; anything that can be JSON encoded. The class name of a PFObject is used to distinguish different sorts of data; let's say you call the object that tracks visits visitedCounts. Parse recommends that you name your classes NameYourClassLikeThis and your keys nameYourKeysLikeThis, just to improve the readability of the code.
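Since a PFObject is essentially a dictionary of JSON-encodable values keyed by strings, its two constraints (string keys, JSON-encodable values) can be sketched in plain Python. This is an illustration only, not the Parse SDK:

```python
import json

# Hypothetical sketch of a PFObject-like record: string keys and
# JSON-encodable values (numbers, strings, booleans, lists, dicts).
visited_counts = {
    "visitedCount": 1122,
    "userName": "Jack Samuel",
    "userId": 1232333332,
}

# Round-tripping through JSON demonstrates that the record is
# JSON-encodable, which is exactly what Parse requires of values.
encoded = json.dumps(visited_counts)
decoded = json.loads(encoded)
assert decoded == visited_counts

# JSON object keys are always strings, matching Parse's key constraint.
assert all(isinstance(key, str) for key in decoded)
```

Anything that fails this round trip (for example, a Python set as a value) would likewise be rejected as a PFObject value.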
As you have seen in the previous example, we have used visitedCount to represent the visited count key.

Operations on Parse objects

You can perform save, update, and delete operations on Parse objects. The following is a detailed explanation of the operations that can be performed on Parse objects.

Saving objects

To save your User table on the Parse Cloud with additional fields, you need to follow a coding convention similar to the NSMutableDictionary method. After updating the data, you have to call the saveInBackground method to save it on the Parse Cloud. Here is an example that shows how to save additional data on the Parse Cloud (note that the current user is obtained from PFUser, not PFObject):

PFObject *userObject = [PFUser currentUser];
[userObject setObject:[NSNumber numberWithInt:1122] forKey:@"visitedCount"];
[userObject setObject:@"Jack Samuel" forKey:@"userName"];
[userObject setObject:@"1232333332" forKey:@"userId"];
[userObject saveInBackground];

Just after executing the preceding piece of code, your data is saved on the Parse Cloud. You can check your data in the Data Browser of your application on Parse. It should be something similar to the following:

objectId: "xWMyZ4YEGZ", visitedCount: 1122, userName: "Jack Samuel", userId: "1232333332", createdAt: "2011-06-10T18:33:42Z", updatedAt: "2011-06-10T18:33:42Z"

There are two things to note here:

You don't have to configure or set up a new class called User before running your code. Parse will automatically create the class when it first encounters it.

There are also a few fields you don't need to specify; those are provided as a convenience: objectId is a unique identifier for each saved object, while createdAt and updatedAt represent the times when each object was created and last modified in the Parse Cloud. Each of these fields is filled in by Parse, so they don't exist on a PFObject until a save operation has completed.
You can provide additional logic after the success or failure of the callback operation using the saveInBackgroundWithBlock: or saveInBackgroundWithTarget:selector: methods provided by Parse:

[userObject saveInBackgroundWithBlock:^(BOOL succeeded, NSError *error) {
    if (succeeded)
        NSLog(@"Success");
    else
        NSLog(@"Error %@", error);
}];

Fetching objects

Fetching saved data from the Parse Cloud is even easier than saving it. You can fetch the complete object from its objectId using PFQuery. Methods to fetch data from the cloud are asynchronous; you can implement this using either the block-based or the callback-based methods provided by Parse:

PFQuery *query = [PFQuery queryWithClassName:@"GameScore"]; // 1
[query getObjectInBackgroundWithId:@"xWMyZ4YEGZ" block:^(PFObject *gameScore, NSError *error) { // 2
    // Do something with the returned PFObject in the gameScore variable.
    int score = [[gameScore objectForKey:@"score"] intValue];
    NSString *playerName = [gameScore objectForKey:@"playerName"]; // 3
    BOOL cheatMode = [[gameScore objectForKey:@"cheatMode"] boolValue];
    NSLog(@"%@", gameScore);
}];
// The InBackground methods are asynchronous, so the code written after this
// will be executed immediately. Any code that depends on the query result
// should be moved inside the completion block above.

Let's analyze each line:

Line 1: It creates a query object pointing to the class name given in the argument.

Line 2: It calls an asynchronous method on the query object created in line 1 to download the complete object for the objectId provided as an argument. As we are using the block-based method, we can provide code inside the block that will execute on success or failure.

Line 3: It reads data from the PFObject that we got in response to the query.
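The callback-based fetch can be pictured with a toy Python analogue: an in-memory store keyed by objectId whose fetch function invokes a completion callback with either the object or an error, mirroring the (gameScore, error) block parameters above. This is an illustrative sketch, not the Parse SDK:

```python
# Hypothetical in-memory "cloud" keyed by objectId (illustrative only).
cloud = {
    "xWMyZ4YEGZ": {"score": 1337, "playerName": "Sean Plott", "cheatMode": False},
}


def get_object_in_background(object_id, callback):
    # Invoke the callback with (object, error), mirroring the
    # (PFObject *gameScore, NSError *error) block signature.
    obj = cloud.get(object_id)
    error = None if obj is not None else "object not found"
    callback(obj, error)


results = {}


def on_fetched(game_score, error):
    # Code that depends on the query result lives inside the callback,
    # just as it must live inside the completion block in the SDK.
    if error is None:
        results["score"] = game_score["score"]
        results["playerName"] = game_score["playerName"]


get_object_in_background("xWMyZ4YEGZ", on_fetched)
assert results == {"score": 1337, "playerName": "Sean Plott"}
```

A lookup with an unknown objectId would invoke the same callback with a non-None error instead, which is why the error must always be checked first.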
Parse provides some common values of all Parse objects as properties:

NSString *objectId = gameScore.objectId;
NSDate *updatedAt = gameScore.updatedAt;
NSDate *createdAt = gameScore.createdAt;

To refresh the current Parse object, type:

[myObject refresh];

This method can be called on any Parse object, and it is useful when you want to refresh the data of the object. Let's say you want to re-authenticate a user; you can call the refresh method on the user object to refresh it.

Saving objects offline

Parse provides functions to save your data when the user is offline. When the user is not connected to the Internet, the data will be saved locally in the objects, and as soon as the user is connected to the Internet, the data will be saved automatically on the Parse Cloud. If your application is forcefully closed before the connection is established, Parse will try again to save the object the next time the application is opened. For such operations, Parse provides the saveEventually method, so that you will not lose any data even when the user is not connected to the Internet. All saveEventually calls are executed in the order the requests were made. The following code demonstrates the saveEventually call:

// Create the object.
PFObject *gameScore = [PFObject objectWithClassName:@"GameScore"];
[gameScore setObject:[NSNumber numberWithInt:1337] forKey:@"score"];
[gameScore setObject:@"Sean Plott" forKey:@"playerName"];
[gameScore setObject:[NSNumber numberWithBool:NO] forKey:@"cheatMode"];
[gameScore saveEventually];

Summary

In this article, we explored Parse objects and the way to query the data available on Parse. We started by exploring Parse objects and the ways to save these objects on the cloud. Finally, we learned about the queries that help us fetch the saved data on Parse.
Events and Signals

Packt
16 Oct 2013
(For more resources related to this topic, see here.) Event management An event in Qt is an object inherited from the abstract QEvent class which is a notification of something significant that has happened. Events become more useful in creating custom widgets on our own. An event can happen either within an application or as a result of an outside activity that the application needs to know about. When an event occurs, Qt creates an event object and notifies to the instance of an QObject class or one of its subclasses through their event() function. Events can be generated from both inside and outside the application. For instance, the QKeyEvent and QMouseEvent object represent some kind of keyboard and mouse interaction and they come from the window manager; the QTimerEvent objects are sent to QObject when one of its timers fires, and they usually come from the operating system; the QChildEvent objects are sent to QObject when a child is added or removed and they come from inside of your Qt application. The users of PySide usually get confused with events and signals. Events and signals are two parallel mechanisms used to accomplish the same thing. As a general difference, signals are useful when using a widget, whereas events are useful when implementing the widget. For example, when we are using a widget like QPushButton, we are more interested in its clicked() signal than in the low-level mouse press or key press events that caused the signal to be emitted. But if we are implementing the QPushButton class, we are more interested in the implementation of code for mouse and key events. Also, we usually handle events but get notified by signal emissions. Event loop All the events in Qt will go through an event loop. One main key concept to be noted here is that the events are not delivered as soon as they are generated; instead they're queued up in an event queue and processed later one-by-one. 
The event dispatcher loops through this queue and dispatches these events to the target QObject, and hence it is called an event loop. Qt's main event loop dispatcher, QCoreApplication.exec(), fetches the native window system events from the event queue, processes them, converts them into QEvent objects, and sends them to their respective target QObject. A simple event loop can be described with the following pseudocode:

while (application_is_active) {
    while (event_exists_in_event_queue)
        process_next_event();
    wait_for_more_events();
}

Qt's main event loop starts with the QCoreApplication::exec() call, and it stays blocked until QCoreApplication::exit() or QCoreApplication::quit() is called to terminate the loop. The wait_for_more_events() function blocks until some event is generated. This blocking is not a busy wait and will not burn CPU resources. Generally, the event loop can be woken up by window manager activity, socket activity, timers, or events posted by other threads. All these activities require a running event loop. It is important not to block the event loop, because when it is stuck, widgets will not update themselves, timers won't fire, and networking communications will slow down and stop. In short, your application will not respond to any external or internal events, and hence it is advised to react to events quickly and return to the event loop as soon as possible.

Event processing

Qt offers five methods to do event processing. They are:

By re-implementing a specific event handler like keyPressEvent(), paintEvent()
By re-implementing the QObject::event() method
Installing an event filter on a single QObject
Installing an event filter on the QApplication object
Subclassing QApplication and re-implementing notify()

Generally, this can be broadly divided into re-implementing event handlers and installing event filters. We will see each of them in a little detail.
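The pseudocode above can be made concrete with a toy Python event loop: events are queued rather than delivered immediately, and the dispatcher drains the queue one event at a time in FIFO order. This is a simplified sketch, not Qt's actual implementation:

```python
from collections import deque


class Event:
    """A minimal stand-in for QEvent: a kind and a target handler."""

    def __init__(self, kind, target):
        self.kind = kind
        self.target = target


event_queue = deque()
handled = []


def post_event(event):
    # Events are not delivered as soon as they are generated;
    # they are appended to the queue and processed later.
    event_queue.append(event)


def process_next_event():
    event = event_queue.popleft()
    event.target(event)          # dispatch to the target's handler
    handled.append(event.kind)   # record dispatch order for inspection


# Two events arrive while the loop is busy elsewhere...
post_event(Event("KeyPress", lambda e: None))
post_event(Event("Resize", lambda e: None))

# ...and one pass of the inner loop drains everything currently queued.
while event_queue:
    process_next_event()

assert handled == ["KeyPress", "Resize"]  # FIFO dispatch order
```

A handler that never returns would stall this loop exactly as described above: no later event in the queue would ever be dispatched.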
Reimplementing event handlers

We can implement the task at hand or control a widget by reimplementing the virtual event handling functions. The following example explains how to reimplement a few of the most commonly used events: a key press event, a mouse double-click event, and a window resize event. We will have a look at the code first and defer the explanation until after the code:

# Import necessary modules
import sys
from PySide.QtGui import *
from PySide.QtCore import *

# Our main widget class
class MyWidget(QWidget):
    # Constructor function
    def __init__(self):
        QWidget.__init__(self)
        self.setWindowTitle("Reimplementing Events")
        self.setGeometry(300, 250, 300, 100)
        self.myLayout = QVBoxLayout()
        self.myLabel = QLabel("Press 'Esc' to close this App")
        self.infoLabel = QLabel()
        self.myLabel.setAlignment(Qt.AlignCenter)
        self.infoLabel.setAlignment(Qt.AlignCenter)
        self.myLayout.addWidget(self.myLabel)
        self.myLayout.addWidget(self.infoLabel)
        self.setLayout(self.myLayout)

    # Functions reimplementing the key press, mouse double-click, and resize events
    def keyPressEvent(self, event):
        if event.key() == Qt.Key_Escape:
            self.close()

    def mouseDoubleClickEvent(self, event):
        self.close()

    def resizeEvent(self, event):
        self.infoLabel.setText("Window Resized to QSize(%d, %d)" % (event.size().width(), event.size().height()))

if __name__ == '__main__':
    # Exception handling
    try:
        myApp = QApplication(sys.argv)
        myWidget = MyWidget()
        myWidget.show()
        myApp.exec_()
        sys.exit(0)
    except NameError:
        print("Name Error:", sys.exc_info()[1])
    except SystemExit:
        print("Closing Window...")
    except Exception:
        print(sys.exc_info()[1])

In the preceding code, the keyPressEvent() function reimplements the event generated as a result of pressing a key. We have implemented it in such a way that the application closes when the Esc key is pressed. On running this code, we would get an output similar to the one shown in the following screenshot. The application will be closed if you press the Esc key.
The same functionality is implemented on a mouse double-click event. The third event is a resize event. This event gets triggered when you try to resize the widget. The second line of text in the window will show the size of the window in (width, height) format. You can witness the same on resizing the window. Similar to keyPressEvent(), we could also implement keyReleaseEvent(), which would be triggered on release of the key. Normally, we are not very interested in key release events, except for the keys where release is important. The specific keys where the release event holds importance are the modifier keys such as Ctrl, Shift, and Alt. These keys are called modifier keys and can be accessed using QKeyEvent::modifiers. For example, the press of the Ctrl key can be checked using Qt.ControlModifier. The other modifiers are Qt.ShiftModifier and Qt.AltModifier. For instance, if we want to check for the press of the Ctrl + PageDown key combination, we could write the check as:

if event.key() == Qt.Key_PageDown and event.modifiers() == Qt.ControlModifier:
    print("Ctrl+PgDn Key is pressed")

Before any particular key press or mouse click event handler function (say, for example, keyPressEvent()) is called, the widget's event() function is called first. The event() method may handle the event itself or may delegate the work to a specific event handler like resizeEvent() or keyPressEvent(). The implementation of the event() function is very helpful in some special cases, like the Tab key press event. In most cases, for the widget with the keyboard focus, the event() method will call setFocus() on the next widget in the tab order and will not pass the event to any of the specific handlers. So we might have to re-implement any specific functionality for the Tab key press event in the event() function. This behavior of propagating key press events is the outcome of Qt's parent-child hierarchy.
The event gets propagated to its parent or its grand-parent and so on if it is not handled at any particular level. If the top-level widget also doesn't handle the event it is safely ignored. The following code shows an example for reimplementing the event() function: class MyWidget(QWidget): # Constructor function def __init__(self): QWidget.__init__(self) self.setWindowTitle("Reimplementing Events") self.setGeometry(300, 250, 300, 100) self.myLayout = QVBoxLayout() self.myLabel1 = QLabel("Text 1") self.myLineEdit1 = QLineEdit() self.myLabel2 = QLabel("Text 2") self.myLineEdit2 = QLineEdit() self.myLabel3 = QLabel("Text 3") self.myLineEdit3 = QLineEdit() self.myLayout.addWidget(self.myLabel1) self.myLayout.addWidget(self.myLineEdit1) self.myLayout.addWidget(self.myLabel2) self.myLayout.addWidget(self.myLineEdit2) self.myLayout.addWidget(self.myLabel3) self.myLayout.addWidget(self.myLineEdit3) self.setLayout(self.myLayout) # Function reimplementing event() function def event(self, event): if event.type()== QEvent.KeyRelease and event.key()== Qt.Key_Tab: self.myLineEdit3.setFocus() return True return QWidget.event(self,event) In the preceding example, we try to mask the default behavior of the Tab key. If you haven't implemented the event() function, pressing the Tab key would have set focus to the next available input widget. You will not be able to detect the Tab key press in the keyPress() function as described in the previous examples, since the key press is never passed to them. Instead, we have to implement it in the event() function. If you execute the preceding code, you would see that every time you press the Tab key the focus will be set into the third QLineEdit widget of the application. Inside the event() function, it is more important to return the value from the function. 
If we have processed the required operation, True is returned to indicate that the event is handled successfully, else, we pass the event handling to the parent class's event() function. Installing event filters One of the interesting and notable features of Qt's event model is to allow a QObject instance to monitor the events of another QObject instance before the latter object is even notified of it. This feature is very useful in constructing custom widgets comprising of various widgets altogether. Consider that you have a requirement to implement a feature in an internal application for a customer such that pressing the Enter key must have to shift the focus to next input widget. One way to approach the problem is to reimplement the keyPressEvent() function for all the widgets present in the custom widget. Instead, this can be achieved by reimplementing the eventFilter() function for the custom widget. If we implement this, the events will first be passed on to the custom widget's eventFilter() function before being passed on to the target widget. An example is implemented as follows: def eventFilter(self, receiver, event): if(event.type() == QEvent.MouseButtonPress): QMessageBox.information(None,"Filtered Mouse Press Event!!",'Mouse Press Detected') return True return super(MyWidget,self).eventFilter(receiver, event) Remember to return the result of event handling, or pass it on to the parent's eventFilter() function. To invoke eventFilter(), it has to be registered as follows in the constructor function: self.installEventFilter(self) The event filters can also be implemented for the QApplication as a whole. This is left as an exercise for you to discover. Reimplementing the notify() function The final way of handling events is to reimplement the notify() function of the QApplication class. This is the only way to get all the events before any of the event filters discussed previously are notified. 
The event gets notified to this function first before it gets passed on to the event filters and specific event functions. The use of notify() and other event filters are generally discouraged unless it is absolutely necessary to implement them because handling them at top level might introduce unwanted results, and we might end up in handling the events that we don't want to. Instead, use the specific event functions to handle events. The following code excerpt shows an example of re-implementing the notify() function: class MyApplication(QApplication): def __init__(self, args): super(MyApplication, self).__init__(args) def notify(self, receiver, event): if (event.type() == QEvent.KeyPress): QMessageBox.information(None, "Received Key Release EVent", "You Pressed: "+ event.text()) return super(MyApplication, self).notify(receiver, event) Signals and slots The fundamental part of any GUI program is the communication between the objects. Signals and slots provide a mechanism to define this communication between the actions happened and the result proposed for the respective action. Prior to Qt's modern implementation of signal/slot mechanism, older toolkits achieve this kind of communication through callbacks. A callback is a pointer to a function, so if you want a processing function to notify about some event you pass a pointer to another function (the callback) to the processing function. The processing function then calls the callback whenever appropriate. This mechanism does not prove useful in the later advancements due to some flaws in the callback implementation. A signal is an observable event, or at least notification that the event has happened. A slot is a potential observer, more usually a function that is called. In order to establish communication between them, we connect a signal to a slot to establish the desired action. 
However, we have already seen the concept of connecting a signal to a slot in the earlier chapters while designing the text editor application. Those implementations handle and connect different signals to different objects. However, we may have different combinations as defined in the bullet points: One signal can be connected to many slots Many signals can be connected to the same slot A signal can be connected to other signals Connections can be removed PySide offers various predefined signals and slots such that we can connect a predefined signal to a predefined slot and do nothing else to achieve what we want. However, it is also possible to define our own signals and slots. Whenever a signal is emitted, Qt will simply throw it away. We can define the slot to catch and notice the signal that is being emitted. The first code excerpt that follows this text will be an example for connecting predefined signals to predefined slots and the latter will discuss the custom user defined signals and slots. The first example is a simple EMI calculator application that takes the Loan Amount, Rate of Interest, and Number of Years as its input, and calculates the EMI per month and displays it to the user. To start with, we set in a layout the components required for the EMI calculator application. The Amount will be a text input from the user. The rate of years will be taken from a spin box input or a dial input. A spin box is a GUI component which has its minimum and maximum value set, and the value can be modified using the up and down arrow buttons present at its side. The dial represents a clock like widget whose values can be changed by dragging the arrow. The Number of Years value is taken by a spin box input or a slider input: class MyWidget(QWidget): def __init__(self): QWidget.__init__(self) self.amtLabel = QLabel('Loan Amount') self.roiLabel = QLabel('Rate of Interest') self.yrsLabel = QLabel('No. 
of Years') self.emiLabel = QLabel('EMI per month') self.emiValue = QLCDNumber() self.emiValue.setSegmentStyle(QLCDNumber.Flat) self.emiValue.setFixedSize(QSize(130,30)) self.emiValue.setDigitCount(8) self.amtText = QLineEdit('10000') self.roiSpin = QSpinBox() self.roiSpin.setMinimum(1) self.roiSpin.setMaximum(15) self.yrsSpin = QSpinBox() self.yrsSpin.setMinimum(1) self.yrsSpin.setMaximum(20) self.roiDial = QDial() self.roiDial.setNotchesVisible(True) self.roiDial.setMaximum(15) self.roiDial.setMinimum(1) self.roiDial.setValue(1) self.yrsSlide = QSlider(Qt.Horizontal) self.yrsSlide.setMaximum(20) self.yrsSlide.setMinimum(1) self.calculateButton = QPushButton('Calculate EMI') self.myGridLayout = QGridLayout() self.myGridLayout.addWidget(self.amtLabel, 0, 0) self.myGridLayout.addWidget(self.roiLabel, 1, 0) self.myGridLayout.addWidget(self.yrsLabel, 2, 0) self.myGridLayout.addWidget(self.amtText, 0, 1) self.myGridLayout.addWidget(self.roiSpin, 1, 1) self.myGridLayout.addWidget(self.yrsSpin, 2, 1) self.myGridLayout.addWidget(self.roiDial, 1, 2) self.myGridLayout.addWidget(self.yrsSlide, 2, 2) self.myGridLayout.addWidget(self.calculateButton, 3, 1) self.setLayout(self.myGridLayout) self.setWindowTitle("A simple EMI calculator") Until now, we have set the components that are required for the application. Note that, the application layout uses a grid layout option. The next set of code is also defined in the contructor's __init__ function of the MyWidget class which will connect the different signals to slots. There are different ways by which you can use a connect function. The code explains the various options available: self.roiDial.valueChanged.connect(self.roiSpin.setValue) self.connect(self.roiSpin, SIGNAL("valueChanged(int)"), self.roiDial.setValue) In the first line of the previous code, we connect the valueChanged() signal of roiDial to call the slot of roiSpin, setValue(). 
So, if we change the value of roiDial, it emits a signal that connects to roiSpin's setValue() function and will set the value accordingly. Here, we must note that changing either the spin box or the dial must change the other value, because both represent a single entity. Hence, we add a second line which calls roiDial's setValue() slot on changing roiSpin's value. However, it is to be noted that the second form of connecting signals to slots is deprecated. It is given here just for reference, and it is strongly discouraged to use this form. The following two lines of code do the same for the number-of-years slider and spin box:

self.yrsSlide.valueChanged.connect(self.yrsSpin.setValue)
self.connect(self.yrsSpin, SIGNAL("valueChanged(int)"), self.yrsSlide, SLOT("setValue(int)"))

In order to calculate the EMI value, we connect the clicked signal of the push button to a function (slot) which calculates the EMI and displays it to the user:

self.connect(self.calculateButton, SIGNAL("clicked()"), self.showEMI)

The EMI calculation and display function is given for your reference:

def showEMI(self):
    loanAmount = float(self.amtText.text())
    rateInterest = (float(self.roiSpin.value()) / 12) / 100
    noMonths = int(self.yrsSpin.value() * 12)
    emi = (loanAmount * rateInterest) * (((1 + rateInterest) ** noMonths) / (((1 + rateInterest) ** noMonths) - 1))
    self.emiValue.display(emi)
    self.myGridLayout.addWidget(self.emiLabel, 4, 0)
    self.myGridLayout.addWidget(self.emiValue, 4, 2)

The sample output of the application is shown in the following screenshot. The EMI calculator application uses predefined signals (for example, valueChanged() and clicked()) and predefined slots (for example, setValue()). However, the application also uses a user-defined slot, showEMI(), to calculate the EMI. As with slots, it is also possible to create a user-defined signal and emit it when required.
The following program is an example of creating and emitting user-defined signals:

import sys
from PySide.QtCore import *

# define a new slot that receives and prints a string
def printText(text):
    print(text)

class CustomSignal(QObject):
    # create a new signal
    mySignal = Signal(str)

if __name__ == '__main__':
    try:
        myObject = CustomSignal()
        # connect signal and slot
        myObject.mySignal.connect(printText)
        # emit signal
        myObject.mySignal.emit("Hello, Universe!")
    except Exception:
        print(sys.exc_info()[1])

This is a very simple example of using custom signals. In the CustomSignal class, we create a signal named mySignal and emit it in the main function. We also specify that, on emission of the signal mySignal, the printText() slot will be called. Many complex signal emissions can be built this way.
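At its core, the signal/slot mechanism is an instance of the observer pattern, and its essence can be sketched in a few lines of plain Python. This is a toy illustration of the idea, not PySide's actual Signal class:

```python
class Signal:
    """Minimal observer-pattern sketch of a signal: slots are plain
    callables, kept in a list and invoked in order on emit()."""

    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def disconnect(self, slot):
        self._slots.remove(slot)

    def emit(self, *args):
        # iterate over a copy so a slot may disconnect itself safely
        for slot in list(self._slots):
            slot(*args)


received = []
my_signal = Signal()
# one signal connected to many slots, as described above
my_signal.connect(received.append)
my_signal.connect(lambda text: received.append(text.upper()))
my_signal.emit("Hello, Universe!")
assert received == ["Hello, Universe!", "HELLO, UNIVERSE!"]
```

The sketch shows why one signal can drive many slots and why connections can be removed: the signal is nothing more than an ordered list of callables that it walks on every emission.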