
How-To Tutorials - Application Development

357 Articles

Creating a JEE Application with EJB

Packt
24 Sep 2015
11 min read
In this article by Ram Kulkarni, author of Java EE Development with Eclipse, we will be using EJBs (Enterprise JavaBeans) to implement business logic. This is ideal in scenarios where you want the components that process business logic to be distributed across different servers. But that is just one of the advantages of EJB. Even if you use EJBs on the same server as the web application, you may gain from the many services that the EJB container provides to applications through EJBs. You can specify security constraints for calling EJB methods declaratively (using annotations), and you can also easily specify transaction boundaries (which method calls form a part of one transaction) using annotations. In addition, the container handles the life cycle of EJBs, including the pooling of certain types of EJB objects, so that more objects can be created when the load on the application increases.

In this article, we will create the same application using EJBs and deploy it in a GlassFish 4 server. But before that, you need to understand some basic concepts of EJBs.

Types of EJB

As per the EJB 3 specifications, EJBs can be of the following types:

- Session bean:
  - Stateful session bean
  - Stateless session bean
  - Singleton session bean
- Message-driven bean

In this article, we will focus on session beans.

Session beans

In general, session beans are meant for containing the methods that execute the main business logic of enterprise applications. Any Plain Old Java Object (POJO) can be annotated with the appropriate EJB-3-specific annotations to make it a session bean. Session beans come in three types, as follows.

Stateful session bean

A stateful session bean serves requests for one client only; there is a one-to-one mapping between the stateful session bean and the client. Therefore, stateful beans can hold state data for the client between multiple method calls. In our CourseManagement application, we can use a stateful bean to hold the student data (the student profile and the courses taken by him/her) after a student logs in. The state maintained by a stateful bean is lost when the server restarts or when the session times out. Since there is one stateful bean per client, using stateful beans might impact the scalability of the application. We use the @Stateful annotation to create a stateful session bean.

Stateless session bean

A stateless session bean does not hold any state information for any client, so one session bean can be shared across multiple clients. The EJB container maintains pools of stateless beans; when a client request comes in, it takes a bean out of the pool, executes methods on it, and returns the bean to the pool. Stateless session beans provide excellent scalability, because they can be shared and need not be created for each client. We use the @Stateless annotation to create a stateless session bean.

Singleton session bean

As the name suggests, there is only one instance of a singleton bean class in the EJB container (this is true in a clustered environment too; each EJB container will have one instance of a singleton bean). This means that singleton beans are shared by multiple clients, and they are not pooled by EJB containers (because there can be only one instance). Since a singleton session bean is a shared resource, we need to manage concurrency in it. Java EE provides two concurrency management options for singleton session beans: container-managed concurrency and bean-managed concurrency. Container-managed concurrency can easily be specified by annotations.
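As a minimal sketch of what that looks like (the bean name, field, and methods here are illustrative, not taken from the original application), a singleton bean can declare container-managed concurrency and per-method locks:

```java
import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;

@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER) // the default; shown here for clarity
public class CourseCache {

    private int courseCount;

    // Multiple clients may call read-locked methods concurrently.
    @Lock(LockType.READ)
    public int getCourseCount() {
        return courseCount;
    }

    // Write-locked methods get exclusive access; readers wait until the write completes.
    @Lock(LockType.WRITE)
    public void setCourseCount(int courseCount) {
        this.courseCount = courseCount;
    }
}
```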
See https://docs.oracle.com/javaee/7/tutorial/ejb-basicexamples002.htm#GIPSZ for more information on managing concurrency in a singleton session bean. Using a singleton bean could have an impact on the scalability of the application if there are resource contentions in the code. We use the @Singleton annotation to create a singleton session bean.

Accessing a session bean from the client

Session beans can be designed to be accessed locally (within the same application as the session bean), remotely (from a client running in a different application or JVM), or both. In the case of remote access, session beans are required to implement a remote interface. For local access, session beans can implement a local interface or no interface at all (the no-interface view of a session bean). The remote and local interfaces that session beans implement are sometimes also called business interfaces, because they typically expose the primary business functionality.

Creating a no-interface session bean

To create a session bean with a no-interface view, create a POJO and annotate it with the appropriate EJB annotation type and @LocalBean. For example, we can create a local stateful Student bean as follows:

```java
import javax.ejb.LocalBean;
import javax.ejb.Singleton;

@Singleton
@LocalBean
public class Student {
  ...
}
```

Accessing a session bean using dependency injection

You can access session beans either by using the @EJB annotation (for dependency injection) or by performing a Java Naming and Directory Interface (JNDI) lookup. EJB containers are required to make the JNDI URLs of EJBs available to clients. Dependency injection of session beans using @EJB works only for managed components, that is, components of the application whose life cycle is managed by the EJB container. When a component is managed by the container, it is created (instantiated) by the container and also destroyed by the container; you do not create managed components using the new operator. The JEE-managed components that support direct injection of EJBs are servlets, managed beans of JSF pages, and EJBs themselves (one EJB can have other EJBs injected into it). Unfortunately, you cannot have a web container injecting EJBs into JSPs or JSP beans. Also, you cannot have EJBs injected into any custom classes that you create and instantiate using the new operator.

We can use the Student bean (created previously) from a managed bean of JSF, as follows:

```java
import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;

@ManagedBean
public class StudentJSFBean {
  @EJB
  private Student studentEJB;
}
```

Note that if you create an EJB with a no-interface view, then all the public methods in that EJB will be exposed to its clients. If you want to control which methods can be called by clients, you should implement a business interface.
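Because servlets are managed components too, a servlet can inject the same bean in exactly the same way. Here is a minimal sketch (the servlet class, URL pattern, and the assumption that the Student bean exposes a getCourses() method are illustrative):

```java
import java.io.IOException;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/student")
public class StudentServlet extends HttpServlet {

    // Injected by the container, because the servlet is a managed component.
    @EJB
    private Student studentEJB;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        // Hypothetical call; the Student bean's body was elided above.
        res.getWriter().print(studentEJB.getCourses().size());
    }
}
```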
Creating a session bean using a local business interface

A business interface for an EJB is a simple Java interface annotated with either @Remote or @Local. So we can create a local interface for the Student bean as follows:

```java
import java.util.List;
import javax.ejb.Local;

@Local
public interface StudentLocal {
  public List<CourseDTO> getCourses();
}
```

We implement the session bean like this:

```java
import java.util.List;
import javax.ejb.Local;
import javax.ejb.Stateful;

@Stateful
@Local
public class Student implements StudentLocal {
  @Override
  public List<CourseDTO> getCourses() {
    // get courses and return them
    ...
  }
}
```

Clients can access the Student EJB only through the local interface:

```java
import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;

@ManagedBean
public class StudentJSFBean {
  @EJB
  private StudentLocal student;
}
```

A session bean can implement multiple business interfaces.

Accessing a session bean using a JNDI lookup

Though accessing an EJB using dependency injection is the easiest way, it works only if the container manages the class that accesses the EJB. If you want to access an EJB from a POJO that is not a managed bean, dependency injection will not work. Another scenario where dependency injection does not work is when the EJB is deployed in a separate JVM (possibly on a remote server). In such cases, you will have to access the EJB using a JNDI lookup (visit https://docs.oracle.com/javase/tutorial/jndi/ for more information on JNDI).

JEE applications can be packaged in an Enterprise Application Archive (EAR), which contains a JAR file for EJBs and a WAR file for the web application (with a lib folder containing the libraries required by both). If, for example, the name of the EAR file is CourseManagement.ear and the name of the EJB JAR inside it is CourseManagementEJBs.jar, then the application name is CourseManagement (the name of the EAR file) and the module name is CourseManagementEJBs. The EJB container uses these names to create JNDI URLs for looking up EJBs. A global JNDI URL for an EJB is created as follows:

"java:global/<application_name>/<module_name>/<bean_name>![<bean_interface>]"

- java:global: Indicates that it is a global JNDI URL.
- <application_name>: Typically the name of the EAR file.
- <module_name>: The name of the EJB JAR.
- <bean_name>: The name of the EJB bean class.
- <bean_interface>: Optional if the EJB has a no-interface view or implements only one business interface; otherwise, it is the fully qualified name of a business interface.

EJB containers are also required to publish two more variations of JNDI URLs for each EJB. These are not global URLs, which means that they can't be used to access EJBs from clients that are not in the same JEE application (the same EAR):

"java:app/[<module_name>]/<bean_name>![<bean_interface>]"
"java:module/<bean_name>![<bean_interface>]"

The first URL can be used if the EJB client is in the same application, and the second can be used if the client is in the same module (the same JAR file as the EJB).

Before you look up any URL in a JNDI server, you need to create an InitialContext, which includes, among other things, information such as the hostname of the JNDI server and the port on which it is running.
If you are creating the InitialContext in the same server, there is no need to specify these attributes:

```java
InitialContext initCtx = new InitialContext();
Object obj = initCtx.lookup("jndi_url");
```

We can use the following JNDI URLs to access a no-interface (LocalBean) Student EJB (assuming that the name of the EAR file is CourseManagement and the name of the JAR file for EJBs is CourseManagementEJBs):

| URL | When to use |
| --- | --- |
| java:global/CourseManagement/CourseManagementEJBs/Student | The client can be anywhere in the EAR file, because we are using a global URL. Note that we haven't specified the interface name, because we are assuming that the Student bean provides a no-interface view in this example. |
| java:app/CourseManagementEJBs/Student | The client can be anywhere in the EAR. We skipped the application name because the client is expected to be in the same application (the namespace of the URL is java:app). |
| java:module/Student | The client must be in the same JAR file as the EJB. |

We can use the following JNDI URLs to access the Student EJB that implements a local interface, StudentLocal:

| URL | When to use |
| --- | --- |
| java:global/CourseManagement/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal | The client can be anywhere in the EAR file, because we are using a global URL. |
| java:global/CourseManagement/CourseManagementEJBs/Student | The client can be anywhere in the EAR. We skipped the interface name because the bean implements only one business interface. Note that the object returned from this call will be of the StudentLocal type, not Student. |
| java:app/CourseManagementEJBs/Student or java:app/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal | The client can be anywhere in the EAR. We skipped the application name because the JNDI namespace is java:app. |
| java:module/Student or java:module/Student!packt.jee.book.ch6.StudentLocal | The client must be in the same module (the same JAR file) as the EJB. |

Here is an example of how we can call the Student bean with the local business interface from one of the objects (one not managed by the web container) in our web application:

```java
InitialContext ctx = new InitialContext();
StudentLocal student = (StudentLocal) ctx.lookup("java:app/CourseManagementEJBs/Student");
return student.getCourses(); // get courses from the Student EJB
```

Creating EAR for Deployment outside Eclipse

Summary

EJBs are ideal for writing the business logic of web applications. They can act as the perfect bridge between web interface components, such as JSF pages, servlets, or JSPs, and data access objects, such as JDOs. EJBs can be distributed across multiple JEE application servers (which can improve application scalability), and their life cycle is managed by the container. EJBs can easily be injected into managed objects or looked up using JNDI. Eclipse JEE makes creating and consuming EJBs very easy. The GlassFish JEE application server can also be managed, and applications deployed to it, from within Eclipse.

Further resources on this subject:

- Contexts and Dependency Injection in NetBeans
- WebSockets in Wildfly
- Creating Java EE Applications

Testing with Groovy

Packt
18 Oct 2013
24 min read
This article is completely devoted to testing in Groovy. Testing is probably the most important activity that allows us to produce better software and make our users happier. The Java space has countless tools and frameworks that can be used for testing our software. In this article, we will direct our focus to some of those frameworks and how they can be integrated with Groovy. We will discuss not only unit testing techniques, but also integration and load testing strategies. Starting from the king of all testing frameworks, JUnit, and its seamless Groovy integration, we move on to explore how to test:

- SOAP and REST web services
- Code that interacts with databases
- The web application interface, using Selenium

The article also covers Behavior Driven Development (BDD) with Spock, advanced web service testing using soapUI, and load testing using JMeter.

Unit testing Java code with Groovy

One of the ways developers start looking into the Groovy language and actually using it is by writing unit tests. Testing Java code with Groovy makes the tests less verbose and makes it easier for developers to clearly express the intent of each test method. Thanks to the Java/Groovy interoperability, it is possible to use any available testing or mocking framework from Groovy, but it's just simpler to use the integrated JUnit-based test framework that comes with Groovy. In this recipe, we are going to look at how to test Java code with Groovy.

Getting ready

This recipe requires a new Groovy project that we will use again in other recipes of this article. The project is built using Gradle and contains all the test cases required by each recipe. Let's create a new folder called groovy-test and add a build.gradle file to it. The build file will be very simple:

```groovy
apply plugin: 'groovy'
apply plugin: 'java'

repositories {
  mavenCentral()
  maven {
    url 'https://oss.sonatype.org' +
        '/content/repositories/snapshots'
  }
}

dependencies {
  compile 'org.codehaus.groovy:groovy-all:2.1.6'
  testCompile 'junit:junit:4.+'
}
```

The standard source folder structure has to be created for the project. You can use the following commands to achieve this:

```
mkdir -p src/main/groovy
mkdir -p src/test/groovy
mkdir -p src/main/java
```

Verify that the project builds without errors by typing the following command in your shell:

```
gradle clean build
```

How to do it...

To show you how to test Java code from Groovy, we need to have some Java code first! So, this recipe's first step is to create a simple Java class called StringUtil, which we will test in Groovy. The class is fairly trivial, and it exposes only one method, which concatenates Strings passed in as a List:

```java
package org.groovy.cookbook;

import java.util.List;

public class StringUtil {
  public String concat(List<String> strings, String separator) {
    StringBuilder sb = new StringBuilder();
    String sep = "";
    for (String s : strings) {
      sb.append(sep).append(s);
      sep = separator;
    }
    return sb.toString();
  }
}
```

Note that the class has a specific package, so don't forget to create the appropriate folder structure for the package when placing the class in the src/main/java folder. Run the build again to be sure that the code compiles.
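Before writing the Groovy test, it is worth sketching what an equivalent plain-Java JUnit 4 test might look like (a hypothetical sketch, not part of the recipe's sources), to appreciate the verbosity that the Groovy version will save:

```java
package org.groovy.cookbook;

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collections;
import org.junit.Test;

public class StringUtilJavaTest {

    private final StringUtil stringUtil = new StringUtil();

    @Test
    public void concatenatesWithSeparator() {
        // List literals are not available in Java, so we need Arrays.asList.
        String result = stringUtil.concat(Arrays.asList("Luke", "John"), "-");
        assertEquals("Luke-John", result);
    }

    @Test
    public void returnsEmptyStringForEmptyList() {
        String result = stringUtil.concat(Collections.<String>emptyList(), ",");
        assertEquals("", result);
    }
}
```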
Now, add a new test case in the src/test/groovy folder:

```groovy
package org.groovy.cookbook.javatesting

import org.groovy.cookbook.StringUtil

class JavaTest extends GroovyTestCase {

  def stringUtil = new StringUtil()

  void testConcatenation() {
    def result = stringUtil.concat(['Luke', 'John'], '-')
    assertToString('Luke-John', result)
  }

  void testConcatenationWithEmptyList() {
    def result = stringUtil.concat([], ',')
    assertEquals('', result)
  }
}
```

Again, pay attention to the package when creating the test case class, and create the package folder structure. Run the test by executing the following Gradle command from your shell:

```
gradle clean build
```

Gradle should complete successfully with the BUILD SUCCESSFUL message. Now, add a new test method and run the build again:

```groovy
void testConcatenationWithNullShouldReturnNull() {
  def result = stringUtil.concat(null, ',')
  assertEquals('', result)
}
```

This time the build should fail:

```
3 tests completed, 1 failed
:test FAILED

FAILURE: Build failed with an exception.
```

Fix the test by adding a null check to the Java code as the first statement of the concat method:

```java
if (strings == null) {
  return "";
}
```

Run the build again to verify that it is now successful.

How it works...

The test case shown at step 3 requires some comments. The class extends GroovyTestCase, a base test case class that facilitates the writing of unit tests by adding several helper methods to the classes extending it. When a test case extends GroovyTestCase, each test method name must start with test and the return type must be void. It is possible to use the JUnit 4.x @Test annotation instead, but in that case you don't have to extend GroovyTestCase. The standard JUnit assertions (such as assertEquals and assertNull) are directly available in the test case (without an explicit import), plus some additional assertion methods are added by the superclass. The test case at step 3 uses assertToString to verify that a String matches the expected result. There are other assertions added by GroovyTestCase, such as assertArrayEquals, to check that two arrays contain the same values, or assertContains, to assert that an array contains a given element.

There's more...

The GroovyTestCase class also offers an elegant way to test for expected exceptions. Let's add the following rule to the concat method, placed just after the null check for the List:

```java
if (separator.length() != 1) {
  throw new IllegalArgumentException(
      "The separator must be one char long");
}
```

Add the following new test method to the test case:

```groovy
void testVerifyExceptionOnWrongSeparator() {
  shouldFail IllegalArgumentException, {
    stringUtil.concat(['a', 'b'], ',,')
  }
  shouldFail IllegalArgumentException, {
    stringUtil.concat(['c', 'd'], '')
  }
}
```

The shouldFail method takes a closure that is executed in the context of a try-catch block. We can also specify the expected exception in the shouldFail method. The shouldFail method makes testing for exceptions very elegant and simple to read.
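For reference, after applying both fixes, the complete concat method assembled from the snippets above would look like this:

```java
public String concat(List<String> strings, String separator) {
    // Null input yields an empty string (added for testConcatenationWithNullShouldReturnNull).
    if (strings == null) {
        return "";
    }
    // Reject separators that are not exactly one character long.
    if (separator.length() != 1) {
        throw new IllegalArgumentException(
                "The separator must be one char long");
    }
    StringBuilder sb = new StringBuilder();
    String sep = "";
    for (String s : strings) {
        sb.append(sep).append(s);
        sep = separator;
    }
    return sb.toString();
}
```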
See also

- http://junit.org/
- http://groovy.codehaus.org/api/groovy/util/GroovyTestCase.html

Testing SOAP web services

This recipe shows you how to use the JUnit 4 annotations instead of the JUnit 3 API offered by GroovyTestCase.

Getting ready

For this recipe, we are going to use a publicly available web service, the US holiday date web service hosted at http://www.holidaywebservice.com. The WSDL of the service can be found at the /Holidays/US/Dates/USHolidayDates.asmx?WSDL path. We have already encountered this service in the Issuing a SOAP request and parsing a response recipe in Article 8, Working with Web Services in Groovy. Each service operation simply returns the date of a given US holiday, such as Easter or Christmas.

How to do it...

We start from the Gradle build that we created in the Unit testing Java code with Groovy recipe.

Add the following dependency to the dependencies section of the build.gradle file:

```groovy
testCompile 'com.github.groovy-wslite:groovy-wslite:0.8.0'
```

Let's create a new unit test for verifying one of the web service operations. As usual, the test case is created in the src/test/groovy/org/groovy/cookbook folder:

```groovy
package org.groovy.cookbook.soap

import static org.junit.Assert.*

import org.junit.Test
import wslite.soap.*

class SoapTest {
  ...
}
```

Add a new test method to the body of the class, along these lines (the request body mirrors the raw SOAP envelope shown further below):

```groovy
@Test
void testMLKDay() {
  def baseUrl = 'http://www.holidaywebservice.com'
  def service = '/Holidays/US/Dates/USHolidayDates.asmx?WSDL'
  def client = new SOAPClient("${baseUrl}${service}")
  def baseNS = 'http://www.27seconds.com/Holidays/US/Dates/'
  def action = "${baseNS}GetMartinLutherKingDay"
  def response = client.send(SOAPAction: action) {
    body {
      GetMartinLutherKingDay('xmlns': baseNS) {
        year(2013)
      }
    }
  }
  def mlkDay = response.GetMartinLutherKingDayResponse
                       .GetMartinLutherKingDayResult.text()
  assertTrue(mlkDay.startsWith('2013-01-21'))
}
```

Run the test from the command line:

```
gradle -Dtest.single=SoapTest clean test
```

How it works...

The test code creates a new SOAPClient with the URI of the target web service. The request is created using Groovy's MarkupBuilder: the body closure (and, if needed, also the header closure) is passed to the MarkupBuilder for the SOAP message creation. The assertion code gets the result from the response, which is automatically parsed by XmlSlurper, allowing easy access to elements of the response such as the header or the body. In the previous test, we simply check that the returned Martin Luther King Day matches the expected one for the year 2013.

There's more...

If you require more control over the content of the SOAP request, the SOAPClient also supports sending the SOAP envelope as a String, as in this example:

```groovy
def response = client.send(
  """<?xml version='1.0' encoding='UTF-8'?>
     <soapenv:Envelope
         xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/'
         xmlns:dat='http://www.27seconds.com/Holidays/US/Dates/'>
       <soapenv:Header/>
       <soapenv:Body>
         <dat:GetMartinLutherKingDay>
           <dat:year>2013</dat:year>
         </dat:GetMartinLutherKingDay>
       </soapenv:Body>
     </soapenv:Envelope>
  """
)
```

Replace the call to the send method in step 3 with the one above and run your test again.

See also

- https://github.com/jwagenleitner/groovy-wslite
- http://www.holidaywebservice.com

Testing RESTful services

This recipe is very similar to the previous one, Testing SOAP web services, except that it shows how to test a RESTful service using Groovy and JUnit.

Getting ready

For this recipe, we are going to use a test framework aptly named Rest-Assured. This framework is a simple DSL for testing and validating REST services that return either JSON or XML. Before we delve into the recipe, we need to start a simple REST service for testing purposes. We are going to use the Ratpack framework. The test REST service exposes three APIs to fetch, add, and delete books from a database, using JSON as the lingua franca. For the sake of brevity, the setup code for this recipe is available in the rest-test folder of the companion code for this article. The code contains the Ratpack server, the domain objects, the Gradle build, and the actual test case that we are going to analyze in the next section.

How to do it...

The test case takes care of starting the Ratpack server and executing the REST requests.
Here is the RestTest class, located in the src/test/groovy/org/groovy/cookbook/rest folder:

```groovy
package org.groovy.cookbook.rest

import static com.jayway.restassured.RestAssured.*
import static com.jayway.restassured.matcher.RestAssuredMatchers.*
import static org.hamcrest.Matchers.*
import static org.junit.Assert.*

import groovy.json.JsonBuilder

import org.groovy.cookbook.server.*
import org.junit.AfterClass
import org.junit.BeforeClass
import org.junit.Test

class RestTest {

  static server
  final static HOST = 'http://localhost:5050'

  @BeforeClass
  static void setUp() {
    server = App.init()
    server.startAndWait()
  }

  @AfterClass
  static void tearDown() {
    if (server.isRunning()) {
      server.stop()
    }
  }

  @Test
  void testGetBooks() {
    expect().
      body('author', hasItems('Ian Bogost', 'Nate Silver')).
      when().get("${HOST}/api/books")
  }

  @Test
  void testGetBook() {
    expect().
      body('author', is('Steven Levy')).
      when().get("${HOST}/api/books/5")
  }

  @Test
  void testPostBook() {
    def book = new Book()
    book.author = 'Haruki Murakami'
    book.date = '2012-05-14'
    book.title = 'Kafka on the shore'
    JsonBuilder jb = new JsonBuilder()
    jb.content = book
    given().
      content(jb.toString()).
      expect().body('id', is(6)).
      when().post("${HOST}/api/books/new")
  }

  @Test
  void testDeleteBook() {
    expect().statusCode(200).
      when().delete("${HOST}/api/books/1")
    expect().body('id', not(hasValue(1))).
      when().get("${HOST}/api/books")
  }
}
```

Build the code and execute the tests from the command line by typing:

```
gradle clean test
```

How it works...

The JUnit test has a @BeforeClass-annotated method, executed at the beginning of the test run, that starts the Ratpack server and the associated REST services. The @AfterClass-annotated method, on the contrary, shuts down the server when the test is over. The test has four test methods. The first one, testGetBooks, executes a GET request against the server and retrieves all the books. The rather readable DSL offered by the Rest-Assured framework should be easy to follow: the expect method starts building the response expectation returned from the get method. The actual assertion of the test is implemented via a Hamcrest matcher (hence the static org.hamcrest.Matchers.* import in the test). The test asserts that the body of the response contains two books whose authors are named Ian Bogost and Nate Silver. The get method hits the URL of the embedded Ratpack server, started at the beginning of the test.

The testGetBook method is rather similar to the previous one, except that it uses the is matcher to assert the presence of an author in the returned JSON message.

The testPostBook method tests that the creation of a new book is successful. First, a new book object is created and transformed into a JSON object using JsonBuilder. Instead of the expect method, we use the given method to prepare the POST request. The given method returns a RequestSpecification, to which we assign the newly created book, and finally we invoke the post method to execute the operation on the server. The new book is expected to get the next available identifier in our book database (6), which we assert in the test.

The last test method, testDeleteBook, verifies that a book can be deleted. Again we use the expect method to prepare the response, but this time we verify that the returned HTTP status code is 200 upon deleting the book with id 1. The same test also double-checks that fetching the full list of books no longer returns a book with an id of 1.
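For reference, the JSON document that the JsonBuilder call in testPostBook sends to the server would look roughly like this (the field order may vary, and the server is assumed to assign the id):

```json
{
  "author": "Haruki Murakami",
  "date": "2012-05-14",
  "title": "Kafka on the shore"
}
```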
See also

- https://code.google.com/p/rest-assured/
- https://code.google.com/p/hamcrest/
- https://github.com/ratpack/ratpack

Writing functional tests for web applications

If you are developing web applications, it is of the utmost importance to test them thoroughly before allowing user access. Web GUI testing can require long hours of very expensive human resources to repeatedly exercise an application against a varied list of input conditions. Selenium, a browser-based testing and automation framework, aims to solve these problems for software developers, and it has become the de facto standard for web interface integration and functional testing. Selenium is not just a single tool but a suite of software components, each catering to different testing needs of an organization. It has four main parts:

- Selenium Integrated Development Environment (IDE): A Firefox add-on that you can only use for creating relatively simple test cases and test suites.
- Selenium Remote Control (RC): Also known as Selenium 1, the first Selenium tool that allowed users to use programming languages to create complex tests.
- WebDriver: The newest Selenium implementation, which allows your test scripts to communicate directly with the browser, thereby controlling it from the OS level.
- Selenium Grid: A tool used with Selenium RC to execute parallel tests across different browsers and operating systems.

Since 2008, Selenium RC and WebDriver have been merged into a single framework to form Selenium 2; Selenium 1 refers to Selenium RC. This recipe will show you how to write a Selenium 2-based test using HtmlUnitDriver. HtmlUnitDriver is the fastest and most lightweight implementation of WebDriver at the moment. As the name suggests, it is based on HtmlUnit, a relatively old framework for testing web applications. The main disadvantage of using this driver instead of a WebDriver implementation that "drives" a real browser is its JavaScript support: none of the popular browsers use the JavaScript engine used by HtmlUnit (Rhino), so if you test JavaScript using HtmlUnit, the results may diverge considerably from those of real browsers. Still, WebDriver and HtmlUnit can be used for fast-paced testing against a web interface, leaving more JavaScript-intensive tests to other, longer-running WebDriver implementations that use specific browsers.

Getting ready

Due to the relative complexity of the setup required to demonstrate the steps of this recipe, it is recommended that the reader use the code that comes bundled with this recipe, located in the selenium-test folder in the code directory for this article. The source code, like the other recipes in this article, is built using Gradle and has a standard structure containing application code and test code. The web application under test is very simple: it is composed of a welcome page and a single-field test form page. The Ratpack framework is utilized to run the fictional web application and serve the HTML pages, along with some JavaScript and CSS.

How to do it...

The following steps will describe the salient points of Selenium testing with Groovy.

Let's open the build.gradle file.
We are interested in the dependencies required to execute the tests:

```groovy
testCompile group: 'org.seleniumhq.selenium',
  name: 'selenium-htmlunit-driver', version: '2.32.0'
testCompile group: 'org.seleniumhq.selenium',
  name: 'selenium-support', version: '2.9.0'
```

Let's open the test case, SeleniumTest.groovy, located in test/groovy/org/groovy/cookbook/selenium:

```groovy
package org.groovy.cookbook.selenium

import static org.junit.Assert.*

import org.groovy.cookbook.server.*
import org.junit.AfterClass
import org.junit.BeforeClass
import org.junit.Test
import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import org.openqa.selenium.WebElement
import org.openqa.selenium.htmlunit.HtmlUnitDriver
import org.openqa.selenium.support.ui.ExpectedConditions
import org.openqa.selenium.support.ui.WebDriverWait

import com.google.common.base.Function

class SeleniumTest {

  static server
  static final HOST = 'http://localhost:5050'
  static HtmlUnitDriver driver

  @BeforeClass
  static void setUp() {
    server = App.init()
    server.startAndWait()
    driver = new HtmlUnitDriver(true)
  }

  @AfterClass
  static void tearDown() {
    if (server.isRunning()) {
      server.stop()
    }
  }

  @Test
  void testWelcomePage() {
    driver.get(HOST)
    assertEquals('welcome', driver.title)
  }

  @Test
  void testFormPost() {
    driver.get("${HOST}/form")
    assertEquals('test form', driver.title)
    WebElement input = driver.findElement(By.name('username'))
    input.sendKeys('test')
    input.submit()
    WebDriverWait wait = new WebDriverWait(driver, 4)
    wait.until ExpectedConditions.
      presenceOfElementLocated(By.className('hello'))
    assertEquals('oh hello,test', driver.title)
  }
}
```

How it works...

The test case initializes the Ratpack server and the HtmlUnit driver, passing true to the HtmlUnitDriver constructor; the boolean parameter indicates whether the driver should support JavaScript.

The first test, testWelcomePage, simply verifies that the title of the website's welcome page is as expected. The get method executes an HTTP GET request against the URL specified in the method — the Ratpack server in our test.

The second test, testFormPost, involves the DOM manipulation of a form, its submission, and waiting for an answer from the server. The Selenium API should be fairly readable. For a start, the test checks that the page containing the form has the expected title. Then the element named username (a form field) is selected, populated, and finally submitted. This is how the HTML looks for the form field:

```html
<input type="text" name="username" placeholder="Your username">
```

The test uses the findElement method to select the input field. The method expects a By object, which is essentially a mechanism for locating elements within a document. Elements can be identified by name, id, link text, tag name, CSS selector, or XPath expression.

The form is submitted via AJAX. Here is part of the JavaScript activated by the form submission:

```javascript
complete: function(xhr, status) {
  if (status === 'error' || !xhr.responseText) {
    alert('error')
  } else {
    document.title = xhr.responseText
    jQuery(e.target).
      replaceWith('<p class="hello">' + xhr.responseText + '</p>')
  }
}
```

After the form submission, the DOM of the page is manipulated to change the title of the form page and replace the form DOM element with a message wrapped in a paragraph element. To verify that the DOM changes have been applied, the test uses the WebDriverWait class to wait until the DOM is actually modified and an element with the class hello appears on the page. The WebDriverWait is instantiated with a four-second timeout.
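The same By locators and explicit waits are available from plain Java as well. Here is a minimal sketch (the URL and selectors are purely illustrative and assume matching elements exist on the page being tested):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class LocatorExamples {
    public static void main(String[] args) {
        WebDriver driver = new HtmlUnitDriver(true); // true enables JavaScript support

        driver.get("http://localhost:5050/form");

        // A few of the locator strategies mentioned above
        // (findElement throws NoSuchElementException if nothing matches):
        driver.findElement(By.name("username"));                    // by name attribute
        driver.findElement(By.cssSelector("input[type='text']"));   // by CSS selector
        driver.findElement(By.xpath("//input[@name='username']")); // by XPath expression

        // Explicit wait: block for up to 4 seconds until the element appears.
        new WebDriverWait(driver, 4)
            .until(ExpectedConditions.presenceOfElementLocated(By.className("hello")));

        driver.quit();
    }
}
```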
This recipe only scratches the surface of the Selenium 2 framework's capabilities, but it should get you started on implementing your own integration and functional tests.

See also

- http://docs.seleniumhq.org/
- http://htmlunit.sourceforge.net/

Writing behavior-driven tests with Groovy

Behavior Driven Development, or simply BDD, is a methodology in which QA, business analysts, and marketing people can get involved in defining the requirements of a process in a common language. It can be considered an extension of Test Driven Development, although it is not a replacement. The initial motivation for BDD stems from the perplexity of business people (analysts, domain experts, and so on) when dealing with "tests", which seem too technical to them; the use of the word "behaviors" in the conversation is a way to engage the whole team. BDD states that software tests should be specified in terms of the desired behavior of the unit. The behavior is expressed in a semi-formal format borrowed from user story specifications, a format popularized by agile methodologies. For a deeper insight into the BDD rationale, it is highly recommended to read the original paper by Dan North, available at http://dannorth.net/introducing-bdd/.

Spock is one of the most widely used frameworks in the Groovy and Java ecosystem that allows the creation of BDD tests in a very intuitive language, and it facilitates some common tasks such as mocking and extensibility. What makes it stand out from the crowd is its beautiful and highly expressive specification language. Thanks to its JUnit runner, Spock is compatible with most IDEs, build tools, and continuous integration servers. In this recipe, we are going to look at how to implement both a unit test and a web integration test using Spock.

Getting ready

This recipe has a slightly different setup than most of the recipes in this book, as it relies on existing web application source code that is freely available on the Internet. This is the Spring Petclinic application, provided by SpringSource as a demonstration of the latest Spring framework features and configurations. The web application works as a pet hospital, and most of the interactions are typical CRUD operations against certain entities (veterinarians, pets, pet owners). The Petclinic application is available in the groovy-test/spock/web folder of the companion code for this article. All of Petclinic's original tests have been converted to Spock. Additionally, we created a simple integration test that uses Spock and Selenium to showcase the possibilities offered by the integration of the two frameworks. As usual, the recipe uses Gradle to build the reference web application and the tests.

The Petclinic web application can be started by launching the following Gradle command in your shell from the groovy-test/spock/web folder:

```
gradle tomcatRunWar
```

If the application starts without errors, the shell should eventually display the following message:

```
The Server is running at http://localhost:8080/petclinic
```

Take some time to familiarize yourself with the Petclinic application by directing your browser to http://localhost:8080/petclinic and browsing around the website.

How to do it...
The following steps will describe the key concepts for writing your own behavior-driven unit and integration tests.

Let's start by taking a look at the dependencies required to implement a Spock-based test suite:

```groovy
testCompile 'org.spockframework:spock-core:0.7-groovy-2.0'
testCompile group: 'org.seleniumhq.selenium',
  name: 'selenium-java', version: '2.16.1'
testCompile group: 'junit', name: 'junit', version: '4.10'
testCompile 'org.hamcrest:hamcrest-core:1.2'
testRuntime 'cglib:cglib-nodep:2.2'
testRuntime 'org.objenesis:objenesis:1.2'
```

This is what a BDD unit test for the application's business logic looks like:

```groovy
package org.springframework.samples.petclinic.model

import spock.lang.*

class OwnerTest extends Specification {

  def "test pet and owner"() {
    given:
    def p = new Pet()
    def o = new Owner()

    when:
    p.setName("fido")

    then:
    o.getPet("fido") == null
    o.getPet("Fido") == null

    when:
    o.addPet(p)

    then:
    o.getPet("fido").equals(p)
    o.getPet("Fido").equals(p)
  }
}
```

The test is named OwnerTest.groovy, and it is available in the spock/web/src/test folder of the main groovy-test project that comes with this article.

The third test in this recipe mixes Spock and Selenium, the web testing framework already discussed in the Writing functional tests for web applications recipe:

```groovy
package org.cookbook.groovy.spock

import static java.util.concurrent.TimeUnit.SECONDS

import org.openqa.selenium.By
import org.openqa.selenium.WebElement
import org.openqa.selenium.htmlunit.HtmlUnitDriver

import spock.lang.Shared
import spock.lang.Specification

class HomeSpecification extends Specification {

  static final HOME = 'http://localhost:9966/petclinic'

  @Shared
  def driver = new HtmlUnitDriver(true)

  def setup() {
    driver.manage().timeouts().implicitlyWait 10, SECONDS
  }

  def 'user enters home page'() {
    when:
    driver.get(HOME)

    then:
    driver.title == 'PetClinic :: ' +
      'a Spring Framework demonstration'
  }

  def 'user clicks on menus'() {
    when:
    driver.get(HOME)
    def vets = driver.findElement(By.linkText('Veterinarians'))
    vets.click()

    then:
    driver.currentUrl == 'http://localhost:9966/petclinic/vets.html'
  }
}
```

The test above is available in the spock/specs/src/test folder of the accompanying project.

How it works...

The first step of this recipe lays out the dependencies required to set up a Spock-based BDD test suite. Spock requires Java 5 or higher, and it is quite picky with regard to the matching Groovy version to use. In the case of this recipe, as we are using Groovy 2.x, we set the dependency to the 0.7-groovy-2.0 version of the Spock framework. The full build.gradle file is located in the spock/specs folder of the recipe's code.

The first test case demonstrated in the recipe is a direct conversion of a JUnit test written by Spring for the Petclinic application. This is the original test written in Java:

```java
public class OwnerTests {
  @Test
  public void testHasPet() {
    Owner owner = new Owner();
    Pet fido = new Pet();
    fido.setName("Fido");
    assertNull(owner.getPet("Fido"));
    assertNull(owner.getPet("fido"));
    owner.addPet(fido);
    assertEquals(fido, owner.getPet("Fido"));
    assertEquals(fido, owner.getPet("fido"));
  }
}
```

All we need to import in the Spock test is spock.lang.*, which contains the most important types for writing specifications. A Spock test extends spock.lang.Specification. The name of a specification normally relates to the system or system operation under test.
In the case of the Groovy test at step 2, we reused the original Java test name, but it would have been better to rename it to something more meaningful for a specification, such as OwnerPetSpec. The Specification class exposes a number of practical methods for implementing specifications. Additionally, it tells JUnit to run the specification with Sputnik, Spock's own JUnit runner. Thanks to Sputnik, Spock specifications can be executed by all IDEs and build tools.

Following the class definition, we have the feature method:

```groovy
def 'test pet and owner'() {
  ...
}
```

Feature methods lie at the core of a specification. They contain the description of the features (properties, aspects) that define the system under specification. Feature methods are conventionally named with String literals, and it's a good idea to choose meaningful names for them. In the test above, we are testing that, given two entities, pet and owner, the getPet method of the owner instance returns null until the pet is assigned to the owner, and that getPet accepts both "fido" and "Fido" when verifying the ownership.

Conceptually, a feature method consists of four phases:

- Set up the feature's fixture
- Provide an input to the system under specification (stimulus)
- Describe the response expected from the system
- Clean up

Whereas the first and last phases are optional, the stimulus and response phases are always present and may occur more than once. Each phase is defined by blocks: blocks are defined by a label and extend to the beginning of the next block or the end of the method. In the test at step 2, we can see three types of blocks:

- In the given block, data gets initialized.
- The when and then blocks always occur together. They describe a stimulus and the expected response. Whereas when blocks may contain arbitrary code, then blocks are restricted to conditions, exception checking, interactions, and variable definitions.

The first test case has two when/then pairs. A pet is assigned the name "fido", and the test verifies that calling getPet on an owner object only returns something if the pet is actually "owned" by the owner.

The second test is slightly more complex, because it employs the Selenium framework to execute a web integration test with a BDD flavor. The test is located in the groovy-test/spock/specs/src/test folder; you can launch it by typing gradle test from the groovy-test/spock/specs folder. The test takes care of starting the web container and running the application under test, Petclinic. The test starts by defining a shared Selenium driver, marked with the @Shared annotation, which is visible to all the feature methods. The first feature method simply opens the Petclinic main page and checks that the title matches the specification. The second feature method uses the Selenium API to select a link, click on it, and verify that the link brings the user to the right page. The verification is performed against the currentUrl of the browser, which is expected to match the URL of the link we clicked on.

See also

- http://dannorth.net/introducing-bdd/
- http://en.wikipedia.org/wiki/Behavior-driven_development
- https://code.google.com/p/spock/
- https://github.com/spring-projects/spring-petclinic/

Apache Geronimo Logging

Packt
16 Nov 2009
8 min read
We will start by briefly looking at each of the logging frameworks mentioned above, and will then go into how the server logs events and errors and where it logs them to. After examining these, we will look into the different ways in which we can configure application logging.

Apache log4j: Log4j is an open source logging framework developed by the Apache Software Foundation. It provides a set of loggers, appenders, and layouts to control which messages should be logged at runtime, where they should be logged to, and in what format they should be logged. The loggers are organized in a tree hierarchy, starting with the root logger at the top. All loggers except the root logger are named entities and can be retrieved by their names. The root logger can be accessed by using the Logger.getRootLogger() API, while all other loggers can be accessed by using the Logger.getLogger() API. The names of the loggers follow the rule that the name of the parent logger, followed by a '.', is a prefix of the child logger's name. For example, if com.logger.test is the name of a logger, then its direct ancestor is com.logger, and the ancestor before that is com. Each logger may be assigned a level. The set of possible levels, in ascending order, is TRACE, DEBUG, INFO, WARN, ERROR, and FATAL. If a logger is not assigned a level, it inherits its level from its closest ancestor. A log statement makes a logging request to the log4j subsystem; this request is enabled only if its logging level is higher than or equal to its logger's level. If it is lower, the message is not output through the configured appenders. Log4j allows logs to be output to multiple destinations via different appenders. Currently, there are appenders for the console, files, GUI components, JMS destinations, NT and Unix system event loggers, and remote sockets. Log4j is one of the most widely used logging frameworks for Java applications, especially ones running on application servers. It also provides more features than the other logging framework that we are about to see, the Java Logging API.

Java Logging API: The Java Logging API, also called JUL after the framework's java.util.logging package name, is another logging framework, distributed with J2SE from version 1.4 onwards. Like log4j, it provides a hierarchy of loggers, with child loggers inheriting properties from their parents. It provides handlers for handling output and formatters for configuring the way the output is displayed. It provides a subset of the functionality that log4j provides, but the advantage is that it is bundled with the JRE, and so does not require the application to include third-party JARs, as log4j does.

SLF4J: The Simple Logging Facade for Java, or SLF4J, is an abstraction or facade over various logging systems. It allows a developer to plug in the desired logging framework at deployment time. It also supports the bridging of legacy API calls through the slf4j API to the underlying logging implementation. Versions of Apache Geronimo prior to 2.0 used Apache Commons Logging as the facade or wrapper. However, Commons Logging uses runtime binding and a dynamic discovery mechanism, which came to be the source of quite a few bugs. Hence, Apache Geronimo migrated to slf4j, which allows the developer to plug in the logging framework during deployment, thereby eliminating the need for runtime binding.
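To illustrate the facade idea, here is a minimal sketch of application code written against the slf4j API (the class name is illustrative); the same code runs unchanged whether log4j, JUL, or another backend is plugged in at deployment time:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

    // The logger is obtained from the facade, not from a concrete logging framework.
    private static final Logger logger =
            LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // Parameterized messages avoid string concatenation when the level is disabled.
        logger.info("Placing order {}", orderId);
        try {
            // ... business logic ...
        } catch (RuntimeException e) {
            // A trailing Throwable argument logs the full stack trace.
            logger.error("Failed to place order {}", orderId, e);
            throw e;
        }
    }
}
```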
Configuring Apache Geronimo logging

Apache Geronimo uses slf4j and log4j for logging. The log4j configuration files can be found in the <GERONIMO_HOME>/var/log directory. There are three configuration files in this directory, namely:

- client-log4j.properties
- deployer-log4j.properties
- server-log4j.properties

Just as they are named, these files configure log4j logging for the client container (Java EE application client), the deployer system, and the server. You will also find the corresponding log files: client.log, deployer.log, and server.log. The properties files listed above contain the configuration of the various appenders, loggers, and layouts for the server, deployer, and client. As mentioned above, log4j provides a hierarchy of loggers, with a granularity ranging from the entire server down to each class on the server.

Let us examine one of the configuration files, the server-log4j.properties file. This file starts with the line:

```properties
log4j.rootLogger=INFO, CONSOLE, FILE
```

This means that the log4j root logger has a level of INFO and writes log statements to two appenders, the CONSOLE appender and the FILE appender, which write to the console and to files, respectively. The console and file appenders are configured to write to System.out and to <GERONIMO_HOME>/var/log/geronimo.log. Below this section, there is a finer-grained configuration of loggers at the class or package level. For example:

```properties
log4j.logger.openjpa.Enhance=TRACE
```

This configures the logger named openjpa.Enhance to the TRACE level. Note that all classes that do not have a log level defined take on the log level of their parents; this applies recursively until we reach the root logger and inherit its log level (INFO in this case).

Configuring application logging

We will illustrate how applications can log messages in Geronimo by using two logging frameworks, namely log4j and JUL. We will also illustrate how you can use the slf4j wrapper to log messages with the above two underlying implementations. We will use a sample application, the HelloWorld web application, to illustrate this.

Using log4j

We can use log4j to write the application log either to a separate log file or to the geronimo.log file. We will also illustrate how the logs can be written to a separate file in the <GERONIMO_HOME>/var/log directory by using a GBean.

Logging to the geronimo.log file and the command console

Logging to the geronimo.log file and the command console is the simplest way to do application logging in Geronimo. To enable this in your application, you only need to add logging statements to your application code. The HelloWorld sample application has a servlet called HelloWorldServlet, which contains the following statements for enabling logging. The servlet is shown below.
```java
package com.packtpub.hello;

import java.io.*;
import javax.servlet.ServletException;
import javax.servlet.http.*;
import org.apache.log4j.Logger;

public class HelloWorldServlet extends HttpServlet {

  Logger logger = Logger.getLogger(HelloWorldServlet.class.getName());

  protected void service(HttpServletRequest req, HttpServletResponse res)
      throws ServletException, IOException {
    res.setContentType("text/html");
    PrintWriter out = res.getWriter();
    out.print("<html>");
    logger.info("Printing out <html>");
    out.print("<head><title>Hello World Application</title></head>");
    logger.info("Printing out <head><title>Hello World Application</title></head>");
    out.print("<body>");
    logger.info("Printing out <body>");
    out.print("<b>Hello World</b><br>");
    logger.info("Printing out <b>Hello World</b><br>");
    out.print("</body>");
    logger.info("Printing out </body>");
    out.print("</html>");
    logger.info("Printing out </html>");
    logger.warn("Sample Warning message");
    logger.error("Sample error message");
  }
}
```

Deploy the sample HelloWorld-1.0.war file, and then access http://localhost:8080/HelloWorld/. This servlet will log only the warning and error messages to the command console, while the geronimo.log file will have the following entries:

```
2009-02-02 20:01:38,906 INFO  [HelloWorldServlet] Printing out <html>
2009-02-02 20:01:38,906 INFO  [HelloWorldServlet] Printing out <head><title>Hello World Application</title></head>
2009-02-02 20:01:38,906 INFO  [HelloWorldServlet] Printing out <body>
2009-02-02 20:01:38,906 INFO  [HelloWorldServlet] Printing out <b>Hello World</b><br>
2009-02-02 20:01:38,906 INFO  [HelloWorldServlet] Printing out </body>
2009-02-02 20:01:38,906 INFO  [HelloWorldServlet] Printing out </html>
2009-02-02 20:01:38,906 WARN  [HelloWorldServlet] Sample Warning message
2009-02-02 20:01:38,906 ERROR [HelloWorldServlet] Sample error message
```

Notice that only the messages with a logging level greater than or equal to WARN are logged to the command console, while all the INFO, WARN, and ERROR messages are logged to the geronimo.log file. This is because, in server-log4j.properties, the CONSOLE appender's threshold is set to the value of the system property org.apache.geronimo.log.ConsoleLogLevel, as shown below:

```properties
log4j.appender.CONSOLE.Threshold=${org.apache.geronimo.log.ConsoleLogLevel}
```

The value of this property is, by default, WARN. All of the INFO messages are logged to the log file because the FILE appender has a lower threshold of TRACE, as shown below:

```properties
log4j.appender.FILE.Threshold=TRACE
```

Using this method, you can log messages of different severities to the console and to the log file to which the server messages are logged. This is done for operator convenience: only high-severity log messages, such as warnings and errors, which need the operator's attention, are logged to the console. The other messages are logged only to a file.
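Putting these pieces together, a minimal log4j configuration following the same pattern might look like the sketch below. This is illustrative only; the layout patterns and the file-path property are assumptions, and Geronimo's actual server-log4j.properties contains many more entries:

```properties
# Root logger: INFO level, writing to both appenders.
log4j.rootLogger=INFO, CONSOLE, FILE

# Console appender: only WARN and above reach the operator's console.
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${org.apache.geronimo.log.ConsoleLogLevel}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ABSOLUTE} %-5p [%c{1}] %m%n

# File appender: everything from TRACE up goes to geronimo.log.
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.Threshold=TRACE
log4j.appender.FILE.File=${org.apache.geronimo.server.dir}/var/log/geronimo.log
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{1}] %m%n

# Per-package override: this logger logs at TRACE regardless of the root level.
log4j.logger.openjpa.Enhance=TRACE
```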

Working with Complex Associations using CakePHP

Packt
28 Oct 2009
6 min read
Defining a Many-to-Many Relationship in Models

In the previous article in this series, Working with Simple Associations using CakePHP, we assumed that a book can have only one author. But in a real-life scenario, a book may have more than one author, in which case the relation between authors and books is many-to-many. We are now going to see how to define associations for a many-to-many relation. We will modify the existing code base from the previous article to set up the associations needed to represent a many-to-many relation.

Time for Action: Defining a Many-to-Many Relation

Empty the database tables:

```sql
TRUNCATE TABLE `authors`;
TRUNCATE TABLE `books`;
```

Remove the author_id field from the books table:

```sql
ALTER TABLE `books` DROP `author_id`;
```

Create a new table, authors_books:

```sql
CREATE TABLE `authors_books` (
  `author_id` INT NOT NULL,
  `book_id` INT NOT NULL
);
```

Modify the Author (/app/models/author.php) model:

```php
<?php
class Author extends AppModel {
    var $name = 'Author';
    var $hasAndBelongsToMany = 'Book';
}
?>
```

Modify the Book (/app/models/book.php) model:

```php
<?php
class Book extends AppModel {
    var $name = 'Book';
    var $hasAndBelongsToMany = 'Author';
}
?>
```

Modify the AuthorsController (/app/controllers/authors_controller.php):

```php
<?php
class AuthorsController extends AppController {
    var $name = 'Authors';
    var $scaffold;
}
?>
```

Modify the BooksController (/app/controllers/books_controller.php):

```php
<?php
class BooksController extends AppController {
    var $name = 'Books';
    var $scaffold;
}
?>
```

Now, visit the following URLs and add some test data into the system: http://localhost/relationship/authors/ and http://localhost/relationship/books/

What Just Happened?

We first emptied the database and then dropped the field author_id from the books table. Then we added a new join table, authors_books, which will be used to establish a many-to-many relation between authors and books. In a many-to-many relation, one record in either table can be related to multiple records in the other table. To establish this link, a join table is used: a join table contains two fields that hold the primary keys of the two records in relation. CakePHP has certain conventions for naming a join table: join tables should be named after the tables in relation, in alphabetical order, with underscores in between. The join table between the authors and books tables should therefore be named authors_books, not books_authors. Also, by Cake convention, the foreign keys used in the join table must be the underscored, singular names of the models in relation, suffixed with _id.

After creating the join table, we defined associations in the models, so that our models also know about their new relationship. We added hasAndBelongsToMany (HABTM) associations in both models. HABTM is a special type of association used to define a many-to-many relation in models; both models have HABTM associations to define the many-to-many relationship from both ends. After defining the associations in the models, we created two controllers for these two models and put scaffolding in them to see the association working. We could also use an array to set up the HABTM association in the models.
The following code segment shows how to use an array for setting up an HABTM association between authors and books in the Author model:

```php
var $hasAndBelongsToMany = array(
    'Book' => array(
        'className'             => 'Book',
        'joinTable'             => 'authors_books',
        'foreignKey'            => 'author_id',
        'associationForeignKey' => 'book_id'
    )
);
```

As with simple relationships, we can override the default association characteristics by adding or modifying key/value pairs in the associative array. The foreignKey key holds the name of the foreign key found in the current model; the default is the underscored, singular name of the current model suffixed with _id. The associationForeignKey key holds the foreign key name found in the corresponding table of the other model; the default is the underscored, singular name of the associated model suffixed with _id. We can also add conditions, fields, and order key/value pairs to customize the relationship in more detail.

Retrieving Related Model Data in a Many-to-Many Relation

As with one-to-one and one-to-many relations, once the associations are defined, CakePHP will automatically fetch the related data in a many-to-many relation.

Time for Action: Retrieving Related Model Data

Take out the scaffolding from both controllers, AuthorsController (/app/controllers/authors_controller.php) and BooksController (/app/controllers/books_controller.php).

Add an index() action inside the AuthorsController (/app/controllers/authors_controller.php), like the following:

```php
<?php
class AuthorsController extends AppController {
    var $name = 'Authors';

    function index() {
        $this->Author->recursive = 1;
        $authors = $this->Author->find('all');
        $this->set('authors', $authors);
    }
}
?>
```

Create a view file for the /authors/index action (/app/views/authors/index.ctp):

```php
<?php foreach($authors as $author): ?>
<h2><?php echo $author['Author']['name'] ?></h2>
<hr />
<h3>Book(s):</h3>
<ul>
<?php foreach($author['Book'] as $book): ?>
    <li><?php echo $book['title'] ?></li>
<?php endforeach; ?>
</ul>
<?php endforeach; ?>
```

Write the following code inside the BooksController (/app/controllers/books_controller.php):

```php
<?php
class BooksController extends AppController {
    var $name = 'Books';

    function index() {
        $this->Book->recursive = 1;
        $books = $this->Book->find('all');
        $this->set('books', $books);
    }
}
?>
```

Create a view file for the /books/index action (/app/views/books/index.ctp):

```php
<?php foreach($books as $book): ?>
<h2><?php echo $book['Book']['title'] ?></h2>
<hr />
<h3>Author(s):</h3>
<ul>
<?php foreach($book['Author'] as $author): ?>
    <li><?php echo $author['name'] ?></li>
<?php endforeach; ?>
</ul>
<?php endforeach; ?>
```

Now, visit the following URLs:

http://localhost/relationship/authors/
http://localhost/relationship/books/

What Just Happened?

In both controllers, we first set the value of the model's $recursive attribute to 1 and then called the respective model's find('all') function. So these find('all') operations return all associated model data that are directly related to the respective models. The returned results are then passed to the corresponding view files. In the view files, we loop through the returned results and print out the models and their related data. In the BooksController, the data returned from find('all') is stored in a variable, $books. This find('all') returns an array of books, and every element of that array contains information about one book and its related authors:
                    )
                [Author] => Array
                    (
                        [0] => Array
                            (
                                [id] => 1
                                [name] => Author Name
                                ...
                            )
                        [1] => Array
                            (
                                [id] => 3
                                ...
                            )
                    )
            )
        ...
    )

Similarly, for the Author model, the returned data is an array of authors. Every element of that array contains two arrays: one holds the author information and the other holds an array of books related to that author. These arrays are very much like what we got from a find('all') call in the case of the hasMany association.
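As noted above, keys like conditions, fields, and order can customize an HABTM association in more detail. The following sketch shows what such a customized association in the Author model might look like; the specific condition, field list, and ordering are hypothetical examples, not part of the original application:

    <?php
    class Author extends AppModel
    {
        var $name = 'Author';
        var $hasAndBelongsToMany = array(
            'Book' => array(
                'className' => 'Book',
                'joinTable' => 'authors_books',
                'foreignKey' => 'author_id',
                'associationForeignKey' => 'book_id',
                // hypothetical customizations:
                'conditions' => array('Book.title LIKE' => 'C%'), // only titles starting with "C"
                'fields' => array('Book.id', 'Book.title'),       // fetch only these columns
                'order' => 'Book.title ASC'                       // sort related books by title
            )
        );
    }
    ?>

With such a definition in place, every find() on Author would bring back only the related books matching the condition, limited to the listed fields and sorted by title.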

The OpenJDK Transition: Things to know and do

Rich Sharples
28 Feb 2019
5 min read

At this point, it should not be a surprise that a number of major changes were announced in the Java ecosystem, some of which have the potential to force a reassessment of Java roadmaps and even vendor selection for enterprise Java users. Some of the biggest changes are taking place in the upstream OpenJDK (Open Java Development Kit), which means that users will need to have a backup plan in place for the transition. Read on to better understand exactly what changes Oracle made to Oracle JDK, why you should care about them, and how to choose the best next steps for your organization.

For some quick background, OpenJDK began as a free and open source implementation of the Java Platform, Standard Edition, through a 2006 Sun Microsystems initiative. Sun announced at the 2006 JavaOne event that Java and the core Java platform would be open sourced. Over the next few years, major components of the Java Platform were released as free, open source software using the GPL. Other vendors - like Red Hat - then got involved with the project about a year later, through a broad contributor agreement which covers the participation of non-Sun engineers in the project.

This piece is by Rich Sharples, Senior Director of Product Management at Red Hat, on what Oracle ending free lifecycle support means, and what steps users can take now to prepare for the change.

So what is new in the world of Java and JDK? Why should you care?

Okay, enough about the history. What are we anticipating for the future of Java and OpenJDK? At the end of January 2019, Oracle officially ended free public updates to Oracle JDK for non-Oracle-customer commercial users. These users will no longer be able to get updates without an Oracle support contract. Additionally, Oracle has changed the Oracle JDK license (BCPL) so that commercial use of JDK 11 and beyond will require an Oracle subscription. Oracle does also have a non-proprietary (GPLv2+CPE) distribution, but the support policy around that is unclear at this time.

This is a big deal because it means that users need to have strategies and plans in place to continue to get the support that Oracle had offered. You may be wondering whether it really is critical to run OpenJDK with support, or whether you can get away with running it without support. The truth is that while the open source license and nature of OpenJDK mean that you can technically run the software with no commercial support, it doesn't mean that you necessarily should. There are too many risks associated with running OpenJDK unsupported, including numerous cases of critical vulnerabilities and security flaws. While there is nothing inherently insecure about open source software, when it is deployed without support or even a maintenance plan, it can open up your organization to threats that you may not be prepared to handle and resolve.

Additionally, and probably the biggest reason why it is important to have commercial OpenJDK support, is that without a third party to help, you have to be entirely self-sufficient. This means you would have to ensure that specific engineering effort is permanently allocated to monitoring the upstream project and maintaining installations of OpenJDK. Not all organizations have the bandwidth or resources to be able to do this consistently.

How can vendors help and what are other key technology considerations?
Software vendors, including Red Hat, offer OpenJDK distributions and support to make sure that those who had been reliant on Oracle JDK have a seamless transition and are covered against the above risks associated with not having OpenJDK support. It also allows customers and partners to focus on their business software, rather than needing to spend valuable engineering effort on underlying Java runtimes.

Additionally, Java SE 11 has introduced some significant new features, as well as deprecated others, and is the first significant update to the platform that will require users to think seriously about the impact of the move. With this, it is important to separate upgrading from Java SE 8 to Java SE 11 from moving from one vendor's Java SE distribution to another vendor's. In fact, moving between Java SE distributions that are based on OpenJDK, without changing versions, should be a fairly simple task. It is not recommended to change both the vendor and the version in one single step. Luckily, the differences between Oracle JDK and OpenJDK are fairly minor and are, in fact, slowly aligning to become more similar.

Technology vendors can help in the transition that will inevitably come for OpenJDK - if it has not happened already. Having a proper regression test plan in place to ensure that applications run as they previously did, with a particular focus on performance and scalability, is key and is something that vendors can help set up. Auditing and understanding whether you need to make any code changes to applications is also a major undertaking that a third party can help guide you on, one that likely includes rulesets for the migrations. Finally, third-party vendors can help you deploy to the new OpenJDK solution and provide patches and security guidance as issues come up, to help keep the environment secure, updated, and safe.

While there are changes to Oracle JDK, once you find the alternative solution that is best suited for your organization, it should be fairly straightforward and cost-effective to make the transition, and third-party vendors have the expertise, knowledge, and experience to help guide users through the OpenJDK transition.

Read next:
  • OpenJDK Project Valhalla is now in Phase III
  • State of OpenJDK: Past, Present and Future with Oracle
  • Mark Reinhold on the evolution of Java platform and OpenJDK

Dispatchers and Routers

Packt
12 Nov 2012
5 min read

(For more resources related to this topic, see here.)

Dispatchers

In the real world, dispatchers are the communication coordinators responsible for receiving and passing messages. For the emergency services (for example, 911 in the U.S.), the dispatchers are the people responsible for taking in the call and passing on the message to the other departments (medical, police, fire station, or others). The dispatcher coordinates the route and activities of all these departments to make sure that the right help reaches the destination as early as possible.

Another example is how the airport manages airplanes taking off. The air traffic controllers (ATCs) coordinate the use of the runway between the various planes taking off and landing. On one side, air traffic controllers manage the runways (usually ranging from 1 to 3), and on the other, aircraft of different sizes and capacities from different airlines are ready to take off and land. An air traffic controller coordinates the various airplanes, gets the airplanes lined up, and allocates the runways to take off and land. As we can see, there are multiple runways available and multiple airlines, each having a different set of airplanes needing to take off. It is the responsibility of the air traffic controller(s) to coordinate the take-off and landing of planes from each airline and to do this activity as fast as possible.

Dispatcher as a pattern

Dispatcher is a well-recognized and widely used pattern in the Java world. Dispatchers are used to control the flow of execution. Based on the dispatching policy, dispatchers will route the incoming message or request to the business process. Dispatchers as a pattern provide the following advantages:

  - Centralized control: Dispatchers provide a central place from where various messages/requests are dispatched. The word "centralized" means code is re-used, leading to improved maintainability and reduced duplication of code.
  - Application partitioning: There is a clear separation between the business logic and display logic. There is no need to intermingle business logic with the display logic.
  - Reduced inter-dependencies: Separation of the display logic from the business logic means there are reduced inter-dependencies between the two. Reduced inter-dependencies mean less contention on the same resources, leading to a scalable model.

Dispatcher as a concept provides a centralized control mechanism that decouples different processing logic within the application, which in turn reduces inter-dependencies.

Executor in Java

In Akka, dispatchers are based on the Java Executor framework (part of java.util.concurrent). Executor provides the framework for the execution of asynchronous tasks. It is based on the producer–consumer model, meaning the act of task submission (producer) is decoupled from the act of task execution (consumer). The threads that submit tasks are different from the threads that execute the tasks. Two important implementations of the Executor framework are as follows:

  - ThreadPoolExecutor: It executes each submitted task using a thread from a predefined and configured thread pool.
  - ForkJoinPool: It uses the same thread pool model, but supplemented with work stealing. Threads in the pool will find and execute tasks (work stealing) created by other active tasks or tasks allocated to other threads in the pool that are pending execution. Fork/join is based on a fine-grained, parallel, divide-and-conquer style parallelism model.
The idea is to break down large data chunks into smaller chunks and process them in parallel to take advantage of the underlying processor cores.

Executor is backed by constructs that allow you to define and control how the tasks are executed. Using these Executor constructs, one can specify the following:

  - How many threads will be running? (thread pool size)
  - How are the tasks queued until they come up for processing?
  - How many tasks can be executed concurrently?
  - In case the system overloads and tasks must be rejected, how are the tasks to be rejected selected?
  - What is the order of execution of tasks? (LIFO, FIFO, and so on)
  - Which pre- and post-task execution actions can be run?

In the book Java Concurrency in Practice, Addison-Wesley Publishing, the authors have described the Executor framework and its usage very nicely. It is worth reading the book for more details on the concurrency constructs provided by the Java language.

Dispatchers in Akka

In the Akka world, the dispatcher controls and coordinates the dispatching of messages to the actors mapped on the underlying threads. Dispatchers make sure that the resources are optimized and messages are processed as fast as possible. Akka provides multiple dispatch policies that can be customized according to the underlying hardware resources (number of cores or memory available) and the type of application workload.

If we take our example of the airport and map it to the Akka world, we can see that the runways are mapped to the underlying resources—threads. The airlines with their planes are analogous to the mailbox with the messages. The ATC tower employs a dispatch policy to make sure the runways are optimally utilized and the planes spend minimum time waiting for clearance to take off or land. For Akka, the dispatchers, actors, mailbox, and threads look like the following diagram: the dispatchers run on their threads; they dispatch the actors and messages from the attached mailbox and allocate them to the executor threads. The executor threads are configured and tuned to the underlying processor cores that are available for processing the messages.
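To make the producer–consumer decoupling concrete, here is a minimal, self-contained sketch of the Executor framework described above. It uses only standard java.util.concurrent classes; the pool size and task count are arbitrary choices for illustration:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ExecutorDemo {
        public static void main(String[] args) throws InterruptedException {
            // Consumer side: a predefined, configured pool of 4 worker threads
            ExecutorService pool = Executors.newFixedThreadPool(4);

            // Producer side: the main thread only submits tasks;
            // it never executes them itself
            for (int i = 0; i < 10; i++) {
                final int taskId = i;
                pool.submit(new Runnable() {
                    public void run() {
                        System.out.println("Task " + taskId + " executed on "
                                + Thread.currentThread().getName());
                    }
                });
            }

            // Stop accepting new tasks and wait for the queued ones to finish
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
    }

Running this prints the tasks interleaved across the pool's worker threads, showing that the submitting thread (producer) is decoupled from the executing threads (consumers), which is exactly the model Akka's dispatchers build upon.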

Web Server Development

Packt
15 Apr 2016
24 min read

In this article by Holger Brunn, Alexandre Fayolle, and Daniel Eufémio Gago Reis, the authors of the book Odoo Development Cookbook, we discuss how to work with the web server in Odoo. In this article, we'll cover the following topics:

  - Make a path accessible from the network
  - Restrict access to web accessible paths
  - Consume parameters passed to your handlers
  - Modify an existing handler
  - Using the RPC API

(For more resources related to this topic, see here.)

Introduction

We'll introduce the basics of the web server part of Odoo in this article. Note that this article covers the fundamental pieces. All of Odoo's web request handling is driven by the Python library werkzeug (http://werkzeug.pocoo.org). While the complexity of werkzeug is mostly hidden by Odoo's convenient wrappers, it is an interesting read to see how things work under the hood.

Make a path accessible from the network

In this recipe, we'll see how to make a URL of the form http://yourserver/path1/path2 accessible to users. This can either be a web page or a path returning arbitrary data to be consumed by other programs. In the latter case, you would usually use the JSON format to consume parameters and to offer your data.

Getting ready

We'll make use of a ready-made library.book model. We want to allow any user to query the full list of books. Furthermore, we want to provide the same information to programs via a JSON request.

How to do it…

We'll need to add controllers, which go into a folder called controllers by convention.

1. Add a controllers/main.py file with the HTML version of our page:

    from openerp import http
    from openerp.http import request

    class Main(http.Controller):
        @http.route('/my_module/books', type='http', auth='none')
        def books(self):
            records = request.env['library.book'].sudo().search([])
            result = '<html><body><table><tr><td>'
            result += '</td></tr><tr><td>'.join(
                records.mapped('name'))
            result += '</td></tr></table></body></html>'
            return result

2. Add a function to serve the same information in the JSON format:

        @http.route('/my_module/books/json', type='json', auth='none')
        def books_json(self):
            records = request.env['library.book'].sudo().search([])
            return records.read(['name'])

3. Add the file controllers/__init__.py:

    from . import main

4. Add the controllers package to your addon's __init__.py:

    from . import controllers

After restarting your server, you can visit /my_module/books in your browser and be presented with a flat list of book names. To test the JSON-RPC part, you'll have to craft a JSON request. A simple way to do that would be using the following command line to receive the output on the command line:

    curl -i -X POST -H "Content-Type: application/json" -d "{}" localhost:8069/my_module/books/json

If you get 404 errors at this point, you probably have more than one database available on your instance. In this case, it's impossible for Odoo to determine which database is meant to serve the request. Use the --db-filter='^yourdatabasename$' parameter to force using the exact database you installed the module in. Now the path should be accessible.

How it works…

The two crucial parts here are that our controller is derived from openerp.http.Controller and that the methods we use to serve content are decorated with openerp.http.route. Inheriting from openerp.http.Controller registers the controller with Odoo's routing system in a similar way as models are registered by inheriting from openerp.models.Model; also, Controller has a meta class that takes care of this.
In general, paths handled by your addon should start with your addon's name to avoid name clashes. Of course, if you extend some addon's functionality, you'll use this addon's name.

openerp.http.route

The route decorator allows us to tell Odoo that a method is to be web accessible in the first place, and the first parameter determines on which path it is accessible. Instead of a string, you can also pass a list of strings in case you use the same function to serve multiple paths. The type argument defaults to http and determines what type of request is to be served. While strictly speaking JSON is HTTP, declaring the second function as type='json' makes life a lot easier, because Odoo then handles type conversions itself. Don't worry about the auth parameter for now; it will be addressed in the recipe Restrict access to web accessible paths.

Return values

Odoo's treatment of the functions' return values is determined by the type argument of the route decorator. For type='http', we usually want to deliver some HTML, so the first function simply returns a string containing it. An alternative is to use request.make_response(), which gives you control over the headers to send in the response. So to indicate when our page was updated the last time, we might change the last line in books() to the following:

    return request.make_response(
        result, [
            ('Last-modified', email.utils.formatdate(
                (
                    fields.Datetime.from_string(
                        request.env['library.book'].sudo()
                        .search([], order='write_date desc', limit=1)
                        .write_date) -
                    datetime.datetime(1970, 1, 1)
                ).total_seconds(),
                usegmt=True)),
        ])

This code sends a Last-modified header along with the HTML we generated, telling the browser when the list was modified for the last time. We extract this information from the write_date field of the library.book model. In order for the preceding snippet to work, you'll have to add some imports at the top of the file:

    import email
    import datetime
    from openerp import fields

You can also create a Response object of werkzeug manually and return that, but there's little gain for the effort.

Generating HTML manually is nice for demonstration purposes, but you should never do this in production code. Always use templates as appropriate and return them by calling request.render(). This will give you localization for free and makes your code better by separating business logic from the presentation layer. Also, templates provide you with functions to escape data before outputting HTML. The preceding code is vulnerable to cross-site scripting attacks if a user manages to slip a script tag into the book name, for example.

For a JSON request, simply return the data structure you want to hand over to the client; Odoo takes care of serialization. For this to work, you should restrict yourself to data types that are JSON serializable, which are roughly dictionaries, lists, strings, floats, and integers.

openerp.http.request

The request object is a static object referring to the currently handled request, which contains everything you need to take useful action. Most important is the property request.env, which contains an Environment object which is just the same as in self.env for models. This environment is bound to the current user, which is none in the preceding example because we used auth='none'. Lack of a user is also why we have to sudo() all our calls to model methods in the example code. If you're used to web development, you'll expect session handling, which is perfectly correct.
Use request.session for an OpenERPSession object (which is quite a thin wrapper around the Session object of werkzeug), and request.session.sid to access the session id. To store session values, just treat request.session as a dictionary:

    request.session['hello'] = 'world'
    request.session.get('hello')

Note that storing data in the session is not different from using global variables. Use it only if you must - that is usually the case for multi-request actions like a checkout in the website_sale module. And also in this case, handle all functionality concerning sessions in your controllers, never in your modules.

There's more…

The route decorator can have some extra parameters to customize its behavior further. By default, all HTTP methods are allowed, and Odoo intermingles with the parameters passed. Using the parameter methods, you can pass a list of methods to accept, which usually would be one of either ['GET'] or ['POST'].

To allow cross-origin requests (browsers block AJAX and some other types of requests to domains other than where the script was loaded from for security and privacy reasons), set the cors parameter to * to allow requests from all origins, or to some URI to restrict requests to ones originating from this URI. If this parameter is unset, which is the default, the Access-Control-Allow-Origin header is not set, leaving you with the browser's standard behavior. In our example, we might want to set it on /my_module/books/json in order to allow scripts pulled from other websites to access the list of books.

By default, Odoo protects certain types of requests from an attack known as cross-site request forgery by passing a token along on every request. If you want to turn that off, set the parameter csrf to False, but note that this is a bad idea in general.

See also

If you host multiple Odoo databases on the same instance and each database has different web accessible paths on possibly multiple domain names per database, the standard regular expressions in the --db-filter parameter might not be enough to force the right database for every domain. In that case, use the community module dbfilter_from_header from https://github.com/OCA/server-tools in order to configure the database filters on proxy level. To see how using templates makes modularity possible, see the recipe Modify an existing handler later in the article.

Restrict access to web accessible paths

We'll explore the three authentication mechanisms Odoo provides for routes in this recipe. We'll define routes with different authentication mechanisms in order to show their differences.

Getting ready

As we extend code from the previous recipe, we'll also depend on the library.book model, so you should get its code correct in order to proceed.
How to do it…

Define handlers in controllers/main.py:

1. Add a path that shows all books:

    @http.route('/my_module/all-books', type='http', auth='none')
    def all_books(self):
        records = request.env['library.book'].sudo().search([])
        result = '<html><body><table><tr><td>'
        result += '</td></tr><tr><td>'.join(
            records.mapped('name'))
        result += '</td></tr></table></body></html>'
        return result

2. Add a path that shows all books and indicates which were written by the current user, if any:

    @http.route('/my_module/all-books/mark-mine', type='http', auth='public')
    def all_books_mark_mine(self):
        records = request.env['library.book'].sudo().search([])
        result = '<html><body><table>'
        for record in records:
            result += '<tr>'
            if record.author_ids & request.env.user.partner_id:
                result += '<th>'
            else:
                result += '<td>'
            result += record.name
            if record.author_ids & request.env.user.partner_id:
                result += '</th>'
            else:
                result += '</td>'
            result += '</tr>'
        result += '</table></body></html>'
        return result

3. Add a path that shows the current user's books:

    @http.route('/my_module/all-books/mine', type='http', auth='user')
    def all_books_mine(self):
        records = request.env['library.book'].search([
            ('author_ids', 'in', request.env.user.partner_id.ids),
        ])
        result = '<html><body><table><tr><td>'
        result += '</td></tr><tr><td>'.join(
            records.mapped('name'))
        result += '</td></tr></table></body></html>'
        return result

With this code, the paths /my_module/all-books and /my_module/all-books/mark-mine look the same for unauthenticated users, while a logged-in user sees her books in a bold font on the latter path. The path /my_module/all-books/mine is not accessible at all for unauthenticated users. If you try to access it without being authenticated, you'll be redirected to the login screen in order to do so.

How it works…

The difference between the authentication methods is basically what you can expect from the content of request.env.user. For auth='none', the user record is always empty, even if an authenticated user is accessing the path. Use this if you want to serve content that has no dependencies on users, or if you want to provide database agnostic functionality in a server wide module. The value auth='public' sets the user record to a special user with XML ID base.public_user for unauthenticated users, and to the user's record for authenticated ones. This is the right choice if you want to offer functionality to both unauthenticated and authenticated users, while the authenticated ones get some extras, as demonstrated in the preceding code. Use auth='user' to be sure that only authenticated users have access to what you've got to offer. With this method, you can be sure request.env.user points to some existing user.

There's more…

The magic for authentication methods happens in the ir.http model from the base addon. For whatever value you pass to the auth parameter in your route, Odoo searches for a function called _auth_method_<yourvalue> on this model, so you can easily customize this by inheriting this model and declaring a method that takes care of your authentication method of choice.
As an example, we provide an authentication method base_group_user which enforces a currently logged-in user who is a member of the group with XML ID base.group_user:

    from openerp import exceptions, http, models
    from openerp.http import request

    class IrHttp(models.Model):
        _inherit = 'ir.http'

        def _auth_method_base_group_user(self):
            self._auth_method_user()
            if not request.env.user.has_group('base.group_user'):
                raise exceptions.AccessDenied()

Now you can say auth='base_group_user' in your decorator and be sure that users running this route's handler are members of this group. With a little trickery you can extend this to auth='groups(xmlid1,…)'; the implementation of this is left as an exercise to the reader, but is included in the example code.

Consume parameters passed to your handlers

It's nice to be able to show content, but it's better to show content as a result of some user input. This recipe will demonstrate the different ways to receive this input and react to it. As in the recipes before, we'll make use of the library.book model.

How to do it…

First, we'll add a route that expects a traditional parameter with a book's ID to show some details about it. Then, we'll do the same, but we'll incorporate our parameter into the path itself:

1. Add a path that expects a book's ID as a parameter:

    @http.route('/my_module/book_details', type='http', auth='none')
    def book_details(self, book_id):
        record = request.env['library.book'].sudo().browse(
            int(book_id))
        return u'<html><body><h1>%s</h1>Authors: %s' % (
            record.name,
            u', '.join(record.author_ids.mapped(
                'name')) or 'none',
        )

2. Add a path where we can pass the book's ID in the path:

    @http.route("/my_module/book_details/<model('library.book'):book>",
                type='http', auth='none')
    def book_details_in_path(self, book):
        return self.book_details(book.id)

If you point your browser to /my_module/book_details?book_id=1, you should see a detail page of the book with ID 1. If this doesn't exist, you'll receive an error page. The second handler allows you to go to /my_module/book_details/1 and view the same page.

How it works…

By default, Odoo (actually werkzeug) intermingles with GET and POST parameters and passes them as keyword arguments to your handler. So by simply declaring your function as expecting a parameter called book_id, you introduce this parameter as either a GET (the parameter in the URL) or POST (usually passed by forms with your handler as action) parameter. Given that we didn't add a default value for this parameter, the runtime will raise an error if you try to access this path without setting the parameter.

The second example makes use of the fact that in a werkzeug environment, most paths are virtual anyway. So we can simply define our path as containing some input. In this case, we say we expect the ID of a library.book as the last component of the path. The name after the colon is the name of a keyword argument. Our function will be called with this parameter passed as a keyword argument. Here, Odoo takes care of looking up this ID and delivering a browse record, which of course only works if the user accessing this path has appropriate permissions. Given that book is a browse record, we can simply recycle the first example's function by passing book.id as the parameter book_id to give out the same content.

There's more…

Defining parameters within the path is a functionality delivered by werkzeug, which is called converters.
The model converter is added by Odoo, which also defines the converter models, which accepts a comma-separated list of IDs and passes a record set containing those IDs to your handler. The beauty of converters is that the runtime coerces the parameters to the expected type, while you're on your own with normal keyword parameters. These are delivered as strings and you have to take care of the necessary type conversions yourself, as seen in the first example. Built-in werkzeug converters include int, float, and string, but also more intricate ones such as path, any, or uuid. You can look up their semantics at http://werkzeug.pocoo.org/docs/0.11/routing/#builtin-converters.

See also

Odoo's custom converters are defined in ir_http.py in the base module and registered in the _get_converters method of ir.http. As an exercise, you can create your own converter that allows you to visit the /my_module/book_details/Odoo+cookbook page to receive the details of this book (if you added it to your library before).

Modify an existing handler

When you install the website module, the path /website/info displays some information about your Odoo instance. In this recipe, we override this in order to change this information page's layout, but also to change what is displayed.

Getting ready

Install the website module and inspect the path /website/info. Now craft a new module that depends on website and uses the following code.

How to do it…

We'll have to adapt the existing template and override the existing handler:

1. Override the QWeb template in a file called views/templates.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <odoo>
      <template id="show_website_info"
                inherit_id="website.show_website_info">
        <xpath expr="//dl[@t-foreach='apps']" position="replace">
          <table class="table">
            <tr t-foreach="apps" t-as="app">
              <th>
                <a t-att-href="app.website">
                  <t t-esc="app.name" /></a>
              </th>
              <td><t t-esc="app.summary" /></td>
            </tr>
          </table>
        </xpath>
      </template>
    </odoo>

2. Override the handler in a file called controllers/main.py:

    from openerp import http
    from openerp.addons.website.controllers.main import Website

    class Website(Website):
        @http.route()
        def website_info(self):
            result = super(Website, self).website_info()
            result.qcontext['apps'] = result.qcontext[
                'apps'].filtered(
                    lambda x: x.name != 'website')
            return result

Now when visiting the info page, we'll only see a filtered list of installed applications, and in a table as opposed to the original definition list.

How it works…

In the first step, we override an existing QWeb template. In order to find out which one that is, you'll have to consult the code of the original handler. Usually, it will end with the following command line, which tells you that you need to override template.name:

    return request.render('template.name', values)

In our case, the handler uses a template called website.info, but this one is extended immediately by another template called website.show_website_info, so it's more convenient to override that one. Here, we replace the definition list showing installed apps with a table. In order to override the handler method, we must identify the class that defines the handler, which is openerp.addons.website.controllers.main.Website in this case. We import the class to be able to inherit from it. Now we override the method and change the data passed to the response. Note that what the overridden handler returns is a Response object and not a string of HTML, as the previous recipes did for the sake of brevity.
This object contains a reference to the template to be used and the values accessible to the template, but is only evaluated at the very end of the request. In general, there are three ways to change an existing handler:

  - If it uses a QWeb template, the simplest way of changing it is to override the template. This is the right choice for layout changes and small logic changes.
  - QWeb templates get a context passed, which is available in the response as the field qcontext. This usually is a dictionary where you can add or remove values to suit your needs. In the preceding example, we filter the list of apps to only contain apps which have a website set.
  - If the handler receives parameters, you could also preprocess those in order to have the overridden handler behave the way you want.

There's more…

As seen in the preceding section, inheritance with controllers works slightly differently than model inheritance: you actually need a reference to the base class and use Python inheritance on it. Don't forget to decorate your new handler with the @http.route decorator; Odoo uses it as a marker for which methods are exposed to the network layer. If you omit the decorator, you actually make the handler's path inaccessible. The @http.route decorator itself behaves similarly to field declarations: every value you don't set will be derived from the decorator of the function you're overriding, so we don't have to repeat values we don't want to change.

After receiving a response object from the function you override, you can do a lot more than just changing the QWeb context:

  - You can add or remove HTTP headers by manipulating response.headers.
  - If you want to render an entirely different template, you can set response.template.
  - To detect if a response is based on QWeb in the first place, query response.is_qweb.
  - The resulting HTML code is available by calling response.render().

Using the RPC API

One of Odoo's strengths is its interoperability, which is helped by the fact that basically any functionality is available via JSON-RPC 2.0 and XMLRPC. In this recipe, we'll explore how to use both of them from client code. This interface also enables you to integrate Odoo with any other application. Making functionality available via either of the two protocols on the server side is explained in the There's more section of this recipe. We'll query a list of installed modules from the Odoo instance, so that we could show a list like the one displayed in the previous recipe in our own application or website.
How to do it…

The following code is not meant to run within Odoo, but as simple scripts.

1. First, we query the list of installed modules via XMLRPC:

    #!/usr/bin/env python2
    import xmlrpclib

    db = 'odoo9'
    user = 'admin'
    password = 'admin'

    uid = xmlrpclib.ServerProxy(
        'http://localhost:8069/xmlrpc/2/common'
    ).authenticate(db, user, password, {})
    odoo = xmlrpclib.ServerProxy(
        'http://localhost:8069/xmlrpc/2/object')
    installed_modules = odoo.execute_kw(
        db, uid, password,
        'ir.module.module', 'search_read',
        [[('state', '=', 'installed')], ['name']],
        {'context': {'lang': 'fr_FR'}})
    for module in installed_modules:
        print module['name']

2. Then we do the same with JSONRPC:

    import json
    import urllib2

    db = 'odoo9'
    user = 'admin'
    password = 'admin'

    request = urllib2.Request(
        'http://localhost:8069/web/session/authenticate',
        json.dumps({
            'jsonrpc': '2.0',
            'params': {
                'db': db,
                'login': user,
                'password': password,
            },
        }),
        {'Content-type': 'application/json'})
    result = urllib2.urlopen(request).read()
    result = json.loads(result)
    session_id = result['result']['session_id']

    request = urllib2.Request(
        'http://localhost:8069/web/dataset/call_kw',
        json.dumps({
            'jsonrpc': '2.0',
            'params': {
                'model': 'ir.module.module',
                'method': 'search_read',
                'args': [
                    [('state', '=', 'installed')],
                    ['name'],
                ],
                'kwargs': {'context': {'lang': 'fr_FR'}},
            },
        }),
        {
            'X-Openerp-Session-Id': session_id,
            'Content-type': 'application/json',
        })
    result = urllib2.urlopen(request).read()
    result = json.loads(result)
    for module in result['result']:
        print module['name']

Both code snippets will print a list of installed modules, and because they pass a context that sets the language to French, the list will be in French where translations are available.

How it works…

Both snippets call the function search_read, which is very convenient because you can specify a search domain on the model you call, pass a list of fields you want to be returned, and receive the result in one request. In older versions of Odoo, you had to call search first to receive a list of IDs and then call read to actually read the data. search_read returns a list of dictionaries, with the keys being the names of the fields requested and the values the record's data. The ID field will always be transmitted, no matter whether you requested it or not.

Now, we need to look at the specifics of the two protocols.

XMLRPC

The XMLRPC API expects a user ID and a password for every call, which is why we need to fetch this ID via the method authenticate on the path /xmlrpc/2/common. If you already know the user's ID, you can skip this step. As soon as you know the user's ID, you can call any model's method by calling execute_kw on the path /xmlrpc/2/object. This method expects the database you want to execute the function on, the user's ID and password for authentication, then the model you want to call your function on, and then the function's name. The next two mandatory parameters are a list of positional arguments to your function, and a dictionary of keyword arguments.

JSONRPC

Don't be distracted by the size of the code example; that's because Python doesn't have built-in support for JSONRPC. As soon as you've wrapped the urllib calls in some helper functions, the example will be as concise as the XMLRPC one (see the sketch at the end of this article). As JSONRPC is stateful, the first thing we have to do is to request a session at /web/session/authenticate. This function takes the database, the user's name, and their password.
The crucial part here is that we record the session ID Odoo created, which we pass in the header X-Openerp-Session-Id to /web/dataset/call_kw. The function then behaves the same as execute_kw from the XMLRPC example: we need to pass a model name and a function to call on it, then positional and keyword arguments.

There's more…

Both protocols allow you to call basically any function of your models. In case you don't want a function to be available via either interface, prepend its name with an underscore; Odoo won't expose those functions as RPC calls. Furthermore, you need to take care that your parameters, as well as the return values, are serializable for the protocol. To be sure, restrict yourself to scalar values, dictionaries, and lists. As you can do roughly the same with both protocols, it's up to you which one to use. This decision should be mainly driven by what your platform supports best. In a web context, you're generally better off with JSON, because Odoo allows JSON handlers to pass a CORS header conveniently (see the Make a path accessible from the network recipe for details). This is rather difficult with XMLRPC.

Summary

In this article, we saw how to get started with the web server architecture. Later on, we covered the routes and controllers used in the article and their authentication, how handlers consume parameters, and how to use the RPC API, namely JSON-RPC and XML-RPC.

Resources for Article:

Further resources on this subject:
  - Advanced React [article]
  - Remote Authentication [article]
  - ASP.Net Site Performance: Improving JavaScript Loading [article]
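Following up on the note in the JSONRPC section above that wrapping the urllib calls in helper functions makes the JSON-RPC client as concise as the XMLRPC one, here is a minimal sketch of such a helper. The function name json_rpc and the absence of error handling are our own illustrative choices; the endpoints and payloads are simply reused from the recipe:

    import json
    import urllib2

    def json_rpc(url, params, session_id=None):
        # Build and send one JSON-RPC 2.0 request, returning the 'result' part
        headers = {'Content-type': 'application/json'}
        if session_id:
            headers['X-Openerp-Session-Id'] = session_id
        request = urllib2.Request(url, json.dumps({
            'jsonrpc': '2.0',
            'params': params,
        }), headers)
        return json.loads(urllib2.urlopen(request).read())['result']

    # Authenticate once, then call search_read, mirroring the example above
    auth = json_rpc('http://localhost:8069/web/session/authenticate',
                    {'db': 'odoo9', 'login': 'admin', 'password': 'admin'})
    modules = json_rpc('http://localhost:8069/web/dataset/call_kw', {
        'model': 'ir.module.module',
        'method': 'search_read',
        'args': [[('state', '=', 'installed')], ['name']],
        'kwargs': {'context': {'lang': 'fr_FR'}},
    }, session_id=auth['session_id'])
    for module in modules:
        print module['name']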

Interacting with Databases through the Java Persistence API

Packt
23 Oct 2009
17 min read

We will look into:

  - Creating our first JPA entity
  - Interacting with JPA entities with the entity manager
  - Generating forms in JSF pages from JPA entities
  - Generating JPA entities from an existing database schema
  - JPA named queries and JPQL
  - Entity relationships
  - Generating complete JSF applications from JPA entities

Creating Our First JPA Entity

JPA entities are Java classes whose fields are persisted to a database by the JPA API. JPA entities are Plain Old Java Objects (POJOs); as such, they don't need to extend any specific parent class or implement any specific interface. A Java class is designated as a JPA entity by decorating it with the @Entity annotation. In order to create and test our first JPA entity, we will be creating a new web application using the JavaServer Faces framework. In this example we will name our application jpaweb. As with all of our examples, we will be using the bundled GlassFish application server.

To create a new JPA entity, we need to right-click on the project and select New | Entity Class. After doing so, NetBeans presents the New Entity Class wizard. At this point, we should specify the values for the Class Name and Package fields (Customer and com.ensode.jpaweb in our example), then click on the Create Persistence Unit... button. The Persistence Unit Name field is used to identify the persistence unit that will be generated by the wizard; it will be defined in a JPA configuration file named persistence.xml that NetBeans will automatically generate from the Create Persistence Unit wizard. The Create Persistence Unit wizard will suggest a name for our persistence unit; in most cases the default can be safely accepted.

JPA is a specification for which several implementations exist. NetBeans supports several JPA implementations including TopLink, Hibernate, KODO, and OpenJPA. Since the bundled GlassFish application server includes TopLink as its default JPA implementation, it makes sense to take this default value for the Persistence Provider field when deploying our application to GlassFish.

Before we can interact with a database from any Java EE 5 application, a database connection pool and data source need to be created in the application server. A database connection pool contains connection information that allows us to connect to our database, such as the server name, port, and credentials. The advantage of using a connection pool instead of directly opening a JDBC connection to a database is that database connections in a connection pool are never closed; they are simply allocated to applications as they need them. This results in performance improvements, since the operations of opening and closing database connections are expensive in terms of performance.

Data sources allow us to obtain a connection from a connection pool by obtaining an instance of javax.sql.DataSource via JNDI, then invoking its getConnection() method to obtain a database connection from the pool. When dealing with JPA, we don't need to directly obtain a reference to a data source; it is all done automatically by the JPA API, but we still need to indicate the data source to use in the application's persistence unit.

NetBeans comes with a few data sources and connection pools pre-configured. We could use one of these pre-configured resources for our application; however, NetBeans also allows creating these resources "on the fly", which is what we will be doing in our example. To create a new data source we need to select the New Data Source... item from the Data Source combo box.
A data source needs to interact with a database connection pool. NetBeans comes pre-configured with a few connection pools out of the box, but just like with data sources, it allows us to create a new connection pool "on demand". In order to do this, we need to select the New Database Connection... item from the Database Connection combo box.

NetBeans includes JDBC drivers for a few Relational Database Management Systems (RDBMS) such as JavaDB, MySQL, and PostgreSQL "out of the box". JavaDB is bundled with both GlassFish and NetBeans, therefore we picked JavaDB for our example. This way we avoid having to install an external RDBMS. For RDBMS systems that are not supported out of the box, we need to obtain a JDBC driver and let NetBeans know of its location by selecting New Driver from the Name combo box. We then need to navigate to the location of a JAR file containing the JDBC driver. Consult your RDBMS documentation for details.

JavaDB is installed on our workstation, therefore the server name to use is localhost. By default, JavaDB listens on port 1527, therefore that is the port we specify in the URL. We wish to connect to a database called jpaintro, therefore we specify it as the database name. Since the jpaintro database does not exist yet, we pass the attribute create=true to JavaDB; this attribute is used to create the database if it doesn't exist yet. Every JavaDB database contains a schema named APP, and each user by default uses a schema named after his/her own login name, so the easiest way to get going is to create a user named "APP" and select a password for this user.

Clicking on the Show JDBC URL checkbox reveals the JDBC URL for the connection we are setting up. The New Database Connection wizard warns us of potential security risks when choosing to let NetBeans remember the password for the database connection. Database passwords are scrambled (but not encrypted) and stored in an XML file under the .netbeans/[netbeans version]/config/Databases/Connections directory. If we follow common security practices such as locking our workstation when we walk away from it, the risks of having NetBeans remember database passwords will be minimal.

Once we have created our new data source and connection pool, we can continue configuring our persistence unit. It is a good idea to leave the Use Java Transaction APIs checkbox checked. This will instruct our JPA implementation to use the Java Transaction API (JTA) to allow the application server to manage transactions. If we uncheck this box, we will need to manually write code to manage transactions.

Most JPA implementations allow us to define a table generation strategy. We can instruct our JPA implementation to create tables for our entities when we deploy our application, to drop the tables and then regenerate them when our application is deployed, or not to create any tables at all. NetBeans allows us to specify the table generation strategy for our application by clicking the appropriate value in the Table Generation Strategy radio button group. When working with a new application, it is a good idea to select the Drop and Create table generation strategy. This will allow us to add, remove, and rename fields in our JPA entity at will without having to make the same changes in the database schema. When selecting this table generation strategy, tables in the database schema will be dropped and recreated, therefore any data previously persisted will be lost.
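Behind the scenes, these wizard choices end up in the persistence.xml file. As a rough sketch, assuming TopLink Essentials as the provider and a data source registered under the JNDI name jdbc/jpaintro (both the provider class and the data source name here are assumptions based on the choices described above, not values taken from the wizard), the generated file might look something like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <persistence version="1.0"
                 xmlns="http://java.sun.com/xml/ns/persistence">
      <persistence-unit name="jpawebPU" transaction-type="JTA">
        <!-- TopLink Essentials, GlassFish's default JPA provider -->
        <provider>oracle.toplink.essentials.PersistenceProvider</provider>
        <!-- hypothetical JNDI name of the data source created above -->
        <jta-data-source>jdbc/jpaintro</jta-data-source>
        <properties>
          <!-- corresponds to the "Drop and Create" table generation strategy -->
          <property name="toplink.ddl-generation"
                    value="drop-and-create-tables"/>
        </properties>
      </persistence-unit>
    </persistence>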
Once we have created our new data source, database connection, and persistence unit, we are ready to create our new JPA entity. We can do so by simply clicking on the Finish button. At this point NetBeans generates the source for our JPA entity.

JPA allows the primary key field of a JPA entity to map to any column type (VARCHAR, NUMBER). It is best practice to have a numeric surrogate primary key, that is, a primary key that serves only as an identifier and has no business meaning in the application. Selecting the default primary key type of Long will allow a wide range of values to be available for the primary keys of our entities.

    package com.ensode.jpaweb;

    import java.io.Serializable;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    @Entity
    public class Customer implements Serializable {

        private static final long serialVersionUID = 1L;
        private Long id;

        public void setId(Long id) {
            this.id = id;
        }

        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        public Long getId() {
            return id;
        }

        //Other generated methods (hashCode(), equals() and
        //toString()) omitted for brevity.
    }

As we can see, a JPA entity is a standard Java object. There is no need to extend any special class or implement any special interface. What differentiates a JPA entity from other Java objects are a few JPA-specific annotations.

The @Entity annotation is used to indicate that our class is a JPA entity. Any object we want to persist to a database via JPA must be annotated with this annotation. The @Id annotation is used to indicate which field in our JPA entity is its primary key. The primary key is a unique identifier for our entity; no two entities may have the same value for their primary key field. This annotation can be placed just above the getter method for the primary key field, which is the strategy that the NetBeans wizard follows. It is also correct to specify the annotation right above the field declaration. The @Entity and @Id annotations are the bare minimum that a class needs in order to be considered a JPA entity.

JPA allows primary keys to be automatically generated. In order to take advantage of this functionality, the @GeneratedValue annotation can be used. As we can see, the NetBeans-generated JPA entity uses this annotation. This annotation is used to indicate the strategy to use to generate primary keys. All possible primary key generation strategies are listed below:

    GenerationType.AUTO: Indicates that the persistence provider will automatically select a primary key generation strategy. Used by default if no primary key generation strategy is specified.
    GenerationType.IDENTITY: Indicates that an identity column in the database table the JPA entity maps to must be used to generate the primary key value.
    GenerationType.SEQUENCE: Indicates that a database sequence should be used to generate the entity's primary key value.
    GenerationType.TABLE: Indicates that a database table should be used to generate the entity's primary key value.

In most cases, the GenerationType.AUTO strategy works properly, therefore it is almost always used. For this reason the New Entity Class wizard uses this strategy. When using the sequence or table generation strategies, we might have to indicate the sequence or table used to generate the primary keys.
These can be specified by using the @SequenceGenerator and @TableGenerator annotations, respectively. Consult the Java EE 5 JavaDoc at http://java.sun.com/javaee/5/docs/api/ for details. For further knowledge on primary key generation strategies you can refer to EJB 3 Developer Guide by Michael Sikora, another book by Packt Publishing (http://www.packtpub.com/developer-guide-for-ejb3/book).

Adding Persistent Fields to Our Entity

At this point, our JPA entity contains a single field, its primary key. Admittedly this is not very useful; we need to add a few fields to be persisted to the database.

    package com.ensode.jpaweb;

    import java.io.Serializable;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    @Entity
    public class Customer implements Serializable {

        private static final long serialVersionUID = 1L;
        private Long id;
        private String firstName;
        private String lastName;

        public void setId(Long id) {
            this.id = id;
        }

        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        public Long getId() {
            return id;
        }

        public String getFirstName() {
            return firstName;
        }

        public void setFirstName(String firstName) {
            this.firstName = firstName;
        }

        public String getLastName() {
            return lastName;
        }

        public void setLastName(String lastName) {
            this.lastName = lastName;
        }

        //Additional methods omitted for brevity
    }

In this modified version of our JPA entity, we added two fields to be persisted to the database: firstName will be used to store the user's first name, and lastName will be used to store the user's last name. JPA entities need to follow standard JavaBean coding conventions. This means that they must have a public constructor that takes no arguments (one is automatically generated by the Java compiler if we don't specify any other constructors), and all fields must be private and accessed through getter and setter methods.

Automatically Generating Getters and Setters

In NetBeans, getter and setter methods can be generated automatically. Simply declare new fields as usual, then use the "insert code" keyboard shortcut (default is Alt+Insert), select Getter and Setter from the resulting pop-up window, click on the check box next to the class name to select all fields, and click on the Generate button.

Before we can use JPA to persist our entity's fields into our database, we need to write some additional code.

Creating a Data Access Object (DAO)

It is a good idea to follow the DAO design pattern whenever we write code that interacts with a database. The DAO design pattern keeps all database access functionality in DAO classes. This has the benefit of creating a clear separation of concerns, leaving other layers in our application, such as the user interface logic and the business logic, free of any persistence logic.

There is no special procedure in NetBeans to create a DAO. We simply follow the standard procedure to create a new class by selecting File | New, then selecting Java as the category and Java Class as the file type, then entering a name and a package for the class. In our example, we will name our class CustomerDAO and place it in the com.ensode.jpaweb package. At this point, NetBeans creates a very simple class containing only the package and class declarations.

To take complete advantage of Java EE features such as dependency injection, we need to make our DAO a JSF managed bean.
This can be accomplished by simply opening faces-config.xml, clicking its XML tab, then right-clicking on it and selecting JavaServer Faces | Add Managed Bean. We get the Add Managed Bean dialog as seen here:

We need to enter a name, fully qualified name, and scope for our managed bean (which, in our case, is our DAO), then click on the Add button. This action results in our DAO being declared as a managed bean in our application's faces-config.xml configuration file:

    <managed-bean>
      <managed-bean-name>CustomerDAO</managed-bean-name>
      <managed-bean-class>
        com.ensode.jpaweb.CustomerDAO
      </managed-bean-class>
      <managed-bean-scope>session</managed-bean-scope>
    </managed-bean>

We could at this point start writing our JPA code manually, but with NetBeans there is no need to do so; we can simply right-click on our code and select Persistence | Use Entity Manager, and most of the work is automatically done for us. Here is how our code looks after doing this trivial procedure:

    package com.ensode.jpaweb;

    import javax.annotation.Resource;
    import javax.naming.Context;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @PersistenceContext(name = "persistence/LogicalName",
            unitName = "jpawebPU")
    public class CustomerDAO {

        @Resource
        private javax.transaction.UserTransaction utx;

        protected void persist(Object object) {
            try {
                Context ctx = (Context) new javax.naming.InitialContext().
                        lookup("java:comp/env");
                utx.begin();
                EntityManager em = (EntityManager) ctx.lookup(
                        "persistence/LogicalName");
                em.persist(object);
                utx.commit();
            } catch (Exception e) {
                java.util.logging.Logger.getLogger(
                        getClass().getName()).log(
                        java.util.logging.Level.SEVERE,
                        "exception caught", e);
                throw new RuntimeException(e);
            }
        }
    }

All highlighted code is automatically generated by NetBeans. The main thing NetBeans does here is add a method that will automatically insert a new row in the database, effectively persisting our entity's properties. As we can see, NetBeans automatically generates all necessary import statements. Additionally, our new class is automatically decorated with the @PersistenceContext annotation. This annotation allows us to declare that our class depends on an EntityManager (we'll discuss EntityManager in more detail shortly). The value of its name attribute is a logical name we can use when doing a JNDI lookup for our EntityManager. NetBeans by default uses persistence/LogicalName as the value for this attribute.

The Java Naming and Directory Interface (JNDI) is an API we can use to obtain resources, such as database connections and JMS queues, from a directory service. The value of the unitName attribute of the @PersistenceContext annotation refers to the name we gave our application's persistence unit.

NetBeans also creates a new instance variable of type javax.transaction.UserTransaction. This variable is needed since all JPA code must be executed in a transaction. UserTransaction is part of the Java Transaction API (JTA). This API allows us to write code that is transactional in nature. Notice that the UserTransaction instance variable is decorated with the @Resource annotation. This annotation is used for dependency injection: in this case an instance of a class of type javax.transaction.UserTransaction will be instantiated automatically at run-time, without having to do a JNDI lookup or explicitly instantiate the class. Dependency injection is a new feature of Java EE 5 not present in previous versions of J2EE, but one that was available and made popular in the Spring framework.
With standard J2EE code, it was necessary to write boilerplate JNDI lookup code very frequently in order to obtain resources. To alleviate this situation, Java EE 5 made dependency injection part of the standard.

The next thing we see is that NetBeans added a persist method that will persist a JPA entity, automatically inserting a new row containing our entity's fields into the database. As we can see, this method takes an instance of java.lang.Object as its single parameter. The reason for this is that the method can be used to persist any JPA entity (although in our example, we will use it to persist only instances of our Customer entity).

The first thing the generated method does is obtain an instance of javax.naming.InitialContext by doing a JNDI lookup on java:comp/env. This JNDI name is the root context for all Java EE 5 components. The next thing the method does is initiate a transaction by invoking utx.begin(). Notice that since the value of the utx instance variable was injected via dependency injection (by simply decorating its declaration with the @Resource annotation), there is no need to initialize this variable.

Next, the method does a JNDI lookup to obtain an instance of javax.persistence.EntityManager. This class contains a number of methods to interact with the database. Notice that the JNDI name used to obtain an EntityManager matches the value of the name attribute of the @PersistenceContext annotation. Once an instance of EntityManager is obtained from the JNDI lookup, we persist our entity's properties by simply invoking the persist() method on it, passing the entity as a parameter to this method. At this point, the data in our JPA entity is inserted into the database.

In order for our database insert to take effect, we must commit our transaction, which is done by invoking utx.commit(). It is always a good idea to check for exceptions when dealing with JPA code. The generated method does this, and if an exception is caught, it is logged and a RuntimeException is thrown. Throwing a RuntimeException has the effect of rolling back our transaction automatically, while letting the invoking code know that something went wrong in our method. The UserTransaction class has a rollback() method that we can use to roll back our transaction without having to throw a RuntimeException (a sketch of this variant is shown below).

At this point, we have all the code we need to persist our entity's properties in the database. Now we need to write some additional code for the user interface part of our application. NetBeans can generate a rudimentary JSF page that will help us with this task.
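For completeness, here is a minimal sketch of the explicit-rollback variant just mentioned. The method name persistWithExplicitRollback is our own, not generated by NetBeans, and the sketch assumes the method is added to the same CustomerDAO class, so the utx field and the imports shown earlier are available:

protected void persistWithExplicitRollback(Object object) throws Exception {
    try {
        Context ctx = (Context) new javax.naming.InitialContext()
                .lookup("java:comp/env");
        utx.begin();
        EntityManager em = (EntityManager) ctx
                .lookup("persistence/LogicalName");
        em.persist(object);
        utx.commit();
    } catch (Exception e) {
        //Undo the transaction explicitly instead of relying on a
        //RuntimeException to trigger the automatic rollback
        utx.rollback();
        throw e;
    }
}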
Designing your very own ASP.NET MVC Application

Packt
28 Oct 2009
8 min read
When you download and install the ASP.NET MVC framework SDK, a new project template is installed in Visual Studio—the ASP.NET MVC project template. This article by Maarten Balliauw describes how to use this template. We will briefly touch on all aspects of ASP.NET MVC by creating a new ASP.NET MVC web application based on this Visual Studio template. Besides the view, controller, and model, this article also illustrates some new concepts: ViewData (a means of transferring data between controller and view), routing (the link between a web browser URL and a specific action method inside a controller), and unit testing of a controller. (For more resources on .NET, see here.)

Creating a new ASP.NET MVC web application project

Before we start creating an ASP.NET MVC web application, make sure that you have installed the ASP.NET MVC framework SDK from http://www.asp.net/mvc. After installation, open Visual Studio 2008 and select the menu option File | New | Project. The following screenshot will be displayed. Make sure that you select .NET Framework 3.5 as the target framework. You will notice a new project template called ASP.NET MVC Web Application. This project template creates the default project structure for an ASP.NET MVC application.

After clicking on OK, Visual Studio will ask you if you want to create a test project. This dialog offers the choice between several unit testing frameworks that can be used for testing your ASP.NET MVC application. You can decide for yourself if you want to create a unit testing project right now—you can also add a testing project later on. Letting the ASP.NET MVC project template create a test project now is convenient because it creates all of the project references and contains an example unit test, although this is not required. For this example, continue by adding the default unit test project.

What's inside the box?

After the ASP.NET MVC project has been created, you will notice a default folder structure. There's a Controllers folder, a Models folder, a Views folder, as well as a Content folder and a Scripts folder. ASP.NET MVC comes with the convention that these folders (and namespaces) are used for locating the different building blocks of an ASP.NET MVC application. The Controllers folder obviously contains all of the controller classes; the Models folder contains the model classes; while the Views folder contains the view pages. Content will typically contain web site content such as images and stylesheet files, and Scripts will contain all of the JavaScript files used by the web application. By default, the Scripts folder contains some JavaScript files required for the use of Microsoft AJAX or jQuery.

These building blocks are located during the request life cycle. One of the first steps in the ASP.NET MVC request life cycle is mapping the requested URL to the correct controller action method. This process is referred to as routing. A default route is initialized in the Global.asax file and describes to the ASP.NET MVC framework how to handle a request.
Double-clicking on the Global.asax file in the MvcApplication1 project will display the following code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

namespace MvcApplication1
{
    public class GlobalApplication : System.Web.HttpApplication
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Default",                                    // Route name
                "{controller}/{action}/{id}",                 // URL with parameters
                new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
            );
        }

        protected void Application_Start()
        {
            RegisterRoutes(RouteTable.Routes);
        }
    }
}

In the Application_Start() event handler, which is fired whenever the application is compiled or the web server is restarted, a route table is registered. The default route is named Default, and responds to a URL in the form of http://www.example.com/{controller}/{action}/{id}. The variables between { and } are populated with actual values from the request URL, or with the default values if no override is present in the URL. This default route maps to the Home controller and to the Index action method, according to the default routing parameters. No additional routes are required: by default, all possible URLs can be mapped through this default route.

It is also possible to create our own routes. For example, let's map the URL http://www.example.com/Employee/Maarten to the Employee controller, the Show action, and the firstname parameter. The following code snippet can be inserted in the Global.asax file we've just opened. Because the ASP.NET MVC framework uses the first matching route, this code snippet should be inserted above the default route; otherwise the route will never be used.

routes.MapRoute(
    "EmployeeShow",          // Route name
    "Employee/{firstname}",  // URL with parameters
    new {                    // Parameter defaults
        controller = "Employee",
        action = "Show",
        firstname = ""
    }
);

Now, let's add the necessary components for this route. First of all, create a class named EmployeeController in the Controllers folder. You can do this by adding a new item to the project and selecting the MVC Controller Class template located under the Web | MVC category. Remove the Index action method, and replace it with a method or action named Show. This method accepts a firstname parameter and passes the data into the ViewData dictionary. This dictionary will be used by the view to display data. The EmployeeController class will pass an Employee object to the view. This Employee class should be added in the Models folder (right-click on this folder and then select Add | Class from the context menu).
Here's the code for the Employee class:

namespace MvcApplication1.Models
{
    public class Employee
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
    }
}

After adding the EmployeeController and Employee classes, the ASP.NET MVC project now appears as shown in the following screenshot:

The EmployeeController class now looks like this:

using System.Web.Mvc;
using MvcApplication1.Models;

namespace MvcApplication1.Controllers
{
    public class EmployeeController : Controller
    {
        public ActionResult Show(string firstname)
        {
            if (string.IsNullOrEmpty(firstname))
            {
                ViewData["ErrorMessage"] = "No firstname provided!";
            }
            else
            {
                Employee employee = new Employee
                {
                    FirstName = firstname,
                    LastName = "Example",
                    Email = firstname + "@example.com"
                };
                ViewData["FirstName"] = employee.FirstName;
                ViewData["LastName"] = employee.LastName;
                ViewData["Email"] = employee.Email;
            }
            return View();
        }
    }
}

The action method we've just created can be requested by a user via a URL—in this case, something similar to http://www.example.com/Employee/Maarten. This URL is mapped to the action method by the route we've created before. By default, any public action method (that is, a method in a controller class) can be requested using the default routing scheme. If you want to avoid a method from being requested, simply make it private or protected, or if it has to be public, add a [NonAction] attribute to the method.

Note that we are returning an ActionResult (created by the View() method), which can be a view-rendering command, a page redirect, a JSON result, a string, or any other custom class implementation inheriting ActionResult that you want to return. Returning an ActionResult is not necessary. The controller can write content directly to the response stream if required, but this would be breaking the MVC pattern—the controller should never be responsible for the actual content of the response that is being returned.

Next, create a Show.aspx page in the Views | Employee folder. You can create a view by adding a new item to the project and selecting the MVC View Content Page template, located under the Web | MVC category, as we want this view to render in a master page (located in Views | Shared). There is an alternative way to create a view related to an action method, which will be covered later in this article. In the view, you can display employee information or display an error message if an employee is not found. Add the following code to the Show.aspx page:

<%@ Page Title="" Language="C#"
    MasterPageFile="~/Views/Shared/Site.Master"
    AutoEventWireup="true"
    Inherits="System.Web.Mvc.ViewPage" %>
<asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
    <% if (ViewData["ErrorMessage"] != null) { %>
        <h1><%=ViewData["ErrorMessage"]%></h1>
    <% } else { %>
        <h1><%=ViewData["FirstName"]%> <%=ViewData["LastName"]%></h1>
        <p>
            E-mail: <%=ViewData["Email"]%>
        </p>
    <% } %>
</asp:Content>

If the ViewData, set by the controller, is given an ErrorMessage, then the ErrorMessage is displayed on the resulting web page. Otherwise, the employee details are displayed. Press the F5 button on your keyboard to start the development web server. Alter the URL in your browser to something ending in /Employee/Your_Name_Here, and see the action method and the view we've just created in action.
Working with a Liferay User / User Group / Organization

Packt
04 Jun 2015
23 min read
In this article by Piotr Filipowicz and Katarzyna Ziółkowska, authors of the book Liferay 6.x Portal Enterprise Intranets Cookbook, we will cover the basic functionalities that allow us to manage the structure and users of the intranet. In this article, we will cover the following topics:

- Managing an organization structure
- Creating a new user group
- Adding a new user
- Assigning users to organizations
- Assigning users to user groups
- Exporting users

(For more resources related to this topic, see here.)

The first step in creating an intranet, beyond answering the question of who the users will be, is to determine its structure. The structure of the intranet is often a derivative of the organizational structure of the company or institution. Liferay Portal CMS provides several tools that allow mapping of a company's structure in the system. The hierarchy is built from organizations that match the functional or localization departments of the company. Each organization represents one department or localization and assembles users who represent employees of these departments. However, sometimes there are other groups of employees in the company. These groups exist beyond the company's organizational structure, and can be reflected in the system by the User Groups functionality.

Managing an organization structure

Building an organizational structure in Liferay resembles the process of managing folders on a computer drive. An organization may have its suborganizations and—except for a first-level organization—it can at the same time be a suborganization of another one. This folder-like mechanism allows you to create a tree structure of organizations.

Let's imagine that we are obliged to create an intranet for a software development company. The company's headquarters is located in London. There are also two other offices, in Liverpool and Glasgow. The company is divided into finance, marketing, sales, IT, human resources, and legal departments. Employees from Glasgow and Liverpool belong to the IT department.

How to do it…

In order to create the structure described previously, these are the steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the Add button.
3. Choose the type of organization you want to create (in our example, it will be a regular organization called software development company, but it is also possible to choose a location).
4. Provide a name for the top-level organization.
5. Choose the parent organization (if a top-level organization is created, this must be skipped).
6. Click on the Save button:
7. Click on the Change button and upload a file with a graphic representation of your company (for example, a logo).
8. Use the right column menu to navigate to the data sections you want to fill in with the information.
9. Click on the Save button.
10. Go back to the Users and Organizations list by clicking on the back icon (the left-arrow icon next to the Edit Software Development Company header).
11. Click on the Actions button, located near the name of the newly created organization.
12. Choose the Add Regular Organization option.
13. Provide a name for the organization (in our example, it is IT).
14. Click on the Save button.
15. Go back to the Users and Organizations list by clicking on the back icon (the left-arrow icon next to the Edit IT header).
16. Click on the Actions button, located near the name of the newly created organization (in our case, it is IT).
17. Choose the Add Location option.
18. Provide a name for the organization (for instance, IT Liverpool).
19. Provide a country.
20. Provide a region (if available).
21. Click on the Save button.

How it works…

Let's take a look at what we did throughout the previous recipe. In steps 1 through 6, we created a new top-level organization called software development company. With steps 7 through 9, we defined a set of attributes of the newly created organization. Starting from step 11, we created suborganizations: a standard organization (IT) and its location (IT Liverpool).

Creating an organization

There are two types of organizations: regular organizations and locations. The regular organization provides the possibility to create a multilevel structure, each unit of which can have parent organizations and suborganizations (there is one exception: the top-level organization cannot have any parent organizations). The location is a special kind of organization that allows us to provide some additional data, such as country and region. However, it does not enable us to create suborganizations. When creating a tree of organizations, it is possible to combine regular organizations and locations, where, for instance, the top-level organization will be a regular organization and both locations and regular organizations will be used as child organizations. When creating a new organization, it is very important to choose the organization type wisely, because it is the only organization parameter which cannot be modified later.

As was described previously, organizations can be arranged in a tree structure. The position of an organization in the tree is determined by the parent organization parameter, which is set by creating a new organization or by editing an existing one. If the parent organization is not set, a top-level organization is always created. There are two ways of creating a suborganization. It is possible to add a new organization by using the Add button and choosing a parent organization manually. The other way is to go to a specific organization's action menu and choose the Add Regular Organization action. When creating a new organization using this option, the parent organization parameter will be set automatically.

Setting attributes

Just like its counterpart in reality, every organization in Liferay has a set of attributes that are grouped and can be modified through the organization profile form. This form is available after clicking on the Edit button from the organization's action list (see the There's more… section). All the available attributes are divided into the following groups:

- The ORGANIZATION INFORMATION group, which contains the following sections:
  - The Details section, which allows us to change the organization name, parent organization, country, or region (available for locations only). The name of the organization is the only required organization parameter. It is used by the search mechanism to search for organizations. It is also a part of the URL address of the organization's sites.
  - The Organization Sites section, which allows us to enable the private and public pages of the organization's website.
  - The Categorization section, which provides tags and categories. They can be assigned to an organization.
- IDENTIFICATION, which groups the Addresses, Phone Numbers, Additional Email Addresses, Websites, and Services sections.
- MISCELLANEOUS, which consists of:
  - The Comments section, which allows us to manage an organization's comments.
  - The Reminder Queries section, in which reminder queries for different languages can be set.
  - The Custom Fields section, which provides a tool to manage the values of custom attributes defined for the organization.

Customizing an organization's functionality

Liferay provides the possibility to customize an organization's functionality. In the portal.properties file located in the portal-impl/src folder, there is a section called Organizations. All these settings can be overridden in the portal-ext.properties file. We mentioned that a top-level organization cannot have any parent organizations. If we look deeper into the portal settings, we can dig out the following properties:

organizations.rootable[regular-organization]=true
organizations.rootable[location]=false

These properties determine which type of organization can be created as a root organization. In many cases, users want to add a new organization type. To achieve this goal, it is necessary to set a few properties that describe the new type:

organizations.types=regular-organization,location,my-organization
organizations.rootable[my-organization]=false
organizations.children.types[my-organization]=location
organizations.country.enabled[my-organization]=false
organizations.country.required[my-organization]=false

The first property defines a list of available types. The second one denies the possibility to create an organization of this type as a root. The next one specifies a list of types that we can create as children. In our case, this is only the location type. The last two properties turn off the country list in the creation process. This option is useful when the location is not important.

Another interesting feature is the ability to customize an organization's profile form. It is possible to indicate which sections are available on the creation form and which are available on the modification form. The following properties aggregate this feature:

organizations.form.add.main=details,organization-site
organizations.form.add.identification=
organizations.form.add.miscellaneous=

organizations.form.update.main=details,organization-site,categorization
organizations.form.update.identification=addresses,phone-numbers,additional-email-addresses,websites,services
organizations.form.update.miscellaneous=comments,reminder-queries,custom-fields

There's more…

It is also possible to modify an existing organization and its attributes and to manage its members using the actions available in the organization's Actions menu. There are several possible actions that can be performed on an organization:

- The Edit action allows us to modify the attributes of an organization.
- The Manage Site action redirects the user to the Site Settings section in Control Panel and allows us to manage the organization's public and private sites (if the organization site has already been created).
- The Assign Organization Roles action allows us to assign organization roles to members of an organization.
- The Assign Users action allows us to assign users already existing in the Liferay database to the specific organization.
- The Add User action allows us to create a new user, who will be automatically assigned to this specific organization.
- The Add Regular Organization action enables us to create a new child regular organization (the current organization will be automatically set as the parent organization of the new one).
- The Add Location action enables us to create a new location (the current organization will be automatically set as the parent organization of the new one).
- The Delete action allows us to remove an organization. When removing an organization, all pages with portlets and content are also removed. An organization cannot be removed if there are suborganizations or users assigned to it.

In order to edit an organization, assign or add users, create a new suborganization (regular organization or location), or delete an organization, perform the following steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the Actions button, located near the name of the organization you want to modify.
3. Click on the name of the chosen action.

Creating a new user group

Sometimes, in addition to the hierarchy within the company, there are other groups of people linked by common interests or occupations, such as people working on a specific project, people occupying the same post, and so on. Such groups in Liferay are represented by user groups. This functionality is similar to an LDAP user group, where it is possible to set group permissions. One user can be assigned to many user groups.

How to do it…

In order to create a new user group, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | User Groups.
2. Click on the Add button.
3. Provide the Name (required) and Description of the user group.
4. Leave the default values in the User Group Site section.
5. Click on the Save button.

How it works…

The user groups functionality allows us to create a collection of users and provide them with a public and/or private site containing a bunch of tools for collaboration. Unlike the organization, the user group cannot be used to produce a multilevel structure. It enables us to create non-hierarchical groups of users, which can be used by other functionalities. For example, a user group can be used as an additional information-targeting tool for the Announcements portlet, which presents short messages sent by authorized users (the Announcements portlet allows us to direct a message to all users from a specific organization or user group). It is also possible to set permissions on a user group and decide which actions can be performed by which roles within this particular user group.

It is worth noting that user groups can assemble users who are already members of organizations. This mechanism is often used when, aside from the company's organizational structure, there exist other groups of people who need a common place to store data or exchange information.

There's more…

It is also possible to modify an existing user group and its attributes and to manage its members using the actions available in the user group's Actions menu. There are several possible actions that can be performed on a user group.
They are as follows:

- The Edit action allows us to modify the attributes of a user group.
- The Permissions action allows us to decide which roles can assign members of this user group, delete the user group, manage announcements, set permissions, and update or view the user group.
- The Manage Site Pages action redirects the user to the site settings section in Control Panel and allows us to manage the user group's public and private sites.
- The Go to the Site's Public Pages action opens the user group's public pages in a new window (if any public pages of the user group site have been created).
- The Go to the Site's Private Pages action opens the user group's private pages in a new window (if any private pages of the user group site have been created).
- The Assign Members action allows us to assign users already existing in the Liferay database to this specific user group.
- The Delete action allows us to delete a user group. A user group cannot be removed if there are users assigned to it.

In order to edit a user group, set permissions, assign members, manage site pages, or delete a user group, perform these steps:

1. Go to Admin | Control Panel | Users | User Groups.
2. Click on the Actions button, located near the name of the user group you want to modify:
3. Click on the name of the chosen action.

Adding a new user

Each system is created for users. Liferay Portal CMS provides a few different ways of adding users to the system, which can be enabled or disabled depending on the requirements. The first way is to let users create their own accounts via the Create Account form. This functionality allows all users who can enter the site containing the form to register and gain access to the designated content of the website. In this case, the system automatically assigns the default user account parameters, which determine the range of activities that may be carried out by them in the system. The second solution (which we present in this recipe) is to reserve user account creation for the administrators, who decide what parameters should be assigned to each account.

How to do it…

To add a new user, you need to follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the Add button.
3. Choose the User option.
4. Fill in the form by providing the user's details in the Email Address (required), Title, First Name (required), Middle Name, Last Name, Suffix, Birthday, and Job Title fields (if the Autogenerated User Screen Names option in the Portal Settings | Users section is disabled, the screen name field will be available):
5. Click on the Save button:
6. Using the right column menu, navigate to the data sections you want to fill in with the information.
7. Click on the Save button.

How it works…

In steps 1 through 5, we created a new user. With steps 6 and 7, we defined a set of attributes of the newly created user. This user is active and can already perform activities according to their memberships and roles. To understand all the mechanisms that influence the user's possible behavior in the system, we have to take a deeper look at these attributes.

User as a member of organizations, user groups, and sites

The first and most important thing to know about users is that they can be members of organizations, user groups, and sites. The range of activities performed by users within each organization, user group, or site they belong to is determined by the roles assigned to them. All the roles must be assigned for each user of an organization and site individually.
This means it is possible, for instance, to make a user the administrator of one organization and only a power user of another.

User attributes

Each user in Liferay has a set of attributes that are grouped and can be modified through the user profile form. This form is available after clicking on the Edit button from the user's actions list (see the There's more… section). All the available attributes are divided into the following groups:

- USER INFORMATION, which contains the following sections:
  - The Details section enables us to provide basic user information, such as Screen Name, Email Address, Title, First Name, Middle Name, Last Name, Suffix, Birthday, Job Title, and Avatar.
  - The Password section allows us to set a new password or force a user to change their current password.
  - The Organizations section enables us to choose the organizations of which the user is a member.
  - The Sites section enables us to choose the sites of which the user is a member.
  - The User Groups section enables us to choose the user groups of which the user is a member.
  - The Roles tab allows us to assign user roles.
  - The Personal Site section provides access to the user's public and private sites.
  - The Categorization section provides tags and categories, which can be assigned to a user.
- IDENTIFICATION, which allows us to set additional user information, such as Addresses, Phone Numbers, Additional Email Addresses, Websites, Instant Messenger, Social Network, SMS, and OpenID.
- MISCELLANEOUS, which contains the following sections:
  - The Announcements section allows us to set the delivery options for alerts and announcements.
  - The Display Settings section covers the Language, Time Zone, and Greeting text options.
  - The Comments section allows us to manage the user's comments.
  - The Custom Fields section provides a tool to manage the values of custom attributes defined for the user.

User site

As was mentioned earlier, each user in Liferay may have access to different kinds of sites: organization sites, user group sites, and standalone sites. In addition to these, however, users may also have their own public and private sites, which can be managed by them. The user's public and private sites can be reached from the user's menu located on the dockbar (the My Profile and My Dashboard links). It is also possible to enter these sites using their addresses, which are /web/username/home and /user/username/home, respectively.

Customizing users

Liferay gives us a whole bunch of settings in portal.properties under the Users section. If you want to override some of the properties, put them into the portal-ext.properties file. It is possible to prevent users from being deleted by setting the following property:

users.delete=false

As in the case of organizations, there is a functionality that lets us customize the sections on the creation and modification forms:

users.form.add.main=details,organizations,personal-site
users.form.add.identification=
users.form.add.miscellaneous=

users.form.update.main=details,password,organizations,sites,user-groups,roles,personal-site,categorization
users.form.update.identification=addresses,phone-numbers,additional-email-addresses,websites,instant-messenger,social-network,sms,open-id
users.form.update.miscellaneous=announcements,display-settings,comments,custom-fields

There are many other properties, but we will not discuss all of them. In portal.properties, located in the portal-impl/src folder, under the Users section, it is possible to find all the settings, and every line is documented by a comment.
There's more…

Each user in the system can be active or inactive. An active user can log into their user account and use all the resources available to them within their roles and memberships. An inactive user cannot enter their account or access the places and perform the activities that are reserved for authorized and authenticated users only. It is worth noticing that active users cannot be deleted. In order to remove a user from Liferay, you need to deactivate them first.

To deactivate a user, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Go to the All Users tab.
3. Find the active user you want to deactivate.
4. Click on the Actions button located near the name of the user.
5. Click on the Deactivate button.
6. Confirm this action by clicking on the Ok button.

To activate a user, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Go to the All Users tab.
3. Find the inactive user you want to activate.
4. Click on the Actions button located near the name of the user.
5. Click on the Activate button.

Sometimes, when using the system, users report some irregularities or get a little confused and require assistance. You then need to look at the page through the user's eyes. Liferay provides a very useful functionality that allows authorized users to impersonate another user. In order to use this functionality, perform these steps:

1. Log in as an administrator and go to Control Panel | Users | Users and Organizations.
2. Click on the Actions button located near the name of the user.
3. Click on the Impersonate user button.

See also

- For more information on managing users, refer to the Exporting users recipe from this article.

Assigning users to organizations

There are several ways a user can be assigned to an organization. It can be done by editing a user account that has already been created (see the User attributes section in the Adding a new user recipe) or by using the Assign Users action from the organization's Actions menu. In this recipe, we will show you how to assign a user to an organization using the option available in the organization's Actions menu.

Getting ready

To complete this recipe, you will need an organization and a user (refer to the Managing an organization structure and Adding a new user recipes from this article).

How to do it…

In order to assign a user to an organization from the organization menu, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the Actions button located near the name of the organization to which you want to assign the user.
3. Choose the Assign Users option.
4. Click on the Available tab.
5. Mark the user or group of users you want to assign.
6. Click on the Update Associations button.

How it works…

Each user in Liferay can be assigned to as many regular organizations as required, and to exactly one location. When a user is assigned to an organization, they appear on the list of users of that organization. They become a member of the organization and gain access to the organization's public and private pages according to the assigned roles and permissions. As was shown in this recipe, while editing the list of assigned users in the organization menu, it is possible to assign multiple users at once. It is worth noting that, by default, an administrator can only assign the users of the organizations and suborganizations that she or he can manage.
To allow any administrator of an organization to be able to assign any user to that organization, set the following property in the portal-ext.properties file:

organizations.assignment.strict=false

In many cases, when our organizations have a tree structure, it is not necessary for a member of a child organization to also be a member of the ancestral ones. To disable this implicit membership, set the following property:

organizations.membership.strict=true

See also

- For information on how to create user accounts, refer to the Adding a new user recipe from this article.
- For information on assigning users to user groups, refer to the Assigning users to a user group recipe from this article.

Assigning users to a user group

In addition to being a member of an organization, each user can be a member of one or more user groups. As a member of a user group, a user can benefit from access to the user group's sites or other information directed exclusively to its members, for instance, messages sent by the Announcements portlet. A user becomes a member of the group when they are assigned to it. This assignment can be done by editing a user account that has already been created (see the User attributes description in the Adding a new user recipe) or by using the Assign Members action from the user group's Actions menu. In this recipe, we will show you how to assign a user to a user group using the option available in the user group's Actions menu.

Getting ready

To step through this recipe, you first have to create a user group and a user (see the Creating a new user group and Adding a new user recipes).

How to do it…

In order to assign a user to a user group from the User Groups menu, perform these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | User Groups.
2. Click on the Actions button located near the name of the user group to which you want to assign the user.
3. Click on the Assign Members button.
4. Click on the Available tab.
5. Mark the user or group of users you want to assign.
6. Click on the Update Associations button.

How it works…

As was shown in this recipe, one or more users can be assigned to a user group by editing the list of assigned users in the user group menu. Each user assigned to a user group becomes a member of the group and gains access to the user group's public and private pages according to the assigned roles and permissions.

See also

- For information on how to create user accounts, refer to the Adding a new user recipe from this article.
- For information about assigning users to organizations, refer to the Assigning users to organizations recipe from this article.

Exporting users

Liferay Portal CMS provides a simple export mechanism, which allows us to export a list of all the users stored in the database, or a list of all the users from a specific organization, to a file.

How to do it…

In order to export the list of all users from the database to a file, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the Export Users button.

In order to export the list of all users from a specific organization to a file, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the All Organizations tab.
3. Click on the name of the organization whose users are to be exported.
4. Click on the Export Users button.

How it works…

As mentioned previously, Liferay allows us to export users from a particular organization to a .csv file.
The .csv file contains a list of user names and corresponding e-mail addresses. It is also possible to export all the users by clicking on the Export Users button located on the All Users tab. You will find this tab by going to Admin | Control Panel | Users | Users and Organizations.

See also

- For information on how to create user accounts, refer to the Adding a new user recipe from this article.
- For information on how to assign users to organizations, refer to the Assigning users to organizations recipe from this article.

Summary

In this article, you have learnt how to manage an organization structure by creating users and assigning them to organizations and user groups. You have also learnt how to export users using Liferay's export mechanism.

Resources for Article:

Further resources on this subject:

- Cache replication [article]
- Portlet [article]
- Liferay, its Installation and setup [article]
Developing a Web Project for JasperReports

Packt
27 May 2013
11 min read
(For more resources related to this topic, see here.)

Setting the environment

First, we need to install the required software, Oracle Enterprise Pack for Eclipse 12c, from http://www.oracle.com/technetwork/middleware/ias/downloads/wls-main-097127.html using Installers with Oracle WebLogic Server, Oracle Coherence and Oracle Enterprise Pack for Eclipse, and download Oracle Database 11g Express Edition from http://www.oracle.com/technetwork/products/express-edition/overview/index.html.

Setting the environment requires the following tasks:

- Creating database tables
- Configuring a data source in WebLogic Server 12c
- Copying the required JasperReports JAR files to the server classpath

First, create a database table, which shall be the data source for creating the reports, with the following SQL script. If a database table has already been created, that table may be used for this article too.

CREATE TABLE OE.Catalog(CatalogId INTEGER PRIMARY KEY,
  Journal VARCHAR(25), Publisher VARCHAR(25),
  Edition VARCHAR(25), Title VARCHAR(45), Author VARCHAR(25));
INSERT INTO OE.Catalog VALUES('1', 'Oracle Magazine',
  'Oracle Publishing', 'Nov-Dec 2004',
  'Database Resource Manager', 'Kimberly Floss');
INSERT INTO OE.Catalog VALUES('2', 'Oracle Magazine',
  'Oracle Publishing', 'Nov-Dec 2004',
  'From ADF UIX to JSF', 'Jonas Jacobi');
INSERT INTO OE.Catalog VALUES('3', 'Oracle Magazine',
  'Oracle Publishing', 'March-April 2005',
  'Starting with Oracle ADF', 'Steve Muench');

Next, configure a data source in WebLogic Server with the JNDI name jdbc/OracleDS.

Next, we need to download some JasperReports JAR files, including dependencies. Download the JAR/ZIP files listed below and extract the zip/tar.gz files to a directory, c:/jasperreports for example.

- jasperreports-4.7.0.jar: http://sourceforge.net/projects/jasperreports/files/jasperreports/JasperReports%204.7.0/
- itext-2.1.0: http://mirrors.ibiblio.org/pub/mirrors/maven2/com/lowagie/itext/2.1.0/itext-2.1.0.jar
- commons-beanutils-1.8.3-bin.zip: http://commons.apache.org/beanutils/download_beanutils.cgi
- commons-digester-2.1.jar: http://commons.apache.org/digester/download_digester.cgi
- commons-logging-1.1.1-bin: http://commons.apache.org/logging/download_logging.cgi
- poi-bin-3.8-20120326 zip or tar.gz: http://poi.apache.org/download.html#POI-3.8

All the JasperReports libraries are open source. We shall be using the following JAR files to create a JasperReports report:

- commons-beanutils-1.8.3.jar: JavaBeans utility classes
- commons-beanutils-bean-collections-1.8.3.jar: Collections framework extension classes
- commons-beanutils-core-1.8.3.jar: JavaBeans utility core classes
- commons-digester-2.1.jar: Classes for processing XML documents
- commons-logging-1.1.1.jar: Logging classes
- iText-2.1.0.jar: PDF library
- jasperreports-4.7.0.jar: JasperReports API
- poi-3.8-20120326.jar, poi-excelant-3.8-20120326.jar, poi-ooxml-3.8-20120326.jar, poi-ooxml-schemas-3.8-20120326.jar, poi-scratchpad-3.8-20120326.jar: Apache Jakarta POI classes and dependencies
Add the required JasperReports JAR files to the CLASSPATH variable of the user_projects\domains\base_domain\bin\startWebLogic.bat script:

set SAVE_CLASSPATH=%CLASSPATH%;C:\jasperreports\commons-beanutils-1.8.3\commons-beanutils-1.8.3.jar;C:\jasperreports\commons-beanutils-1.8.3\commons-beanutils-bean-collections-1.8.3.jar;C:\jasperreports\commons-beanutils-1.8.3\commons-beanutils-core-1.8.3.jar;C:\jasperreports\commons-digester-2.1.jar;C:\jasperreports\commons-logging-1.1.1\commons-logging-1.1.1.jar;C:\jasperreports\itext-2.1.0.jar;C:\jasperreports\jasperreports-4.7.0.jar;C:\jasperreports\poi-3.8\poi-3.8-20120326.jar;C:\jasperreports\poi-3.8\poi-scratchpad-3.8-20120326.jar;C:\jasperreports\poi-3.8\poi-ooxml-3.8-20120326.jar;C:\jasperreports\poi-3.8\poi-excelant-3.8-20120326.jar;C:\jasperreports\poi-3.8\poi-ooxml-schemas-3.8-20120326.jar

Creating a Dynamic Web project in Eclipse

First, we need to create a web project for generating JasperReports reports:

1. Select File | New | Other. In the New wizard, select Web | Dynamic Web Project.
2. In the Dynamic Web Project configuration, specify a Project name (PDFExcelReports, for example) and select the Target Runtime as Oracle WebLogic Server 11g R1 (10.3.5). Click on Next.
3. Select the default Java settings; that is, the Default output folder as build/classes, and then click on Next.
4. In WebModule, specify ContextRoot as PDFExcelReports and Content Directory as WebContent. Click on Finish.

A web project for PDFExcelReports gets generated. Right-click on the project node in Project Explorer and select Project Properties. In Properties, select Project Facets. The Dynamic Web Module project facet should be selected by default, as shown in the following screenshot:

Next, create a User Library for the JasperReports JAR files and dependencies:

1. Select Java Build Path in Properties. Click on Add Library.
2. In Add Library, select User Library and click on Next.
3. In User Library, click on User Libraries.
4. In User Libraries, click on New.
5. In New User Library, specify a User library name (JasperReports) and click on OK. A new user library gets added to User Libraries.
6. Click on Add JARs to add the JAR files to the library. The following screenshot shows the JasperReports JAR files that are added:

Creating the configuration file

We require a JasperReports configuration file for generating reports. JasperReports XML configuration files are based on the jasperreport.dtd DTD, with a root element of jasperReport. We shall specify the JasperReports report design in an XML configuration file, which we have called config.xml. Create the XML file config.xml in the WebContent folder by selecting XML | XML File in the New wizard.

Some of the other elements (with commonly used subelements and attributes) in a JasperReports configuration XML file are listed in the following table:

- jasperReport: the root element. Subelements: reportFont, parameter, queryString, field, variable, group, title, pageHeader, columnHeader, detail, columnFooter, pageFooter. Attributes: name, columnCount, pageWidth, pageHeight, orientation, columnWidth, columnSpacing, leftMargin, rightMargin, topMargin, bottomMargin.
- reportFont: report-level font definitions. Attributes: name, isDefault, fontName, size, isBold, isItalic, isUnderline, isStrikeThrough, pdfFontName, pdfEncoding, isPdfEmbedded.
- parameter: object references used in generating a report; referenced with $P{name}. Subelements: parameterDescription, defaultValueExpression. Attributes: name, class.
- queryString: specifies the SQL query for retrieving data from a database.
- field: database table columns included in the report; referenced with $F{name}. Subelement: fieldDescription. Attributes: name, class.
- variable: a variable used in the report XML file; referenced with $V{name}. Subelements: variableExpression, initialValueExpression. Attributes: name, class.
- title: the report title band.
- pageHeader: the page header band.
- columnHeader: specifies the different columns in the generated report (band).
- detail: specifies the column values (band).
- columnFooter: the column footer band.

A report section is represented with the band element. A band element includes staticText and textElement elements. A staticText element is used to add static text to a report (for example, column headers) and a textElement element is used to add dynamically generated text to a report (for example, column values retrieved from a database table). We won't be using all, or even most, of these elements and attributes.

Specify the page width with the pageWidth attribute in the root element jasperReport. Specify the report fonts using the reportFont element. The reportFont elements specify the ARIAL_NORMAL, ARIAL_BOLD, and ARIAL_ITALIC fonts used in the report. Specify a ReportTitle parameter using the parameter element.

The queryString of the example JasperReports configuration XML file config.xml specifies the SQL query to retrieve the data for the report:

<queryString><![CDATA[SELECT CatalogId, Journal, Publisher, Edition,
  Title, Author FROM OE.Catalog]]>
</queryString>

The PDF report has the columns CatalogId, Journal, Publisher, Edition, Title, and Author. Specify a report band for the report title. The ReportTitle parameter is invoked using the $P{ReportTitle} expression. Specify a column header using the columnHeader element. Specify static text with the staticText element. Specify the report detail with the detail element. A column text field is defined using the textField element. The dynamic value of a text field is defined using the textFieldExpression element:

<textField>
  <reportElement x="0" y="0" width="100" height="20"/>
  <textFieldExpression class="java.lang.String"><![CDATA[$F{CatalogId}]]></textFieldExpression>
</textField>

Specify a page footer with the pageFooter element. Report parameters are defined using $P{}, report fields using $F{}, and report variables using $V{}. The config.xml file is listed as follows:

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE jasperReport PUBLIC "-//JasperReports//DTD Report Design//EN"
  "http://jasperreports.sourceforge.net/dtds/jasperreport.dtd">
<jasperReport name="PDFReport" pageWidth="975">

The following code snippet specifies the report fonts:

<reportFont name="Arial_Normal" isDefault="true" fontName="Arial"
  size="15" isBold="false" isItalic="false" isUnderline="false"
  isStrikeThrough="false" pdfFontName="Helvetica"
  pdfEncoding="Cp1252" isPdfEmbedded="false"/>
<reportFont name="Arial_Bold" isDefault="false" fontName="Arial"
  size="15" isBold="true" isItalic="false" isUnderline="false"
  isStrikeThrough="false" pdfFontName="Helvetica-Bold"
  pdfEncoding="Cp1252" isPdfEmbedded="false"/>
<reportFont name="Arial_Italic" isDefault="false" fontName="Arial"
  size="12" isBold="false" isItalic="true" isUnderline="false"
  isStrikeThrough="false" pdfFontName="Helvetica-Oblique"
  pdfEncoding="Cp1252" isPdfEmbedded="false"/>

The following code snippet specifies the parameter for the report title, the SQL query to generate the report with, and the report fields. The resultset from the SQL query gets bound to the fields.
<parameter name="ReportTitle" class="java.lang.String"/>
<queryString><![CDATA[SELECT CatalogId, Journal, Publisher, Edition,
  Title, Author FROM OE.Catalog]]></queryString>
<field name="CatalogId" class="java.lang.String"/>
<field name="Journal" class="java.lang.String"/>
<field name="Publisher" class="java.lang.String"/>
<field name="Edition" class="java.lang.String"/>
<field name="Title" class="java.lang.String"/>
<field name="Author" class="java.lang.String"/>

Add the report title to the report as follows:

<title>
  <band height="50">
    <textField>
      <reportElement x="350" y="0" width="200" height="50"/>
      <textFieldExpression class="java.lang.String">$P{ReportTitle}</textFieldExpression>
    </textField>
  </band>
</title>
<pageHeader>
  <band>
  </band>
</pageHeader>

Add the column headers as follows:

<columnHeader>
  <band height="20">
    <staticText>
      <reportElement x="0" y="0" width="100" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[CATALOG ID]]></text>
    </staticText>
    <staticText>
      <reportElement x="125" y="0" width="100" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[JOURNAL]]></text>
    </staticText>
    <staticText>
      <reportElement x="250" y="0" width="150" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[PUBLISHER]]></text>
    </staticText>
    <staticText>
      <reportElement x="425" y="0" width="100" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[EDITION]]></text>
    </staticText>
    <staticText>
      <reportElement x="550" y="0" width="200" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[TITLE]]></text>
    </staticText>
    <staticText>
      <reportElement x="775" y="0" width="200" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[AUTHOR]]></text>
    </staticText>
  </band>
</columnHeader>

The following code snippet shows how to add the report detail, which consists of values retrieved using the SQL query from the Oracle database:

<detail>
  <band height="20">
    <textField>
      <reportElement x="0" y="0" width="100" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{CatalogId}]]></textFieldExpression>
    </textField>
    <textField pattern="0.00">
      <reportElement x="125" y="0" width="100" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Journal}]]></textFieldExpression>
    </textField>
    <textField pattern="0.00">
      <reportElement x="250" y="0" width="150" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Publisher}]]></textFieldExpression>
    </textField>
    <textField>
      <reportElement x="425" y="0" width="100" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Edition}]]></textFieldExpression>
    </textField>
    <textField pattern="0.00">
      <reportElement x="550" y="0" width="200" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Title}]]></textFieldExpression>
    </textField>
    <textField>
      <reportElement x="775" y="0" width="200" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Author}]]></textFieldExpression>
    </textField>
  </band>
</detail>

Add the column and page footers, including the page number, as follows:

<columnFooter>
  <band>
  </band>
</columnFooter>
<pageFooter>
  <band height="15">
    <staticText>
      <reportElement x="0" y="0" width="40" height="15"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Italic"/>
      </textElement>
      <text><![CDATA[Page #]]></text>
    </staticText>
    <textField>
      <reportElement x="40" y="0" width="100" height="15"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Italic"/>
      </textElement>
      <textFieldExpression class="java.lang.Integer"><![CDATA[$V{PAGE_NUMBER}]]></textFieldExpression>
    </textField>
  </band>
</pageFooter>
<summary>
  <band>
  </band>
</summary>
</jasperReport>

We need to create a JAR file for the config.xml file and add the JAR file to the WebLogic Server domain's lib directory. Create the JAR file using the following command from the directory containing config.xml:

>jar cf config.jar config.xml

Add the config.jar file to the user_projects\domains\base_domain\lib directory, which is in the classpath of the server.
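With config.jar on the server classpath, the report design can be compiled, filled from the jdbc/OracleDS data source, and streamed to the browser as a PDF. The following servlet is a minimal, illustrative sketch, not the article's finished project code; the class name PDFReportServlet is our own choice, and only standard JasperReports 4.x manager classes are used:

import java.io.InputStream;
import java.sql.Connection;
import java.util.HashMap;
import java.util.Map;

import javax.naming.InitialContext;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

import net.sf.jasperreports.engine.JasperCompileManager;
import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.JasperReport;

public class PDFReportServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) {
        try {
            // Load the report design from the classpath, where config.jar was placed
            InputStream design = getClass().getClassLoader()
                    .getResourceAsStream("config.xml");
            JasperReport report = JasperCompileManager.compileReport(design);

            // Obtain a JDBC connection from the data source configured in WebLogic
            DataSource ds = (DataSource) new InitialContext().lookup("jdbc/OracleDS");

            // Supply a value for the ReportTitle parameter declared in config.xml
            Map<String, Object> params = new HashMap<String, Object>();
            params.put("ReportTitle", "Catalog");

            try (Connection conn = ds.getConnection()) {
                JasperPrint print = JasperFillManager.fillReport(report, params, conn);
                response.setContentType("application/pdf");
                JasperExportManager.exportReportToPdfStream(print, response.getOutputStream());
            }
        } catch (Exception e) {
            throw new RuntimeException("Report generation failed", e);
        }
    }
}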
Using XML Facade for DOM

Packt
13 Sep 2013
25 min read
(For more resources related to this topic, see here.)

The Business Process Execution Language (BPEL) is based on XML, which means that all the internal variables and data are presented in XML. Since BPEL and Java are complementary technologies, we seek ways to ease their integration. In order to handle the XML content of BPEL variables in Java resources (classes), we have a couple of possibilities:

- Use the DOM (Document Object Model) API for Java, where we handle the XML content directly through API calls. An example of such a call would be reading from the input variable:

oracle.xml.parser.v2.XMLElement input_cf =
    (oracle.xml.parser.v2.XMLElement) getVariableData("inputVariable",
        "payload", "/client:Cashflows");

We receive the XMLElement class, which we need to handle further, be it by assignment, reading of content, iteration, or something else.

- As an alternative, we can use XML facade through Java Architecture for XML Binding (JAXB). JAXB provides a convenient way of transforming XML to Java and vice versa. The creation of XML facade is supported through the xjc utility and, of course, via the JDeveloper IDE. The example code for accessing XML through XML facade is:

java.util.List<org.packt.cashflow.facade.PrincipalExchange> princEx =
    cf.getPrincipalExchange();

We can see that there is neither XML content nor the DOM API anymore. Furthermore, we can access the whole XML structure represented by Java classes.

The latest specification of JAXB at the time of writing is 2.2.7, and its specification can be found at the following location: https://jaxb.java.net/.

The purpose of XML facade is the marshalling and unmarshalling of Java classes. When the originating content is presented in XML, we use unmarshalling methods in order to generate the corresponding Java classes. In cases where we have content stored in Java classes and we want to present the content in XML, we use the marshalling methods.

JAXB provides the ability to create XML facade from an XML schema definition or from a WSDL (Web Service Definition/Description Language) document. The latter method provides a useful approach as we, in most cases, orchestrate web services whose operations are defined in WSDL documents.

Throughout this article, we will work on a sample from the banking world. On top of this sample, we will show how to build the XML facade. The sample contains simple XML types, complex types, elements, and cardinality, so we cover all the essential elements of functionality in XML facade.

Setting up an XML facade project

We start generating XML facade by setting up a project in the JDeveloper environment, which provides convenient tools for building XML facades. This recipe will describe how to set up a JDeveloper project in order to build XML facade.

Getting ready

To complete the recipe, we need the XML schema of the BPEL process variables based on which we build XML facade. Explore the XML schema of our banking BPEL process.
We are interested in the structure of the BPEL request message:

<xsd:complexType name="PrincipalExchange">
  <xsd:sequence>
    <xsd:element minOccurs="0" name="unadjustedPrincipalExchangeDate" type="xsd:date"/>
    <xsd:element minOccurs="0" name="adjustedPrincipalExchangeDate" type="xsd:date"/>
    <xsd:element minOccurs="0" name="principalExchangeAmount" type="xsd:decimal"/>
    <xsd:element minOccurs="0" name="discountFactor" type="xsd:decimal"/>
  </xsd:sequence>
  <xsd:attribute name="id" type="xsd:int"/>
</xsd:complexType>
<xsd:complexType name="CashflowsType">
  <xsd:sequence>
    <xsd:element maxOccurs="unbounded" minOccurs="0" name="principalExchange" type="prc:PrincipalExchange"/>
  </xsd:sequence>
</xsd:complexType>
<xsd:element name="Cashflows" type="prc:CashflowsType"/>

The request message structure presents just a small fragment of the cash flows modeled in banks; a real definition of a cash flow is much more complex. However, our definition contains all the right elements to show the advantages of using an XML facade in a BPEL process.

How to do it...

The steps involved in setting up a JDeveloper project for an XML façade are as follows:

We start by opening a new Java Project in JDeveloper and naming it CashflowFacade. Click on Next.
In the next window of the Create Java Project wizard, we select the default package name org.packt.cashflow.facade. Click on Finish.
We now have the new project structure in JDeveloper, and the project is ready for XML facade creation.

How it works...

After the wizard has finished, we can see the project structure created in JDeveloper. The corresponding file structure is also created in the filesystem.

Generating XML facade using ANT

This recipe explains how to generate an XML facade with the Apache ANT utility. We use ANT scripts when we want to build or rebuild the XML facade in many iterations, for example, during nightly builds. Using ANT to build an XML façade is particularly useful while the XML definitions are still changing frequently during development; with ANT, we can ensure continuous synchronization between the XML and the generated Java code. The official ANT homepage, along with detailed usage information, can be found at the following URL: http://ant.apache.org/.

Getting ready

By completing the previous recipe, we built a JDeveloper project ready for creating an XML facade out of an XML schema. To complete this recipe, we need to add the ANT project technology to the project, which we achieve through the Project Properties dialog.

How to do it...

The following are the steps we need to take to create a project in JDeveloper for building an XML façade with ANT:

Create a new ANT build file by right-clicking on the CashflowFacade project node, selecting New, and choosing Buildfile from Project (Ant). The ANT build file is generated and added to the project under the Resources folder.
Now we need to amend the build.xml file with the code to build the XML facade. We first define the properties for our XML facade:

<property name="schema_file" location="../Banking_BPEL/xsd/Derivative_Cashflow.xsd"/>
<property name="dest_dir" location="./src"/>
<property name="package" value="org.packt.cashflow.facade"/>

We define the location of the source XML schema (it is located in the BPEL process project). Next, we define the destination of the generated Java files and the name of the package.
Now we define the ANT target that builds the XML facade classes. An ANT target presents one closed unit of ANT work.
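If you have not worked with ANT targets before, the general shape of a target is shown below. This is a generic sketch for illustration only (the target name and message are made up), not part of the recipe's build file:

<target name="hello">
  <echo message="Hello from ANT"/>
</target>

A target groups a sequence of tasks under a name, so that it can be invoked on its own or chained to other targets through the depends attribute.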
We define the build task for the XML façade as follows:

<target name="xjc">
  <delete dir="src"/>
  <mkdir dir="src"/>
  <echo message="Compiling the schema..." />
  <exec executable="xjc">
    <arg value="-xmlschema"/>
    <arg value="${schema_file}"/>
    <arg value="-d"/>
    <arg value="${dest_dir}"/>
    <arg value="-p"/>
    <arg value="${package}"/>
  </exec>
</target>

With this, we can generate the XML facade classes, ready to be used in BPEL processes.

How it works…

ANT is used as a build tool and performs various tasks. As such, we can easily use it to build an XML facade. Java Architecture for XML Binding provides the xjc utility, which does the actual work. We have provided the following parameters to the xjc utility:

-xmlschema: Treats the input schema as an XML schema
-d: Specifies the destination directory for the generated classes
-p: Specifies the package name of the generated classes

There are a number of other parameters; however, we will not go into detail about them here. Based on the parameters we provided to the xjc utility, the Java representation of the XML schema is generated. If we examine the generated classes, we can see that there is a Java class for every type defined in the XML schema. We can also see that an ObjectFactory class is generated, which eases the creation of Java class instances.

There's more...

There is a difference in creating an XML facade between Versions 10g and 11g of Oracle SOA Suite. In Oracle SOA Suite 10g, there was a convenient utility named schema, which was used for building XML facades. However, in Oracle SOA Suite 11g, the schema utility is not available anymore. To provide a similar solution, we create a template class, which is later copied to a real code package when needed to provide the XML facade functionality. We create a new class Facade in a package called facade. The only method in the class is static and serves as the creation point of the facade:

public static Object createFacade(String context, XMLElement doc) throws Exception {
    JAXBContext jaxbContext;
    Object zz = null;
    try {
        jaxbContext = JAXBContext.newInstance(context);
        Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();
        zz = unmarshaller.unmarshal(doc);
        return zz;
    } catch (JAXBException e) {
        throw new Exception("Cannot create facade from the XML content. " + e.getMessage());
    }
}

The implementation is simple: we create the JAXB context, un-marshal the XML content through it, and return the resulting object to the client. In case of problems, we throw an exception. Now the calling code is trivial. For example, to create an XML facade over the XML content, we call:

Object zz = facade.Facade.createFacade("org.packt.cashflow.facade", document.getSrcRoot());

Creating XML facade from XSD

This recipe describes how to create XML facade classes from an XSD. Usually, the need to access XML content through Java classes arises from XML schemas already defined in BPEL processes.

How to do it...

We have already defined the BPEL process and the XML schema (Derivative_Cashflow.xsd) in the project. The following steps show how to create the XML facade from the XML schema:

Select the CashflowFacade project, right-click on it, and select New.
Select JAXB 2.0 Content Model from XML Schema.
Select the schema file from the Banking_BPEL project.
Select the Package Name for Generated Classes checkbox and click on the OK button.
The corresponding Java classes for the XML schema are generated.

How it works...
Now compare the classes generated via the ANT utility in the Generating XML facade using ANT recipe with these. In essence, the generated files are the same. However, we see an additional file, jaxb.properties, which holds the configuration of the JAXB factory used for the generation of Java classes. It is recommended to create the same access class (Facade.java) here as well, in order to simplify further access to the XML facade.

Creating XML facade from WSDL

It is possible to include schema element definitions directly in a WSDL document. To avoid having to extract the XML schema content from such a WSDL document, we can take the WSDL document itself and create an XML facade for it. This recipe explains how to create an XML facade out of a WSDL document.

Getting ready

To complete the recipe, we need a WSDL document with an XML schema definition inside. Luckily, we already have one: the automatically generated WSDL document we received during the Banking_BPEL project creation. We will amend the already created project, so it is recommended to complete the Generating XML facade using ANT recipe before continuing with this one.

How to do it...

The following are the steps involved in creating an XML façade from WSDL:

Open the ANT configuration file (build.xml) in JDeveloper. We first define a property that identifies the location of the WSDL document:

<property name="wsdl_file" location="../Banking_BPEL/Derivative_Cashflow.wsdl"/>

Continue with the definition of a new target inside the ANT configuration file in order to generate the Java classes from the WSDL document:

<target name="xjc_wsdl">
  <delete dir="src/org"/>
  <mkdir dir="src/org"/>
  <echo message="Compiling the schema..." />
  <exec executable="xjc">
    <arg value="-wsdl"/>
    <arg value="${wsdl_file}"/>
    <arg value="-d"/>
    <arg value="${dest_dir}"/>
    <arg value="-p"/>
    <arg value="${package}"/>
  </exec>
</target>

From the configuration point of view, this step completes the recipe. To run the newly defined ANT task, we select the build.xml file in the Projects pane. Then, we select the xjc_wsdl task in the Structure pane of JDeveloper, right-click on it, and select Run Target "xjc_wsdl".

How it works...

The generation of Java representation classes from WSDL content works similarly to the generation of Java classes from XSD content; only the source of the XML input content passed to the xjc utility is different. In case we execute the ANT task over the wrong XML or WSDL content, we receive a kind notification from the xjc utility. For example, if we run xjc with the -xmlschema parameter over a WSDL document, we get a warning that we should use different parameters for generating an XML façade from WSDL. Note that the generation of Java classes from a WSDL document via JAXB is only available through an ANT task definition or the xjc utility. If we try the same procedure within JDeveloper, an error is reported.

Packaging XML facade into JAR

This recipe explains how to prepare a package containing the XML facade to be used in BPEL processes, and in Java applications in general.

Getting ready

To complete this recipe, we need the XML facade created out of the XML schema. The generated Java classes also need to be compiled.

How to do it...

The steps involved in packaging the XML façade into a JAR are as follows:

We open the Project Properties by right-clicking on the CashflowFacade root node.
From the left-hand side tree, select Deployment and click on the New button.
The Create Deployment Profile window opens, where we set the name of the archive. Click on the OK button.
The Edit JAR Deployment Profile Properties dialog opens, where you can configure what goes into the JAR archive. We confirm the dialog and the deployment profile, as we don't need any special configuration.
Now, we right-click on the project root node (CashflowFacade), then select Deploy and CFacade. The window requesting the deployment action appears. We simply confirm it by pressing the Finish button.
As a result, we can see the generated JAR file created in the deploy folder of the project.

There's more...

In this article, we also cover building the XML facade with the ANT tool. To support an automatic build process, we can define an ANT target to build the JAR file as well. We open the build.xml file and define a new target for packaging purposes. With this target, we first recreate the deploy directory and then prepare the package to be utilized in the BPEL process:

<target name="pack" depends="compile">
  <delete dir="deploy"/>
  <mkdir dir="deploy"/>
  <jar destfile="deploy/CFacade.jar"
       basedir="./classes"
       excludes="**/*data*"/>
</target>

To automate the process even further, we define a target to copy the generated JAR file to the location of the BPEL process. Usually, this means copying the JAR file to the SCA-INF/lib directory:

<target name="copyLib" depends="pack">
  <copy file="deploy/CFacade.jar" todir="../Banking_BPEL/SCA-INF/lib"/>
</target>

The task depends on the successful creation of the JAR package; when the JAR package is created, it is copied over to the BPEL process library folder.

Generating Java documents for XML facade

Well-prepared documentation is an important aspect of further XML facade integration. Suppose we only receive the JAR package containing an XML facade: it is virtually impossible to use it if we don't know what the purpose of each data type is and how to utilize it. With documentation, we receive a well-defined XML facade capable of integrating the XML and Java worlds together. This recipe explains how to document the Java classes generated for the XML facade.

Getting ready

To complete this recipe, we only need the XML schema. We already have one in the Banking_BPEL project (Derivative_Cashflow.xsd).

How to do it...

The following are the steps we need to take in order to generate Java documentation for the XML facade:

We open the Derivative_Cashflow.xsd XML schema file. Initially, we need to add the JAXB binding declarations to the schema element (the target namespace is abbreviated here; xmlns:jxb is the standard JAXB binding namespace):

<xsd:schema attributeFormDefault="unqualified"
            elementFormDefault="qualified"
            targetNamespace="http://..."
            xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
            jxb:version="2.1">
</xsd:schema>

In order to put documentation at the package level, we put the following code immediately after the <xsd:schema> tag in the XML schema file:

<xsd:annotation>
  <xsd:appinfo>
    <jxb:schemaBindings>
      <jxb:package name="org.packt.cashflow.facade">
        <jxb:javadoc>This package represents the XML facade of the cashflows in the financial derivatives structure.</jxb:javadoc>
      </jxb:package>
    </jxb:schemaBindings>
  </xsd:appinfo>
</xsd:annotation>

In order to add documentation at the complexType level, we put the following lines into the XML schema file, immediately after the complexType definition:

<xsd:annotation>
  <xsd:appinfo>
    <jxb:class>
      <jxb:javadoc>This class defines the data for the events, when principal exchange occurs.</jxb:javadoc>
    </jxb:class>
  </xsd:appinfo>
</xsd:annotation>

The elements of the complexType definition are annotated in a similar way.
We put the annotation data immediately after the element definition in the XML schema file:

<xsd:annotation>
  <xsd:appinfo>
    <jxb:property>
      <jxb:javadoc>Raw principal exchange date.</jxb:javadoc>
    </jxb:property>
  </xsd:appinfo>
</xsd:annotation>

In JDeveloper, we are now ready to build the javadoc documentation. Select the CashflowFacade project root node and then, from the main menu, select Build and then Javadoc CashflowFacade.jpr. The javadoc content will be built in the javadoc directory of the project.

How it works...

During the conversion from the XML schema to Java classes, JAXB also processes any annotations inside the XML schema file. When the conversion utility (xjc, or execution through JDeveloper) finds an annotation in the XML schema file, it decorates the generated Java classes according to the specification. The XML schema file must contain the following declaration of the JAXB schema namespace in the <xsd:schema> element:

jxb:version="2.1"

Note that the jxb:version attribute is where the version of the JAXB specification is defined. The most common version declarations are 1.0, 2.0, and 2.1. The actual definition of javadoc resides within the <xsd:annotation> and <xsd:appinfo> blocks. To annotate at the package level, we use the following code:

<jxb:schemaBindings>
  <jxb:package name="PKG_NAME">
    <jxb:javadoc>TEXT</jxb:javadoc>
  </jxb:package>
</jxb:schemaBindings>

We define the package name to annotate and a javadoc text containing the documentation for the package level. The annotation of javadoc at class or attribute level is similar:

<jxb:class|property>
  <jxb:javadoc>TEXT</jxb:javadoc>
</jxb:class|property>

If we want to annotate the XML schema at the complexType level, we use the <jxb:class> element. To annotate the XML schema at the element level, we use the <jxb:property> element.

There's more...

In many cases, we need to annotate XML schema files that we do not control; for example, XML schemas defined by different vendors are often generated automatically. In such cases, we would need to annotate the XML schema each time we want to generate Java classes out of it, which would mean additional work just for the annotation decoration tasks. In such situations, we can separate the annotation part of the XML schema into a separate file. With this approach, we keep the annotations apart from the XML schema content itself, over which we usually have no control. For that purpose, we create a binding file in our CashflowFacade project and name it extBinding.xjb. We put the annotation documentation into this file and remove it from the original XML schema. We start by defining the binding file header declaration:

<jxb:bindings version="1.0">
  <jxb:bindings schemaLocation="file:/D:/delo/source_code/Banking_BPEL/xsd/Derivative_Cashflow.xsd" node="/xs:schema">

We need to specify the location of the schema file and the root node of the XML schema to which our mapping corresponds. We continue by declaring the package-level annotation:

<jxb:schemaBindings>
  <jxb:package name="org.packt.cashflow.facade">
    <jxb:javadoc><![CDATA[<body>This package represents the XML facade of the cashflows in the financial derivatives structure.</body>]]></jxb:javadoc>
  </jxb:package>
  <jxb:nameXmlTransform>
    <jxb:elementName suffix="Element"/>
  </jxb:nameXmlTransform>
</jxb:schemaBindings>

We notice that the structure of the package-level annotation is identical to that of the inline XML schema annotation.
To annotate a class and its attributes, we use the following declaration:

<jxb:bindings node="//xs:complexType[@name='CashflowsType']">
  <jxb:class>
    <jxb:javadoc><![CDATA[This class defines the data for the events, when principal exchange occurs.]]></jxb:javadoc>
  </jxb:class>
  <jxb:bindings node=".//xs:element[@name='principalExchange']">
    <jxb:property>
      <jxb:javadoc>TEST prop</jxb:javadoc>
    </jxb:property>
  </jxb:bindings>
</jxb:bindings>

Notice the nested annotation of attributes inside the class annotation, which correlates naturally with the object-oriented programming paradigm. Now that we have the external binding file, we can regenerate the XML facade. Note that external binding files are not used only for the creation of javadoc. Inside an external binding file, we can include various rules to be followed during conversion. One such rule is aimed at data type mapping; that is, which Java data type will match which XML data type.

In JDeveloper, if we are building the XML facade for the first time, we follow either the Creating XML facade from XSD or the Creating XML facade from WSDL recipe. To rebuild the XML facade, we use the following procedure:

Select the XML schema file (Cashflow_Facade.xsd) in the CashflowFacade project.
Right-click on it and select the Generate JAXB 2.0 Content Model option.
The configuration dialog opens with some fields already pre-filled. We enter the location of the JAXB Customization File (in our case, the location of the extBinding.xjb file) and click on the OK button.
Next, we build the javadoc part to get the documentation. Now, if we open the generated documentation in the web browser, we can see our documentation lines inside.

Invoking XML facade from BPEL processes

This recipe explains how to use an XML facade inside BPEL processes. We can use an XML façade to simplify access to XML content from Java code: the XML content is exposed through Java classes.

Getting ready

To complete the recipe, there are no special prerequisites. Remember that in the Packaging XML facade into JAR recipe, we defined the ANT task that copies the XML facade to the BPEL process library directory; this task covers all the prerequisites for using the XML facade.

How to do it...

Open the BPEL process (Derivative_Cashflow.bpel) in JDeveloper and insert a Java Embedding activity into it. We then build the code snippet step by step:

The whole code snippet is enclosed by a try catch block:

try {

Read the input cashflow variable data:

oracle.xml.parser.v2.XMLElement input_cf =
    (oracle.xml.parser.v2.XMLElement)getVariableData(
        "inputVariable", "payload", "/client:Cashflows");

Un-marshal the XML content through the XML facade:

Object obj_cf = facade.Facade.createFacade("org.packt.cashflow.facade", input_cf);

We must cast the returned object to the XML facade element class:

javax.xml.bind.JAXBElement<org.packt.cashflow.facade.CashflowsType> cfs =
    (javax.xml.bind.JAXBElement<org.packt.cashflow.facade.CashflowsType>)obj_cf;

Retrieve the Java class out of the JAXBElement content class:

org.packt.cashflow.facade.CashflowsType cf = cfs.getValue();

Finally, we close the try block and handle any exceptions that may occur during processing:

} catch (Exception e) {
    e.printStackTrace();
    addAuditTrailEntry("Error in XML facade occurred: " + e.getMessage());
}

We close the Java Embedding activity dialog. Now, we are ready to deploy the BPEL process and test the XML facade. The execution of the BPEL process will not produce any output, since we have no output lines defined.
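The recipe only goes in the un-marshalling direction. Going the other way, marshalling the facade classes back to XML (for example, to write a modified cash flow structure back into a BPEL variable), is not covered by the recipe, but a minimal sketch could look like the following. The wrapping call of.createCashflows(...) assumes the standard ObjectFactory method that xjc generates for the global Cashflows element, and the surrounding helper class is purely illustrative:

import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;

public class MarshalSketch {
    // Turns a facade object back into an XML string (illustrative helper).
    public static String toXml(org.packt.cashflow.facade.CashflowsType cf) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance("org.packt.cashflow.facade");
        Marshaller m = ctx.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        // CashflowsType is declared through the global element Cashflows, so we
        // wrap the value back into its root element via the generated ObjectFactory.
        org.packt.cashflow.facade.ObjectFactory of = new org.packt.cashflow.facade.ObjectFactory();
        StringWriter out = new StringWriter();
        m.marshal(of.createCashflows(cf), out);
        return out.toString();
    }
}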
In case some exception occurs during processing, we receive information about it in the audit trail as well as on the BPEL server console.

How it works...

We add the XML facade JAR file to the BPEL process library directory (<BPEL_process_home>/SCA-INF/lib). Before we are able to access the XML facade classes, we need to extract the XML content from the BPEL process. To create the Java representation classes, we transform the XML content through the JAXB context. As a result, we receive an un-marshalled Java class ready to be used further in Java code.

Accessing complex types through XML facade

The advantage of using an XML facade is the ability to access the XML content via Java classes and methods. This recipe explains how to access complex types through the XML facade.

Getting ready

To complete the recipe, we will amend the example BPEL process from the Invoking XML facade from BPEL processes recipe.

How to do it...

The steps involved in accessing complex types through the XML façade are as follows:

Open the Banking_BPEL process and double-click on the XML_facade_node Java Embedding activity.
We amend the code snippet with the following code to access the complex type:

java.util.List<org.packt.cashflow.facade.PrincipalExchange> princEx =
    cf.getPrincipalExchange();

We receive a list of principal exchange cash flows that contain various data.

How it works...

In the previous example, we receive a list of cash flows. The corresponding XML content definition states:

<xsd:complexType name="PrincipalExchange">
  <xsd:sequence>
  </xsd:sequence>
  <xsd:attribute name="id" type="xsd:int"/>
</xsd:complexType>

We can conclude that each of the principal exchange cash flows is modeled as an individual Java class. Depending on the hierarchy level of the complex type, it is modeled either as a Java class or as a Java class member. Complex types are organized in the Java object hierarchy according to the XML schema definition. Most commonly, a complex type is modeled as a Java class and, at the same time, as a member of another Java class.

Accessing simple types through XML facade

This recipe explains how to access simple types through the XML facade.

Getting ready

To complete the recipe, we will amend the example BPEL process from our previous recipe, Accessing complex types through XML facade.

How to do it...

Open the Banking_BPEL process and double-click on the XML_facade_node Java Embedding activity. We amend the code snippet with the code to access the XML simple types:

for (org.packt.cashflow.facade.PrincipalExchange pe : princEx) {
    addAuditTrailEntry("Received cashflow with id: " + pe.getId() + "\n" +
        " Unadj. Principal Exch. Date ...: " + pe.getUnadjustedPrincipalExchangeDate() + "\n" +
        " Adj. Principal Exch. Date .....: " + pe.getAdjustedPrincipalExchangeDate() + "\n" +
        " Discount factor ...............: " + pe.getDiscountFactor() + "\n" +
        " Principal Exch. Amount ........: " + pe.getPrincipalExchangeAmount() + "\n");
}

With the preceding code, we output all the Java class members to the audit trail. If we now run the BPEL process, we can see the corresponding entries in the BPEL flow trace.

How it works...

The XML schema simple types are mapped to Java classes as members.
If we check our example, we have three simple types in the XML schema:

<xsd:complexType name="PrincipalExchange">
  <xsd:sequence>
    <xsd:element minOccurs="0" name="unadjustedPrincipalExchangeDate" type="xsd:date"/>
    <xsd:element minOccurs="0" name="adjustedPrincipalExchangeDate" type="xsd:date"/>
    <xsd:element minOccurs="0" name="principalExchangeAmount" type="xsd:decimal"/>
    <xsd:element minOccurs="0" name="discountFactor" type="xsd:decimal"/>
  </xsd:sequence>
  <xsd:attribute name="id" type="xsd:int"/>
</xsd:complexType>

The simple types defined in the XML schema are <xsd:date>, <xsd:decimal>, and <xsd:int>. Let us find the corresponding Java class member definitions. Open the PrincipalExchange.java file. The member definitions are as follows:

@XmlSchemaType(name = "date")
protected XMLGregorianCalendar unadjustedPrincipalExchangeDate;
@XmlSchemaType(name = "date")
protected XMLGregorianCalendar adjustedPrincipalExchangeDate;
protected BigDecimal principalExchangeAmount;
protected BigDecimal discountFactor;
@XmlAttribute
protected Integer id;

We can see that the mapping between the XML content and the Java classes was performed as shown in the following table:

XML schema simple type    Java class member
<xsd:date>                javax.xml.datatype.XMLGregorianCalendar
<xsd:decimal>             java.math.BigDecimal
<xsd:int>                 java.lang.Integer

We can also see that XML simple type definitions, as well as XML attributes, are always mapped as members of the corresponding Java class representations.

Summary

In this article, we learned how to set up an XML facade project, generate an XML facade using ANT, create an XML facade from XSD and WSDL, package an XML facade into a JAR file, generate Java documentation for an XML facade, invoke an XML facade from BPEL processes, and access complex and simple types through an XML facade.

Resources for Article:

Further resources on this subject:
BPEL Process Monitoring [Article]
Human Interactions in BPEL [Article]
Business Processes with BPEL [Article]

Tcl/Tk: Handling String Expressions

Packt
02 Mar 2011
11 min read
Tcl/Tk 8.5 Programming Cookbook

Over 100 great recipes to effectively learn Tcl/Tk 8.5
The quickest way to solve your problems with Tcl/Tk 8.5
Understand the basics and fundamentals of the Tcl/Tk 8.5 programming language
Learn graphical user interface development with the Tcl/Tk 8.5 widget set
Get a thorough and detailed understanding of the concepts with a real-world address book application
Each recipe is a carefully organized sequence of instructions to efficiently learn the features and capabilities of the Tcl/Tk 8.5 language

When I first started using Tcl, everything I read or researched stressed the mantra "Everything is a string". Coming from a hard-typed coding environment, I was used to declaring variable types, and in Tcl this is not needed. A set command can, and still does, create the variable and assign the type on the fly. For example, set variable "7" and set variable 7 will both create a variable containing 7. However, with Tcl, you can still print the variable containing a numeric 7 and add 1 to the variable containing a string representation of 7. It still holds true today that everything in Tcl is a string. When we explore the Tk toolkit and widget creation, you will rapidly see that widgets themselves have a set of string values that determine their appearance and behavior.

As a prerequisite for the recipes in this article, launch the Tcl shell as appropriate for your operating system. You can access Tcl from the command line to execute the commands.

As with everything else we have seen, Tcl provides a full suite of commands to assist in handling string expressions. However, due to the sheer number of commands and subsets, I won't be listing every item individually in the following section. Instead, we will be creating numerous recipes and examples to explore them in the following sections. A general list of the commands is as follows:

Command   Description
string    Contains multiple keywords allowing for string manipulation and data-gathering functions.
append    Appends to a string variable.
format    Formats a string in the same manner as C sprintf.
regexp    Performs regular expression matching.
regsub    Performs substitution based on regular expression matching.
scan      Parses a string using conversion specifiers in the same manner as C sscanf.
subst     Performs backslash, command, and variable substitution on a string.

Using the commands listed in the table, a developer can address all their needs as applies to strings. In the following sections, we will explore these commands as well as many subsets of the string command.

Appending to a string

Creating a string in Tcl using the set command is the starting point for all string commands. This will be the first command for most, if not all, of the following recipes. As we have seen previously, entering set variable value on the command line does this. However, to fully implement strings within a Tcl script, we need to interact with these strings from time to time, for example, with an open channel to a file or an HTTP pipe. To accomplish this, we will need to read from the channel and append to the original string. To accomplish appending to a string, Tcl provides the append command. The append command syntax is as follows:

append variable value value value...

How to do it…

In the following example, we will create a string of comma-delimited numbers using the for control construct. Return values from the commands are provided for clarity.
Enter the following command:

% set var 0
0
% for {set x 1} {$x<=10} {incr x} { append var , $x }
% puts $var
0,1,2,3,4,5,6,7,8,9,10

How it works…

The append command accepts a named variable to contain the resulting string and a space-delimited list of strings to append. As you can see, the append command accepted our variable argument and a string containing the comma. These values were used to append to the original variable (containing a starting value of 0). The resulting string output with the puts command displays our newly appended variable, complete with commas.

Formatting a string

Strings, as we all know, are our primary way of interacting with the end user. Whether presented in a message box or simply directed to the Tcl shell, they need to be as fluid as possible in the values they present. To accomplish this, Tcl provides the format command. This command allows us to format a string with variable substitution in the same manner as the ANSI C sprintf procedure. The format command syntax is as follows:

format string argument argument argument...

The format command accepts a string containing the value to be formatted as well as % conversion specifiers. The arguments contain the values to be substituted into the final string. Each conversion specifier may contain up to six sections: an XPG2 position specifier, a set of flags, a minimum field width, a numeric precision specifier, a size modifier, and a conversion character. The conversion specifiers are as follows:

Specifier   Description
d or i      Converts an integer to a signed decimal string.
u           Converts an integer to an unsigned decimal string.
o           Converts an integer to an unsigned octal string.
x or X      Converts an integer to an unsigned hexadecimal string. The lowercase x produces lowercase hexadecimal notation; the uppercase X produces uppercase hexadecimal notation.
c           Converts an integer to the Unicode character it represents.
s           No conversion is performed.
f           Converts the number provided to a signed decimal string of the form xxx.yyy, where the number of y's is determined by the precision of 6 decimal places (by default).
e or E      Converts the number to scientific notation; if the uppercase E is used, it appears in the string in place of the lowercase e.
g or G      If the exponent is less than -4 or greater than or equal to the precision, converts in the same manner as %e or %E; otherwise, converts in the same manner as %f.
%           Performs no conversion; it merely inserts a % character into the string.

There are three differences between the Tcl format command and the ANSI C sprintf procedure:

The %p and %n conversion switches are not supported.
The %c conversion only accepts an integer value.
Size modifiers are ignored when formatting floating-point values.

How to do it…

In the following example, we format a long date string for output on the command line. Return values from the commands are provided for clarity. Enter the following commands:

% set month May
May
% set weekday Friday
Friday
% set day 5
5
% set extension th
th
% set year 2010
2010
% puts [format "Today is %s, %s %d%s %d" $weekday $month $day $extension $year]
Today is Friday, May 5th 2010

How it works…

The format command successfully replaced the conversion-specifier regions with the assigned variables.

Matching a regular expression within a string

Regular expressions provide us with a powerful method to locate an arbitrarily complex pattern within a string. The regexp command is similar to a Find function in a text editor.
You search a defined string for the character or pattern of characters you are looking for, and regexp returns a Boolean value that indicates success or failure and populates a list of optional variables with any matched strings. Switches such as -indices and -inline modify this behavior, as described below. But it doesn't stop there; by providing switches, you can control the behavior of regexp. The switches are as follows:

Switch       Behavior
-about       No actual matching is made. Instead, regexp returns a list containing information about the regular expression, where the first element is a subexpression count and the second is a list of property names describing various attributes of the expression.
-expanded    Allows the use of expanded regular expressions, wherein whitespace and comments are ignored.
-indices     Returns a list of two decimal strings, containing the indices in the string of the first and last characters in the matched range.
-line        Enables newline-sensitive matching, similar to passing the -linestop and -lineanchor switches.
-linestop    Changes the behavior of [^] bracket expressions and the "." character so that they stop at newline characters.
-lineanchor  Changes the behavior of ^ and $ (anchors) so that they match both the beginning and end of a line.
-nocase      Treats uppercase characters in the search string as lowercase.
-all         Causes the command to match as many times as possible and returns the count of the matches found.
-inline      Causes regexp to return a list of the data that would otherwise have been placed in match variables. Match variables may NOT be used if -inline is specified.
-start       Allows us to specify a character index from which searching should start.
--           Denotes the end of the switches being passed to regexp. Any argument following this switch will be treated as part of the expression, even if it starts with a "-".

Now that we have a background in the switches, let's look at the command itself:

regexp switches expression string submatchvar submatchvar...

The regexp command determines whether the expression matches part or all of the string and returns 1 if the match exists or 0 if it is not found. If variables (submatchvar, for example myNumber or myData) are passed after the string, they are used to store the matched substrings. Keep in mind that if the -inline switch has been passed, no return variables should be included in the command.

Getting ready

To complete the following example, we will need to create a Tcl script file in your working directory. Open the text editor of your choice and follow the next set of instructions.

How to do it…

A common use for regexp is to accept a string containing multiple words and split it into its constituent parts. In the following example, we will create a string containing an IP address and assign the values to the named variables. Enter the following commands:

% set ip 192.168.1.65
192.168.1.65
% regexp "([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})" $ip all first second third fourth
1
% puts "$all \n$first \n$second \n$third \n$fourth"
192.168.1.65 
192 
168 
1 
65

How it works…

As you can see, the IP address has been split into its individual octet values. What regexp has done is match the groupings of decimal characters [0-9] of a varying length of 1 to 3 characters {1,3}, delimited by a "." character. The original IP address is assigned to the first variable (all), while the octet values are assigned to the remaining variables (first, second, third, and fourth).
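Before moving on, it is worth seeing the -all and -inline switches from the table above in action. The following is a small sketch; the sample string is made up purely for illustration, and the output is what we would expect from the Tcl shell:

% set text "ip1=10.0.0.1 ip2=10.0.0.2"
ip1=10.0.0.1 ip2=10.0.0.2
% regexp -all -inline {[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}} $text
10.0.0.1 10.0.0.2

Because no match variables are allowed with -inline, the matches come back as a single Tcl list, which you can then iterate with foreach.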
Performing character substitution on a string

If regexp is a Find function, then regsub is the equivalent of Find and Replace. The regsub command accepts a string and, using regular expression pattern matching, locates and, if desired, replaces the pattern with the desired value. The syntax of regsub is similar to regexp, as are the switches. However, additional control over the substitution is added. The switches are as listed next:

Switch       Description
-all         Causes the command to perform substitution for each match found. The & and \n sequences are handled for each substitution.
-expanded    Allows use of expanded regular expressions, wherein whitespace and comments are ignored.
-line        Enables newline-sensitive matching, similar to passing the -linestop and -lineanchor switches.
-linestop    Changes the behavior of [^] bracket expressions so that they stop at newline characters.
-lineanchor  Changes the behavior of ^ and $ (anchors) so that they match both the beginning and end of a line.
-nocase      Treats uppercase characters in the search string as lowercase.
-start       Allows specification of a character offset in the string from which to start matching.

Now that we have a background in the switches as they apply to the regsub command, let's look at the command:

regsub switches expression string substitution variable

The regsub command matches the expression against the string provided and either copies the string to the variable or returns the string if a variable is not provided. If a match is located, the portion of the string that matched is replaced by substitution. Whenever the substitution contains an & or a \0 sequence, it is replaced with the portion of the string that matched the expression. If the substitution contains the sequence \n (where n represents a numeric value between 1 and 9), it is replaced with the portion of the string that matched the nth subexpression of the expression. Additional backslashes may be used in the substitution to prevent the interpretation of the &, \0, and \n sequences and of the backslashes themselves. As both the regsub command and the Tcl interpreter perform backslash substitution, you should enclose the string in curly braces to prevent unintended substitution.

How to do it…

In the following example, we will substitute every instance of the word one, where it is a word by itself, with the word three. Return values from the commands are provided for clarity. Enter the following commands:

% set original "one two one two one two"
one two one two one two
% regsub -all {one} $original three new
3
% puts $new
three two three two three two

How it works…

As you can see, the value returned from the regsub command lists the number of matches found. The string original has been copied into the string new, with the substitutions completed. With additional switches, you can easily parse a lengthy string variable and perform bulk updates. I have used this to rapidly parse a large text file prior to importing data into a database.
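One more sketch before we leave regsub: the \n backreferences described above let you reorder captured text during substitution. The sample data here is made up purely for illustration, and the output is what we would expect from the Tcl shell:

% set addr "user@host"
user@host
% regsub {(\w+)@(\w+)} $addr {\2:\1} flipped
1
% puts $flipped
host:user

Because the substitution is enclosed in curly braces, the Tcl interpreter leaves \1 and \2 alone, and regsub itself resolves them to the first and second captured subexpressions.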

Python Multimedia: Animation Examples using Pyglet

Packt
31 Aug 2010
7 min read
(For more resources on Python, see here.)

Single image animation

Imagine that you are creating a cartoon movie where you want to animate the motion of an arrow or a bullet hitting a target. In such cases, typically it is just a single image. The desired animation effect is accomplished by performing appropriate translation or rotation of the image.

Time for action – bouncing ball animation

Let's create a simple animation of a 'bouncing ball'. We will use a single image file, ball.png, which can be downloaded from the Packt website. The dimensions of this image in pixels are 200x200, created on a transparent background. The three dots on the ball identify its side; we will see later why this is needed. Imagine this as a ball used in a bowling game.

Download the files SingleImageAnimation.py and ball.png from the Packt website. Place the ball.png file in a sub-directory 'images' within the directory in which SingleImageAnimation.py is saved.
The following code snippet shows the overall structure of the code:

1  import pyglet
2  import time
3
4  class SingleImageAnimation(pyglet.window.Window):
5      def __init__(self, width=600, height=600):
6          pass
7      def createDrawableObjects(self):
8          pass
9      def adjustWindowSize(self):
10         pass
11     def moveObjects(self, t):
12         pass
13     def on_draw(self):
14         pass
15 win = SingleImageAnimation()
16 # Set window background color to gray.
17 pyglet.gl.glClearColor(0.5, 0.5, 0.5, 1)
18
19 pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)
20
21 pyglet.app.run()

Although it is not required, we will encapsulate event handling and other functionality within a class SingleImageAnimation. The program to be developed is short, but in general, this is a good coding practice, and it will also help any future extension to the code. An instance of SingleImageAnimation is created on line 15. This class is inherited from pyglet.window.Window and encapsulates the functionality we need here. The API method on_draw is overridden by the class. on_draw is called when the window needs to be redrawn. Note that we no longer need a decorator statement such as @win.event above the on_draw method, because the window API method is simply overridden by this inherited class.

The constructor of the class SingleImageAnimation is as follows:

1  def __init__(self, width=None, height=None):
2      pyglet.window.Window.__init__(self,
3                                    width=width,
4                                    height=height,
5                                    resizable = True)
6      self.drawableObjects = []
7      self.rising = False
8      self.ballSprite = None
9      self.createDrawableObjects()
10     self.adjustWindowSize()

As mentioned earlier, the class SingleImageAnimation inherits pyglet.window.Window. However, its constructor doesn't take all the arguments supported by its superclass. This is because we don't need to change most of the default argument values. If you want to extend this application further and need these arguments, you can do so by adding them as __init__ arguments. The constructor initializes some instance variables and then calls methods to create the animation sprite and resize the window, respectively. The method createDrawableObjects creates a sprite instance using the ball.png image.
1  def createDrawableObjects(self):
2      """
3      Create sprite objects that will be drawn within the
4      window.
5      """
6      ball_img = pyglet.image.load('images/ball.png')
7      ball_img.anchor_x = ball_img.width / 2
8      ball_img.anchor_y = ball_img.height / 2
9
10     self.ballSprite = pyglet.sprite.Sprite(ball_img)
11     self.ballSprite.position = (
12         self.ballSprite.width + 100,
13         self.ballSprite.height*2 - 50)
14     self.drawableObjects.append(self.ballSprite)

The anchor_x and anchor_y properties of the image instance are set such that the image has an anchor exactly at its center. This will be useful while rotating the image later. On line 10, the sprite instance self.ballSprite is created. Later, we will be setting the width and height of the Pyglet window as three times the sprite width and three times the sprite height. The position of the image within the window is set on line 11. The initial position is chosen as shown in the next screenshot. In this case, there is only one Sprite instance. However, to make the program more general, a list of drawable objects called self.drawableObjects is maintained.

To continue the discussion from the previous step, we will now review the on_draw method:

def on_draw(self):
    self.clear()
    for d in self.drawableObjects:
        d.draw()

As mentioned previously, the on_draw function is an API method of the class pyglet.window.Window that is called when the window needs to be redrawn. This method is overridden here. The self.clear() call clears the previously drawn contents within the window. Then, all the Sprite objects in the list self.drawableObjects are drawn in the for loop.

The preceding image illustrates the initial ball position in the animation. The method adjustWindowSize sets the width and height parameters of the Pyglet window. The code is self-explanatory:

def adjustWindowSize(self):
    w = self.ballSprite.width * 3
    h = self.ballSprite.height * 3
    self.width = w
    self.height = h

So far, we have set up everything for the animation to play. Now comes the fun part. We will change the position of the sprite representing the image to achieve the animation effect. During the animation, the image will also be rotated, to give it the natural feel of a bouncing ball.

1  def moveObjects(self, t):
2      if self.ballSprite.y - 100 < 0:
3          self.rising = True
4      elif self.ballSprite.y > self.ballSprite.height*2 - 50:
5          self.rising = False
6
7      if not self.rising:
8          self.ballSprite.y -= 5
9          self.ballSprite.rotation -= 6
10     else:
11         self.ballSprite.y += 5
12         self.ballSprite.rotation += 5

This method is scheduled to be called 20 times per second using the following code in the program:

pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)

To start with, the ball is placed near the top. The animation should be such that it gradually falls down, hits the bottom, and bounces back. After this, it continues its upward journey to hit a boundary somewhere near the top, and again it begins its downward journey. The code block from lines 2 to 5 checks the current y position of self.ballSprite. If it has hit the upward limit, the flag self.rising is set to False. Likewise, when the lower limit is hit, the flag is set to True. The flag is then used by the next code snippet to increment or decrement the y position of self.ballSprite. Lines 8, 9, 11, and 12 rotate the Sprite instance: the current rotation angle is incremented or decremented by the given value. This is the reason why we set the image anchors, anchor_x and anchor_y, at the center of the image. The Sprite object honors these image anchors.
If the anchors are not set this way, the ball will be seen wobbling in the resulting animation. Once all the pieces are in place, run the program from the command line as:

$ python SingleImageAnimation.py

This will pop up a window that plays the bouncing ball animation. The next illustration shows some intermediate frames from the animation while the ball is falling down.

What just happened?

We learned how to create an animation using just a single image. The image of a ball was represented by a sprite instance. This sprite was then translated and rotated on the screen to accomplish a bouncing ball animation. The whole functionality, including the event handling, was encapsulated in the class SingleImageAnimation.

A Python Multimedia Application: Thumbnail Maker

Packt
12 Aug 2010
7 min read
(For more resources on Python, see here.)

Project: Thumbnail Maker

Let's take up a project now. We will apply some of the operations we learned in the previous article to create a simple Thumbnail Maker utility. This application will accept an image as an input and will create a resized version of that image. Although we are calling it a thumbnail maker, it is a multi-purpose utility that implements some basic image-processing functionality. Before proceeding further, make sure that you have installed all the packages discussed at the beginning of the previous article. The Thumbnail Maker dialog is shown in the following illustration.

The Thumbnail Maker GUI has two components:

The left panel is a 'control area', where you can specify certain image parameters along with options for input and output paths.
A graphics area on the right-hand side, where you can view the generated image.

In short, this is how it works: the application takes an image file as an input and accepts user input for image parameters such as the dimensions in pixels, the filter for re-sampling, and the rotation angle in degrees. When the user clicks the OK button in the dialog, the image is processed and saved at the location indicated by the user, in the specified output image format.

Time for action – play with Thumbnail Maker application

First, we will run the Thumbnail Maker application as an end user. This warm-up exercise intends to give us a good understanding of how the application works, which, in turn, will help us develop and learn the involved code quickly. So get ready for action!

Download the files ThumbnailMaker.py, ThumbnailMakerDialog.py, and Ui_ThumbnailMakerDialog.py from the Packt website. Place these files in some directory.
From the command prompt, change to this directory location and type the following command:

python ThumbnailMakerDialog.py

The Thumbnail Maker dialog that pops up was shown in the earlier screenshot. Next, we will specify the input-output paths and various image parameters. You can open any image file of your choice; here, the flower image shown in some previous sections will be used as the input image. To specify an input image, click on the small button with three dots (...). It will open a file dialog. The following illustration shows the dialog with all the parameters specified.
If the "Maintain Aspect Ratio" checkbox is checked, the application internally scales the image dimensions so that the aspect ratio of the output image remains the same. When the OK button is clicked, the resultant image is saved at the location specified by the Output Location field, and the saved image is displayed in the right-hand panel of the dialog. The following screenshot shows the dialog after clicking the OK button.
You can now try modifying different parameters, such as the output image format or the rotation angle, and save the resulting image.
See what happens when the Maintain Aspect Ratio checkbox is unchecked. The aspect ratio of the resulting image will not be preserved, and the image may appear distorted if the width and height dimensions are not properly specified.
Experiment with different re-sampling filters; you will notice the difference in quality between the resultant image and the earlier one.
There are certain limitations to this basic utility. It is required to specify reasonable values for all the parameter fields in the dialog. The program will print an error if any of the parameters is not specified.

What just happened?
We got ourselves familiar with the user interface of the Thumbnail Maker dialog and saw how it works for processing an image with different dimensions and quality. This knowledge will make it easier to understand the Thumbnail Maker code.

Generating the UI code

The Thumbnail Maker GUI is written using PyQt4 (the Python bindings for the Qt4 GUI framework). A detailed discussion on how the GUI is generated and how the GUI elements are connected to the main functions is beyond the scope of this article. However, we will cover certain main aspects of this GUI to get you going. The GUI-related code in this application can simply be used 'as-is', and if this is something that interests you, go ahead and experiment with it further! In this section, we will briefly discuss how the UI code is generated using PyQt4.

Time for action – generating the UI code

PyQt4 comes with an application called Qt Designer. It is a GUI designer for Qt-based applications and provides a quick way to develop a graphical user interface containing some basic widgets. With this, let's see how the Thumbnail Maker dialog looks in Qt Designer and then run a command to generate Python source code from the .ui file.

Download the thumbnailMaker.ui file from the Packt website.
Start the Qt Designer application that comes with the PyQt4 installation.
Open the file thumbnailMaker.ui in Qt Designer. Notice the red-colored borders around the UI elements in the dialog. These borders indicate a 'layout' in which the widgets are arranged. Without a layout in place, the UI elements may appear distorted when you run the application and, for instance, resize the dialog. Three types of QLayouts are used, namely Horizontal, Vertical, and Grid layouts.
You can add new UI elements, such as a QCheckBox or a QLabel, by dragging and dropping them from the 'Widget Box' of Qt Designer. It is located in the left panel by default.
Click on the field next to the label "Input file". In the right-hand panel of Qt Designer, there is a Property Editor that displays the properties of the selected widget (in this case, a QLineEdit). This is shown in the following illustration. The Property Editor allows us to assign values to various attributes such as the objectName, width, and height of the widget, and so on.
Qt Designer saves the file with the extension .ui. To convert this into Python source code, PyQt4 provides a conversion utility called pyuic4. On Windows XP, for a standard Python installation, it is present at the following location: C:\Python26\Lib\site-packages\PyQt4\pyuic4.bat. Add this path to your environment variable, or alternatively specify the whole path each time you want to convert a .ui file to a Python source file.
The conversion utility can be run from the command prompt as:

pyuic4 thumbnailMaker.ui -o Ui_ThumbnailMakerDialog.py

This script will generate Ui_ThumbnailMakerDialog.py with all the GUI elements defined. You can review this file further to understand how the UI elements are defined.

What just happened?

We learned how to auto-generate the Python source code that defines the UI elements of the Thumbnail Maker dialog from a Qt Designer file.

Have a go hero – tweak UI of Thumbnail Maker dialog

Modify the thumbnailMaker.ui file in Qt Designer and implement the following list of things in the Thumbnail Maker dialog:

Change the color of all the line edits in the left panel to pale yellow.
Tweak the default file extension displayed in the Output file Format combobox so that the first option is .png instead of .jpeg. (Double-click on this combobox to edit it.)
Add the new option .tiff to the output format combobox.
Align the OK and Cancel buttons to the right corner. You will need to break the layouts, move the spacer around, and recreate the layouts.
Set the range of the rotation angle to 0 to 360 degrees instead of the current -180 to +180 degrees.

After this, create Ui_ThumbnailMakerDialog.py by running the pyuic4 script, and then run the Thumbnail Maker application; a minimal preview harness is sketched below.
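If you want to check your tweaks without wiring up the full application, a tiny harness like the following can display the regenerated dialog on its own. This is a sketch based on standard PyQt4 usage; it assumes the generated class is named Ui_ThumbnailMakerDialog, which is what pyuic4 produces for this dialog:

import sys
from PyQt4 import QtGui
# Generated by: pyuic4 thumbnailMaker.ui -o Ui_ThumbnailMakerDialog.py
from Ui_ThumbnailMakerDialog import Ui_ThumbnailMakerDialog

app = QtGui.QApplication(sys.argv)
dialog = QtGui.QDialog()
ui = Ui_ThumbnailMakerDialog()
ui.setupUi(dialog)   # build the widgets from the .ui definition onto the dialog
dialog.show()
sys.exit(app.exec_())

None of the image-processing logic is connected here; the harness only verifies that your layout and widget changes look right.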