
How-To Tutorials - Server-Side Web Development

XPath Support in Oracle JDeveloper - XDK 11g

Packt
15 Oct 2009
11 min read
With the SAX and DOM APIs, node lists have to be iterated over to access a particular node. Another advantage of navigating an XML document with XPath is that an attribute node may be selected directly; with the DOM and SAX APIs, an element node has to be selected before an element attribute can be selected. Here we will discuss XPath support in JDeveloper.

What is XPath?

XPath is a language for addressing an XML document's elements and attributes. As an example, say you receive an XML document that contains the details of a shipment and you want to retrieve the element and attribute values from it. You don't just want to list the values of all the nodes; you also want to output the values of specific elements or attributes. In such a case, you would use XPath to retrieve the values of those elements and attributes. XPath models an XML document as a hierarchical structure, a tree of nodes, which is the XPath data model. The XPath data model consists of seven node types:

- Root Node: The root of the DOM tree. The document element (the root element) is a child of the root node, as are any document-level processing instructions and comments.
- Element Node: Represents an element in an XML document. The character data, elements, processing instructions, and comments within an element are the child nodes of the element node.
- Attribute Node: Represents an attribute of an element, other than a namespace declaration attribute.
- Text Node: The character data within an element is a text node. A text node has at least one character of data; whitespace is also considered character data. By default, the ignorable whitespace after the end of an element and before the start of the following element is also a text node. This ignorable whitespace can be excluded from the DOM tree built by parsing an XML document by setting the whitespace-preserving mode to false with the setPreserveWhitespace(boolean flag) method.
- Comment Node: Represents a comment in an XML document, except comments within the DOCTYPE declaration.
- Processing Instruction Node: Represents a processing instruction in an XML document, except processing instructions within the DOCTYPE declaration. The XML declaration is not considered a processing instruction node.
- Namespace Node: Represents a namespace mapping, established by a namespace declaration such as xmlns:xsd="http://www.w3.org/2001/XMLSchema". A namespace node consists of a namespace prefix (xsd in the example) and a namespace URI (http://www.w3.org/2001/XMLSchema in the example).

Specific nodes, including element, attribute, and text nodes, may be accessed with XPath, and XPath supports nodes in a namespace. Nodes in XPath are selected with an XPath expression. An expression is evaluated to yield an object of one of four types: node set, Boolean, number, or string. For an introduction to XPath, refer to the W3C Recommendation for XPath (http://www.w3.org/TR/xpath). As a brief review, expression evaluation in XPath is performed with respect to a context node. The most commonly used type of expression in XPath is a location path. XPath defines two types of location paths: relative location paths and absolute location paths. A relative location path is defined with respect to a context node and consists of a sequence of one or more location steps separated by "/". A location step consists of an axis, a node test, and predicates. An example of a location step is:

child::journal[position()=2]

In the example, the child axis contains the child nodes of the context node, the node test is the journal node set, and the predicate selects the second node in the journal node set. An absolute location path is defined with respect to the root node and starts with "/".
The difference between a relative location path and an absolute location path is that a relative location path starts with a location step, whereas an absolute location path starts with "/".

XPath in Oracle XDK 11g

Oracle XML Developer's Kit 11g, which is included in JDeveloper, provides the DOMParser class to parse an XML document and construct a DOM structure of it. An XMLDocument object represents the DOM structure of an XML document and may be retrieved from a DOMParser object after an XML document has been parsed. The XMLDocument class provides select methods to select nodes in an XML document with an XPath expression. In this article we shall parse an example XML document with the DOMParser class, obtain an XMLDocument object for the XML document, and select nodes from the document with the XMLDocument class select methods. The different select methods in the XMLDocument class are:

- selectSingleNode(String XPathExpression): Selects a single node that matches an XPath expression. If more than one node matches the specified expression, the first node is selected. Use this method if you want to select the first node that matches an XPath expression.
- selectNodes(String XPathExpression): Selects a node list of nodes that match a specified XPath expression. Use this method if you want to select a collection of similar nodes.
- selectSingleNode(String XPathExpression, NSResolver resolver): Selects a single namespace node that matches a specified XPath expression. Use this method if the XML document has nodes in namespaces and you want to select the first node that is in a namespace and matches an XPath expression.
- selectNodes(String XPathExpression, NSResolver resolver): Selects a node list of nodes that match a specified XPath expression. Use this method if you want to select a collection of similar nodes that are in a namespace.
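To make the select methods concrete, here is a minimal sketch of the same kind of lookup using the standard JAXP javax.xml.xpath API that ships with the JDK. Note this is an illustration, not the XDK API: the XDK select methods operate on XMLDocument directly, and the class name and sample XML below are invented for this sketch.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class XPathSelectDemo {

    // A tiny stand-in for catalog.xml, kept inline so the sketch is runnable.
    static final String XML =
        "<catalog><journal><article><title>First</title></article>"
      + "<article><title>Second</title></article></journal></catalog>";

    // Rough counterpart of XMLDocument.selectNodes(String):
    // evaluate an XPath expression and count the matching nodes.
    public static int countMatches(String expr) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(XML.getBytes("UTF-8")));
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList nodes = (NodeList) xpath.evaluate(expr, doc, XPathConstants.NODESET);
        return nodes.getLength();
    }

    // Rough counterpart of XMLDocument.selectSingleNode(String):
    // return the text content of the first matching node.
    public static String firstMatch(String expr) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(XML.getBytes("UTF-8")));
        XPath xpath = XPathFactory.newInstance().newXPath();
        return (String) xpath.evaluate(expr, doc, XPathConstants.STRING);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countMatches("/catalog/journal/article/title")); // 2
        System.out.println(firstMatch("/catalog/journal/article[1]/title")); // First
    }
}
```

The XDK variants that take an NSResolver play the same role as JAXP's NamespaceContext: both map the prefixes used in an expression to namespace URIs.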
The example XML document that is parsed in this article has a namespace declaration for elements in the namespace with the prefix journal. For an introduction to namespaces in XML, refer to the W3C Recommendation on Namespaces in XML 1.0 (http://www.w3.org/TR/REC-xml-names/). catalog.xml, the example XML document, is shown in the following listing:

<?xml version="1.0" encoding="UTF-8"?>
<catalog title="Oracle Magazine" publisher="Oracle Publishing">
 <journal:journal journal_date="November-December 2008">
  <journal:article journal_section="ORACLE DEVELOPER">
   <title>Instant ODP.NET Deployment</title>
   <author>Mark A. Williams</author>
  </journal:article>
  <journal:article journal_section="COMMENT">
   <title>Application Server Convergence</title>
   <author>David Baum</author>
  </journal:article>
 </journal:journal>
 <journal date="March-April 2008">
  <article section="TECHNOLOGY">
   <title>Oracle Database 11g Redux</title>
   <author>Tom Kyte</author>
  </article>
  <article section="ORACLE DEVELOPER">
   <title>Declarative Data Filtering</title>
   <author>Steve Muench</author>
  </article>
 </journal>
</catalog>

Setting the environment

Create an application (called XPath, for example) and a project (called XPath) in JDeveloper. The XPath API will be demonstrated in a Java application. Therefore, create a Java class in the XPath project with File | New. In the New Gallery window select Categories | General and Items | Java Class. In the Create Java Class window, specify the class name (XPathParser, for example) and the package name (xpath in the example application), and click on the OK button. To develop an application with XPath, add the required libraries to the project classpath. Select the project node in Application Navigator and select Tools | Project Properties. In the Project Properties window, select the Libraries and Classpath node. To add a library, select the Add Library button. Select the Oracle XML Parser v2 library. Click on the OK button in the Project Properties window.
We also need to add an XML document that is to be parsed and navigated with XPath. To add an XML document, select File | New. In the New Gallery window, select Categories | General | XML and Items | XML Document. Click on the OK button. In the Create XML File window specify the file name catalog.xml in the File Name field, and click on the OK button. Copy the catalog.xml listing to the catalog.xml file in the Application Navigator. The directory structure of the XPath project is shown in the following illustration:

XPath Search

In this section, we shall select nodes from the example XML document, catalog.xml, with the XPath Search tool of JDeveloper 11g. The XPath Search tool consists of an Expression field for specifying an XPath expression; specify an XPath expression and click on OK to select the nodes matching it. The XPath Search tool also has the provision to search for nodes in a specific namespace. An XML namespace is a collection of element and attribute names that are identified by a URI reference. Namespaces are specified in an XML document using namespace declarations; a namespace declaration is an attribute that binds a namespace prefix to a namespace URI. To navigate catalog.xml with XPath, select catalog.xml in the Application Navigator and select Search | XPath Search. In the following subsections, we shall select example nodes using absolute location paths and relative location paths. Use a relative location path if the XML document is large and a specific node is required, or if the node from which subnodes are to be selected and the relative location path are known. Use an absolute location path if the XML document is small, or if the relative location path is not known. The objective is to use minimum XPath navigation: use the minimum number of nodes to navigate in order to select the required node.

Selecting nodes with absolute location paths

Next, we shall demonstrate selecting nodes with various XPath examples.
As an example, select all the title elements in catalog.xml. Specify the XPath expression for selecting the title elements in the Expression field of the Apply an XPath Expression on catalog.xml window. The XPath expression to select all title elements is /catalog/journal/article/title. Click on the OK button to select the title elements. The title elements get selected. Title elements from the journal:article elements in the journal namespace do not get selected, because a namespace has not been applied to the XPath expression. As another example, select the title element in the first article element using the XPath expression /catalog/journal/article[1]/title. We are not using namespaces yet. The XPath expression is specified in the Expression field, and the title of the first article element gets selected, as shown in the JDeveloper output. Attribute nodes may also be selected with XPath; attributes are selected using the "@" prefix. As an example, select the section attribute in the first article element in the journal element. The XPath expression for selecting the section attribute is /catalog/journal/article[1]/@section and is specified in the Expression field. Click on the OK button to select the section attribute. The section attribute gets output in JDeveloper.

Selecting nodes with relative location paths

In the previous examples, an absolute location path is used to select nodes. Next, we shall demonstrate selecting an element with a relative location path. As an example, select the title of the first article element in the journal element. The relative location path for selecting the title element is child::catalog/journal/article[position()=1]/title. Specifying the axis as child and the node test as catalog selects all the child nodes of the catalog node, and is equivalent to an absolute location path that starts with /catalog. If the child nodes of the journal node were required, specify the node test as journal.
Specify the XPath expression in the Expression field and click on the OK button. The title of the first article element in the journal element gets selected as shown here.

Selecting namespace nodes

XPath Search also has the provision to select elements and attributes in a namespace. To illustrate, select all the title elements in the journal:journal element (that is, in the journal namespace) using the XPath expression /catalog/journal:journal/journal:article/title. First, add the namespaces of the elements and attributes to be selected in the Namespaces text area. Prefixes and URIs of namespaces are added with the Add button: specify the prefix in the Prefix column and the URI in the URI column. Multiple namespace mappings may be added. XPath expressions that select namespace nodes are similar to no-namespace expressions, except that the namespace prefixes are included in the expressions. Elements in the default namespace, which does not have a namespace prefix, are also considered to be in a namespace. Click on the OK button to select the nodes with XPath. The title elements in the journal:journal element (in the journal namespace) get selected and output in JDeveloper. Attributes in a namespace may also be selected with XPath Search. As an example, select the section attributes in the journal namespace. Specify the XPath expression to select the section attributes in the Expression field, and click on the OK button. The section attributes in the journal namespace get selected.
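The NSResolver-based select methods described earlier have a direct analogue in standard JAXP: a NamespaceContext that maps prefixes to URIs. The sketch below is illustrative only; in particular, the journal namespace URI is a made-up placeholder, since the article's listing does not show catalog.xml's actual namespace declaration.

```java
import java.io.ByteArrayInputStream;
import java.util.Iterator;
import javax.xml.XMLConstants;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class NamespaceXPathDemo {

    // Hypothetical namespace URI, for illustration only.
    static final String JOURNAL_NS = "http://example.com/journal";

    static final String XML =
        "<catalog xmlns:journal=\"" + JOURNAL_NS + "\">"
      + "<journal:journal><journal:article><title>X</title></journal:article>"
      + "</journal:journal></catalog>";

    public static int countWithNamespace(String expr) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // required for prefixed node tests
        Document doc = dbf.newDocumentBuilder()
            .parse(new ByteArrayInputStream(XML.getBytes("UTF-8")));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // JAXP counterpart of the XDK NSResolver: map prefix -> URI.
        xpath.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) {
                return "journal".equals(prefix) ? JOURNAL_NS
                                                : XMLConstants.NULL_NS_URI;
            }
            public String getPrefix(String uri) { return null; }
            public Iterator<String> getPrefixes(String uri) { return null; }
        });
        NodeList nodes = (NodeList) xpath.evaluate(expr, doc, XPathConstants.NODESET);
        return nodes.getLength();
    }

    public static void main(String[] args) throws Exception {
        // The prefixed expression matches; the unprefixed one does not,
        // because unprefixed names in XPath 1.0 mean "no namespace".
        System.out.println(countWithNamespace("/catalog/journal:journal/journal:article/title"));
    }
}
```

As with the XDK NSResolver, the prefix used in the expression need not match the prefix in the document; only the URI mapping matters.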

Documenting our Application in Apache Struts 2 (part 2)

Packt
15 Oct 2009
9 min read
Documenting web applications

Documenting an entire web application can be surprisingly tricky because of the many different layers involved. Some web application frameworks support automatic documentation generation better than others; it's preferable to have fewer disparate parts. For example, Lisp, Smalltalk, and some Ruby frameworks are little more than internal DSLs that can be trivially redefined to produce documentation from the actual application code. In general, Java frameworks are more difficult to limit to a single layer. Instead, we are confronted with HTML, JSP, JavaScript, Java, the framework itself, its configuration methodologies (XML, annotations, scripting languages, etc.), the service layers, business logic, persistence layers, and so on. Complete documentation generally means aggregating information from many disparate sources and presenting it in a way that is meaningful to the intended audience.

High-level overviews

The site map is obviously a reasonable overview of a web application. A site map may look like a simple hierarchy chart, showing a simple view of a site's pages without showing all of the possible links between pages, how a page is implemented, and so on. This diagram was created by hand and shows only the basic outline of the application flow. It represents minor maintenance overhead, since it would need to be updated when there are any changes to the application.

Documenting JSPs

There doesn't seem to be any general-purpose JSP documentation methodology. It's relatively trivial to create comments inside a JSP page using JSP comments or a regular Javadoc comment inside a scriptlet. Pulling these comments out is then a matter of some simple parsing. This may be done by using one of our favorite tools, regular expressions, or using more HTML-specific parsing and subsequent massaging.
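As a sketch of the regular-expression approach, the following pulls JSP comments (<%-- ... --%>) out of a page's source. The class name and sample page are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JspCommentExtractor {

    // Matches JSP comments; DOTALL lets a comment span multiple lines,
    // and the reluctant quantifier stops at the first closing --%>.
    private static final Pattern JSP_COMMENT =
        Pattern.compile("<%--(.*?)--%>", Pattern.DOTALL);

    public static List<String> extract(String jspSource) {
        List<String> comments = new ArrayList<>();
        Matcher m = JSP_COMMENT.matcher(jspSource);
        while (m.find()) {
            comments.add(m.group(1).trim());
        }
        return comments;
    }

    public static void main(String[] args) {
        String jsp = "<%-- Page header --%>\n<h1>Hi</h1>\n<%-- Footer\n   block --%>";
        System.out.println(extract(jsp));
    }
}
```

Real pages would also need handling for comments inside scriptlets and for Javadoc-style comments, which is where the "subsequent massaging" comes in.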
Where it gets tricky is when we want to start generating documentation that includes elements such as JSP pages, which may be included using many different mechanisms—static includes, <jsp:include.../> tags, Tiles, SiteMesh, insertion via Ajax, and so on. Similarly, generating connections between pages is fraught with custom cases: we might use general-purpose HTML links, Struts 2 link tags, attach a link to a page element with JavaScript, ad infinitum/nauseum. When we throw in the (perhaps perverse) ability to generate HTML using Java, we have a situation where creating a perfectly general-purpose tool is a major undertaking. However, we can fairly easily create a reasonable set of documentation that is specific to our framework by parsing configuration files (or scanning a classpath for annotations), understanding how we're linking the server side to our presentation views, and performing (at least limited) HTML/JSP parsing to pull out presentation-side dependencies, links, and anything else that we want documented.

Documenting JavaScript

If only there were a tool such as Javadoc for JavaScript. The JsDoc Toolkit provides Javadoc-like functionality for JavaScript, with additional features to help handle the dynamic nature of JavaScript code. Because of that dynamic nature, we (as developers) must remain diligent both in how we write our JavaScript and in how we document it. Fortunately, the JsDoc Toolkit is good at recognizing current JavaScript programming paradigms (within reason), and when it can't, it provides Javadoc-like tags we can use to give it hints. For example, consider our JavaScript Recipe module, where we create several private functions intended for use only by the module, and return a map of functions for use on the webpage. The returned map itself contains a map of validation functions. Ideally, we'd like to be able to document all of the different components.
Because of the dynamic nature of JavaScript, it's more difficult for tools to figure out the context things belong to. Java is much simpler in this regard (which is both a blessing and a curse), so we need to give JsDoc hints to help it understand our code's layout and purpose. A high-level flyby of the Recipe module shows a layout similar to the following:

var Recipe = function () {
    var ingredientLabel;
    var ingredientCount;
    // ...
    function trim(s) {
        return s.replace(/^\s+|\s+$/g, "");
    }
    function msgParams(msg, params) {
        // ...
    }
    return {
        loadMessages: function (msgMap) {
            // ...
        },
        prepare: function (label, count) {
            // ...
        },
        pageValidators: {
            validateIngredientNameRequired: function (form) {
                // ...
            },
            // ...
        }
    };
}();

We see several documentable elements: the Recipe module itself, private variables, private functions, and the return map, which contains both functions and a map of validation functions. JsDoc accepts a number of Javadoc-like document annotations that allow us to control how it decides to document the JavaScript elements. The JavaScript module pattern, exemplified by an immediately-executed function, is understood by JsDoc through the use of the @namespace annotation.

/**
 * @namespace
 * Recipe module.
 */
var Recipe = function () {
    // ...
}();

We can mark private functions with the @private annotation as shown next:

/**
 * @private
 * Trims leading/trailing space.
 */
function trim(s) {
    return s.replace(/^\s+|\s+$/g, "");
}

It gets interesting when we look at the map returned by the Recipe module:

return /** @lends Recipe */ {
    /**
     * Loads message map.
     *
     * <p>
     * This is generally used to pass in text resources
     * retrieved via <s:text.../> or <s:property
     * value="getText(...)"/> tags on a JSP page in lieu
     * of a normalized way for JS to get Java I18N resources
     * </p>
     */
    loadMessages: function (msgMap) {
        _msgMap = msgMap;
    },
    // ...

The @lends annotation indicates that the functions returned by the Recipe module belong to the Recipe module.
Without the @lends annotation, JsDoc doesn't know how to interpret the JavaScript in the way we probably intend it to be used, so we provide a little prodding. The loadMessages() function itself is documented as we would document a Java method, including the use of embedded HTML. The other interesting bit is the map of validation functions. Once again, we apply the @namespace annotation, creating a separate set of documentation for the validation functions, as they're used by our validation template hack and not directly by our page code.

/**
 * @namespace
 * Client-side page validators used by our template hack.
 * ...
 */
pageValidators: {
    /**
     * Insures each ingredient with a quantity
     * also has a name.
     *
     * @param {Form object} form
     * @type boolean
     */
    validateIngredientNameRequired: function (form) {
        // ...

Note also that we can annotate the type of our JavaScript parameters inside curly brackets. Obviously, JavaScript doesn't have typed parameters, so we need to tell the tool what the function is expecting. The @type annotation is used to document what the function is expected to return. It gets a little trickier if the function returns different types based on arbitrary criteria; however, we never do that, because it's hard to maintain. JsDoc has the typical plethora of command-line options, and requires the specification of the application itself (written in JavaScript, and run using Rhino) and the templates defining the output format.
An alias to run JsDoc might look like the following, assuming the JsDoc installation is pointed at by the ${JSDOC} shell variable:

alias jsdoc='java -jar ${JSDOC}/jsrun.jar ${JSDOC}/app/run.js -t=${JSDOC}/templates/jsdoc'

The command line to document our Recipe module (including private functions, using the -p option) and to write the output to the jsdoc-out folder will now look like the following:

jsdoc -p -d=jsdoc-out recipe.js

The homepage looks similar to a typical Javadoc page, but more JavaScript-like. A portion of the Recipe module's validators, marked by a @namespace annotation inside the @lends annotation of the return map, looks like the one shown in the next image (the left-side navigation has been removed). We can get pretty decent and accurate JavaScript documentation using JsDoc, with only a minimal amount of prodding to help with the dynamic aspects of JavaScript, which are difficult to figure out automatically.

Documenting interaction

Documenting interaction can be surprisingly complicated, particularly in today's highly-interactive Web 2.0 applications. There are many different levels of interactivity taking place, and the implementation may live in several different layers, from JavaScript in the browser to HTML generated deep within a server-side framework. UML sequence diagrams may be able to capture much of that interactivity, but fall somewhat short when there are activities happening in parallel. Ajax, in particular, ends up being a largely concurrent activity: we might send the Ajax request, and then do various things on the browser in anticipation of the result.

More UML and the power of scribbling

The UML activity diagram is able to capture this kind of interactivity reasonably well, as it allows a single process to be split into multiple streams and then joined up again later. As we look at a simple activity diagram, we'll also take a quick look at scribbling, paper, whiteboards, and the humble digital camera.
Don't spend so much time making pretty pictures!

One of the hallmarks of lightweight, agile development is that we don't spend all of our time creating the World's Most Perfect Diagram™. Instead, we create just enough documentation to get our points across. One result of this is that we might not use a $1,000 diagramming package to create all of our diagrams. Believe it or not, sometimes just taking a picture of a sketched diagram from paper or a whiteboard is more than adequate to convey our intent, and it is usually much quicker than creating a perfectly-rendered software-driven diagram. Yes, the image above is a digital camera picture of a piece of notebook paper with a rough activity diagram. The black bars here are used to indicate a small section of parallel functionality: a server-side search and some activity on the browser. The browser programming is informally indicated by the black triangles. In this case, it might not even be worth sketching out. However, for moderately more complicated usage cases, particularly when there is a lot of both server- and client-side activity, a high-level overview is often worth the minimal effort. The same digital camera technique is also very helpful in meetings, where various documentation might be captured on a whiteboard. The resulting images can be posted to a company wiki, used in informal specifications, and so on.

Schema Validation with Oracle JDeveloper - XDK 11g

Packt
15 Oct 2009
7 min read
JDeveloper built-in schema validation

Oracle JDeveloper 11g has built-in support for XML schema validation. If an XML document includes a reference to an XML schema, the XML document may be validated with that schema using the built-in feature. An XML schema may be specified in an XML document using the xsi:noNamespaceSchemaLocation attribute or the xsi:schemaLocation attribute. Before we discuss when to use which attribute, we need to define the target namespace. A schema is a collection of type definitions and element declarations whose names belong to a particular namespace called a target namespace; a target namespace thus distinguishes between type definitions and element declarations from different collections. An XML schema doesn't need to have a target namespace. If the XML schema has a target namespace, specify the schema's location in an XML document using the xsi:schemaLocation attribute. If the XML schema does not have a target namespace, specify the schema location using the xsi:noNamespaceSchemaLocation attribute. The xsi:noNamespaceSchemaLocation and xsi:schemaLocation attributes are a hint to the processor about the location of an XML schema document.
The example XML schema document that we shall create is catalog.xsd and is listed here:

<?xml version="1.0" encoding="utf-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
 <xsd:element name="catalog" type="catalogType"/>
 <xsd:complexType name="catalogType">
  <xsd:sequence>
   <xsd:element ref="journal" minOccurs="0" maxOccurs="unbounded"/>
  </xsd:sequence>
 </xsd:complexType>
 <xsd:element name="journal" type="journalType"/>
 <xsd:complexType name="journalType">
  <xsd:sequence>
   <xsd:element ref="article" minOccurs="0" maxOccurs="unbounded"/>
  </xsd:sequence>
  <xsd:attribute name="title" type="xsd:string"/>
  <xsd:attribute name="publisher" type="xsd:string"/>
  <xsd:attribute name="edition" type="xsd:string"/>
 </xsd:complexType>
 <xsd:element name="article" type="articleType"/>
 <xsd:complexType name="articleType">
  <xsd:sequence>
   <xsd:element name="title" type="xsd:string"/>
   <xsd:element name="author" type="xsd:string"/>
  </xsd:sequence>
  <xsd:attribute name="section" type="xsd:string"/>
 </xsd:complexType>
</xsd:schema>

The XML document instance that we shall generate from the schema is catalog.xml and is listed as follows:

<?xml version="1.0" encoding="utf-8"?>
<catalog>
 <journal title="Oracle Magazine" publisher="Oracle Publishing" edition="September-October 2008">
  <article section="Features">
   <title>Share 2.0</title>
   <author>Alan Joch</author>
  </article>
 </journal>
 <journal title="Oracle Magazine" publisher="Oracle Publishing" edition="March-April 2008">
  <article section="Oracle Developer">
   <title>Declarative Data Filtering</title>
   <author>Steve Muench</author>
  </article>
 </journal>
</catalog>

Specify the XML schema location in the XML document using the following attribute declaration:

xsi:noNamespaceSchemaLocation="catalog.xsd"

The XML schema may be in any directory. The example XML document does not include any namespace elements; therefore, the schema is specified with the xsi:noNamespaceSchemaLocation attribute in the root element, catalog. The XML schema may be specified with a relative URL, a file, or an HTTP URL.
The xsi:noNamespaceSchemaLocation attribute we added specifies the relative path to the XML schema document catalog.xsd. To validate the XML document with the XML schema, right-click on the XML document and select Validate XML. The XML document gets validated with the XML schema, and the output indicates that the XML document does not have any validation errors. To demonstrate validation errors, add a non-valid element to the XML document. As an example, add the following element to the catalog element after the first journal element:

<article></article>

To validate the modified XML document, right-click on the XML document and select Validate XML. The output indicates validation errors. All the elements after the non-valid element become non-valid. For example, the journal element is valid as a subelement of the catalog element, but because the second journal element comes after the non-valid article element, the journal element also becomes non-valid, as indicated in the validation output. XDK 11g also provides a schema validation-specific API, known as XSDValidator, to validate an XML document with an XML schema. The choice of validation method depends on the additional functionality required in the validation application; XSDValidator is suitable if all that is required is schema validation.

Setting the environment

Create an application (SchemaValidation, for example) and a project (SchemaValidation, for example) in JDeveloper. To create an application and a project, select File | New. In the New Gallery window, select Categories | General and Items | Generic Application. Click on OK. In the Create Generic Application window, specify an Application Name and click on Next. In the Name your Generic project window, specify a Project Name and click on Finish. An application and a project get created. Next, add some XDK 11g JAR files to the project classpath. Select the project node in Application Navigator, and select Tools | Project Properties.
In the Project Properties window, select Libraries and Classpath. Click on the Add Library button to add a library. In the Add Library window, select the Oracle XML Parser v2 library and click on the OK button. The Oracle XML Parser v2 library gets added to the project Libraries. Select the Add JAR/Directory button to add the JAR file xml.jar from the C:\Oracle\Middleware\jdeveloper\modules\oracle.xdk_11.1.1 directory. First, create an XML document and an XML schema in JDeveloper. To create an XML document, select File | New. In the New Gallery window select Categories | General | XML. In the Items listed, select XML Document and click on the OK button. In the Create XML File wizard, specify the XML file name, catalog.xml, and click on the OK button. An XML document gets added to the SchemaValidation project in Application Navigator. To add an XML schema, select File | New, and General | XML in the New Gallery window. Select XML Schema in the Items listed, and click on the OK button. An XML schema document gets added to the SchemaValidation project. The example XML document, catalog.xml, consists of a journal catalog. Copy the XML document to the catalog.xml file in the JDeveloper project. The example XML document does not specify the location of the XML schema document to which it must conform, because we will be setting the XML schema document in the schema validation application. If the XML schema document is specified in both the XML document and the schema validation application, the schema document set in the schema validation application is used. Next, copy the example XML schema document, catalog.xsd, to catalog.xsd in the JDeveloper project SchemaValidation. Each XML schema is required to be in the XML schema namespace http://www.w3.org/2001/XMLSchema. The XML schema namespace is specified with a namespace declaration in the root element, schema, of the XML schema.
A namespace declaration is of the format xmlns:xsd="http://www.w3.org/2001/XMLSchema". Next, we will create Java classes for schema validation. Select File | New, and subsequently Categories | General and Items | Java Class in the New Gallery window, to create a Java class for schema validation. Click on the OK button. In the Create Java Class window, specify a Class Name, XMLSchemaValidator, and a package name, schemavalidation, and click on the OK button. A Java class gets added to the SchemaValidation project. Similarly, add the Java classes DOMValidator and SAXValidator. The schema validation applications are shown in the Application Navigator.
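The kind of validation these classes perform can be sketched with the standard JAXP javax.xml.validation API, a portable alternative to the XDK's XSDValidator. The inline schema, class name, and sample documents below are illustrative, not the article's catalog.xsd:

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class SchemaValidationDemo {

    // A deliberately tiny schema: a catalog of one or more journal strings.
    static final String XSD =
        "<?xml version=\"1.0\"?>"
      + "<xsd:schema xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\">"
      + " <xsd:element name=\"catalog\">"
      + "  <xsd:complexType><xsd:sequence>"
      + "   <xsd:element name=\"journal\" type=\"xsd:string\" maxOccurs=\"unbounded\"/>"
      + "  </xsd:sequence></xsd:complexType>"
      + " </xsd:element>"
      + "</xsd:schema>";

    public static boolean isValid(String xml) {
        try {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(XSD)));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false; // the SAXException carries the validation error detail
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid("<catalog><journal>JDJ</journal></catalog>")); // valid
        System.out.println(isValid("<catalog><article/></catalog>"));             // invalid
    }
}
```

In a real application you would attach an ErrorHandler to collect all validation errors instead of stopping at the first exception, which is closer to what the Validate XML menu item reports in JDeveloper.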

Packt
15 Oct 2009
12 min read

Documenting our Application in Apache Struts 2 (part 1)

Documenting Java

Everybody knows the basics of documenting Java, so we won't go into much detail. We'll talk a bit about ways of writing code whose intention is clear, mention some Javadoc tricks we can use, and highlight some tools that can help keep our code clean. Clean code is one of the most important ways we can document our application. Anything we can do to increase readability will reduce confusion later (including our own).

Self-documenting code

We've all heard the myth of self-documenting code. In theory, code is always clear enough to be easily understood. In reality, this isn't always the case. However, we should try to write code that is as self-documenting as possible. Keeping non-code artifacts in sync with the actual code is difficult. The only artifact that survives a project is the executable, which is created from code, not comments. This is one of the reasons for writing self-documenting code. (Annotations, XDoclet, and so on, make that somewhat less true.) There are little things we can do throughout our code to make it read as much like our intent as possible and make extraneous comments just that: extraneous.

Document why, not what

Over-commenting wastes everybody's time. Time is wasted in writing a comment, reading it, keeping that comment in sync with the code, and, most importantly, a lot of time is wasted when a comment is not accurate. Ever seen this?

    a += 1; // increment a

This is the most useless comment in the world. Firstly, it's really obvious that we're incrementing something, regardless of what that something is. If the person reading our code doesn't know what += is, then we have more serious problems than them not knowing that we're incrementing, say, an array index. Secondly, if a is an array index, we should probably either use a more common array index name or make it obvious that it's an array index. Using i and j is common for array indices, while idx or index is less common.
It may make sense to be very explicit in variable naming under some circumstances. Generally, it's nice to avoid names such as indexOfOuterArrayOfFoobars. However, with a large loop body it might make sense to use something such as num or currentIndex, depending on the circumstances. With Java 1.5 and its support for collection iteration, it's often possible to do away with the index altogether, but not always.

Make your code read like the problem

Buzzphrases like Domain Specific Languages (DSLs) and Fluent Interfaces are often heard when discussing how to make our code look like our problem. We don't necessarily hear about them as much in the Java world because other languages support their creation in more "literate" ways. The recent interest in Ruby, Groovy, Scala, and other dynamic languages has brought the concept back into the mainstream.

A DSL, in essence, is a computer language targeted at a very specific problem. Java is an example of a general-purpose language. YACC and regular expressions are examples of DSLs targeted at creating parsers and recognizing strings of interest, respectively. DSLs may be external, where the implementing language processes the DSL appropriately, or internal, where the DSL is written in the implementing language itself. An internal DSL can also be thought of as an API or library, but one that reads more like a "little language".

Fluent interfaces are slightly more difficult to define, but can be thought of as an internal DSL that "flows" when read aloud. This is a very informal definition, but it will work for our purposes.

Java can actually be downright hostile to some common DSL and fluent techniques for various reasons, including the expectations of the JavaBean specification. However, it's still possible to use some of the techniques to good effect. One typical fluent API technique is simply returning the current object instance from its methods.
For example, following the JavaBean specification, an object will have a setter for each of its properties. A User class might include the following:

    public class User {
        private String fname;
        private String lname;

        public void setFname(String fname) { this.fname = fname; }
        public void setLname(String lname) { this.lname = lname; }
    }

Using the class is as simple as we'd expect it to be:

    User u = new User();
    u.setFname("James");
    u.setLname("Gosling");

Naturally, we might also supply a constructor that accepts the same parameters. However, it's easy to think of a class that has many properties, making a full constructor impractical. It also seems like the code is a bit wordy, but we're used to this in Java. Another way of creating the same functionality is to include setter methods that return the current instance. If we want to maintain JavaBean compatibility, and there are reasons to do so, we would still need to include normal setters, but we can also include "fluent" setters, as shown here:

    public User fname(String fname) {
        this.fname = fname;
        return this;
    }

    public User lname(String lname) {
        this.lname = lname;
        return this;
    }

This creates (what some people believe is) more readable code. It's certainly shorter:

    User u = new User().fname("James").lname("Gosling");

There is one potential "gotcha" with this technique. Moving initialization into methods has the potential to create an object in an invalid state. Depending on the object, this may not always be a usable solution for object initialization. Users of Hibernate will recognize the "fluent" style, where method chaining is used to create criteria.
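The same return-the-instance idea is easy to see outside Java as well. Here is a minimal sketch in Python (a hypothetical User class written for illustration, not code from the article's project; the method names with_fname/with_lname are my own):

```python
class User:
    """Fluent-setter sketch: each setter returns self,
    so calls can be chained in a single readable expression."""

    def __init__(self):
        self.fname = None
        self.lname = None

    def with_fname(self, fname):
        self.fname = fname
        return self  # returning self is what enables chaining

    def with_lname(self, lname):
        self.lname = lname
        return self


u = User().with_fname("James").with_lname("Gosling")
print(u.fname, u.lname)  # James Gosling
```

The trade-off is the same as in the Java version: chained setters read well, but an object can be observed in a half-initialized state between calls.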
Joshua Flanagan wrote a fluent regular expression interface, turning regular expressions (already a domain-specific language) into a series of chained method calls:

    Regex socialSecurityNumberCheck = new Regex(Pattern.With.AtBeginning
        .Digit.Repeat.Exactly(3)
        .Literal("-").Repeat.Optional
        .Digit.Repeat.Exactly(2)
        .Literal("-").Repeat.Optional
        .Digit.Repeat.Exactly(4)
        .AtEnd);

Whether or not this particular usage is an improvement is debatable, but it's certainly easier to read for the non-regex folks. Ultimately, the use of fluent interfaces can increase readability (by quite a bit in most cases), may introduce some extra work (or completely duplicate work, as in the case of setters, but code generation and/or IDE support can help mitigate that), and may occasionally be more verbose (but with the benefit of enhanced clarity and IDE completion support).

Contract-oriented programming

Aspect-oriented programming (AOP) is a way of encapsulating cross-cutting functionality outside of the mainline code. That's a mouthful, but essentially it means that we can remove common code that is found across our application and consolidate it in one place. The canonical examples are logging and transactions, but AOP can be used in other ways as well.

Design by Contract (DbC) is a software methodology that states that our interfaces should define and enforce precise specifications regarding operation. "Design by Contract" is a registered trademark of Interactive Software Engineering Inc. Other terms include Programming by Contract (PbC) or Contract Oriented Programming (COP).

How does COP help create self-documenting code? Consider the following portion of a stack implementation:

    public void push(final Object o) {
        stack.add(o);
    }

What happens if we attempt to push a null? Let's assume that for this implementation, we don't want to allow pushing a null onto the stack.
    /**
     * Pushes non-null objects on to the stack.
     */
    public void push(final Object o) {
        if (o == null) return;
        stack.add(o);
    }

Once again, this is simple enough. We'll add a comment to the Javadocs stating that null objects will not be pushed (and that the call will fail/return silently). This becomes the "contract" of the push method, captured in code and documented in Javadocs. The contract is specified twice: once in the code (the ultimate arbiter) and again in the documentation. However, the user of the class has no proof that the underlying implementation actually honors that contract. There's no guarantee that if we pass in a null, it will return silently without pushing anything.

The implied contract can also change. We might decide to allow pushing nulls. We might throw an IllegalArgumentException or a NullPointerException on a null argument. We're not required to add a throws clause to the method declaration when throwing runtime exceptions, which means further information may be lost in both the code and the documentation.

Eiffel has language-level support for COP with the require/do/ensure/end construct. It goes beyond the simple null check in the above code and actively encourages detailed pre- and post-condition contracts. An implementation's push() method might check the remaining stack capacity before pushing, or throw exceptions for specific conditions. In pseudo-Eiffel, we'd represent the push() method in the following way:

    push (o: Object)
      require
        o /= null
      do
        -- push
      end

A stack also has an implied contract. We assume (sometimes naively) that once we call the push method, the stack will contain whatever we pushed. The size of the stack will have increased by one, or whatever other conditions our stack implementation requires. Java, of course, doesn't have built-in contracts. However, it does contain a mechanism that can be used to get some of the benefits for a conceptually simple price.
The mechanism is not as complete, or as integrated, as Eiffel's version. However, it removes contract enforcement from the mainline code, and provides a way for both sides of the software to specify, accept, and document the contracts themselves. Removing the contract information from the mainline code keeps the implementation clean and makes the implementation code easier to understand. Having programmatic access to the contract means that the contract could be documented automatically, rather than having to maintain a disconnected chunk of Javadoc.

SpringContracts

SpringContracts is a beta-level Java COP implementation based on Spring's AOP facilities, using annotations to state pre- and post-contract conditions. It formalizes the nature of a contract, which can ease development. Let's consider our VowelDecider that was developed through TDD. We can also use COP to express its contract (particularly the entry condition). This is a method that doesn't alter state, so post-conditions don't apply here. Our implementation of VowelDecider ended up looking (more or less) like this:

    public boolean decide(final Object o) throws Exception {
        if ((o == null) || (!(o instanceof String))) {
            throw new IllegalArgumentException("Argument must be a non-null String.");
        }
        String s = (String) o;
        return s.matches(".*[aeiouy]+.*");
    }

Once we remove the original contract enforcement code, which was mixed with the mainline code, our SpringContracts @Precondition annotation looks like the following:

    @Precondition(
        condition = "arg1 != null && arg1.class.name == 'java.lang.String'",
        message = "Argument must be a non-null String")
    public boolean decide(Object o) throws Exception {
        String s = (String) o;
        return s.matches(".*[aeiouy]+.*");
    }

The pre-condition is that the argument must not be null and must be (precisely) a String. (Because of SpringContracts' expression language, we can't just say instanceof String, in case we want to allow String subclasses.)
We can unit-test this class in the same way we tested the TDD version. In fact, we can copy the tests directly. Running them should trigger test failures on the null and non-string argument tests, as we originally expected an IllegalArgumentException. We'll now get a contract violation exception from SpringContracts.

One difference here is that we need to initialize the Spring context in our test. One way to do this is with JUnit's @BeforeClass annotation, along with a method that loads the Spring configuration file from the classpath and instantiates the decider as a Spring bean. Our class setup now looks like this:

    @BeforeClass
    public static void setup() {
        appContext = new ClassPathXmlApplicationContext(
            "/com/packt/s2wad/applicationContext.xml");
        decider = (VowelDecider) appContext.getBean("vowelDecider");
    }

We also need to configure SpringContracts in our Spring configuration file. Those unfamiliar with Spring's (or AspectJ's) AOP will be a bit confused. However, in the end, it's reasonably straightforward, with a potential "gotcha" regarding how Spring does proxying.

    <aop:aspectj-autoproxy proxy-target-class="true"/>
    <aop:config>
        <aop:aspect ref="contractValidationAspect">
            <aop:pointcut id="contractValidatingMethods"
                expression="execution(* com.packt.s2wad.example.CopVowelDecider.*(..))"/>
            <aop:around pointcut-ref="contractValidatingMethods"
                method="validateMethodCall"/>
        </aop:aspect>
    </aop:config>
    <bean id="contractValidationAspect"
        class="org.springcontracts.dbc.interceptor.ContractValidationInterceptor"/>
    <bean id="vowelDecider"
        class="com.packt.s2wad.example.CopVowelDecider"/>

The SpringContracts documentation goes into this a bit more, and the Spring documentation contains a wealth of information regarding how AOP works in Spring. The main difference between this and the simplest AOP setup is that our autoproxy target must be a class, which requires CGLib. This could also potentially affect operation.
The only other modification is to change the exception we're expecting to SpringContracts' ContractViolationCollectionException, and our test starts passing.

These pre- and post-condition annotations use the @Documented meta-annotation, so the SpringContracts COP annotations will appear in the Javadocs. It would also be possible to use various other means to extract and document contract information.

Getting into details

This mechanism, or its implementation, may not be a good fit for every situation. Runtime performance is a potential issue. As it's just some Spring magic, it can be turned off by a simple configuration change. However, if we do, we'll lose the value of the always-on contract management. On the other hand, under certain circumstances, it may be enough to say that once the contracts are consistently honored under all of the test conditions, the system is correct enough to run without them. This view holds the contracts more as an acceptance test than as run-time checking. Indeed, there is an overlap between COP and unit testing as ways to keep code honest. As unit tests aren't run all the time, it may be reasonable to use COP as a temporary runtime unit test or acceptance test.
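The core idea above, hoisting contract enforcement out of the mainline code, is not tied to Java or Spring. As a minimal illustration, here is a hedged Python sketch using a decorator; the names (require, the Stack class) are mine, not SpringContracts', and this is a toy, not a full contract framework:

```python
import functools

def require(predicate, message):
    """Precondition decorator: validates the argument before the
    decorated method runs, keeping the method body free of checks."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, arg):
            if not predicate(arg):
                raise ValueError("Contract violation: " + message)
            return func(self, arg)
        return wrapper
    return decorator

class Stack:
    def __init__(self):
        self._items = []

    @require(lambda o: o is not None, "argument must be non-null")
    def push(self, o):
        self._items.append(o)  # mainline code stays clean

    def pop(self):
        return self._items.pop()

s = Stack()
s.push("a")
print(s.pop())  # a
```

As with SpringContracts, the contract lives in one visible, machine-readable place (the decorator), so it can be documented or disabled without touching the push implementation.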

Packt
14 Oct 2009
6 min read

Building Friend Networks with Django 1.0

An important aspect of socializing in our application is letting users maintain their friend lists and browse through the bookmarks of their friends. So, in this section we will build a data model to maintain user relationships, and then program two views to enable users to manage their friends and browse their friends' bookmarks.

Creating the friendship data model

Let's start with the data model for the friends feature. When a user adds another user as a friend, we need to maintain both users in one object. Therefore, the Friendship data model will consist of two references to the User objects involved in the friendship. Create this model by opening the bookmarks/models.py file and inserting the following code in it:

    class Friendship(models.Model):
        from_friend = models.ForeignKey(
            User, related_name='friend_set'
        )
        to_friend = models.ForeignKey(
            User, related_name='to_friend_set'
        )

        def __unicode__(self):
            return u'%s, %s' % (
                self.from_friend.username,
                self.to_friend.username
            )

        class Meta:
            unique_together = (('to_friend', 'from_friend'), )

The Friendship data model starts by defining two fields that are User objects: from_friend and to_friend. from_friend is the user who added to_friend as a friend. As you can see, we passed a keyword argument called related_name to both fields. The reason for this is that both fields are foreign keys that refer back to the User data model. This would cause Django to try to create two attributes called friendship_set in each User object, resulting in a name conflict. To avoid this problem, we provide a specific name for each attribute. Consequently, each User object will contain two new attributes: user.friend_set, which contains the friends of this user, and user.to_friend_set, which contains the users who added this user as a friend. Throughout this article, we will only use the friend_set attribute, but the other one is there in case you need it. Next, we defined a __unicode__ method in our data model.
This method is useful for debugging. Finally, we defined a class called Meta. This class may be used to specify various options related to the data model. Some of the commonly used options are:

- db_table: The name of the table to use for the model. This is useful when the table name generated by Django is a reserved keyword in SQL, or when you want to avoid conflicts if a table with the same name already exists in the database.
- ordering: A list of field names. It declares how objects are ordered when retrieving a list of objects. A column name may be preceded by a minus sign to change the sorting order from ascending to descending.
- permissions: This lets you declare custom permissions for the data model in addition to the add, change, and delete permissions. Permissions should be a list of two-tuples, where each two-tuple consists of a permission codename and a human-readable name for that permission. For example, you can define a new permission for listing friend bookmarks by using the following Meta class:

    class Meta:
        permissions = (
            ('can_list_friend_bookmarks', 'Can list friend bookmarks'),
        )

- unique_together: A list of field names that must be unique together.

We used the unique_together option here to ensure that a Friendship object is added only once for a particular relationship. There cannot be two Friendship objects with equal to_friend and from_friend fields. This is equivalent to the following SQL declaration:

    UNIQUE ("from_friend", "to_friend")

If you check the SQL generated by Django for this model, you will find something similar to this in the code. After entering the data model code into the bookmarks/models.py file, run the following command to create its corresponding table in the database:

    $ python manage.py syncdb

Now let's experiment with the new model and see how to store and retrieve relations of friendship.
Run the interactive console using the following command:

    $ python manage.py shell

Next, retrieve some User objects and build relationships between them (but make sure that you have at least three users in the database):

    >>> from bookmarks.models import *
    >>> from django.contrib.auth.models import User
    >>> user1 = User.objects.get(id=1)
    >>> user2 = User.objects.get(id=2)
    >>> user3 = User.objects.get(id=3)
    >>> friendship1 = Friendship(from_friend=user1, to_friend=user2)
    >>> friendship1.save()
    >>> friendship2 = Friendship(from_friend=user1, to_friend=user3)
    >>> friendship2.save()

Now, user2 and user3 are both friends of user1. To retrieve the list of Friendship objects associated with user1, use:

    >>> user1.friend_set.all()
    [<Friendship: user1, user2>, <Friendship: user1, user3>]

(The actual usernames in the output were replaced with user1, user2, and user3 for clarity.) As you may have already noticed, the attribute is named friend_set because we called it so using the related_name option when we created the Friendship model. Next, let's see one way to retrieve the User objects of user1's friends:

    >>> [friendship.to_friend for friendship in user1.friend_set.all()]
    [<User: user2>, <User: user3>]

The last line of code uses a Python feature called "list comprehension" to build the list of User objects. This feature allows us to build a list by iterating through another list. Here, we built the User list by iterating over a list of Friendship objects. If this syntax looks unfamiliar, please refer to the List Comprehension section in the Python tutorial.

Notice that user1 has user2 as a friend, but the opposite is not true:

    >>> user2.friend_set.all()
    []

In other words, the Friendship model works only in one direction. To add user1 as a friend of user2, we need to construct another Friendship object.
    >>> friendship3 = Friendship(from_friend=user2, to_friend=user1)
    >>> friendship3.save()
    >>> user2.friend_set.all()
    [<Friendship: user2, user1>]

By reversing the arguments passed to the Friendship constructor, we built a relationship in the other direction. Now user1 is a friend of user2 and vice versa. Experiment more with the model to make sure that you understand how it works. Once you feel comfortable with it, move to the next section, where we will write views to utilize the data model. Things will only get more exciting from now on!
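Stripped of Django, the behaviour the Friendship model encodes, one-directional relations with no duplicates, can be illustrated with a plain-Python set of (from, to) pairs. This is a sketch for intuition only; the names add_friend and friends_of are mine, and none of this is code from the article's project:

```python
friendships = set()  # set of (from_user, to_user) tuples;
                     # the set plays the role of unique_together

def add_friend(from_user, to_user):
    friendships.add((from_user, to_user))  # a duplicate add has no effect

def friends_of(user):
    # analogous to [f.to_friend for f in user.friend_set.all()]
    return [to for (frm, to) in friendships if frm == user]

add_friend("user1", "user2")
add_friend("user1", "user3")
add_friend("user1", "user2")          # ignored: already present
print(sorted(friends_of("user1")))    # ['user2', 'user3']
print(friends_of("user2"))            # [] -- one direction only
add_friend("user2", "user1")          # make the friendship mutual
print(friends_of("user2"))            # ['user1']
```

Just as in the Django model, a mutual friendship requires two pairs, one in each direction.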

Packt
14 Oct 2009
5 min read

Synchronous Communication and Interaction with Moodle 1.9 Multimedia

These options can be helpful for distance education, providing new ways of communicating and interacting with our students (and between them) when we are not all in the same physical space. Because Moodle does not provide effective synchronous communication tools (the chat activity could overload the server), the aforementioned tools are presented as extensions that can support our courses, giving them a new level of interaction. In distance courses of considerable duration, such communication can be a motivation and a way of providing support to students when we are online at the same time.

Communicating in real-time using text, audio, and video

Google Chat is a service from Google that allows text, audio, and video chat amongst Google Mail users, meaning we'll need a Google account. Audio conversation is usually called Voice over IP (VoIP), but as bandwidth allowances increase, the use of video is becoming common. With this tool we can:

- Meet with colleagues or students, individually or in groups
- Participate in a distant event (for example, attend a conference)
- Conduct interviews
- Teach how to play an instrument (by using the webcam)
- Teach gestural language (by using the webcam)

I find it really useful to use VoIP in distance courses, not only to give feedback to students and get to know them better, but also to create opportunities for students to interact with each other during group tasks outside of these tutor-student meeting times. A good time to use this application is in Module 10 – What's good music, a theme that fits well with an online debate about how to define quality criteria for music. Students will be required to work in groups and debate what they think is good music and how it can be assessed. This discussion will be facilitated by using this tool.

Chat and group chat

The chat option is available in Google Mail, on the sidebar on the left.
To start, we can configure some settings, especially privacy settings, by going to Options | Chat settings…. In the Auto-add suggested contacts section, we should select the option Only allow people that I've explicitly approved to chat with me and see when I'm online, as shown in the screenshot below.

We can also disable chat history if we don't want to keep a record of our chats. There is another option during a chat to go "off the record", meaning that if the chat history is on, the portion of the conversation that takes place while this option is selected will not be archived.

We are now ready to start a chat. We can search for contacts in the same Google Mail Chat sidebar, using the search form that is available (Search, add or invite), and double-click on the name of the contact that is displayed, or on the Chat link of the pop-up window that is displayed.

Because chat is synchronous, it's obvious that the (two or more) people chatting must be online. We can check whether a person is online by looking at the small icon next to the people we've located, or in our chat list. If they have a grey, round icon on the left of their name, they are offline (or invisible and don't want to be bothered). If the color is green (available), yellow (idle), or red (busy), it's possible to chat with them. In the pop-up, we can also add the person to the chat list below the search form.

After starting the chat, a window similar to the one shown below opens in the lower-right corner of the Google Mail account, and we can start talking. When we are chatting with someone, we can click on the Video & More | Group Chat option to invite one or more friends to join the conversation. A new window will appear, in which we can chat with the participants. Note that if we click on the arrow in the blue bar at the top of our chat window, the window will pop out from its position in the Google Mail window and we can access it as an independent window.
If we paste the URL of a YouTube video into the chat window, a preview of the video will be integrated directly into our conversation, as shown in the screenshot below.

Transferring files

The easiest way to send files to participants for reading, supporting discussion, or commenting upon is either by using Google Mail, or by uploading them to Moodle or Google Docs and sharing these with the chat participants. This can be useful in many online discussions.

Voice and video chat

Chat, as we saw, is available by default in Google Mail, on the leftmost sidebar. To add audio and voice capabilities to this chat, we have to install a plug-in that is available at http://mail.google.com/videochat, for Windows and Mac users (again, sorry to Linux users). After installing this plug-in, we can start an audio or video conversation (only one-to-one). If our contacts have a camera or microphone, we can click on the Video & more option again, and two further options will be available.

In the case of voice chat, a call will be started, and we will also keep the text chat functionality. The same applies to video chat: in the upper area, a video of the person that we are chatting with will be displayed, and in the lower corner, if we have a webcam, our own video will be displayed.

Image source: Scmoewes (2005). Jimi. Retrieved March 30, 2009, from http://www.flickr.com/photos/cmoewes/30989105/

For distance courses or even in e-learning, Google Chat is an option. But if we need more complex functionality, including audio conferencing and desktop sharing, there are other tools available. We will now look at one in particular, called Dimdim.
Packt
14 Oct 2009
5 min read

Creating an Administration Interface with Django 1.0

Activating the administration interface

The administration interface comes as a Django application. To activate it, we will follow a simple procedure that is similar to enabling the user authentication system. The administration application is located in the django.contrib.admin package. So, the first step is adding the path of this package to the INSTALLED_APPS variable. Open the settings.py file, locate INSTALLED_APPS, and edit it as follows:

    INSTALLED_APPS = (
        'django.contrib.auth',
        'django.contrib.admin',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.sites',
        'django.contrib.comments',
        'django_bookmarks.bookmarks',
    )

Next, run the following command to create the necessary tables for the administration application:

    $ python manage.py syncdb

Now we need to make the administration interface accessible from within our site by adding URL entries for it. The admin application defines many views (as we will see later), so manually adding a separate entry for each view can become a tedious task. Therefore, the admin interface provides a shortcut for this: a single object that encapsulates all the admin views. To use it, open the urls.py file and edit it as follows:

    from django.contrib import admin
    admin.autodiscover()

    urlpatterns = patterns('',
        [...]
        # Admin interface
        (r'^admin/(.*)', admin.site.root),
    )

Here, we are importing the admin module, calling a method in it, and mapping all the URLs under the path ^admin/ to a view called admin.site.root. This will make the views of the administration interface accessible from within our project. One last thing remains before we see the administration page in action. We need to tell Django which models can be managed in the administration interface. This is done by creating a new file called admin.py in the bookmarks directory.
Create the bookmarks/admin.py file and add the following code to it:

    from django.contrib import admin
    from bookmarks.models import *

    class LinkAdmin(admin.ModelAdmin):
        pass

    admin.site.register(Link, LinkAdmin)

We created a class derived from the admin.ModelAdmin class and mapped it to the Link model using the admin.site.register method. This effectively tells Django to enable the Link model in the administration interface. The keyword pass means that the class is empty. Later, we will use this class to customize the administration page, so it won't remain empty. Do the same for the Bookmark, Tag, and SharedBookmark models: create an empty admin class for each of them in the bookmarks/admin.py file and register it. The User model is provided by Django and, therefore, we don't have control over it. Fortunately, it already has an admin class, so it's available in the administration interface by default.

Next, launch the development server and direct your browser to http://127.0.0.1:8000/admin/. You will be greeted by a login page; log in with the superuser account that you created after writing the database models.

Next, you will see a list of the models that are available to the administration interface. As discussed earlier, only models that have admin classes in the bookmarks/admin.py file will appear on this page. If you click on a model name, you will get a list of the objects that are stored in the database under this model. You can use this page to view or edit a particular object, or to add a new one. The following figure shows the listing page for the Link model.

The edit form is generated according to the fields that exist in the model. The Link form, for example, contains a single text field called Url. You can use this form to view and change the URL of a Link object. In addition, the form performs proper validation of fields before saving the object.
So if you try to save a Link object with an invalid URL, you will receive an error message asking you to correct the field. The following figure shows a validation error when trying to save an invalid link.

Fields are mapped to form widgets according to their type. For example, date fields are edited using a calendar widget, whereas foreign key fields are edited using a list widget, and so on. The following figure shows a calendar widget from the user edit page; Django uses it for date and time fields.

As you may have noticed, the administration interface represents models by using the string returned by the __unicode__ method. It was indeed a good idea to replace the generic strings returned by the default __unicode__ method with more helpful ones. This greatly helps when working with the administration page, as well as with debugging.

Experiment with the administration pages. Try to create, edit, and delete objects. Notice how changes made in the administration interface are immediately reflected on the live site. The administration interface also keeps track of the actions that you make and lets you review the history of changes for each object.

This section has covered most of what you need to know in order to use the administration interface provided by Django. This feature is actually one of the main advantages of using Django: you get a fully featured administration interface from writing only a few lines of code! Next, we will see how to tweak and customize the administration pages. As a bonus, we will learn more about the permissions system offered by Django.
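As a recap of the registration step described earlier in this article (one empty admin class per model), the completed bookmarks/admin.py might look like the following. This is a sketch of what the article asks you to write, not code reproduced from the book; the class names BookmarkAdmin, TagAdmin, and SharedBookmarkAdmin are my own choices for the empty classes:

```python
from django.contrib import admin
from bookmarks.models import *

class LinkAdmin(admin.ModelAdmin):
    pass

class BookmarkAdmin(admin.ModelAdmin):
    pass

class TagAdmin(admin.ModelAdmin):
    pass

class SharedBookmarkAdmin(admin.ModelAdmin):
    pass

# Register each model with its (for now, empty) admin class.
admin.site.register(Link, LinkAdmin)
admin.site.register(Bookmark, BookmarkAdmin)
admin.site.register(Tag, TagAdmin)
admin.site.register(SharedBookmark, SharedBookmarkAdmin)
```

Each admin class starts as a placeholder; the customization section that follows will fill them in.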
Importing Structure and Data Using phpMyAdmin

Packt
12 Oct 2009
9 min read
A feature was added in version 2.11.0: an import file may contain the DELIMITER keyword. This enables phpMyAdmin to mimic the mysql command-line interpreter. The DELIMITER separator is used to delineate the part of the file containing a stored procedure, as these procedures can themselves contain semicolons. The default values for the Import interface are defined in $cfg['Import'].

Before examining the actual import dialog, let's discuss some limits issues.

Limits for the transfer

When we import, the source file is usually on our client machine, so it must travel to the server via HTTP. This transfer takes time and uses resources that may be limited in the web server's PHP configuration. Instead of using HTTP, we can upload our file to the server using a protocol such as FTP, as described in the Web Server Upload Directories section. This method circumvents the web server's PHP upload limits.

Time limits

First, let's consider the time limit. In config.inc.php, the $cfg['ExecTimeLimit'] configuration directive assigns, by default, a maximum execution time of 300 seconds (five minutes) for any phpMyAdmin script, including the scripts that process data after the file has been uploaded. A value of 0 removes the limit and, in theory, gives us infinite time to complete the import operation. If the PHP server is running in safe mode, modifying $cfg['ExecTimeLimit'] will have no effect. This is because the limits set in php.ini or in user-related web server configuration files (such as .htaccess or virtual host configuration files) take precedence over this parameter.

Of course, the time it effectively takes depends on two key factors:

- Web server load
- MySQL server load

The time taken by the file, as it travels between the client and the server, does not count as execution time, because the PHP script starts to execute only once the file has been received on the server.
Therefore, the $cfg['ExecTimeLimit'] parameter has an impact only on the time used to process data (such as decompression or sending it to the MySQL server).

Other limits

The system administrator can use the php.ini file or the web server's virtual host configuration file to control uploads on the server. The upload_max_filesize parameter specifies the upper limit, or the maximum file size that can be uploaded via HTTP. This one is obvious, but another less obvious parameter is post_max_size. As HTTP uploading is done via the POST method, this parameter may limit our transfers. For more details about the POST method, please refer to http://en.wikipedia.org/wiki/Http#Request_methods.

The memory_limit parameter is provided to prevent web server child processes from grabbing too much of the server memory—phpMyAdmin also runs as a child process. Thus, the handling of normal file uploads, especially compressed dumps, can be compromised by giving this parameter a small value. Here, no preferred value can be recommended; the value depends on the size of the uploaded data. The memory limit can also be tuned via the $cfg['MemoryLimit'] parameter in config.inc.php.

Finally, file uploads must be allowed by setting file_uploads to On. Otherwise, phpMyAdmin won't even show the Location of the text file dialog. It would be useless to display this dialog, as the connection would be refused later by the PHP component of the web server.

Partial imports

If the file is too big, there are ways in which we can resolve the situation. If we still have access to the original data, we could use phpMyAdmin to generate smaller CSV export files, choosing the Dump n rows starting at record # n dialog. If this is not possible, we will have to use a text editor to split the file into smaller sections. Another possibility is to use the upload directory mechanism, which accesses the directory defined in $cfg['UploadDir']. This feature is explained later in this article.
In recent phpMyAdmin versions, the Partial import feature can also solve this file size problem. By selecting the Allow interrupt… checkbox, the import process will interrupt itself if it detects that it is close to the time limit. We can also specify a number of queries to skip from the start, in case we have successfully imported a number of rows and wish to continue from that point.

Temporary directory

On some servers, a security feature called open_basedir can be set up in a way that impedes the upload mechanism. In this case, or for any other reason, when uploads are problematic, the $cfg['TempDir'] parameter can be set with the value of a temporary directory. This is probably a subdirectory of phpMyAdmin's main directory, into which the web server is allowed to put the uploaded file.

Importing SQL files

Any file containing MySQL statements can be imported via this mechanism. The dialog is available in the Database view or the Table view, via the Import subpage, or in the Query window. There is no relation between the currently selected table (here author) and the actual contents of the SQL file that will be imported. All the contents of the SQL file will be imported, and it is those contents that determine which tables or databases are affected. However, if the imported file does not contain any SQL statements to select a database, all statements in the imported file will be executed on the currently selected database.

Let's try an import exercise. First, we make sure that we have a current SQL export of the book table. This export file must contain the structure and the data. Then we drop the book table—yes, really! We could also simply rename it. Now it is time to import the file back. We should be on the Import subpage, where we can see the Location of the text file dialog. We just have to hit the Browse button and choose our file. phpMyAdmin is able to detect which compression method (if any) has been applied to the file.
The formats that the program can decompress vary depending on the phpMyAdmin version and the extensions that are available in the PHP component of the web server. To import successfully, phpMyAdmin must be informed of the character set of the file to be imported. The default value is utf8. However, if we know that the import file was created with another character set, we should specify it here.

An SQL compatibility mode selector is available at import time. This mode should be adjusted to match the actual data that we are about to import, according to the type of the server where the data was previously exported.

To start the import, we click Go. The import procedure continues and we receive a message: Import has been successfully finished, 2 queries executed. We can browse our newly-created tables to confirm the success of the import operation. The file could be imported for testing in a different database or even on another MySQL server.

Importing CSV files

In this section, we will examine how to import CSV files. There are two possible methods—CSV and CSV using LOAD DATA. The first method is implemented internally by phpMyAdmin and is the recommended one for its simplicity. With the second method, phpMyAdmin receives the file to be loaded and passes it to MySQL. In theory, this method should be faster. However, it has more requirements due to MySQL itself (see the Requirements sub-section of the CSV using LOAD DATA section).

Differences between SQL and CSV formats

There are some differences between these two formats. The CSV file format contains data only, so we must already have an existing table in place. This table does not need to have the same structure as the original table (from which the data comes); the Column names dialog enables us to choose which columns are affected in the target table.
Because the table must exist prior to the import, the CSV import dialog is available only from the Import subpage in the Table view, and not in the Database view.

Exporting a test file

Before trying an import, let's generate an author.csv export file from the author table. We use the default values in the CSV export options. We can then Empty the author table—we should avoid dropping this table, because we still need the table structure.

CSV

From the author table menu, we select Import and then CSV. We can influence the behavior of the import in a number of ways. By default, importing does not modify existing data (based on primary or unique keys). However, the Replace table data with file option instructs phpMyAdmin to use REPLACE statements instead of INSERT statements, so that existing rows are replaced with the imported data. Using Ignore duplicate rows, INSERT IGNORE statements are generated. These cause MySQL to ignore any duplicate key problems during insertion. A duplicate key from the import file does not replace existing data, and the procedure continues for the next line of CSV data.

We can then specify the character that terminates each field, the character that encloses data, and the character that escapes the enclosing character. Usually this is \. For example, for a double quote enclosing character, if the data field contains a double quote, it must be expressed as "some data \" some other data".

For Lines terminated by, recent versions of phpMyAdmin offer the auto choice, which should be tried first, as it detects the end-of-line character automatically. We can also specify manually which characters terminate the lines. The usual choice is \n for UNIX-based systems, \r\n for DOS or Windows systems, and \r for Mac-based systems (up to Mac OS 9). If in doubt, we can use a hexadecimal file editor on our client computer (not part of phpMyAdmin) to examine the exact codes.
By default, phpMyAdmin expects a CSV file with the same number of fields and the same field order as the target table. But this can be changed by entering a comma-separated list of column names in Column names, respecting the source file format. For example, let's say our source file contains only the author ID and the author name information:

"1","John Smith"
"2","Maria Sunshine"

We'd have to put id, name in Column names to match the source file.

When we click Go, the import is executed and we get a confirmation. We might also see the actual INSERT queries generated if the total size of the file is not too big.

Import has been successfully finished, 2 queries executed.
INSERT INTO `author` VALUES ('1', 'John Smith', '+01 445 789-1234')
# 1 row(s) affected.
INSERT INTO `author` VALUES ('2', 'Maria Sunshine', '333-3333')
# 1 row(s) affected.
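The CSV options discussed above (field terminator, enclosing character, escaping character, and the Column names mapping) correspond closely to the dialect parameters of Python's csv module. The following sketch is an illustration of the same idea, not phpMyAdmin's code: it parses rows in the example format and builds INSERT statements for the chosen columns.

```python
import csv
import io

# Example data in the CSV format discussed above:
# fields terminated by ',', enclosed by '"', escaped by '\'.
data = '"1","John Smith"\n"2","Maria Sunshine"\n'

reader = csv.reader(
    io.StringIO(data),
    delimiter=",",      # Fields terminated by
    quotechar='"',      # Fields enclosed by
    escapechar="\\",    # Fields escaped by
)

# The Column names option: map each CSV field onto a target column.
columns = ["id", "name"]

statements = []
for row in reader:
    # Double any single quotes so the values are safe inside the statement.
    values = ", ".join("'%s'" % v.replace("'", "''") for v in row)
    statements.append(
        "INSERT INTO `author` (%s) VALUES (%s)" % (", ".join(columns), values)
    )
```

Running this yields one INSERT per CSV line, mirroring the confirmation output shown above.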
PHP Magic Features

Packt
12 Oct 2009
5 min read
In this article by Jani Hartikainen, we'll look at PHP's "magic" features:

- Magic methods, which are class methods with specific names, are used to perform various specialized tasks. They are grouped into two: overloading methods and non-overloading methods. Overloading magic methods are used when your code attempts to access a method or a property which does not exist. Non-overloading methods perform other tasks.
- Magic functions, which are similar to magic methods, but are just plain functions outside any class. Currently there is only one magic function in PHP.
- Magic constants, which are similar to constants in notation, but act more like "dynamic" constants - their value depends on where you use them.

We'll also look at some practical examples of using some of these, and lastly we'll check out what new features PHP 5.3 is going to add.

Magic methods

For starters, let's take a look at the magic methods PHP provides. We will first go over the non-overloading methods.

__construct and __destruct

```php
class SomeClass {
    public function __construct() {
    }

    public function __destruct() {
    }
}
```

The most common magic method in PHP is __construct. In fact, you might not even have thought of it as a magic method at all, as it's so common. __construct is the class constructor method, which gets called when you instantiate a new object using the new keyword, and any parameters used will get passed to __construct.

```php
$obj = new SomeClass();
```

__destruct is __construct's "pair". It is a class destructor, which is rarely used in PHP, but still it is good to know about its existence. It gets called when your object falls out of scope or is garbage collected.

```php
function someFunc() {
    $obj = new SomeClass();
    // when the function ends, $obj falls out of scope
    // and SomeClass's __destruct is called
}

someFunc();
```

If you make the constructor private or protected, the class cannot be instantiated, except inside a method of the same class.
You can use this to your advantage, for example to create a singleton.

__clone

```php
class SomeClass {
    public $someValue;

    public function __clone() {
        $clone = new SomeClass();
        $clone->someValue = $this->someValue;
        return $clone;
    }
}
```

The __clone method is called when you use PHP's clone keyword, and is used to create a clone of the object. The purpose is that by implementing __clone, you can define a way to copy objects.

```php
$obj1 = new SomeClass();
$obj1->someValue = 1;
$obj2 = clone $obj1;
echo $obj2->someValue;
// echoes 1
```

Important: __clone is not the same as =. If you use = to assign an object to another variable, the other variable will still refer to the same object as the first one! If you use the clone keyword, the purpose is to return a new object with a similar state as the original. Consider the following:

```php
$obj1 = new SomeClass();
$obj1->someValue = 1;
$obj2 = $obj1;
$obj3 = clone $obj1;
$obj1->someValue = 2;
```

What are the values of the someValue property in $obj2 and $obj3 now? As we have used the assignment operator to create $obj2, it refers to the same object as $obj1, thus $obj2->someValue is 2. When creating $obj3, we have used the clone keyword, so the __clone method was called. As __clone creates a new instance, $obj3->someValue is still the same as it was when we cloned $obj1: 1.

If you want to disable cloning, you can make __clone private or protected.

__toString

```php
class SomeClass {
    public function __toString() {
        return 'someclass';
    }
}
```

The __toString method is called when PHP needs to convert class instances into strings, for example when echoing:

```php
$obj = new SomeClass();
echo $obj;
// will output 'someclass'
```

This can be a useful example to help you identify objects or when creating lists. If we have a user object, we could define a __toString method which outputs the user's first and last names, and when we want to create a list of users, we could simply echo the objects themselves.
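The = versus clone distinction is not unique to PHP. As a point of comparison (an analogy, not PHP semantics), Python's plain assignment versus copy.copy behaves the same way: assignment binds a second name to the same object, while copying produces a new object with the same state.

```python
import copy

class SomeClass:
    def __init__(self):
        self.some_value = None

obj1 = SomeClass()
obj1.some_value = 1

obj2 = obj1             # like PHP's =: both names refer to the same object
obj3 = copy.copy(obj1)  # like PHP's clone: a new object with the same state

obj1.some_value = 2
# obj2.some_value is now 2 (same object), obj3.some_value is still 1
```

Python even has its own counterpart to __clone: defining __copy__ on a class customizes what copy.copy returns, much as __clone customizes PHP's clone.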
__sleep and __wakeup

```php
class SomeClass {
    private $_someVar;

    public function __sleep() {
        return array('_someVar');
    }

    public function __wakeup() {
    }
}
```

These two methods are used with PHP's serializer: __sleep is called with serialize(), and __wakeup is called with unserialize(). Note that you will need to return an array of the class variables you want to save from __sleep. That's why the example class returns an array with _someVar in it: without it, the variable will not get serialized.

```php
$obj = new SomeClass();
$serialized = serialize($obj);
// __sleep was called
unserialize($serialized);
// __wakeup was called
```

You typically won't need to implement __sleep and __wakeup, as the default implementation will serialize classes correctly. However, in some special cases it can be useful. For example, if your class stores a reference to a PDO object, you will need to implement __sleep, as PDO objects cannot be serialized.

As with most other methods, you can make __sleep private or protected to stop serialization. Alternatively, you can throw an exception, which may be a better idea, as you can provide a more meaningful error message. An alternative to __sleep and __wakeup is the Serializable interface. However, as its behavior is different from these two methods, the interface is outside the scope of this article. You can find info on it in the PHP manual.

__set_state

```php
class SomeClass {
    public $someVar;

    public static function __set_state($state) {
        $obj = new SomeClass();
        $obj->someVar = $state['someVar'];
        return $obj;
    }
}
```

This method is called in code created by var_export. It gets an array as its parameter, which contains a key and value for each of the class variables, and it must return an instance of the class.
```php
$obj = new SomeClass();
$obj->someVar = 'my value';
var_export($obj);
```

This code will output something along the lines of:

```php
SomeClass::__set_state(array('someVar' => 'my value'));
```

Note that var_export will also export private and protected variables of the class, so they too will be in the array.
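Python offers close analogues to both pairs of hooks just described; the sketch below is a cross-language comparison, not PHP behavior. __getstate__ and __setstate__ play the role of __sleep and __wakeup for the pickle serializer, and a __repr__ that returns an evaluable expression mirrors what var_export plus __set_state produce.

```python
import pickle

class SomeClass:
    def __init__(self, some_var=None):
        self.some_var = some_var
        self._transient = object()  # not worth (or able) to serialize

    def __getstate__(self):
        # Like __sleep: choose which attributes get serialized.
        return {"some_var": self.some_var}

    def __setstate__(self, state):
        # Like __wakeup: rebuild the object from the saved state.
        self.some_var = state["some_var"]
        self._transient = object()

    def __repr__(self):
        # Like var_export/__set_state: an expression that recreates the object.
        return "SomeClass(some_var=%r)" % self.some_var

obj = SomeClass("my value")
restored = pickle.loads(pickle.dumps(obj))  # round-trips via __getstate__/__setstate__
recreated = eval(repr(obj))                 # round-trips via the repr expression
```

As in PHP, you rarely need these hooks, but they become necessary when an attribute (a database handle, an open socket) cannot survive serialization.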
Looking into Apache Axis2

Packt
09 Oct 2009
11 min read
(For more resources on Axis2, see here.)

Axis2 Architecture

Axis2 is built upon a modular architecture that consists of core modules and non-core modules. The core engine is said to be a pure SOAP processing engine (there is no JAX-RPC concept burnt into the core). Every message coming into the system has to be transformed into a SOAP message before it is handed over to the core engine. An incoming message can either be a SOAP message or a non-SOAP message (REST or JSON). But at the transport level, it will be converted into a SOAP message.

When Axis2 was designed, the following key rules were incorporated into the architecture. These rules were mainly applied to achieve a highly flexible and extensible SOAP processing engine:

- Separation of logic and state to provide a stateless processing mechanism. (This is because Web Services are stateless.)
- A single information model in order to enable the system to suspend and resume.
- Ability to extend support to newer Web Service specifications with minimal changes made to the core architecture.

The figure below shows all the key components in the Axis2 architecture (including core components as well as non-core components).

Core Modules

- XML Processing Model: Managing or processing the SOAP message is the most difficult part of the execution of a message. The efficiency of message processing is the single most important factor that decides the performance of the entire system. Axis 1.x uses DOM as its message representation mechanism. However, Axis2 introduced a fresh XML InfoSet-based representation for SOAP messages. It is known as AXIOM (AXIs Object Model). AXIOM encapsulates the complexities of efficient XML processing within the implementation.
- SOAP Processing Model: This model involves the processing of an incoming SOAP message. The model defines the different stages (phases) that the execution will walk through. The user can then extend the processing model in specific places.
- Information Model: This keeps both static and dynamic states and has the logic to process them. The information model consists of two hierarchies to keep static and run-time information separate. Service life cycle and service session management are two objectives in the information model.
- Deployment Model: The deployment model allows the user to easily deploy the services, configure the transports, and extend the SOAP Processing Model. It also introduces newer deployment mechanisms in order to handle hot deployment, hot updates, and J2EE-style deployment.
- Client API: This provides a convenient API for users to interact with Web Services using Axis2. The API consists of two sub-APIs, for average and advanced users. Axis2's default implementation supports all eight MEPs (Message Exchange Patterns) defined in WSDL 2.0. The API also allows easy extension to support custom MEPs.
- Transports: Axis2 defines a transport framework that allows the user to use and expose the same service over multiple transports. The transports fit into specific places in the SOAP processing model. The implementation, by default, provides a few common transports (HTTP, SMTP, JMS, TCP, and so on). However, the user can write or plug in custom transports, if needed.

XML Processing Model

Axis2 is built on a completely new architecture as compared to Axis 1.x. One of the key reasons for introducing Axis2 was to have a better and more efficient XML processing model. Axis 1.x used DOM as its XML representation mechanism, which required the complete object hierarchy (corresponding to the incoming message) to be kept in memory. This will not be a problem for a message of small size. But when it comes to a message of large size, it becomes an issue. To overcome this problem, Axis2 has introduced a new XML representation. AXIOM (AXIs Object Model) forms the basis of the XML representation for every SOAP-based message in Axis2.
The advantage of AXIOM over other XML InfoSet representations is that it is based on the PULL parser technique, whereas most others are based on the PUSH parser technique. The main advantage of PULL over PUSH is that in the PULL technique, the invoker has full control over the parser and can request the next event and act upon it, whereas in the case of PUSH, the parser has limited control and delegates most of the functionality to handlers that respond to the events fired during its processing of the document. Since AXIOM is based on the PULL parser technique, it has on-demand-building capability, whereby it will build an object model only if it is asked to do so. If required, one can directly access the underlying PULL parser from AXIOM, and use that rather than build an OM (Object Model).

SOAP Processing Model

Sending and receiving SOAP messages can be considered two of the key jobs of the SOAP processing engine. The architecture in Axis2 provides two Pipes ('Flows') in order to perform these two basic actions. The AxisEngine, or driver of Axis2, defines two methods, send() and receive(), to implement these two Pipes. The two pipes are named InFlow and OutFlow. The complex Message Exchange Patterns (MEPs) are constructed by combining these two types of pipes. It should be noted that in addition to these two pipes there are two other pipes as well, and those two help in handling an incoming Fault message and sending a Fault message.

Extensibility of the SOAP processing model is provided through handlers. When a SOAP message is being processed, the handlers that are registered will be executed. The handlers can be registered in global, service, or operation scopes, and the final handler chain is calculated by combining the handlers from all the scopes. The handlers act as interceptors: they process parts of the SOAP message and provide quality of service features (good examples of quality of service are security and reliability).
Usually handlers work on the SOAP headers, but they may access or change the SOAP body as well. The concept of a flow is very simple: it constitutes a series of phases, wherein a phase refers to a collection of handlers. Depending on the MEP for a given method invocation, the number of flows associated with it may vary. In the case of an in-only MEP, the corresponding method invocation has only one pipe; that is, the message will only go through the in pipe (inflow). On the other hand, in the case of an in-out MEP, the message will go through two pipes, that is, the in pipe (inflow) and the out pipe (outflow).

When a SOAP message is being sent, an OutFlow begins. The OutFlow invokes the handlers and ends with a Transport Sender that sends the SOAP message to the target endpoint. The SOAP message is received by a Transport Receiver at the target endpoint, which reads the SOAP message and starts the InFlow. The InFlow consists of handlers and ends with the Message Receiver, which handles the actual business logic invocation.

A phase is a logical collection of one or more handlers, and sometimes a phase itself acts as a handler. Axis2 introduced the phase concept as an easy way of extending core functionalities. In Axis 1.x, we need to change the global configuration files if we want to add a handler into a handler chain. But Axis2 makes it easier by using the concept of phases and phase rules. Phase rules specify how a given set of handlers, inside a particular phase, is ordered. The figure below illustrates a flow and its phases.

If the message has gone through the execution chain without any problem, then the engine will hand over the message to the message receiver in order to do the business logic invocation. After this, it is up to the message receiver to invoke the service and send the response, if necessary. The figure below shows how the Message Receiver fits into the execution chain.

The two pipes do not differentiate between the server and the client.
The SOAP processing model handles the complexity and provides two abstract pipes to the user. The different areas or stages of the pipes are named 'phases' in Axis2. A handler always runs inside a phase, and the phase provides a mechanism to specify the ordering of handlers. Both pipes have built-in phases, and both define the areas for User Phases, which can be defined by the user as well.

Information Model

As shown in the figure below, the information model consists of two hierarchies: the Description hierarchy and the Context hierarchy. The Description hierarchy represents the static data that may come from different deployment descriptors. If hot deployment is turned off, then the Description hierarchy is not likely to change. If hot deployment is turned on, then we can deploy the service while the system is up and running. In this case, the Description hierarchy is updated with the corresponding data of the service. The Context hierarchy keeps run-time data. Unlike the Description hierarchy, the Context hierarchy keeps on changing once the server starts receiving messages.

These two hierarchies create a model that provides the ability to search for key-value pairs. When the values are to be searched for at a given level, they are searched while moving up the hierarchy until a match is found. In the resulting model, the lower levels override the values present in the upper levels. For example, when a value has been searched for in the Message Context and is not found, then it is searched for in the Operation Context, and so on. The search is first done up the hierarchy, and if the starting point is a Context, then the Description hierarchy is searched as well. This allows the user to declare and override values, with the result being a very flexible configuration model. The flexibility could be the Achilles' heel of the system, as the search is expensive, especially for something that does not exist.
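The search-up-the-hierarchy behavior just described can be sketched as a chain of property maps, where each level falls back to its parent. This is a hypothetical simplification of Axis2's context hierarchy (the level names mirror the description above, not Axis2's exact classes):

```python
class PropertyContext:
    """Each level keeps its own properties and falls back to its parent."""

    def __init__(self, parent=None):
        self.parent = parent
        self.properties = {}

    def get(self, key):
        ctx = self
        while ctx is not None:
            if key in ctx.properties:
                return ctx.properties[key]
            ctx = ctx.parent  # move up the hierarchy
        return None  # worst case: the whole chain was walked for nothing

# Configuration -> Service -> Operation -> Message, upper to lower level.
configuration = PropertyContext()
service = PropertyContext(parent=configuration)
operation = PropertyContext(parent=service)
message = PropertyContext(parent=operation)

configuration.properties["timeout"] = 30
operation.properties["timeout"] = 5  # lower levels override upper levels
```

A lookup on the message level finds the operation's override (5), a lookup on the service level falls back to the configuration value (30), and a lookup for a key that exists nowhere walks the entire chain, which is exactly why searching for something that does not exist is the expensive case.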
Deployment Model

The previous versions of Axis failed to address the usability factor involved in the deployment of a Web Service. This was due to the fact that Axis 1.x was released mainly to prove the Web Service concepts. Therefore, in Axis 1.x, the user has to manually invoke the admin client and update the server classpath. Then, you need to restart the server in order to apply the changes. This burdensome deployment model was a definite barrier for beginners. Axis2 is engineered to overcome this drawback and to provide a flexible, user-friendly, easily configurable deployment model.

Axis2 deployment introduced a J2EE-like deployment mechanism, wherein the developer can bundle all the class files, library files, resource files, and configuration files together as an archive file, and drop it in a specified location in the file system.

The concept of hot deployment and hot update is not a new technical paradigm, particularly for the Web Service platform. But in the case of Apache Axis, it is a new feature. Therefore, when Axis2 was developed, hot deployment features were added to the feature list.

Hot deployment: This refers to the capability to deploy services while the system is up and running. In a real-time system or in a business environment, the availability of the system is very important. If the processing of the system is slow, even for a moment, then the loss might be substantial and it may affect the viability of the business. In the meanwhile, it may be required to add a new service to the system. If this can be done without needing to shut down the servers, it will be a great achievement. Axis2 addresses this issue and provides a Web Service hot deployment ability, wherein we need not shut down the system to deploy a new Web Service. All that needs to be done is to drop the required Web Service archive into the services directory in the repository. The deployment model will automatically deploy the service and make it available.
Hot update: This refers to the ability to make changes to an existing Web Service without even shutting down the system. This is an essential feature, which is best suited for use in a testing environment. It is not advisable to use hot updates in a real-time system, because a hot update could lead a system into an unknown state. Additionally, there is the possibility of losing the existing service data of that service. To prevent this, Axis2 comes with the hot update parameter set to FALSE by default.
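The pipe-phase-handler model described earlier can be sketched as ordered phases, each holding ordered handlers, which an engine runs in sequence over a message context. This is again a hypothetical simplification for illustration, not Axis2's actual engine, and the phase names are invented stand-ins:

```python
class Phase:
    """A logical collection of handlers, executed in order."""

    def __init__(self, name):
        self.name = name
        self.handlers = []

class Flow:
    """A pipe (e.g. the InFlow or OutFlow): an ordered series of phases."""

    def __init__(self, phases):
        self.phases = phases

    def invoke(self, message_context):
        for phase in self.phases:
            for handler in phase.handlers:
                handler(message_context)  # handlers act as interceptors
        return message_context

# Built-in phases plus a user-defined phase, as the model allows.
transport, security, dispatch, user = (
    Phase(n) for n in ("Transport", "Security", "Dispatch", "UserPhase")
)
# A quality-of-service handler (e.g. security) and a user handler.
security.handlers.append(lambda ctx: ctx.setdefault("authenticated", True))
user.handlers.append(lambda ctx: ctx.setdefault("trace", []).append("user-handler"))

inflow = Flow([transport, security, dispatch, user])
result = inflow.invoke({})
```

Because handlers are grouped into phases and phases are ordered, adding a new handler is just a matter of appending it to the right phase, which is the extensibility win the article attributes to phase rules.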
Schema Validation using SAX and DOM Parser with Oracle JDeveloper - XDK 11g

Packt
08 Oct 2009
6 min read
The choice of validation method depends on the additional functionality required in the validation application. SAXParser is recommended if SAX parsing event notification is required in addition to validation with a schema. DOMParser is recommended if the DOM tree structure of an XML document is required for random access and modification of the XML document.

Schema validation with a SAX parser

In this section we shall validate the example XML document catalog.xml with the XML schema document catalog.xsd, using the SAXParser class. Import the oracle.xml.parser.schema and oracle.xml.parser.v2 packages.

Creating a SAX parser

Create a SAXParser object and set the validation mode of the SAXParser object to SCHEMA_VALIDATION, as shown in the following listing:

```java
SAXParser saxParser = new SAXParser();
saxParser.setValidationMode(XMLParser.SCHEMA_VALIDATION);
```

The different validation modes that may be set on a SAXParser are listed below; we only need the SCHEMA-based validation modes:

- NONVALIDATING: The parser does not validate the XML document.
- PARTIAL_VALIDATION: The parser validates the complete or a partial XML document with a DTD or an XML schema, if specified.
- DTD_VALIDATION: The parser validates the XML document with a DTD, if any.
- SCHEMA_VALIDATION: The parser validates the XML document with an XML schema, if any is specified.
- SCHEMA_LAX_VALIDATION: Validates the complete or partial XML document with an XML schema if the parser is able to locate a schema. The parser does not raise an error if a schema is not found.
- SCHEMA_STRICT_VALIDATION: Validates the complete XML document with an XML schema if the parser is able to find a schema. If the parser is not able to find a schema, or if the XML document does not conform to the schema, an error is raised.

Next, create an XMLSchema object from the schema document with which an XML document is to be validated.
An XMLSchema object represents the DOM structure of an XML schema document and is created with an XSDBuilder class object. Create an XSDBuilder object and invoke the build(InputSource) method of the XSDBuilder object to obtain an XMLSchema object. The InputSource object is created with an InputStream object created from the example XML schema document, catalog.xsd. As discussed before, we have used an InputSource object because most SAX implementations are InputSource based. The procedure to obtain an XMLSchema object is shown in the following listing:

```java
XSDBuilder builder = new XSDBuilder();
InputStream inputStream = new FileInputStream(new File("catalog.xsd"));
InputSource inputSource = new InputSource(inputStream);
XMLSchema schema = builder.build(inputSource);
```

Set the XMLSchema object on the SAXParser object with the setXMLSchema(XMLSchema) method:

```java
saxParser.setXMLSchema(schema);
```

Setting the error handler

As in the previous section, define an error handling class, CustomErrorHandler, that extends the DefaultHandler class. Create an object of type CustomErrorHandler, and register the ErrorHandler object with the SAXParser as shown here:

```java
CustomErrorHandler errorHandler = new CustomErrorHandler();
saxParser.setErrorHandler(errorHandler);
```

Validating the XML document

The SAXParser class extends the XMLParser class, which provides the following overloaded parse methods to parse an XML document:

- parse(InputSource in): Parses an XML document from an org.xml.sax.InputSource object. The InputSource-based parse method is the preferred method, because SAX parsers convert the input to InputSource no matter what the input type is.
- parse(java.io.InputStream in): Parses an XML document from an InputStream.
- parse(java.io.Reader r): Parses an XML document from a Reader.
- parse(java.lang.String in): Parses an XML document from a String URL for the XML document.
- parse(java.net.URL url): Parses an XML document from the specified URL object for the XML document.
Create an InputSource object from the XML document to be validated, and parse the XML document with the parse(InputSource) method:

InputStream inputStream = new FileInputStream(new File("catalog.xml"));
InputSource inputSource = new InputSource(inputStream);
saxParser.parse(inputSource);

Running the Java application

The validation application SAXValidator.java is listed below with additional explanations. First we declare the import statements for the classes that we need.

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import oracle.xml.parser.schema.*;
import oracle.xml.parser.v2.*;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;
import org.xml.sax.InputSource;

We define the Java class SAXValidator for SAX validation.

public class SAXValidator {

In the Java class we define a method validateXMLDocument.

public void validateXMLDocument(InputSource input) {
  try {

In the method we create a SAXParser and set the XML schema on the SAXParser.

SAXParser saxParser = new SAXParser();
saxParser.setValidationMode(XMLParser.SCHEMA_VALIDATION);
XMLSchema schema = getXMLSchema();
saxParser.setXMLSchema(schema);

To handle errors we create a custom error handler. We set the error handler on the SAXParser object, parse the XML document to be validated, and output any validation errors.
CustomErrorHandler errorHandler = new CustomErrorHandler();
saxParser.setErrorHandler(errorHandler);
saxParser.parse(input);
if (errorHandler.hasValidationError == true) {
  System.err.println("XML Document has Validation Error: " +
    errorHandler.saxParseException.getMessage());
} else {
  System.out.println("XML Document validates with XML schema");
}
} catch (IOException e) {
  System.err.println("IOException " + e.getMessage());
} catch (SAXException e) {
  System.err.println("SAXException " + e.getMessage());
}
}

We add the Java method getXMLSchema to create an XMLSchema object.

private XMLSchema getXMLSchema() {
try {
  XSDBuilder builder = new XSDBuilder();
  InputStream inputStream = new FileInputStream(new File("catalog.xsd"));
  InputSource inputSource = new InputSource(inputStream);
  XMLSchema schema = builder.build(inputSource);
  return schema;
} catch (XSDException e) {
  System.err.println("XSDException " + e.getMessage());
} catch (FileNotFoundException e) {
  System.err.println("FileNotFoundException " + e.getMessage());
}
return null;
}

We define the main method, in which we create an instance of the SAXValidator class and invoke the validateXMLDocument method.

public static void main(String[] argv) {
  try {
    InputStream inputStream = new FileInputStream(new File("catalog.xml"));
    InputSource inputSource = new InputSource(inputStream);
    SAXValidator validator = new SAXValidator();
    validator.validateXMLDocument(inputSource);
  } catch (FileNotFoundException e) {
    System.err.println("FileNotFoundException " + e.getMessage());
  }
}

Finally, we define the custom error handler class as an inner class, CustomErrorHandler, to handle validation errors.
private class CustomErrorHandler extends DefaultHandler {
  protected boolean hasValidationError = false;
  protected SAXParseException saxParseException = null;

  public void error(SAXParseException exception) {
    hasValidationError = true;
    saxParseException = exception;
  }

  public void fatalError(SAXParseException exception) {
    hasValidationError = true;
    saxParseException = exception;
  }

  public void warning(SAXParseException exception) {
  }
}
}

Copy SAXValidator.java to the SchemaValidation project. To demonstrate error handling, add a title element to the journal element. To run the SAXValidator.java application, right-click on SAXValidator.java in the Application Navigator and select Run. A validation error is output; it indicates that the title element is not expected.
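The listing above depends on the Oracle XDK classes (oracle.xml.parser.v2.SAXParser, XSDBuilder). If you only need to check a document against a schema, the same kind of validation can be sketched with the standard javax.xml.validation API that ships with the JDK. The sketch below is illustrative, not part of the original application: the inline schema is a minimal stand-in for catalog.xsd, and the class and method names (SchemaCheck, isValid) are made up for this example.

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;

class SchemaCheck {

    // A hypothetical minimal schema standing in for catalog.xsd:
    // a catalog element containing a single journal element.
    static final String XSD =
        "<?xml version='1.0'?>" +
        "<xsd:schema xmlns:xsd='http://www.w3.org/2001/XMLSchema'>" +
        "  <xsd:element name='catalog'>" +
        "    <xsd:complexType>" +
        "      <xsd:sequence>" +
        "        <xsd:element name='journal' type='xsd:string'/>" +
        "      </xsd:sequence>" +
        "    </xsd:complexType>" +
        "  </xsd:element>" +
        "</xsd:schema>";

    static final String VALID_XML =
        "<catalog><journal>Oracle Magazine</journal></catalog>";
    // An unexpected title element, mirroring the error demonstrated above.
    static final String INVALID_XML =
        "<catalog><title>unexpected</title></catalog>";

    // Returns true if the document validates against the schema above.
    static boolean isValid(String xml) {
        try {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema =
                factory.newSchema(new StreamSource(new StringReader(XSD)));
            Validator validator = schema.newValidator();
            // With no ErrorHandler registered, validate() throws
            // SAXException on the first validation error.
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (SAXException e) {
            return false;
        } catch (java.io.IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("valid document:   " + isValid(VALID_XML));
        System.out.println("invalid document: " + isValid(INVALID_XML));
    }
}
```

Unlike the XDK version, no external parser library is required; the trade-off is that this API only validates, and does not deliver SAX events or a DOM tree as part of the same pass.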
Interacting with the Students using Moodle 1.9 (part 1)

Packt
08 Oct 2009
11 min read
We are going to carry out a role-play activity. This activity will be geography-based, but the Moodle activities are the same for any subject. Hopefully, this will help you gain some ideas for your own teaching. Having learned about the course of a river and about the landscape at the location where the river meets the coast, the students are now going to be given the job of developers: planning and designing a riverside campsite. The students will undertake various tasks during the project, all of them within Moodle. We, the teachers, having set the scene, are going to sit back and observe their progress online. When the entire mini-project is complete, we're going to get them to tell us, personally, how much they feel they have learned. We can use their responses to plan our future Moodle activities. How do we do all this? The words in bold above are examples of activities that we can do in Moodle. There are others too, but for you, as a newbie, these activities are more than enough. To set up any of them, we first need to turn editing on, either via the button on the upper right of the screen or via the Administration block. Then, in the topic section where we want to add our activity, we click on the space next to Add an activity.... This will bring up a list of options, which might vary depending upon your particular Moodle course. The following screenshot shows some typical options that might show up when you click Add an activity.... Be aware that Certificate and Game don't appear in standard Moodle courses.

Getting our class to reflect and discuss

Have you ever come across a child who is too shy to speak up in class but who then produces the most thoughtful written work? Moodle is a godsend for these students, because it has a forum and a chat facility, both of which enable classes and their teachers to have a discussion without actually being together in the same room.
And often, the shy child will happily have their say online, where they can plan it out first and feel comfortable without the interference of their peers. We're going to set some homework where the students will discuss, in general, the kinds of things to keep in mind when planning a riverside campsite. Hopefully, someone will realize it's not a good idea to have it too close to the water.

Time for action - setting up a discussion forum on Moodle

Let's create an online discussion area for the students to share their views and comments. This discussion area is called the Forum.

1. With editing turned on, click on Add an activity and select Forum. The forum settings will appear on the screen.
2. In the Forum type field, click on the drop-down arrow and choose Single Simple Discussion (we'll investigate the other options later).
3. In the Forum name field, enter some text that will invite your students to click on it to join the discussion.
4. In the Message field, enter your starting topic, with images and hyperlinks if you wish.
5. Change the option Force everyone to be subscribed to Yes if you want people to get an email every time somebody adds their comments or suggestions to the forum.
6. Leave the option Read tracking as it is, so that people can decide whether to track read or unread messages.
7. The option Maximum attachment size lets you decide how big a file or an image people can attach to a message.
8. Grade: alter these settings to give each post a mark, but be aware that this will put younger children off.
9. You can put a number in the Post threshold for blocking option if you want to limit the number of posts that a student can make.
10. Click on Save and return to course.

What just happened?

We have just set up an online discussion area (forum) on a specific topic for our class. Let's go back to our course page and click on the forum that we just prepared.
The final output will look like this: Our students will see an icon (usually with two faces) that will prompt them to join the preliminary discussion on where best to locate the campsite. They'll click on the Reply link at the bottom (as you can see in the preceding screenshot) to post their response.

How do we moderate the forum?

Hopefully, we can just read the students' responses and let them discuss the topic amongst themselves. But as a teacher, we do have three other options:

We can edit the response posted by the student (change the wording if it's inappropriate).
We can delete the post altogether.
We can reply to it when we think it is really important to do so.

A student only has the option to reply (as you will see if you click on Student view at the upper right of the course page). When we need to get rid of an unsuitable post, or perhaps alter the wording of something one of our students has typed, this extra power we teachers have is helpful. For our starter discussion, we chose a Single Simple Discussion, as we wanted the students to focus totally on one issue. However, in other situations, you might need a slightly different type of forum. The following overview describes the other kinds that are available and how you could use them:

Single Simple Discussion: Only one question, which they can all answer. Best for focused discussions; they can't get distracted.
Standard forum: Everyone can start a new topic. More scope for older students.
Q and A: Pupils must answer first before they can see any replies. Useful for avoiding peer-pressure issues.
Each person posts 1 discussion: Pupils can post ONE new topic only. Handy if you need to restrict posting but still allow some freedom.

Why use a forum?

Here are a few other thoughts on forums, based on my own experiences: A cross-year or cross-class forum can be useful, as the older students can pass on their experiences to the younger students.
For example, each year my first-year high school students make a volcano as a homework project. As they enter their second year, they use a dedicated forum to pass on their wisdom and answer technical questions from the inexperienced first-year students, who are about to begin their own creations. A homework exercise could be set on a forum, as a reflective plenary to the learning done in class. Once, my class watched a documentary on the Great New Orleans flood of 2005, and the students were asked, on a forum, to imagine they had been there. They had to suggest some words or phrases to describe their feelings, which we then collated in the next lesson to make poems about the flood. Let's add a little bit of controversy. Instead of simply asking a question, why not make a statement that you know will inspire, annoy, and divide the students? As a result, you can see the variety in the responses. I once posted the topic: "If people live near rivers, and their homes get flooded out, it is surely their own fault for living near rivers. Why should the rest of us have to help them?" In response to this, some violently disagreed with the statement, quoting examples from developing countries, whereas some agreed with the statement and were then blasted down by their classmates for doing so. But at least the forum got visited!

Carrying on the conversation in real time, outside of school

A discussion forum, as illustrated above, is a useful tool to get the children to think and to contribute their ideas. It has an advantage over the usual class discussion, in that the shyer pupils are more likely to open up in such discussion forums. However, there is no spontaneity involved. You might post a comment in the morning, and the response may arrive at dinnertime, and so on. Why not combine the advantages of online communication with the advantages of a real-time conversation, and make a Moodle chat room?
If your students live several miles away from each other, as my students do, and are keen to get on with the project, Moodle chat rooms can have real benefits. We can set a time for the chat, say, Saturday afternoon. This would be a time when we can be present too, if we wish, and the students can move ahead with their plans even though they're not with each other in the classroom. Even though this implementation has its drawbacks, it gives us a set-up; we can see how it goes and then think about how best to use it.

Time for action - setting up a chat room in Moodle

Let's create a chat room where the students can chat even when they are not in the same classroom. This way, they can share their views, comments, and suggestions.

1. With editing turned on, go to Add an activity and select the Chat option.
2. In the Name of this chat room field, enter an appropriate title for the discussion.
3. In the Introduction text field, type in what the discussion is going to be about.
4. For the Next chat time option, choose when you want to open the chat room.
5. For the Repeat sessions option, choose whether you want to chat regularly or just once.
6. For the Save past sessions option, choose how long, if at all, to keep a record of the conversation. It is up to you to decide whether to allow everyone to view the past sessions or not.
7. For now, you can ignore the Common module settings option.
8. Click on Save and Display.

What just happened?

We have set up the place and time on Moodle for our students to talk to each other, and even with us if necessary, online. Students will see an icon, usually a speech bubble, on the course page, alongside the name given to the chat. The students just have to click on the icon at the correct time for the chat. When the room is open, they will see this: Clicking on the link will take the students to a box where they'll be able to see their user photo (if they have one) and the time at which they entered the chat.
When others join in, their photos will be shown, and the time of their arrival will be recorded. You talk by typing into the long box at the bottom of the screen, and when submitted, your words appear in the larger box above it. This can get quite confusing if a lot of people are typing at the same time, as the contributions appear one under the other, and do not always follow on from the question or response to which they are referring. Why use Chat? (and why not?) I have to confess that I have switched off the ability for people to use Chat on my school's Moodle site. Chat does have one advantage over the Forum, which is that you can hold discussions in real time with the others who are not physically present in the same room as you. This could be useful on occasions, such as when the teacher is absent from school (but available online). He or she can contact the class at the start of the lesson to check whether they know what they are doing. The students in our school council use Chat for meetings out of school hours, as do our school Governors. You can also read the transcript of a chat (the chat log) after it has happened. However, everyone really has to stay focused on the discussion topic, otherwise, you risk having nothing but a list of trite comments, and no real substance. I've found this to be the case with younger children. Personally, I find a single and a simple discussion in a Forum to be of much more value, than Chat. However, you can try using Chat and see what you think. Your experience could be different from mine.
Develop PHP Web Applications with NetBeans, VirtualBox and Turnkey LAMP Appliance

Packt
01 Oct 2009
4 min read
A couple of days ago, a client asked me for an easy way to develop PHP applications. Naturally, I told him the best way would be to pay me for doing it! Just kidding… "Use the NetBeans IDE", I said to him, almost automatically. "But… isn't NetBeans something related to Java?" he answered back, with a startled grin on his face. "My dear friend, you can use NetBeans to develop software in Java, PHP, C++, and almost any other programming language I've heard of! You could even use it to work on a live WordPress Web site!" I said triumphantly. And that's when a small light bulb lit up inside my mind... I introduced him to VirtualBox and the world of virtual machines, the Turnkey Linux LAMP appliance, and how to make all of these software applications collaborate with each other. And that's how this article was born. So now, my dear readers, let's get on with the action! Oh, and feel free to skip any section you don't need, OK?

Download and install VirtualBox

Go to the VirtualBox Web site and download the most recent version: http://download.virtualbox.org/virtualbox/3.0.4/VirtualBox-3.0.4-50677-Win.exe. After downloading VirtualBox, install it with all the default options (remember to register with Sun!).

Download the Turnkey LAMP appliance

Go to the Turnkey Linux Web site and download the LAMP appliance: http://www.turnkeylinux.org/download?file=turnkey-lamp-2009.02-hardy-x86.iso.

Download the NetBeans PHP IDE

Go to the Sun Web site and download the NetBeans PHP IDE. After downloading NetBeans, install it with the default options.

Create a virtual machine

Open VirtualBox and click on the New button to create a virtual machine (VM). In the following screenshot I used the name VirtualPHPDev for my VM, but you can use whatever you want; just don't use strange characters or signs: Choose Linux and Ubuntu as the Operating System and Version, respectively. Click on Next to continue. Leave the default values for RAM and hard disk.
When finished, click on the VirtualBox Settings button to open your virtual machine's settings dialog, select the CD/DVD-ROM category, enable the Mount CD/DVD drive option, select the ISO Image File button, and then click on the Invoke Virtual Media Manager button. Click on the Add button in the Virtual Media Manager to open up the Select a CD/DVD-ROM disk image file dialog, go to the directory where you downloaded the Turnkey LAMP appliance, and double-click on it to add it to the Virtual Media Manager: Click on Select to close the Virtual Media Manager, and then click on OK to exit the Settings dialog. Now you can click on the Start button to start your LAMP virtual machine: Select the Install to hard disk option in the Turnkey Linux menu screen and press Enter to start installing the LAMP appliance on your virtual machine. Eventually the following screen will show up: Select the Guided option and press Enter to continue. The installer will ask if you want to save changes to disk. Select Yes and press Enter. The installer will start the disk formatting process and then you'll get to the root password screen. Type the password twice for the root user, and then do the same for the MySQL user. The installer will continue and, after a while, the following installation completion screen will show up: Select No, press Enter, and then close the virtual machine by typing shutdown -r now at the system prompt. You'll return to the VirtualBox main screen. With your LAMP virtual machine selected, click on the Settings button to open the settings dialog box. Select the CD/DVD-ROM category and disable the Mount CD/DVD drive option. Then select the Network category, change the Attached to: option to Bridged Adapter, and click on OK to continue: You can start your LAMP virtual machine now. When ready, the following screen will appear: The Bridged Adapter mode lets your virtual machine act as if it were another PC on the LAN, and consequently, it will have a different IP address.
Write down the IP address shown in the Turnkey Linux Configuration Console screen, because you’ll need it to configure the NetBeans IDE later. You can minimize the LAMP virtual machine window now.
Agile Works Best in PHP Projects

Packt
30 Sep 2009
8 min read
What is agility

Agility includes effective, that is, rapid and adaptive, response to change. This requires effective communication among all of the stakeholders. Stakeholders are those who are going to benefit from the project in some form or another. The key stakeholders of the project include the developers and the users. Leaders of the customer organization, as well as the leaders of the software development organization, are also among the stakeholders. Rather than keeping the customer away, drawing the customer into the team helps the team to be more effective. There can be various types of customers: some are annoying, some tend to forget what they once said, and some will help steer the project in the right direction. The idea of drawing the customer into the team is not to let them micromanage the team. Rather, it is for them to help the team to understand the user requirements better. This needs to be explained to the customers up front if they seem to hinder, rather than help, the project. After all, it is the team that consists of the technical experts, so the customer should understand this. Organizing a team in such a manner that it is in control of the work performed is also an important part of being able to adapt to change effectively. The team dynamics will help us to respond to changes in a short period of time without any major friction. Agile processes are based on three key assumptions. These assumptions are as follows:

It is difficult to predict in advance which requirements or customer priorities will change and which will not.
For many types of software, design and construction activities are interwoven. We can use construction to prove the design.
Analysis, design, and testing are not as predictable from the planning perspective as we software developers would like them to be.

To manage unpredictability, the agile process must be adapted incrementally by the project team.
Incremental adaptation requires customer feedback, based on the evaluation of delivered software increments or executable prototypes over short time periods. The length of the time periods should be selected based on the nature of the user requirements. It is ideal to restrict the length of an increment to two or three weeks. Agility yields rapid, incremental delivery of software. This makes sure that the client will get to see real, up-and-running software in quick time.

Characteristics of an agile process

An agile process is driven by customer demand. In other words, what is delivered is based on the users' descriptions of what is required. What the project team builds is based on the user-given scenarios. The agile process also recognizes that plans are short-lived. What is more important is to meet the users' requirements. Because the real world keeps on changing, plans have little meaning. Still, we cannot eliminate the need for planning. Constant planning will make sure that we are always sensitive to where we're going, compared to where we are. Developing software iteratively, with a greater emphasis on construction activities, is another characteristic of the agile process. Construction activities make sure that we have something working all of the time. Activities such as requirements gathering and system modeling are not construction activities. Those activities, even though they're useful, do not deliver something tangible to the users. On the other hand, activities such as design, design prototyping, implementation, unit testing, and system testing are activities that deliver useful working software to the users. When our focus is on construction activities, it is a good practice to deliver the software in multiple software increments. This gives us more time to incorporate user feedback as we go deeper into implementing the product.
This ensures that the team will deliver a high-quality product at the end of the project's life cycle, because the later increments of software are based on clearly understood requirements, as opposed to those that would have been delivered with partially understood requirements in the earlier increments. As we go deeper into the project's life cycle, we can adapt the project team, as well as the designs and the PHP code that we implement, as changes occur.

Principles of agility

Our highest priority is to satisfy the customer through early and continuous delivery of useful and valuable software. To meet this requirement, we need to be able to embrace changes. We welcome changing requirements, even late in the development life cycle. Agile processes leverage changes for the customer's competitive advantage. In order to attain and sustain competitive advantage over the competitors, the customer needs to be able to change, at will, the software system that he or she uses for the business. If the software is too rigid, there is no way that we can accommodate agility in the software that we develop. Therefore, not only the process, but also the product, needs to be equipped with agile characteristics. In addition, the customer will need new features of the software within a short period of time. This is required to beat the competitors with a state-of-the-art software system that facilitates the latest business trends. Therefore, deliver working software as soon as possible. A couple of weeks to a couple of months is always welcome. For example, the customer might want to improve the reports that are generated at the presentation layer based on the business data. Moreover, some of this business data will not have been captured in the data model in the initial design. Still, as the software development team, we need to be able to upgrade the design and implement the new set of reports using PHP in a very short period of time.
We cannot afford to take months to improve the reports. Our process should be such that we can accommodate this change and deliver it within a short period of time. In order to make sure that we can understand these types of changes, the business people and the developers need to work together daily throughout the project. When these two parties work together, it becomes very easy for them to understand each other. The team members are the most important resource in a software project. The motivation and attitude of these team members can be considered the most important factors that determine the success of the project. If we build the project around motivated individuals, give them the environment and support they need, and trust them to get the job done, the project will be a definite success. Obviously, the individual team members need to work with each other in order to make the project a success. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation. Even though various electronic forms of communication, such as instant messaging, emails, and forums, make effective communication possible, there is nothing comparable to face-to-face communication. When it comes to evaluating progress, working software should be the primary measure of progress. We need to make sure that we clearly communicate this to all of the team members. They should always focus on making sure that the software they develop is in a working state at all times. It is not a bad idea to tie their performance reviews and evaluations to the effort they have put into keeping the software they deliver working at all times. An agile process promotes sustainable development. This means that people are not overworked, and they are not under stress in any condition.
The sponsors, managers, developers, and users should be able to maintain a constant pace of development, testing, evaluation, and evolution, indefinitely. The team should pay continuous attention to technical excellence. This is because good design enhances agility. Technical reviews with peers and non-technical reviews with users will allow corrective action to any deviations from the expected result. Aggressively seeking technical excellence will make sure that the team will be open minded and ready to adopt corrective action based on feedback. With PHP, simplicity is paramount. Simplicity should be used as the art of maximizing the amount of work that is not done. In other words, it is essential that we prevent unwanted wasteful work, as well as rework, at all costs. PHP is a very good vehicle to achieve this. The team members that we have should be smart and capable. If we can get those members to reflect on how to become more effective, at regular intervals, we can get the team to tune and adjust its behavior to enhance the process over time. The best architectures, requirements, and designs emerge from self-organizing teams. Therefore, for a high quality product, the formation of the team can have a direct impact.