
How-To Tutorials - Programming


Facelets Templating in JSF 2.0

Packt
20 Jun 2011
7 min read
One advantage that Facelets has over JSP is its templating mechanism. Templates allow us to specify the page layout in one place, and then have template clients that use the layout defined in the template. Since most web applications have a consistent layout across pages, using templates makes our applications much more maintainable: changes to the layout need to be made in a single place. If at some point we need to change the layout of our pages (add a footer, or move a column from the left side of the page to the right side, for example), we only need to change the template, and the change is reflected in all template clients.

Adding a Facelets template to our project

We can add a Facelets template to our project by clicking on File | New File, then selecting the JavaServer Faces category and the Facelets Template file type.

NetBeans provides very good support for Facelets templating. It provides several templates "out of the box", using common web page layouts. We can select one of these predefined templates to use as a base for our template, or simply use it as is. NetBeans gives us the option of using HTML tables or CSS for layout; for most modern web applications, CSS is the preferred approach. For our example we will pick a layout containing a header area, a single left column, and a main area.

After clicking on Finish, NetBeans automatically generates our template, along with the necessary CSS files. The automatically generated template looks like this:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:h="http://java.sun.com/jsf/html">
    <h:head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <link href="./resources/css/default.css" rel="stylesheet" type="text/css" />
        <link href="./resources/css/cssLayout.css" rel="stylesheet" type="text/css" />
        <title>Facelets Template</title>
    </h:head>
    <h:body>
        <div id="top" class="top">
            <ui:insert name="top">Top</ui:insert>
        </div>
        <div>
            <div id="left">
                <ui:insert name="left">Left</ui:insert>
            </div>
            <div id="content" class="left_content">
                <ui:insert name="content">Content</ui:insert>
            </div>
        </div>
    </h:body>
</html>

As we can see, the template doesn't look much different from a regular Facelets file. Notice that the template uses the http://java.sun.com/jsf/facelets namespace. This namespace allows us to use the <ui:insert> tag; the contents of this tag will be replaced by the content of a corresponding <ui:define> tag in template clients.

Using the template

To use our template, we simply need to create a Facelets template client, which can be done by clicking on File | New File, selecting the JavaServer Faces category and the Facelets Template Client file type. After clicking on Next >, we need to enter a file name (or accept the default) and select the template that we will use for our template client. After clicking on Finish, our template client is created.
<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
    <body>
        <ui:composition template="./template.xhtml">
            <ui:define name="top">
                top
            </ui:define>
            <ui:define name="left">
                left
            </ui:define>
            <ui:define name="content">
                content
            </ui:define>
        </ui:composition>
    </body>
</html>

As we can see, the template client also uses the http://java.sun.com/jsf/facelets namespace. In a template client, the <ui:composition> tag must be the parent of any other tag belonging to this namespace. Any markup outside this tag will not be rendered; the template markup will be rendered instead.

The <ui:define> tag is used to insert markup into a corresponding <ui:insert> tag in the template. The value of the name attribute in <ui:define> must match that of the corresponding <ui:insert> tag in the template.

After deploying our application, we can see templating in action by pointing the browser to our template client URL. Notice that NetBeans generated a template that allows us to create a fairly elegant page with very little effort on our part. Of course, we should replace the markup in the <ui:define> tags to suit our needs. Here is a modified version of our template client, adding markup to be rendered in the corresponding places in the template:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:h="http://java.sun.com/jsf/html">
    <body>
        <ui:composition template="./template.xhtml">
            <ui:define name="top">
                <h2>Welcome to our Site</h2>
            </ui:define>
            <ui:define name="left">
                <h3>Links</h3>
                <ul>
                    <li>
                        <h:outputLink value="http://www.packtpub.com">
                            <h:outputText value="Packt Publishing"/>
                        </h:outputLink>
                    </li>
                    <li>
                        <h:outputLink value="http://www.ensode.net">
                            <h:outputText value="Ensode.net"/>
                        </h:outputLink>
                    </li>
                    <li>
                        <h:outputLink value="http://www.ensode.com">
                            <h:outputText value="Ensode Technology, LLC"/>
                        </h:outputLink>
                    </li>
                    <li>
                        <h:outputLink value="http://www.netbeans.org">
                            <h:outputText value="NetBeans.org"/>
                        </h:outputLink>
                    </li>
                    <li>
                        <h:outputLink value="http://www.glassfish.org">
                            <h:outputText value="GlassFish.org"/>
                        </h:outputLink>
                    </li>
                    <li>
                        <h:outputLink value="http://www.oracle.com/technetwork/java/javaee/overview/index.html">
                            <h:outputText value="Java EE 6"/>
                        </h:outputLink>
                    </li>
                    <li>
                        <h:outputLink value="http://www.oracle.com/technetwork/java/index.html">
                            <h:outputText value="Java"/>
                        </h:outputLink>
                    </li>
                </ul>
            </ui:define>
            <ui:define name="content">
                <p>
                    In this main area we would put our main text, images, forms, etc.
                    In this example we will simply use the typical filler text that
                    web designers love to use.
                </p>
                <p>
                    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc venenatis,
                    diam nec tempor dapibus, lacus erat vehicula mauris, id lacinia nisi arcu
                    vitae purus. Nam vestibulum nisi non lacus luctus vel ornare nibh pharetra.
                    Aenean non lorem lectus, eu tempus lectus. Cras mattis nibh a mi pharetra
                    ultricies. In consectetur, tellus sit amet pretium facilisis, enim ipsum
                    consectetur magna, a mattis ligula massa vel mi. Maecenas id arcu a erat
                    pellentesque vestibulum at vitae nulla. Nullam eleifend sodales tincidunt.
                    Donec viverra libero non erat porta sit amet convallis enim commodo. Cras
                    eu libero elit, ac aliquam ligula. Quisque a elit nec ligula dapibus porta
                    sit amet a nulla. Nulla vitae molestie ligula. Aliquam interdum, velit at
                    tincidunt ultrices, sapien mauris sodales mi, vel rutrum turpis neque id
                    ligula. Donec dictum condimentum arcu ut convallis. Maecenas blandit, ante
                    eget tempor sollicitudin, ligula eros venenatis justo, sed ullamcorper dui
                    leo id nunc. Suspendisse potenti. Ut vel mauris sem. Duis lacinia eros
                    laoreet diam cursus nec hendrerit tellus pellentesque.
                </p>
            </ui:define>
        </ui:composition>
    </body>
</html>

After making the above changes, our template client renders the header, the links column, and the main text in their corresponding areas of the template. As we can see, creating Facelets templates and template clients with NetBeans is a breeze.


How to Create a New JSF Project

Packt
20 Jun 2011
17 min read
Java EE 6 Development with NetBeans 7: Develop professional enterprise Java EE applications quickly and easily with this popular IDE.

Introduction to JavaServer Faces

Before JSF existed, most Java web applications were typically developed using non-standard web application frameworks such as Apache Struts, Tapestry, Spring Web MVC, or many others. These frameworks are built on top of the Servlet and JSP standards, and automate a lot of functionality that needs to be manually coded when using these APIs directly. Having a wide variety of web application frameworks available often resulted in "analysis paralysis"; that is, developers often spent an inordinate amount of time evaluating frameworks for their applications. The introduction of JSF to the Java EE specification resulted in a standard web application framework being available in any Java EE compliant application server.

We don't mean to imply that other web application frameworks are obsolete or that they shouldn't be used at all. However, a lot of organizations consider JSF the "safe" choice, since it is part of the standard and should be well supported for the foreseeable future. Additionally, NetBeans offers excellent JSF support, making JSF a very attractive choice.

Strictly speaking, JSF is not a web application framework per se, but a component framework. In theory, JSF can be used to write applications that are not web-based; in practice, however, JSF is almost always used for web applications. In addition to being the standard Java EE component framework, one benefit of JSF is that it provides good support for tools vendors, allowing tools such as NetBeans to take advantage of the JSF component model with drag-and-drop support for components.

Developing our first JSF application

From an application developer's point of view, a JSF application consists of a series of XHTML pages containing custom JSF tags, one or more JSF managed beans, and an optional configuration file named faces-config.xml. faces-config.xml used to be required in JSF 1.x; in JSF 2.0, however, some conventions were introduced that reduce the need for configuration. Additionally, a lot of JSF configuration can be specified using annotations, reducing, and in some cases eliminating, the need for this XML configuration file.

Creating a new JSF project

To create a new JSF project, we need to go to File | New Project, select the Java Web project category, and Web Application as the project type. After clicking Next >, we need to enter a project name, and optionally change other information for our project, although NetBeans provides sensible defaults. On the next page of the wizard, we can select the server, Java EE version, and context path of our application. In our example we will simply pick the default values. On the next page of the new project wizard, we can select which frameworks our web application will use. Unsurprisingly, for JSF applications we need to select the JavaServer Faces framework.

When clicking on Finish, the wizard generates a skeleton JSF project for us, consisting of a single Facelets file called index.xhtml and a web.xml configuration file. web.xml is the standard, optional configuration file needed for Java web applications; this file became optional in version 3.0 of the Servlet API, which was introduced with Java EE 6. In many cases, web.xml is not needed anymore, since most of the configuration options can now be specified via annotations.
For JSF applications, however, it is a good idea to add one, since it allows us to specify the JSF project stage.

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
    <context-param>
        <param-name>javax.faces.PROJECT_STAGE</param-name>
        <param-value>Development</param-value>
    </context-param>
    <servlet>
        <servlet-name>Faces Servlet</servlet-name>
        <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>Faces Servlet</servlet-name>
        <url-pattern>/faces/*</url-pattern>
    </servlet-mapping>
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>
    <welcome-file-list>
        <welcome-file>faces/index.xhtml</welcome-file>
    </welcome-file-list>
</web-app>

As we can see, NetBeans automatically sets the JSF project stage to Development. Setting the project stage to Development configures JSF to provide additional debugging help not present in other stages. For example, one common problem while a page is being developed is that validation for one or more of the fields on the page fails, but the developer has not added an <h:message> or <h:messages> tag to the page. When this happens and the form is submitted, the page seems to do nothing, or page navigation doesn't seem to be working. When the project stage is set to Development, these validation errors are automatically added to the page, without the developer having to explicitly add one of these tags (we should, of course, add the tags before releasing our code to production, since our users will not see the automatically generated validation errors).

The following are the valid values for the javax.faces.PROJECT_STAGE context parameter for the Faces servlet:

• Development
• Production
• SystemTest
• UnitTest

The Development project stage adds additional debugging information to ease development. The Production project stage focuses on performance. The other two valid values (SystemTest and UnitTest) allow us to implement our own custom behavior for these two stages.

The javax.faces.application.Application class has a getProjectStage() method that allows us to obtain the current project stage. Based on the value returned by this method, we can implement code that will only be executed in the appropriate stage. The following code snippet illustrates this:

public void someMethod() {
    FacesContext facesContext = FacesContext.getCurrentInstance();
    Application application = facesContext.getApplication();
    ProjectStage projectStage = application.getProjectStage();
    if (projectStage.equals(ProjectStage.Development)) {
        //do development stuff
    } else if (projectStage.equals(ProjectStage.Production)) {
        //do production stuff
    } else if (projectStage.equals(ProjectStage.SystemTest)) {
        //do system test stuff
    } else if (projectStage.equals(ProjectStage.UnitTest)) {
        //do unit test stuff
    }
}

As illustrated in the snippet above, we can implement the code to be executed in any valid project stage, based on the return value of the getProjectStage() method of the Application class.

When creating a Java Web project using JSF, a Facelets file is automatically generated.
The generated Facelets file looks like this:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html">
    <h:head>
        <title>Facelet Title</title>
    </h:head>
    <h:body>
        Hello from Facelets
    </h:body>
</html>

As we can see, a Facelets file is nothing but an XHTML file using some Facelets-specific XML namespaces. In the automatically generated page above, the namespace definition xmlns:h="http://java.sun.com/jsf/html" allows us to use the "h" (for HTML) JSF component library. This namespace declaration allows us to use JSF-specific tags such as <h:head> and <h:body>, which are a drop-in replacement for the standard HTML/XHTML <head> and <body> tags, respectively.

The application generated by the new project wizard is a simple but complete JSF web application. We can see it in action by right-clicking on our project in the Projects window and selecting Run. At this point the application server is started (if it wasn't already running), the application is deployed, and the default system browser opens, displaying our application's default page.

Modifying our page to capture user data

The generated application, of course, is nothing but a starting point for us to create a new application. We will now modify the generated index.xhtml file to collect some data from the user. The first thing we need to do is add an <h:form> tag to our page. The <h:form> tag is equivalent to the <form> tag in standard HTML pages. After typing the first few characters of the <h:form> tag into the page and hitting Ctrl+Space, we can take advantage of NetBeans' excellent code completion. After adding the <h:form> tag and a number of additional JSF tags, our page now looks like this:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core">
    <h:head>
        <title>Registration</title>
        <h:outputStylesheet library="css" name="styles.css"/>
    </h:head>
    <h:body>
        <h3>Registration Page</h3>
        <h:form>
            <h:panelGrid columns="3"
                         columnClasses="rightalign,leftalign,leftalign">
                <h:outputLabel value="Salutation: " for="salutation"/>
                <h:selectOneMenu id="salutation" label="Salutation"
                                 value="#{registrationBean.salutation}">
                    <f:selectItem itemLabel="" itemValue=""/>
                    <f:selectItem itemLabel="Mr." itemValue="MR"/>
                    <f:selectItem itemLabel="Mrs." itemValue="MRS"/>
                    <f:selectItem itemLabel="Miss" itemValue="MISS"/>
                    <f:selectItem itemLabel="Ms" itemValue="MS"/>
                    <f:selectItem itemLabel="Dr." itemValue="DR"/>
                </h:selectOneMenu>
                <h:message for="salutation"/>
                <h:outputLabel value="First Name:" for="firstName"/>
                <h:inputText id="firstName" label="First Name" required="true"
                             value="#{registrationBean.firstName}"/>
                <h:message for="firstName"/>
                <h:outputLabel value="Last Name:" for="lastName"/>
                <h:inputText id="lastName" label="Last Name" required="true"
                             value="#{registrationBean.lastName}"/>
                <h:message for="lastName"/>
                <h:outputLabel for="age" value="Age:"/>
                <h:inputText id="age" label="Age" size="2"
                             value="#{registrationBean.age}"/>
                <h:message for="age"/>
                <h:outputLabel value="Email Address:" for="email"/>
                <h:inputText id="email" label="Email Address" required="true"
                             value="#{registrationBean.email}"/>
                <h:message for="email"/>
                <h:panelGroup/>
                <h:commandButton id="register" value="Register"
                                 action="confirmation"/>
            </h:panelGrid>
        </h:form>
    </h:body>
</html>

At runtime, this page renders as a simple three-column registration form. All JSF input fields must be inside an <h:form> tag. The <h:panelGrid> tag helps us to easily lay out JSF tags on our page. It can be thought of as a grid where other JSF tags will be placed. The columns attribute of the <h:panelGrid> tag indicates how many columns the grid will have; each JSF component inside the <h:panelGrid> component is placed in an individual cell of the grid. When the number of components matching the value of the columns attribute (three in our example) has been placed inside <h:panelGrid>, a new row is automatically started.

Each row in our <h:panelGrid> consists of an <h:outputLabel> tag, an input field, and an <h:message> tag. The columnClasses attribute of <h:panelGrid> allows us to assign CSS styles to each column inside the panel grid; its value must be a comma-separated list of CSS styles (defined in a CSS stylesheet). The first style is applied to the first column, the second style to the second column, the third style to the third column, and so on. Had our panel grid had more than three columns, the fourth column would have been styled using the first style in the columnClasses attribute, the fifth column using the second style, and so forth. If we wish to style rows in an <h:panelGrid>, we can do so with its rowClasses attribute, which works for rows the same way that columnClasses works for columns.

Notice the <h:outputStylesheet> tag inside <h:head> near the top of the page; this is a new tag introduced in JSF 2.0. One new feature that JSF 2.0 brings to the table is standard resource directories. Resources such as CSS stylesheets, JavaScript files, images, and so on can be placed under a top-level directory named resources, and JSF tags will have access to those resources automatically. In our NetBeans project, we need to place the resources directory under the Web Pages folder. We then need to create a subdirectory to hold our CSS stylesheet (by convention, this directory should be named css), and place our CSS stylesheet(s) in this subdirectory. The value of the library attribute in <h:outputStylesheet> must match the directory where our CSS file is located, and the value of its name attribute must match the CSS file name.

In addition to CSS files, we should place any JavaScript files in a subdirectory called javascript under the resources directory.
The file can then be accessed by the <h:outputScript> tag, using "javascript" as the value of its library attribute and the file name as the value of its name attribute. Similarly, images should be placed in a directory called images under the resources directory. These images can then be accessed by the JSF <h:graphicImage> tag, where the value of its library attribute would be "images" and the value of its name attribute would be the corresponding file name.

Now that we have discussed how to lay out elements on the page and how to access resources, let's focus our attention on the input and output elements on the page. The <h:outputLabel> tag generates a label for an input field in the form; the value of its for attribute must match the value of the id attribute of the corresponding input field. <h:message> generates an error message for an input field; the value of its for attribute must match the value of the id attribute of the corresponding input field.

The first row in our grid contains an <h:selectOneMenu> tag. This tag generates an HTML <select> tag on the rendered page. Every JSF tag has an id attribute; the value of this attribute must be a string containing a unique identifier for the tag. If we don't specify a value for this attribute, one will be generated automatically. It is a good idea to explicitly state the ID of every component, since this ID is used in runtime error messages; affected components are a lot easier to identify if we explicitly set their IDs. When using <h:outputLabel> tags to generate labels for input fields, or when using <h:message> tags to generate validation errors, we need to explicitly set the id of the corresponding input field, since we need to specify it as the value of the for attribute of the <h:outputLabel> and <h:message> tags.

Every JSF input tag has a label attribute. This attribute is used to generate validation error messages on the rendered page. If we don't specify a value for the label attribute, the field will be identified in the error message by its ID.

Each JSF input field has a value attribute; in the case of <h:selectOneMenu>, this attribute indicates which of the options in the rendered <select> tag will be selected. The value of this attribute must match the value of the itemValue attribute of one of the nested <f:selectItem> tags. The value of this attribute is usually a value binding expression, which means that the value is read at runtime from a JSF managed bean. In our example, the value binding expression #{registrationBean.salutation} is used. At runtime, JSF will look for a managed bean named registrationBean and look for an attribute named salutation on this bean; the getter method for this attribute will be invoked, and its return value will be used to determine the selected value of the rendered HTML <select> tag.

Nested inside the <h:selectOneMenu> there are a number of <f:selectItem> tags. These tags generate HTML <option> tags inside the HTML <select> tag generated by <h:selectOneMenu>. The value of the itemLabel attribute is the value that the user will see, while the value of the itemValue attribute is the value that will be sent to the server when the form is submitted.

All other rows in our grid contain <h:inputText> tags; this tag generates an HTML input field of type text, which accepts a single line of typed text as input. We explicitly set the id attribute of all of our <h:inputText> fields; this allows us to refer to them from the corresponding <h:outputLabel> and <h:message> tags.
We also set the label attribute for all of our <h:inputText> tags; this results in more user-friendly error messages. Some of our <h:inputText> fields require a value; these fields have their required attribute set to true. Every JSF input field has a required attribute; if we need the user to enter a value for the field, we set this attribute to true. This attribute is optional; if we don't explicitly set a value for it, it defaults to false.

In the last row of our grid, we added an empty <h:panelGroup> tag. The purpose of this tag is to allow adding several tags into a single cell of an <h:panelGrid>; any tags placed inside it are placed in the same cell of the grid. In this particular case, all we want is an "empty" cell in the grid so that the next tag, <h:commandButton>, is aligned with the input fields in the rendered page.

<h:commandButton> is used to submit a form to the server. The value of its value attribute is used to generate the text of the rendered button. The value of its action attribute is used to determine which page to display after the button is pressed. In our example, we are using static navigation: the value of the action attribute of the command button is hard-coded in the markup and corresponds to the name of the page we want to navigate to, minus its .xhtml extension. In our example, when the user clicks on the button, we want to navigate to a file named confirmation.xhtml, therefore we used a value of "confirmation" for its action attribute.

An alternative to static navigation is dynamic navigation. When using dynamic navigation, the value of the action attribute of the command button is a value binding expression resolving to a method returning a String in a managed bean. The method may return different values based on certain conditions, and navigation then proceeds to a different page depending on the value returned. As long as it returns a String, the managed bean method executed when using dynamic navigation can contain any logic inside it, and is frequently used to save data held in a managed bean into a database. When using dynamic navigation, the return value of the method executed when clicking the button must match the name of the page we want to navigate to (again, minus the file extension). In earlier versions of JSF, it was necessary to specify navigation rules in faces-config.xml; with the conventions introduced in the previous paragraphs, this is no longer necessary.
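The page above refers to a managed bean named registrationBean. A minimal sketch of what such a bean might look like using JSF 2.0 annotations is shown below; the property names mirror the value binding expressions used in the page, while the package name, the scope, and the register() method (illustrating the dynamic navigation alternative) are assumptions added here for illustration.

package com.example.jsf; // hypothetical package name

import javax.faces.bean.ManagedBean;
import javax.faces.bean.RequestScoped;

// Sketch of the managed bean behind the registration page. The default
// managed bean name derived from the class name is "registrationBean",
// matching the value binding expressions used in the page.
@ManagedBean
@RequestScoped
public class RegistrationBean {

    private String salutation;
    private String firstName;
    private String lastName;
    private Integer age;
    private String email;

    // Getters and setters are invoked by JSF when rendering the page
    // and when the submitted form values are applied to the bean.
    public String getSalutation() { return salutation; }
    public void setSalutation(String salutation) { this.salutation = salutation; }

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    public Integer getAge() { return age; }
    public void setAge(Integer age) { this.age = age; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    // If we switched the command button to dynamic navigation
    // (action="#{registrationBean.register}"), a method like this could
    // save the data and then decide where to navigate.
    public String register() {
        // persist the registration data here, then go to confirmation.xhtml
        return "confirmation";
    }
}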


Customization using ADF Meta Data Services

Packt
15 Jun 2011
8 min read
Oracle ADF Enterprise Application Development—Made Simple: Successfully plan, develop, test and deploy enterprise applications with Oracle ADF.

Why customization?

The reason ADF has customization features built in is because Oracle Fusion Applications need them. Oracle Fusion Applications is a suite of programs capable of handling every aspect of a large organization: personnel, finance, project management, manufacturing, logistics, and much more. Because organizations are different, Oracle has to offer a way for each customer organization to fit Oracle Fusion Applications to their requirements.

This customization functionality can also be very useful for organizations that do not use Oracle Fusion Applications. If you have two screens that work with the same data, but one of the screens must show more fields than the other, you can create one screen with all the fields and use customization to create another version of the same screen with fewer fields for other users. For example, the destination management application might have a data entry screen showing all details of a task to a dispatcher, but only the relevant details to an airport transfer guide.

Companies such as DMC Solutions that produce software for sale realize an additional benefit from the customization features in ADF: DMC Solutions can develop a base application, sell it to different customers, and customize each installation of the application for that customer without changing the base application.

How does an ADF customization work?

More and more Oracle products are using something called Meta Data Services to store metadata. Metadata is data that describes other pieces of information: where it came from, what it means, or how it is intended to be used. An image captured by a digital camera might include metadata about where and when the picture was taken, which camera settings were used, and so on. In the case of an ADF application, the metadata describes how the application is intended to be used.

There are three kinds of customizations in ADF:

• Seeded customizations: customizations defined in advance (before the user runs the application) by customization developers.
• User customizations (sometimes called personalizations): changes to aspects of the user interface made by application end users. The ADF framework offers a few user customization features, but you need additional software such as Oracle WebCenter for most user customizations. User customizations are outside the scope of this article.
• Design time at runtime: advanced customization of the application by application administrators and/or properly authorized end users. This requires that application developers have prepared the possible customizations as part of application development; it is complicated to program using only ADF, but Oracle WebCenter provides advanced components that make this easier. This is also outside the scope of this article.

Your customization metadata is stored in either files or a database repository. If you are only planning to use seeded customizations, a file-based repository is fine. However, if you plan to allow user customizations or design time at runtime, you should set up your production server to store customizations in a metadata database. Refer to the Fusion Middleware Administrator's Guide for information about setting up a metadata database.

Applying the customization layers

When an ADF application is customized, the ADF framework applies one or more customization layers on top of the base application.
Each layer has a value, and customizations are assigned to a specific customization layer and value. The concept of multiple layers makes it possible to apply, for example:

• Industry customization (customizing the application for, say, the travel industry: industry=travel)
• Organization customization (customizing the application for a specific travel company: org=xyztravel)
• Site customization (customizing the application for the Berlin office)
• Role-based customization (customizing the application for casual, normal, and advanced users)

The XDM application that DMC Solutions is building could be customized in one way for ABC Travel and in another way for XYZ Travel, and XYZ Travel might decide to further customize the application for different types of users. You can have as many layers as you need; Oracle Fusion Applications is reported to use 12 layers, but your applications are not likely to be that complex.

For each customization layer, the developer of the base application must provide a customization class that will be executed at runtime, returning a value for that layer. The ADF framework will then apply the customizations that the customization developer has specified for that layer/value combination. This means that the same application can look many different ways, depending on the values returned by the customization classes and the customizations registered:

• org=qrstravel, any role: the base application, because there are no customizations defined for QRS Travel
• org=abctravel, any role: the application customized for ABC Travel; because there are no role layer customizations for ABC Travel, the value of the role layer does not change the application
• org=xyztravel, role=normal: the application customized for XYZ Travel and further customized for normal users in XYZ Travel
• org=xyztravel, role=superuser: the application customized for XYZ Travel and further customized for super users in XYZ Travel

Making an application customizable

To make an application customizable, you need to do three things:

1. Develop a customization class for each layer of customization.
2. Enable seeded customization in the application.
3. Link the customization classes to the application.

The customization developer, who will be developing the customizations, will additionally have to set up JDeveloper correctly so that all customization levels can be accessed. This setup is described later in the article.

Developing the customization classes

For each layer of customization, you need to develop a customization class with a specific format; technically, it has to extend the Oracle-supplied abstract class oracle.mds.cust.CustomizationClass. A customization class has a name (returned by the getName() method) and a value (returned by the getValue() method). At runtime, the ADF framework will execute the customization classes for all layers to determine the customization value at each level. Additionally, the customization class has to return a short unique prefix to use for all customized items, and a cache hint telling ADF whether this is a static or dynamic customization.

Building the classes

Your customization classes should go in your Common Code workspace. A customization class is a normal Java class, that is, it is created with File | New | General | Java Class. In the Create Java Class dialog, give your class a name (OrgLayerCC) and place it into a customization package (for example, com.dmcsol.xdm.customization).
Choose to extend oracle.mds.cust.CustomizationClass and check the Implement Abstract Methods checkbox. Create a similar class called RoleLayerCC.

Implementing the methods

Because you asked JDeveloper to implement the abstract methods, your classes already contain three methods:

• getCacheHint()
• getName()
• getValue(RestrictedSession, MetadataObject)

The getCacheHint() method must return an oracle.mds.cust.CacheHint constant that tells ADF whether the value of this layer is static (common for all users) or dynamic (depending on the user). The normal values here are ALL_USERS for static customizations or MULTI_USER for customizations that apply to multiple users. In the XDM application, you will use:

• ALL_USERS for OrgLayerCC, because this customization layer applies to all users in the organization
• MULTI_USER for RoleLayerCC, because the role-based customization applies to multiple users, but not necessarily to all

Refer to the chapter on customization with MDS in the Fusion Developer's Guide for Oracle Application Development Framework for information on other possible values.

The getName() method simply returns the name of the customization layer.

The getValue() method must return an array of String objects. It will normally make most sense to return just one value: the application is running for exactly one organization, and you are either a normal user or a super user. For advanced scenarios, it is possible to return multiple values, in which case multiple customizations will be applied at the same layer. Each customization that a customization developer defines is tied to a specific layer and value; for example, a customization might apply only when org has the value xyztravel.

For the OrgLayerCC class, the value is static and is defined when DMC Solutions installs the application for XYZ Travel, for example in a property file. For the RoleLayerCC class, the value is dynamic, depending on the current user, and can be retrieved from the ADF security context. The RoleLayerCC class could look like the following:

package com.dmcsol.xdm.customization;

import ...

public class RoleLayerCC extends CustomizationClass {

    public CacheHint getCacheHint() {
        return CacheHint.MULTI_USER;
    }

    public String getName() {
        return "role";
    }

    public String[] getValue(RestrictedSession restrictedSession,
                             MetadataObject metadataObject) {
        String[] roleValue = new String[1];
        SecurityContext sec = ADFContext.getCurrent().getSecurityContext();
        if (sec.isUserInRole("superuser")) {
            roleValue[0] = "superuser";
        } else {
            roleValue[0] = "normal";
        }
        return roleValue;
    }
}

The getCacheHint() method returns MULTI_USER because this is a dynamic customization: it will return different values for different users. The getName() method simply returns the name of the layer. The getValue() method uses oracle.adf.share.security.SecurityContext to look up whether the user has the superuser role, and returns the value superuser or normal.

Deploying the customization classes

Because you place your customization classes in the Common Code project, you need to deploy the Common Code project to an ADF library and have the build/configuration manager copy it to your common library directory.
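For comparison, the static OrgLayerCC class might look like the sketch below. It assumes, as described above, that the organization value is defined at installation time in a property file; the property file name and key are illustrative assumptions, and the package names used for the MDS imports are assumed to be the standard oracle.mds.core and oracle.mds.cust packages.

package com.dmcsol.xdm.customization;

import java.io.InputStream;
import java.util.Properties;
import oracle.mds.core.MetadataObject;
import oracle.mds.core.RestrictedSession;
import oracle.mds.cust.CacheHint;
import oracle.mds.cust.CustomizationClass;

// Sketch of the static, per-organization customization class.
// The value is the same for every user, so the cache hint is ALL_USERS.
public class OrgLayerCC extends CustomizationClass {

    private static final String PROPERTY_FILE = "/xdm.properties"; // hypothetical
    private static final String ORG_KEY = "org.name";              // hypothetical

    public CacheHint getCacheHint() {
        return CacheHint.ALL_USERS;
    }

    public String getName() {
        return "org";
    }

    public String[] getValue(RestrictedSession restrictedSession,
                             MetadataObject metadataObject) {
        String org = "base"; // fall back to the base application
        try {
            InputStream in = OrgLayerCC.class.getResourceAsStream(PROPERTY_FILE);
            if (in != null) {
                Properties props = new Properties();
                props.load(in);
                org = props.getProperty(ORG_KEY, org);
                in.close();
            }
        } catch (Exception e) {
            // if the property cannot be read, stay with the base application
        }
        return new String[] { org };
    }
}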


How Scribus is Different from Other Software

Packt
13 Jun 2011
7 min read
Scribus 1.3.5: Beginner's Guide

You might be fully interested in free software, and may be running Linux or any other system except Apple Mac OS or Microsoft Windows; in this case, you don't have much choice except for Scribus, Scribus, or Scribus. This is mostly because proprietary equivalents such as Adobe InDesign or QuarkXPress are not available for Linux-based platforms.

Desktop publishing software versus text processors

If you have already used layout software before, these arguments are not new to you. However, if you come from any other computer-assisted profession, you may be surprised at the way such software is organized. Most of you will certainly have used text processors such as Microsoft Word, OpenOffice.org Writer, and maybe Microsoft Publisher. Once you go deeper into the details, you'll see how Scribus is different. I've heard many people explain that they were trying Scribus because they thought or heard it was a better piece of software.

Text processors are very good when it's time to handle text (and this is an important point), but not when there is a need to customize a document. Just take a look around: you can identify any magazine or any book collection by its visual identity, which is made possible by desktop publishing software. Could you identify as easily the origin of a Microsoft Word or OpenOffice document? I'm not sure, because all of these documents will be very similar.

Generally, you won't use a layout program if you need to save time and work very quickly, because it is not intended to save time, but to let you be as free as possible to create a unique document: the one that will make you change the world, or the one that will help you improve the communication of your company and make it more efficient. Scribus will give you everything to be as productive as possible. However, every time you need to choose a color, add a shape, or change the text settings, every single little task you perform to get the best graphically designed final document will add to the time taken. This is a very important point if you want your layout project to succeed. I have seen many projects where people really underestimated the time taken to perform these tasks.

To help you create your document, remember that a layout program is not based on text handling, but on the page. In Scribus, the page is an object that you'll be able to manipulate. On the page, you'll add shapes or frames that you'll place precisely, one by one, and each of these will have its own properties. In particular, in a layout program images are kept strictly apart from the text, whereas in a text processor both are part of the same flow. This again results in a different way of considering the elements you work with, and it may change the way you work. This is for the best: once you get used to it, once you have integrated the major but quite simple software possibilities, and once you take the print process specificities into account in your work, you'll be freer than you've ever been to create a unique document. This document will be the result of your own creativity, and not only of the default settings defined by one product or another.

To InDesign and Xpress users

If you've already used a layout program, you will certainly have questions such as: Is this software as good as mine? Can I import what I've done with my current software so that I won't have to do everything again?
Will I have many things to learn to be as productive as I am now?

For the first question: Scribus is in some ways very good and has very original features, but in some other ways it is less than perfect. The real question is: what do you actually use in the software you have, and does Scribus have it? I used to be an Xpress teacher and I've often met graphic designers who don't even use styles or master pages; Scribus has both. Scribus can use spot colors, set bleeds, and offers many of the other required features.

As an answer to the second question I could simply say "No", mainly "No". As far as I know, this is always the tricky part with whatever software you use. Scribus will soon be able to import Xpress tags and InDesign IDML, but this support is still in development and is not yet usable; if you use Microsoft Publisher, there is really no way.

As for the last question, I don't think there are that many things to learn. Scribus has an original user interface but takes inspiration from some de facto standards. And, mainly, the principles are the same in Scribus as in InDesign or Xpress. Of course, you will have to adapt some of your habits, but after two or three days of testing Scribus, everything will be perfect again and you'll feel comfortable with it.

Shortcuts will certainly be the most difficult thing to relearn. Xpress users, especially, use them a lot, and even InDesign users use them for text handling. Scribus' default shortcuts are much simpler. You can use the Keyboard Shortcuts category of the Preferences dialog to change them: simply select the function you want to change in the Action list, click on the User Defined Key option, click on the Set Key button, and perform the shortcut you'd like to assign. If it is already in use, you won't be able to assign it unless you find where it is assigned and erase it.

Applying master pages: In Scribus, unlike in InDesign, a left-hand side master page can be applied to a right-hand side page. Scribus never automates the way master pages are applied, except when creating the document. So, if you're confused by that, don't worry; you'll be able to do what you want even if you have chosen the wrong side.

Frame conversion and text to outlines: In Scribus, frames are central. Adobe InDesign in some ways tries to avoid them by using a single tool for text editing and text frames, and at the same time it can import pictures without requiring a frame. But in any case, a frame is created, even if automatically. Another good feature of Scribus frames is that they can easily be converted to any other kind of frame. So, if you created a text frame and want to put an image into it, you can do so without deleting the frame and drawing a new one. This is very important because the default frame shape is set to rectangle and cannot be changed.

Importing several pictures

In Scribus, it is currently impossible to import several pictures at once (as can be done in InDesign). This can, however, be done with Scribus Python scripting. There are already some scripts for this on the Scribus wiki at http://wiki.scribus.net; check for the script that suits your needs.

Summary

In this article we saw how Scribus is different from other kinds of software.


The ADF Proof of Concept

Packt
10 Jun 2011
12 min read
Oracle ADF Enterprise Application Development—Made Simple: Successfully plan, develop, test and deploy enterprise applications with Oracle ADF.

You can compare the situation at the start of a project to standing in front of a mountain with the task of excavating a tunnel. The mountainsides are almost vertical, and there is no way for you to climb the mountain to figure out how wide it is. You can take two approaches:

• You can start blasting and drilling in the full width of the tunnel you need
• You can start drilling a very small pilot tunnel all the way through the mountain, and then expand it to full width later

It's probably more efficient to build the full width of the tunnel straight from the beginning, but this approach has some serious disadvantages as well. You don't know how wide the mountain is, so you can't tell how long it will take to build the tunnel. In addition, you don't know what kind of surprises might lurk in the mountain: porous rock, aquifers, or any number of other obstacles to your tunnel building. That's why you should build the pilot tunnel first, so you know the size of the task and have an idea of the obstacles you might meet on the way. The Proof of Concept is that pilot tunnel.

The very brief ADF primer

Since you have decided to evaluate ADF for your enterprise application, you probably already have a pretty good idea of its architecture and capabilities. Therefore, this section gives only a very brief overview of ADF; there are many whitepapers, tutorials, and demonstrations available on the Oracle Technology Network website. Your starting point for ADF information is http://otn.oracle.com/developer-tools/jdev/overview.

Enterprise architecture

A modern enterprise application typically consists of a frontend, user-facing part and a backend business service part.

Frontend

The frontend part is constructed from several layers. In a web-based application, these are normally arranged in the common Model-View-Controller (MVC) pattern:

• The View layer interacts with the user, displaying data as well as receiving updates and user actions.
• The Controller layer is in charge of interpreting user actions and deciding which screens are presented to the user in which order.
• The Model layer represents the backend business services to the View and Controller, hiding the complexity of storing and retrieving data.

This architecture implements a clean separation of duties: the page doesn't have to worry about where to go next, because that is the task of the controller. And the controller doesn't have to worry about how to store data in the data service, because that is the task of the model.

Other Frontends

An enterprise application could also have a desktop application frontend, and might have additional frontends for mobile users, or even use existing desktop applications like Microsoft Excel to interact with data. In the ADF technology stack, all of these alternative frontends interact with the same model, making it easy to develop multiple frontend applications against the same data services.

Backend

The backend part consists of a business service layer that implements the business logic and provides some way of accessing the underlying data services. Business services can be implemented as API code written in Java, PL/SQL, or other languages, as web services, or using a business service framework such as ADF Business Components. Under the business services layer there is a data service layer that actually stores persistent data.
Typically, this is based on relational tables, but it could also be XML files in a file system or data in other systems accessed through an interface.

ADF architecture

There are many different ways of building applications with Oracle Application Development Framework, but Oracle has chosen a modern SOA-based architecture for Oracle Fusion Applications. This brand new product has been built from the ground up as the successor to Oracle E-Business Suite, Siebel, PeopleSoft, J.D. Edwards, and many other applications Oracle has acquired over the last couple of years. If it is good enough for Oracle Fusion Applications, arguably the biggest enterprise application development effort ever undertaken by mankind, it is probably good enough for you, too.

Oracle Fusion Applications uses the following parts of the ADF framework:

• ADF Faces Rich Client (ADFv), a very rich set of user interface components implementing advanced functionality in a web application.
• ADF Controller (ADFc), implementing the features of a normal JSF controller, but extended with the possibility to define modular, reusable page flows. ADFc also allows you to declare transaction boundaries, so that one database transaction can span many pages.
• ADF binding layer (ADFm), a standard defining a common backend model that the user interface can communicate with.
• ADF Business Components (ADFbc), a highly productive, declarative way of defining business services based on relational tables.

There are many ways of getting from A to B; this article is about travelling the straight and well-paved road Oracle has built for Fusion Applications. However, other routes might be appropriate in some situations: you could build the user interface as a desktop application using ADF Swing components, you could use ADF for a mobile device, or you could use ADF Desktop Integration to access your data directly from within Microsoft Excel. Your business services could be based on web services, EJBs, or many other technologies, using the ADF binding layer to connect to the user interface.

Entity objects and associations

Entity objects (EOs) take care of object-relational mapping: making your relational tables available to the application as Java objects. Entity objects are the base that view objects are built on, and all data modifications go through the entity object. You will normally have one entity object for every database table or database view your application uses, and this object is responsible for producing the correct SQL statements to insert, update, or delete rows in the underlying relational tables. Entity objects help you build scalable and well-performing applications by intelligently caching records on the application server in order to minimize the load the application places on the database.

Just as entity objects are the middle-tier reflection of database tables and database views, associations are the reflection of foreign key relationships between tables. An association represents a connection between two entity objects and allows ADF to relate data in one entity object with data in another. JDeveloper is normally able to create these automatically by simply inspecting the database, but in case your database does not contain foreign keys, you can build associations by hand to tell ADF about the relationships in your data.
View objects and view links

While you do not really need to make any major decisions when building the entity objects for the Proof of Concept, you do need to consider the consumers of your business services when you start building view objects; for example, what information you would display on a screen. View objects are typically based on entity objects, and you will be using them for two purposes:

• To provide data for your screens
• To provide data for lists of values (LOVs)

The data handling view objects are normally specific to each screen or business service. One screen can use multiple view objects; in general, you need to create one view object for each master-detail level you wish to display on your screen. One view object can pull together data from several entity objects, so if you just need to retrieve a reference value from another table, you do not need to create a separate view object for this.

The LOV view objects are used for drop-down lists and other selections in your user interface. They will typically be defined as read-only, and because they are reusable, you define them once and re-use them everywhere you need a drop-down list on a specific data set.

View links are used to define the relationships between view objects and are typically based on associations (again, often based on foreign keys in the database).

Consider two ways of displaying data from the familiar EMP and DEPT tables. If you wish to display a department with all the employees of the department in a master-detail screen, you create two view objects connected by a view link. If you wish to display all employees together with the name of the department where they work, you only need one view object, pulling together data from both the EMP and DEPT tables through the entity objects.

Application modules

Application modules encapsulate the view object instances and business service methods necessary to perform a unit of work. Each application module has its own transactional context and holds its own database connection. This means that all of the work a user performs using view objects from one application module is part of one database transaction.

Application modules can have different granularity, but typically you will have one application module for each major piece of functionality. If your requirements are specified with use cases, there will often be one application module for each major use case. However, multiple use cases can also be grouped together into one application module; indeed, it is possible to build a small application using just one application module.

Application modules for Oracle Forms

If you come from an Oracle Forms background and are developing a replacement for an Oracle Forms application, your application will often have a relatively small number of complex, major forms, and a larger number of simple data maintenance forms. You will often create one application module per major form, and a few application modules that each provide data for a number of simple forms.

If you wish, you can combine multiple application modules inside one root application module. This is called nesting, and it allows several application modules to participate in the transaction of the root application module. This also saves database connections, because only the root application module needs a connection.
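To make the role of the application module more concrete, the following is a minimal sketch of how a standalone test client could exercise an application module and one of its view object instances through the ADF Business Components API. The application module definition, configuration, and view object instance names used here are invented for the example; in a real project they come from your own model project.

package com.example.adf; // hypothetical package

import oracle.jbo.ApplicationModule;
import oracle.jbo.Row;
import oracle.jbo.ViewObject;
import oracle.jbo.client.Configuration;

// Sketch of a standalone client exercising an application module.
public class TaskServiceClient {

    public static void main(String[] args) {
        // Acquiring the root application module opens the database connection
        // and starts the transactional context described above.
        ApplicationModule am = Configuration.createRootApplicationModule(
                "com.example.adf.model.TaskServiceAM", "TaskServiceAMLocal");
        try {
            // The application module exposes its view object instances by name.
            ViewObject tasks = am.findViewObject("OpenTasksVO");
            tasks.executeQuery();

            Row first = tasks.first();
            if (first != null) {
                System.out.println("First task: " + first.getAttribute("TaskName"));
            }

            // All work done through this application module's view objects
            // belongs to the same database transaction.
            am.getTransaction().commit();
        } finally {
            Configuration.releaseRootApplicationModule(am, true);
        }
    }
}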
The ADF user interface

The preferred way to build the user interface in an ADF enterprise application is with JavaServer Faces (JSF). JSF is a component-based framework for building web-based user interfaces that overcomes many of the limitations of earlier technologies like JavaServer Pages (JSP). In a JSF application, the user interface does not contain any code, but is instead built from configurable components from a component library. For your application, you will want to use the sophisticated ADF 11g JSF component library, known as the ADF Faces Rich Client. There are other JSF component libraries: for example, the previous version of the ADF Faces components (version 10g) has been released by Oracle as open source and is now part of the Apache MyFaces Trinidad project. But for a modern enterprise application, use the ADF Faces Rich Client.

ADF Task Flows

One of the great improvements in ADF 11g was the addition of ADF Task Flows. It had long been clear to web developers that in a web application, you cannot just let each page decide where to go next; you need the controller from the MVC architecture. Various frameworks and technologies have implemented controllers (both the popular Struts framework and JSF have one), but the controller in ADF Task Flows is the first controller capable of handling large enterprise applications.

An ADF web application has one unbounded task flow, where you place all the publicly accessible pages and define the navigation between them. This corresponds to other controller architectures. But ADF also has bounded task flows, which are complete, reusable mini-applications that can be called from the unbounded task flow or from another bounded task flow. A bounded task flow has a well-defined entry point, accepts input parameters, and can deliver an outcome back to the caller. For example, you might build a customer management task flow to handle customer data. In this way, your application can be built in a modular fashion: the developers in charge of implementing each use case can define their own bounded task flow with a well-defined interface for others to call. The team building the customer management task flow is thus free to add new pages or change the navigation flow without affecting the rest of the application.

ADF pages and fragments

In your task flows, you can define either pages or page fragments. Pages are complete web pages that you can run on their own, while page fragments are reusable components that you place inside regions on pages. An enterprise application will often have a small number of pages (possibly only one) and a larger number of page fragments that dynamically replace each other inside a region. This design means that the user does not see the whole browser window redraw itself; only parts of the page change as one fragment is replaced with another. It is this technique that makes an ADF application seem more like a desktop application than a traditional web application.

On your pages or page fragments, you add content using layout components, data components, and control components:

• The layout components are containers for other components and control the screen layout. Often, multiple layout components are nested inside each other to achieve the desired layout.
• The data components are the fields, drop-down lists, radio buttons, and so on that the user interacts with to create and modify data.
• The control components are the buttons and links used to perform actions in an ADF application.
Java Refactoring in NetBeans

Packt
08 Jun 2011
7 min read
NetBeans IDE 7 Cookbook: Over 70 highly focused practical recipes to maximize your output with NetBeans

Introduction

Be warned that many of the refactoring techniques presented in this article might break some code. NetBeans, and other IDEs for that matter too, make it easier to revert changes, but of course be wary of things going wrong. With that in mind, let's dig in.

Renaming elements

This recipe focuses on how the IDE handles the renaming of all elements of a project, be it the project itself, classes, methods, variables, or packages.

How to do it...

Let's create the code to be renamed:

Create a new project; this can be achieved by either clicking File and then New Project or pressing Ctrl+Shift+N.
On the New Project window, choose Java on the Categories side, and on the Projects side select Java Application. Then click Next.
Under Name and Location: name the project as RenameElements and click Finish.
With the project created, we will need to clear the RenameElements.java class of the main method and insert the following code:

package renameelements;

import java.io.File;

public class RenameElements {
    private void printFiles(String string) {
        File file = new File(string);
        if (file.isFile()) {
            System.out.println(file.getPath());
        } else if (file.isDirectory()) {
            for (String directory : file.list())
                printFiles(string + file.separator + directory);
        }
        if (!file.exists())
            System.out.println(string + " does not exist.");
    }
}

The next step is to rename the package, so place the cursor on top of the package name, renameelements, and press Ctrl+R. A Rename dialog pops up with the package name. Type util under New Name and click on Refactor.

Our class contains several variables we can rename: Place the cursor on top of the String parameter named string and press Ctrl+R. Type path and press Enter. Let's rename the other variables: Rename file into filePath.

To rename methods, perform the steps below: Place the cursor on top of the method declaration, printFiles, right-click it, then select Refactor and Rename.... On the Rename Method dialog, under New Name enter recursiveFilePrinting and press Refactor.

Then let's rename classes: To rename a class, navigate to the Projects window and press Ctrl+R on the RenameElements.java file. On the Rename Class dialog enter FileManipulator and press Enter.

And finally, renaming an entire project: Navigate to the Projects window, right-click on the project name, RenameElements, and choose Rename.... Under Project Name enter FileSystem and tick Also Rename Project Folder; after that, click on Rename.

How it works...

Renaming a project works a bit differently from renaming a variable, since in this action NetBeans needs to rename the folder where the project is placed. The Ctrl+R shortcut is not enough in itself, so NetBeans shows the Rename Project dialog. This emphasizes to the developer that something deeper is happening. When renaming a project, NetBeans gives the developer the possibility of renaming the folder where the project is contained to the same name as the project. This is a good practice and, more often than not, is followed.

Moving elements

NetBeans enables the developer to easily move classes around different projects and packages. No more breaking compatibility when moving those classes around, since all of it is seamlessly handled by the IDE.

Getting ready

For this recipe we will need a Java project and a Java class so we can exemplify how moving elements really works. The existing code, created in the previous recipe, is going to be enough.
Also, you can try doing this with your own code, since moving a class is not a complicated step and can easily be undone.

Let's create a project:

Create a new project, which can be achieved either by clicking File and then New Project or pressing Ctrl+Shift+N.
In the New Project window, choose Java on the Categories side and Java Application on the Projects side, then click Next.
Under Name and Location, name the project as MovingElements and click Finish.
Now right-click on the movingelements package, select New... and Java Class....
On the New Java Class dialog enter the class name as Person. Leave all the other fields with their default values and click Finish.

How to do it...

Place the cursor inside Person.java and press Ctrl+M.
Select a working project from the Project field.
Select Source Packages in the Location field.
Under the To Package field enter classextraction:

How it works...

When the Refactor button is clicked, the class is removed from the current project and placed in the project that was selected from the dialog. The package in that class is then updated to match.

Extracting a superclass

Extracting superclasses enables NetBeans to add different levels of hierarchy even after the code is written. Usually, requirements change in the middle of development, and rewriting classes to support inheritance would be quite complicated and time-consuming. NetBeans enables the developer to create those superclasses in a few clicks and, by understanding how this mechanism works, even to create superclasses that extend other superclasses.

Getting ready

We will need to create a project based on the Getting Ready section of the previous recipe, since it is very similar. The only change from the previous recipe is that this recipe's project name will be SuperClassExtraction.

After project creation:

Right-click on the superclassextraction package, select New... and Java Class....
On the New Java Class dialog enter the class name as DataAnalyzer. Leave all the other fields with their default values and click Finish.
Replace the entire content of DataAnalyzer.java with the following code:

package superclassextraction;

import java.util.ArrayList;

public class DataAnalyzer {
    ArrayList<String> data;
    static final boolean CORRECT = true;
    static final boolean INCORRECT = false;

    private void fetchData() {
        //code
    }

    void saveData() {
    }

    public boolean parseData() {
        return CORRECT;
    }

    public String analyzeData(ArrayList<String> data, int offset) {
        //code
        return "";
    }
}

Now let's extract our superclass.

How to do it...

Right-click inside the DataAnalyzer.java class, select Refactor and Extract Superclass....
When the Extract Superclass dialog appears, enter the Superclass Name as Analyzer.
On Members to Extract, select all members, but leave saveData out.
Under the Make Abstract column select analyzeData() and leave parseData(), saveData(), fetchData() out. Then click Refactor.

How it works...

When the Refactor button is pressed, NetBeans copies the marked methods from DataAnalyzer.java and re-creates them in the superclass. NetBeans deals intelligently with methods marked as abstract. The abstract methods are moved up in the hierarchy and the implementation is left in the concrete class. In our example, analyzeData is moved to the abstract class but marked as abstract; the real implementation is then left in DataAnalyzer. NetBeans also supports the moving of fields, in our case the CORRECT and INCORRECT fields.
The following is the code in DataAnalyzer.java: public class DataAnalyzer extends Analyzer { public void saveData() { //code } public String analyzeData(ArrayList<String> data, int offset) { //code return ""; } } The following is the code in Analyzer.java: public abstract class Analyzer { static final boolean CORRECT = true; static final boolean INCORRECT = false; ArrayList<String> data; public Analyzer() { } public abstract String analyzeData(ArrayList<String> data, int offset); public void fetchData() { //code } public boolean parseData() { //code return DataAnalyzer.CORRECT; } } There's more... Let's learn how to implement parent class methods. Implementing parent class methods Let's add a method to the parent class: Open Analyzer.java and enter the following code: public void clearData(){ data.clear(); } Save the file. Open DataAnalyzer.java, press Alt+Insert and select Override Method.... In the Generate Override Methods dialog select the clearData() option and click Generate. NetBeans will then override the method and add the implementation to DataAnalyzer.java: @Override public void clearData() { super.clearData(); }  
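The generated stub simply delegates to the parent. If the subclass needs to do more than delegate, you can edit the stub by hand. The following is a sketch of what DataAnalyzer.java could look like with a small piece of subclass-specific behaviour added before the call to super; the log statement is our own addition for illustration, not something NetBeans generates.

package superclassextraction;

import java.util.ArrayList;

public class DataAnalyzer extends Analyzer {

    @Override
    public void clearData() {
        // subclass-specific behaviour added before delegating to the parent
        System.out.println("Discarding " + data.size() + " entries");
        super.clearData();
    }

    public void saveData() {
        //code
    }

    @Override
    public String analyzeData(ArrayList<String> data, int offset) {
        //code
        return "";
    }
}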
Getting started with Sage

Packt
08 Jun 2011
7 min read
Sage Beginner's Guide Remember that you don't actually have to install Sage to start using it. You can start learning Sage by utilizing one of the free public notebook servers that can be found at http://www.sagenb.org/. However, if you find that Sage suits your needs, you will want to install a copy on your own computer. This will guarantee that Sage is always available to you, and it will reduce the load on the public servers so that others can experiment with Sage. In addition, your data will be more secure, and you can utilize more computing power to solve larger problems. Before you begin At the moment, Sage is fully supported on certain versions of the following platforms: some Linux distributions (Fedora, openSUSE, Red Hat, and Ubuntu), Mac OS X, OpenSolaris, and Solaris. Sage is tested on all of these platforms before each release, and binaries are always available for these platforms. The latest list of supported platforms is available at http://wiki.sagemath.org/SupportedPlatforms. The page also contains information about platforms that Sage will probably run on, and the status of efforts to port Sage to various platforms. When downloading Sage, the website attempts to detect which operating system you are using, and directs you to the appropriate download page. If it sends you to the wrong download page, use the "Download" menu at the top of the page to choose the correct platform. If you get stuck at any point, the official Sage installation guide is available at http://www.sagemath.org/doc/installation/. Installing a binary version of Sage on Windows Installing Sage on Windows is slightly more involved than installing a typical Windows program. Sage is a collection of over 90 different tools. Many of these tools are developed within a UNIX-like environment, and some have not been successfully ported to Windows. Porting programs from UNIX-like environments to Windows requires the installation of Cygwin (http://www.cygwin.com/), which provides many of the tools that are standard on a Linux system. Rather than attempting to port all of the necessary tools to Cygwin on Windows, the developers of Sage have chosen to distribute Sage as a virtual machine that can run on Windows with the use of the free VMWare Player. A port to Cygwin is in progress, and more information can be found at http://trac.sagemath.org/sage_trac/wiki/CygwinPort. Downloading VMware Player The VMWare Player can be found at http://www.vmware.com/products/player/. Clicking the Download link will direct you to a registration form. Fill out and submit the form. You will receive a confirmation email that contains a link that must be clicked to complete the registration process and take you to the download page. Choose Start Download Manager, which downloads and runs a small application that performs the actual download and saves the file to a location of your choice. Installing VMWare Player After downloading VMWare Player, double-click the saved file to start the installation wizard. Follow the instructions in the wizard to install the Player. You will have to reboot the computer when instructed. Downloading and extracting Sage Download Sage by following the Download link from http://www.sagemath.org. The site should automatically detect that you are using Windows, and direct you to the right download page. Choose the closest mirror and download the compressed virtual machine. Be aware that the file is nearly 1GB in size. 
Once the download is complete, right-click the compressed file and choose Extract all from the pop-up menu. Launching the virtual machine Launch VMware Player and accept the license terms. When the Player has started, click Open a Virtual Machine and select the Sage virtual machine, which is called sage-vmware.vmx. Click Play virtual machine to run Sage. If you have run Sage before, it should appear in the list of virtual machines on the left side of the dialog box, and you can double-click to run it. When the virtual machine launches, you may receive one or more warnings about various devices (such as Bluetooth adapters) that the virtual machine cannot connect to. Don't worry about this, since Sage doesn't need these devices. Start Sage Once the virtual machine is running, you will see three icons. Double-clicking the Sage Notebook icon starts the Sage notebook interface, while the Sage icon starts the commandline interface. The first time you run Sage, you will have to wait while it regenerates files. When it finishes, you are ready to go. You may get the warning "External network not set up" when launching the notebook interface. This does not cause any problems. When you are done using Sage, choose Shut Down… from the System menu at the top of the window, and a dialog will appear. Click the Shut Down button to close the virtual machine.   Installing a binary version of Sage on OS X On Mac OS X, you have the option of installing a pre-built binary application, or downloading the source code and compiling Sage yourself. One advantage of the pre-built binary is that it is very easy to install, because it contains everything you need to run Sage. Another advantage of the binary is that building Sage from source requires a lot of computational resources, and may take a long time on older machines. However, there are a number of disadvantages to prebuilt binaries. The binary download is quite large, and the installed files take up a lot of disk space. Many of the tools in the binary may be duplicates of tools you already have on your system. Pre-built binaries cannot be tuned to take advantage of the hardware features of a particular platform, so building Sage from source is preferred if you are looking for the best performance on CPU-intensive tasks. You will have to choose which method is right for you. Downloading Sage Download Sage by following the Download link from http://www.sagemath.org. The site should automatically detect that you are using OS X, and direct you to the right download page. Choose a mirror site close to you. Select your architecture (Intel for new Macs, or PowerPC for older G4 and G5 macs). Then, click the link for the correct .dmg file for you version of Mac OS X. If you aren't sure, click the Apple menu on the far left side of the menu bar and choose About This Mac. Installing Sage Once the download is complete, double-click the .dmg file to mount the disk image. Drag the Sage folder from the disk image to the desired location on your hard drive (such as the Apps folder). If the copy procedure fails, you will need to do it from the command line. Open the Terminal application and enter the following commands. Be sure to change the name sage-4.5-OSX-64bit-10.6-i386-Darwin.dmg to the name of the file you just downloaded: $ cd /Applications $ cp -R -P /Volumes/sage-4.5-OSX-64bit-10.6-i386-Darwin.dmg /sage . After the copy process is complete, right-click on the icon for the disk image, and choose Eject. 
Starting Sage Use the Finder to visit the Sage folder that you just created. Double-click on the icon called Sage. It should open with the Terminal application. If it doesn't start, right-click on the icon, go to the Open With submenu and choose Terminal.app. The Sage command line will now be running in a Terminal window. The first time you run Sage, you will have to wait while it regenerates files. When it finishes, you are ready to go. There are three ways to exit Sage: type exit or quit at the Sage command prompt, or press Ctrl-D in the Terminal window. You can then quit the Terminal application.  
Working with User Defined Values in SAP Business One

Packt
03 Jun 2011
8 min read
Mastering SQL Queries for SAP Business One: Utilize the power of SQL queries to bring Business Intelligence to your small to medium-sized business

The User-Defined Values function enables SAP Business One users to enter values, originated by a predefined search process, for any field in the system (including user-defined fields). This function enables the user to enter data more efficiently and – perhaps most importantly – more accurately. In fact, the concept is sort of a "Workflow Light" implementation. It can both save user time and reduce duplicate data entry. In this article by Gordon Du, author of Mastering SQL Queries for SAP Business One, we will see how to work with User-Defined Values.

How to work with User-Defined Values

To access the User-Defined Values, you can choose the menu item Tools | User-Defined Values. You can also use the shortcut key Shift+Alt+F2 instead. Another option is to access it directly from a non-assigned field by using Shift+F2. This will be discussed later. Note that the option will not be available until you have brought up at least one form. This is because the UDV has to be associated with a form; it can't stand alone.

The following screenshots are taken from an A/R Down Payment Invoice. It is one of the standard marketing documents. From the UDV point of view, there is no big difference between this and the other types of documents, namely Sales Order, Purchase Order, Invoice, and so on. After a form is opened, a UDV can be defined. We will start from an empty screen to show you the first step: bringing up a form.

When a form is opened, you can define or change any UDV. In this case, we stop our cursor on the Due Date field and then enter Shift+F2. A system message will pop up as shown in the following screenshot: If you click on Yes, it will bring up the same window as selecting the menu item mentioned earlier from the Tools menu or pressing Shift+Alt+F2.

When you get the User-Defined Values - Setup screen, you have three options. Apart from the default option, Without Search User-Defined Values, you actually have only two choices: Search in Existing User-Defined Values, and Search in Existing User-Defined Values according to Saved Query. Let's go through the last option first: Search in Existing User-Defined Values according to Saved Query. Query-related topics are always given top priority here. There are quite a few screenshots that will help you understand the entire process.

Search in existing User-Defined Values according to the saved queries

The goal for this example is to input the due date as the current date automatically. The first thing to do for this option is to click on the bottom radio button among the three options. The screenshot is shown next: After you have clicked the Search in Existing User-Defined Values according to Saved Query radio button, you will find a long empty textbox in a grey color and a checkbox for Auto Refresh When Field Changes underneath. Don't be confused by the color: even though in other functions throughout SAP Business One a gray colored field normally means that you cannot enter information into the field, that is not the case here. You can double-click it to get the User-Defined Values. When you double-click on the empty across-window text box, you can bring up the query manager window to select a query.
You can then browse the query category that relates to Formatted Searches and find the query you need. The query called Auto Date Today in the showcase is very simple. The query script is as simple as this: SELECT GetDate() This query returns the current date as the result. You need to double-click to select the query and then go back to the previous screen but with the query name, as shown in the following screenshot: It may not be good enough to select only query because if you stop here you have to always manually trigger the FMS query run by entering Shift+F2. To automate the FMS query process, you can click on the checkbox under the selected query. After you check this box, another long text box will be displayed with a drop-down list button. Under the text box, there are two radio buttons for Auto Refresh When Field Changes: Refresh Regularly Display Saved User-Defined Value Display Saved User-Defined Values will be the default selection, if you do not change it. When you click on the drop-down list arrow button, you will get a list of fields that are associated with the current form. You can see in the following screenshot that Customer/Vendor Code field has been selected. For header document UDV, this field is often the most useful field to auto refresh the UDV. In theory, you can select any fields from the list. However, in reality only a few fields are good candidates for the task. These include Customer/Vendor Code, Document Currency, Document Number, and Document Total for document header; Item Code and Quantity for document lines. Choosing the correct data field from this drop-down list is always the most difficult step in Formatted Search, and you should test your data field selection fully. Now, the text box is filled with Customer/Vendor Code for automatically refreshing the UDV. Between two options, this query can only select the default option of Display Saved User-Defined Value. Otherwise, the date will always change to the date you have updated the document on. That will invalidate the usage of this UDV. The Refresh Regularly option is only suitable to the value that is closely related to the changed field that you have selected. In general, Display Saved User-Defined Value is always a better option than Refresh Regularly. At least it gives the system less burden. If you have selected Refresh Regularly, it means you want to get the UDV changed whenever the base field changes. The last step to set up this UDV is by clicking Update. As soon as you click the button, the User-Defined Values–Setup window will be closed. You can find a green message on the bottom-left of the screen saying Operation Completed Successfully. You can find a small "magnifying glass" added to the right corner of the Due Date field. This means the Formatted Search is successfully set up. You can try it for yourself. Sometimes this "magnifying glass" disappears for no reason. Actually, there are reasons but not easy to be understood. The main reason is that you may have assigned some different values to the same field on different forms. Other reasons may be related to add-on, and so on. In order to test this FMS, the first thing to try is to use the menu function or key combination Shift+F2. The other option is to just click on the "magnifying glass". Both functions have the same result. It will force the query to run. You can find that the date is filled by the same date as posting date and document date. 
You may find some interesting date definitions in SAP Business One, such as Posting Date is held by the field DocDate. Document Date however, is saved under TaxDate. Be careful in dealing with dates. You must follow the system's definition in using those terms, so that you get the correct result. A better way to use this FMS query is by entering the customer code directly without forcing FMS query to run first. The following screenshot shows that the customer code OneTime has been entered. Please note that the DueDate field is still empty. Is there anything wrong? No. That is the system's expected behavior. Only if your cursor leaves the Customer Code field, can the FMS query be triggered. That is a perfect example of When Field Value Changes. The system can only know that the field value is changed when you tab out of the field. When you are working with the field, the field is not changed yet. Be careful to follow system requirements while entering data. Never press Enter in most of the forms unless you are ready for the last step to add or update data. If you do, you may add the wrong documents to the system and they are irrevocable. The previous screenshot shows the complete process of setting up search in Existing User-Define Values according to Saved Query. Now it is time to discuss the $ sign field.
Python Testing: Coverage Analysis

Packt
01 Jun 2011
13 min read
Python Testing Cookbook Over 70 simple but incredibly effective recipes for taking control of automated testing using powerful Python testing tools Introduction A coverage analyzer can be used while running a system in production, but what are the pros and cons, if we used it this way? What about using a coverage analyzer when running test suites? What benefits would this approach provide compared to checking systems in production? Coverage helps us to see if we are adequately testing our system. But it must be performed with a certain amount of skepticism. This is because, even if we achieve 100 percent coverage, meaning every line of our system was exercised, in no way does this guarantee us having no bugs. A quick example involves a code we write and what it processes is the return value from a system call. What if there are three possible values, but we only handle two of them? We may write two test cases covering our handling of it, and this could certainly achieve 100 percent statement coverage. However, it doesn't mean we have handled the third possible return value; thus, leaving us with a potentially undiscovered bug. 100 percent code coverage can also be obtained by condition coverage but may not be achieved with statement coverage. The kind of coverage we are planning to target should be clear. Another key point is that not all testing is aimed at bug fixing. Another key purpose is to make sure that the application meets our customer's needs. This means that, even if we have 100 percent code coverage, we can't guarantee that we are covering all the scenarios expected by our users. This is the difference between 'building it right' and 'building the right thing'. In this article, we will explore various recipes to build a network management application, run coverage tools, and harvest the results. We will discuss how coverage can introduce noise, and show us more than we need to know, as well as introduce performance issues when it instruments our code. We will also see how to trim out information we don't need to get a concise, targeted view of things. This article uses several third-party tools in many recipes. Spring Python (http://springpython.webfactional.com) contains many useful abstractions. The one used in this article is its DatabaseTemplate, which offers easy ways to write SQL queries and updates without having to deal with Python's verbose API. Install it by typing pip install springpython. Install the coverage tool by typing pip install coverage. This may fail because other plugins may install an older version of coverage. If so, uninstall coverage by typing pip uninstall coverage, and then install it again with pip install coverage. Nose is a useful test runner.   Building a network management application For this article, we will build a very simple network management application, and then write different types of tests and check their coverage. This network management application is focused on digesting alarms, also referred to as network events. This is different from certain other network management tools that focus on gathering SNMP alarms from devices. For reasons of simplicity, this correlation engine doesn't contain complex rules, but instead contains simple mapping of network events onto equipment and customer service inventory. We'll explore this in the next few paragraphs as we dig through the code. How to do it... With the following steps, we will build a simple network management application. 
Create a file called network.py to store the network application. Create a class definition to represent a network event. class Event(object): def __init__(self, hostname, condition, severity, event_time): self.hostname = hostname self.condition = condition self.severity = severity self.id = -1 def __str__(self): return "(ID:%s) %s:%s - %s" % (self.id, self.hostname, self.condition, self.severity) hostname: It is assumed that all network alarms originate from pieces of equipment that have a hostname. condition: Indicates the type of alarm being generated. Two different alarming conditions can come from the same device. severity: 1 indicates a clear, green status; and 5 indicates a faulty, red status. id: The primary key value used when the event is stored in a database. Create a new file called network.sql to contain the SQL code. Create a SQL script that sets up the database and adds the definition for storing network events. CREATE TABLE EVENTS ( ID INTEGER PRIMARY KEY, HOST_NAME TEXT, SEVERITY INTEGER, EVENT_CONDITION TEXT ); Code a high-level algorithm where events are assessed for impact to equipment and customer services and add it to network.py. from springpython.database.core import* class EventCorrelator(object): def __init__(self, factory): self.dt = DatabaseTemplate(factory) def __del__(self): del(self.dt) def process(self, event): stored_event, is_active = self.store_event(event) affected_services, affected_equip = self.impact(event) updated_services = [ self.update_service(service, event) for service in affected_services] updated_equipment = [ self.update_equipment(equip, event) for equip in affected_equip] return (stored_event, is_active, updated_services, updated_equipment) The __init__ method contains some setup code to create a DatabaseTemplate. This is a Spring Python utility class used for database operations. See http://static.springsource.org/spring- python/1.2.x/sphinx/html/dao.html for more details. We are also using sqlite3 as our database engine, since it is a standard part of Python. The process method contains some simple steps to process an incoming event. We first need to store the event in the EVENTS table. This includes evaluating whether or not it is an active event, meaning that it is actively impacting a piece of equipment. Then we determine what equipment and what services the event impacts. Next, we update the affected services by determining whether it causes any service outages or restorations. Then we update the affected equipment by determining whether it fails or clears a device. Finally, we return a tuple containing all the affected assets to support any screen interfaces that could be developed on top of this. Implement the store_event algorithm. def store_event(self, event): try: max_id = self.dt.query_for_int("""select max(ID) from EVENTS""") except DataAccessException, e: max_id = 0 event.id = max_id+1 self.dt.update("""insert into EVENTS (ID, HOST_NAME, SEVERITY, EVENT_CONDITION) values (?,?,?,?)""", (event.id, event.hostname, event.severity, event.condition)) is_active = self.add_or_remove_from_active_events(event) return (event, is_active) This method stores every event that is processed. This supports many things including data mining and post mortem analysis of outages. It is also the authoritative place where other event-related data can point back using a foreign key. The store_event method looks up the maximum primary key value from the EVENTS table. It increments it by one. It assigns it to event.id. 
It then inserts it into the EVENTS table. Next, it calls a method to evaluate whether or not the event should be add to the list of active events, or if it clears out existing active events. Active events are events that are actively causing a piece of equipment to be unclear. Finally, it returns a tuple containing the event and whether or not it was classified as an active event. For a more sophisticated system, some sort of partitioning solution needs to be implemented. Querying against a table containing millions of rows is very inefficient. However, this is for demonstration purposes only, so we will skip scaling as well as performance and security. Implement the method to evaluate whether to add or remove active events. def add_or_remove_from_active_events(self, event): """Active events are current ones that cause equipment and/or services to be down.""" if event.severity == 1: self.dt.update("""delete from ACTIVE_EVENTS where EVENT_FK in ( select ID from EVENTS where HOST_NAME = ? and EVENT_CONDITION = ?)""", (event.hostname,event.condition)) return False else: self.dt.execute("""insert into ACTIVE_EVENTS (EVENT_FK) values (?)""", (event.id,)) return True When a device fails, it sends a severity 5 event. This is an active event and in this method, a row is inserted into the ACTIVE_EVENTS table, with a foreign key pointing back to the EVENTS table. Then we return back True, indicating this is an active event. Add the table definition for ACTIVE_EVENTS to the SQL script. CREATE TABLE ACTIVE_EVENTS ( ID INTEGER PRIMARY KEY, EVENT_FK, FOREIGN KEY(EVENT_FK) REFERENCES EVENTS(ID) ); This table makes it easy to query what events are currently causing equipment failures. Later, when the failing condition on the device clears, it sends a severity 1 event. This means that severity 1 events are never active, since they aren't contributing to a piece of equipment being down. In our previous method, we search for any active events that have the same hostname and condition, and delete them. Then we return False, indicating this is not an active event. Write the method that evaluates the services and pieces of equipment that are affected by the network event. def impact(self, event): """Look up this event has impact on either equipment or services.""" affected_equipment = self.dt.query( """select * from EQUIPMENT where HOST_NAME = ?""", (event.hostname,), rowhandler=DictionaryRowMapper()) affected_services = self.dt.query( """select SERVICE.* from SERVICE join SERVICE_MAPPING SM on (SERVICE.ID = SM.SERVICE_FK) join EQUIPMENT on (SM.EQUIPMENT_FK = EQUIPMENT.ID where EQUIPMENT.HOST_NAME = ?""", (event.hostname,), rowhandler=DictionaryRowMapper()) return (affected_services, affected_equipment) We first query the EQUIPMENT table to see if event.hostname matches anything. Next, we join the SERVICE table to the EQUIPMENT table through a many-to many relationship tracked by the SERVICE_MAPPING table. Any service that is related to the equipment that the event was reported on is captured. Finally, we return a tuple containing both the list of equipment and list of services that are potentially impacted. Spring Python provides a convenient query operation that returns a list of objects mapped to every row of the query. It also provides an out-of-the-box DictionaryRowMapper that converts each row into a Python dictionary, with the keys matching the column names. Add the table definitions to the SQL script for EQUIPMENT, SERVICE, and SERVICE_MAPPING. 
CREATE TABLE EQUIPMENT ( ID INTEGER PRIMARY KEY, HOST_NAME TEXT UNIQUE, STATUS INTEGER ); CREATE TABLE SERVICE ( ID INTEGER PRIMARY KEY, NAME TEXT UNIQUE, STATUS TEXT ); CREATE TABLE SERVICE_MAPPING ( ID INTEGER PRIMARY KEY, SERVICE_FK, EQUIPMENT_FK, FOREIGN KEY(SERVICE_FK) REFERENCES SERVICE(ID), FOREIGN KEY(EQUIPMENT_FK) REFERENCES EQUIPMENT(ID) ); Write the update_service method that stores or clears service-related even and then updates the service's status based on the remaining active events. def update_service(self, service, event): if event.severity == 1: self.dt.update("""delete from SERVICE_EVENTS where EVENT_FK in ( select ID from EVENTS where HOST_NAME = ? and EVENT_CONDITION = ?)""", (event.hostname,event.condition)) else: self.dt.execute("""insert into SERVICE_EVENTS (EVENT_FK, SERVICE_FK) values (?,?)""", (event.id,service["ID"])) try: max = self.dt.query_for_int( """select max(EVENTS.SEVERITY) from SERVICE_EVENTS SE join EVENTS on (EVENTS.ID = SE.EVENT_FK) join SERVICE on (SERVICE.ID = SE.SERVICE_FK) where SERVICE.NAME = ?""", (service["NAME"],)) except DataAccessException, e: max = 1 if max > 1 and service["STATUS"] == "Operational": service["STATUS"] = "Outage" self.dt.update("""update SERVICE set STATUS = ? where ID = ?""", (service["STATUS"], service["ID"])) if max == 1 and service["STATUS"] == "Outage": service["STATUS"] = "Operational" self.dt.update("""update SERVICE set STATUS = ? where ID = ?""", (service["STATUS"], service["ID"])) if event.severity == 1: return {"service":service, "is_active":False} else: return {"service":service, "is_active":True} Service-related events are active events related to a service. A single event can be related to many services. For example, what if we were monitoring a wireless router that provided Internet service to a lot of users, and it reported a critical error? This one event would be mapped as an impact to all the end users. When a new active event is processed, it is stored in SERVICE_EVENTS for each related service. Then, when a clearing event is processed, the previous service event must be deleted from the SERVICE_EVENTS table. Add the table defnition for SERVICE_EVENTS to the SQL script. CREATE TABLE SERVICE_EVENTS ( ID INTEGER PRIMARY KEY, SERVICE_FK, EVENT_FK, FOREIGN KEY(SERVICE_FK) REFERENCES SERVICE(ID), FOREIGN KEY(EVENT_FK) REFERENCES EVENTS(ID) ); It is important to recognize that deleting an entry from SERVICE_EVENTS doesn't mean that we delete the original event from the EVENTS table. Instead, we are merely indicating that the original active event is no longer active and it does not impact the related service. Prepend the entire SQL script with drop statements, making it possible to run the script for several recipes DROP TABLE IF EXISTS SERVICE_MAPPING; DROP TABLE IF EXISTS SERVICE_EVENTS; DROP TABLE IF EXISTS ACTIVE_EVENTS; DROP TABLE IF EXISTS EQUIPMENT; DROP TABLE IF EXISTS SERVICE; DROP TABLE IF EXISTS EVENTS; Append the SQL script used for database setup with inserts to preload some equipment and services. 
INSERT into EQUIPMENT (ID, HOST_NAME, STATUS) values (1, 'pyhost1', 1); INSERT into EQUIPMENT (ID, HOST_NAME, STATUS) values (2, 'pyhost2', 1); INSERT into EQUIPMENT (ID, HOST_NAME, STATUS) values (3, 'pyhost3', 1); INSERT into SERVICE (ID, NAME, STATUS) values (1, 'service-abc', 'Operational'); INSERT into SERVICE (ID, NAME, STATUS) values (2, 'service-xyz', 'Outage'); INSERT into SERVICE_MAPPING (SERVICE_FK, EQUIPMENT_FK) values (1,1); INSERT into SERVICE_MAPPING (SERVICE_FK, EQUIPMENT_FK) values (1,2); INSERT into SERVICE_MAPPING (SERVICE_FK, EQUIPMENT_FK) values (2,1); INSERT into SERVICE_MAPPING (SERVICE_FK, EQUIPMENT_FK) values (2,3); Finally, write the method that updates equipment status based on the current active events. def update_equipment(self, equip, event): try: max = self.dt.query_for_int( """select max(EVENTS.SEVERITY) from ACTIVE_EVENTS AE join EVENTS on (EVENTS.ID = AE.EVENT_FK) where EVENTS.HOST_NAME = ?""", (event.hostname,)) except DataAccessException: max = 1 if max != equip["STATUS"]: equip["STATUS"] = max self.dt.update("""update EQUIPMENT set STATUS = ?""", (equip["STATUS"],)) return equip Here, we need to find the maximum severity from the list of active events for a given host name. If there are no active events, then Spring Python raises a DataAccessException and we translate that to a severity of 1. We check if this is different from the existing device's status. If so, we issue a SQL update. Finally, we return the record for the device, with its status updated appropriately. How it works... This application uses a database-backed mechanism to process incoming network events, and checks them against the inventory of equipment and services to evaluate failures and restorations. Our application doesn't handle specialized devices or unusual types of services. This real-world complexity has been traded in for a relatively simple application, which can be used to write various test recipes. Events typically map to a single piece of equipment and to zero or more services. A service can be thought of as a string of equipment used to provide a type of service to the customer. New failing events are considered active until a clearing event arrives. Active events, when aggregated against a piece of equipment, define its current status. Active events, when aggregated against a service, defines the service's current status.  
NetBeans IDE 7: Building an EJB Application

Packt
01 Jun 2011
10 min read
  NetBeans IDE 7 Cookbook Over 70 highly focused practical recipes to maximize your output with NetBeans         Introduction Enterprise Java Beans (EJB) is a framework of server-side components that encapsulates business logic. These components adhere to strict specifications on how they should behave. This ensures that vendors who wish to implement EJB-compliant code must follow conventions, protocols, and classes ensuring portability. The EJB components are then deployed in EJB containers, also called application servers, which manage persistence, transactions, and security on behalf of the developer. If you wish to learn more about EJBs, visit http://jcp.org/en/jsr/detail?id=318 or https://www.packtpub.com/developer-guide-for-ejb3/book. For our EJB application to run, we will need the application servers. Application servers are responsible for implementing the EJB specifications and creating the perfect environment for our EJBs to run in. Some of the capabilities supported by EJB and enforced by Application Servers are: Remote access Transactions Security Scalability NetBeans 6.9, or higher, supports the new Java EE 6 platform, making it the only IDE so far to bring the full power of EJB 3.1 to a simple IDE interface for easy development. NetBeans makes it easy to develop an EJB application and deploy on different Application Servers without the need to over-configure and mess with different configuration files. It's as easy as a project node right-click. Creating EJB project In this recipe, we will see how to create an EJB project using the wizards provided by NetBeans. Getting ready It is required to have NetBeans with Java EE support installed to continue with this recipe. If this particular NetBeans version is not available in your machine, then you can download it from http://download.netbeans.org. There are two application servers in this installation package, Apache Tomcat or GlassFish, and either one can be chosen, but at least one is necessary. In this recipe, we will use the GlassFish version that comes together with NetBeans 7.0 installation package. How to do it... Lets create a new project by either clicking File and then New Project, or by pressing Ctrl+Shift+N. In the New Project window, in the categories side, choose Java Web and in Projects side, select WebApplication, then click Next. In Name and Location, under Project Name, enter EJBApplication. Tick the Use Dedicated Folder for Storing Libraries option box. Now either type the folder path or select one by clicking on browse. After choosing the folder, we can proceed by clicking Next. In Server and Settings, under Server, choose GlassFish Server 3.1. Tick Enable Contexts and Dependency Injection. Leave the other values with their default values and click Finish. The new project structure is created. How it works... NetBeans creates a complete file structure for our project. It automatically configures the compiler and test libraries and creates the GlassFish deployment descriptor. The deployment descriptor filename specific for the GlassFish web server is glassfish-web.xml.   Adding JPA support The Java Persistence API (JPA) is one of the frameworks that equips Java with object/relational mapping. Within JPA, a query language is provided that supports the developers abstracting the underlying database. 
With the release of JPA 2.0, there are many areas that were improved, such as: Domain Modeling EntityManager Query interfaces JPA query language and others We are not going to study the inner workings of JPA in this recipe. If you wish to know more about JPA, visit http://jcp.org/en/jsr/detail?id=317 or http://download.oracle.com/javaee/5/tutorial/doc/bnbqa.html. NetBeans provides very good support for enabling your application to quickly create entities annotated with JPA. In this recipe, we will see how to configure your application to use JPA. We will continue to expand the previously-created project. Getting ready We will use GlassFish Server in this recipe since it is the only server that supports Java EE 6 at the moment. We also need to have Java DB configured. GlassFish already includes a copy of Java DB in its installation folder. Another source of installed Java DB is the JDK installation directory. It is not necessary to build on top of the previous recipe, but it is imperative to have a database schema. Feel free to create your own entities by following the steps presented in this recipe. How to do it... Right-click on EJBApplication node and select New Entity Classes from Database.... In Database Tables: Under Data Source, select jdbc/sample and let the IDE initialize Java DB. When Available Tables is populated, select MANUFACTURER, click Add, and then click Next. In Entity Classes: leave all the fields with their default values and only in Package, enter entities and click Finish. How it works... NetBeans then imports and creates our Java class from the database schema, in our case the Manufacturer.java file placed under the entities package. Besides that, NetBeans makes it easy to import and start using the entity straightaway. Many of the most common queries, for example find by name, find by zip, and find all, are already built into the class itself. The JPA queries, which are akin to normal SQL queries, are defined in the entity class itself. Listed below are some of the queries defined in the entity class Manufacturer.java: @Entity @Table(name = "MANUFACTURER") @NamedQueries({ @NamedQuery(name = "Manufacturer.findAll", query = "SELECT m FROM Manufacturer m"), @NamedQuery(name = "Manufacturer.findByManufacturerId", query = "SELECT m FROM Manufacturer m WHERE m.manufacturerId = :manufacturerId"), The @Entity annotation defines that this class, Manufacturer.java, is an entity and when followed by the @Table annotation, which has a name parameter, points out the table in the Database where the information is stored. The @NamedQueries annotation is the place where all the NetBeans-generated JPA queries are stored. There can be as many @NamedQueries as the developer feels necessary. One of the NamedQueries we are using in our example is named Manufacturer.findAll, which is a simple select query. When invoked, the query is translated to: SELECT m FROM Manufacturer m On top of that, NetBeans implements the equals, hashCode, and toString methods. Very useful if the entities need to be used straight away with some collections, such as HashMap. Below is the NetBeans-generated code for both hashCode and the toString methods: @Override public int hashCode() { int hash = 0; hash += (manufacturerId != null ? 
manufacturerId.hashCode() : 0); return hash; } @Override public boolean equals(Object object) { // TODO: Warning - this method won't work in the case the id fields are not set if (!(object instanceof Manufacturer)) { return false; } Manufacturer other = (Manufacturer) object; if ((this.manufacturerId == null && other.manufacturerId != null) || (this.manufacturerId != null && !this.manufacturerId. equals(other.manufacturerId))) { return false; } return true; } NetBeans also creates a persistence.xml and provides a Visual Editor, simplifying the management of different Persistence Units (in case our project needs to use more than one); thereby making it possible to manage the persistence.xml without even touching the XML code. A persistence unit, or persistence.xml, is the configuration file in JPA which is placed under the configuration files, when the NetBeans view is in Projects mode. This file defines the data source and what name the persistence unit has in our example: <persistence-unit name="EJBApplicationPU" transaction-type="JTA"> <jta-data-source>jdbc/sample</jta-data-source> <properties/> </persistence-unit> The persistence.xml is placed in the configuration folder, when using the Projects view. In our example, our persistence unit name is EJBApplicationPU, using the jdbc/sample as the data source. To add more PUs, click on the Add button that is placed on the uppermost right corner of the Persistence Visual Editor. This is an example of adding another PU to our project:   Creating Stateless Session Bean A Session Bean encapsulates business logic in methods, which in turn are executed by a client. This way, the business logic is separated from the client. Stateless Session Beans do not maintain state. This means that when a client invokes a method in a Stateless bean, the bean is ready to be reused by another client. The information stored in the bean is generally discarded when the client stops accessing the bean. This type of bean is mainly used for persistence purposes, since persistence does not require a conversation with the client. It is not in the scope of this recipe to learn how Stateless Beans work in detail. If you wish to learn more, please visit: http://jcp.org/en/jsr/detail?id=318 or https://www.packtpub.com/developer-guide-for-ejb3/book In this recipe, we will see how to use NetBeans to create a Stateless Session Bean that retrieves information from the database, passes through a servlet and prints this information on a page that is created on-the-fly by our servlet. Getting ready It is required to have NetBeans with Java EE support installed to continue with this recipe. If this particular NetBeans version is not available in your machine, please visit http://download.netbeans.org. We will use the GlassFish Server in this recipe since it is the only Server that supports Java EE 6 at the moment. We also need to have Java DB configured. GlassFish already includes a copy of Java DB in its installation folder. It is possible to follow the steps on this recipe without the previous code, but for better understanding we will continue to build on the top of the previous recipes source code. How to do it... Right-click on EJBApplication node and select New and Session Bean.... For Name and Location: Name the EJB as ManufacturerEJB. Under Package, enter beans. Leave Session Type as Stateless. Leave Create Interface with nothing marked and click Finish. 
Here are the steps for us to create business methods: Open ManufacturerEJB and inside the class body, enter: @PersistenceUnit EntityManagerFactory emf; public List findAll(){ return emf.createEntityManager().createNamedQuery("Manufacturer. findAll").getResultList(); } Press Ctrl+Shift+I to resolve the following imports: java.util.List; javax.persistence.EntityManagerFactory; javax.persistence.PersistenceUnit; Creating the Servlet: Right-click on the EJBApplication node and select New and Servlet.... For Name and Location: Name the servlet as ManufacturerServlet. Under package, enter servlets. Leave all the other fields with their default values and click Next. For Configure Servlet Deployment: Leave all the default values and click Finish. With the ManufacturerServlet open: After the class declaration and before the processRequest method, add: @EJB ManufacturerEJB manufacturerEJB; Then inside the processRequest method, first line after the try statement, add: List<Manufacturer> l = manufacturerEJB.findAll(); Remove the /* TODO output your page here and also */. And finally replace: out.println("<h1>Servlet ManufacturerServlet at " + request. getContextPath () + "</h1>"); With: for(int i = 0; i < 10; i++ ) out.println("<b>City</b>"+ l.get(i).getCity() +", <b>State</b>"+ l.get(i).getState() +"<br>" ); Resolve all the import errors and save the file. How it works... To execute the code produced in this recipe, right-click on the EJBApplication node and select Run. When the browser launches append to the end of the URL/ManufacturerServlet, hit Enter. Our application will return City and State names. One of the coolest features in Java EE 6 is that usage of web.xml can be avoided if annotating the servlet. The following code does exactly that: @WebServlet(name="ManufacturerServlet", urlPatterns={"/ ManufacturerServlet"}) Since we are working on Java EE 6, our Stateless bean does not need the daunting work of creating interfaces, the @Stateless annotation takes care of that, making it easier to develop EJBs. We then add the persistence unit, represented by the EntityManagerFactory and inserted by the @PersistenceUnit annotation. Finally we have our business method that is used from the servlet. The findAll method uses one of the named queries from our entity to fetch information from the database.  
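The same pattern extends to the parameterized named queries that NetBeans generated on the entity. As a sketch of how that could look (this extra method is our own addition, not something the wizard produces, and it assumes the generated key attribute is an Integer), a lookup through the Manufacturer.findByManufacturerId query could be added to ManufacturerEJB alongside findAll:

package beans;

import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;

@Stateless
public class ManufacturerEJB {

    @PersistenceUnit
    EntityManagerFactory emf;

    // Business method from the recipe: runs the Manufacturer.findAll named query.
    public List findAll() {
        return emf.createEntityManager()
                  .createNamedQuery("Manufacturer.findAll")
                  .getResultList();
    }

    // Hypothetical additional business method using the generated
    // parameterized named query; ":manufacturerId" is bound via setParameter.
    public List findByManufacturerId(Integer manufacturerId) {
        return emf.createEntityManager()
                  .createNamedQuery("Manufacturer.findByManufacturerId")
                  .setParameter("manufacturerId", manufacturerId)
                  .getResultList();
    }
}

A servlet or another bean would call it exactly as the recipe calls findAll, for example manufacturerEJB.findByManufacturerId(25).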
Getting Started with GnuCash

Packt
30 May 2011
8 min read
Gnucash 2.4 Small Business Accounting: Beginner's Guide
Manage your accounts with this desktop financial manager application

How do I pronounce GnuCash? Some people use the proper "Guh-noo-cash" and others prefer the easier "NewCash". Go by whatever works for you.

Installing GnuCash on Windows

Before you can use GnuCash, you have to install it. We will walk you through the steps needed to get it installed successfully on your Windows PC, whether you have Windows 7, Vista, or XP.

Time for action – installing GnuCash on Windows

Let us go through the steps for downloading and installing GnuCash:

GnuCash is open source software developed by volunteers, often for their own use, and shared with the community. It can be downloaded for free. Download the latest stable release of the installer for Microsoft Windows XP/Vista/7 from the www.gnucash.org website. The file should have a name like gnucash-2.4.1-setup.exe. The size of the file should be about 90MB. Save the file to a convenient location on your PC, such as the Temp folder in your C drive. The GnuCash website will also have other development versions of the software. These are unstable and are for testing purposes only. These are not suitable for business use. Make sure you download the stable release.

Launch the GnuCash setup program by double-clicking this file in Windows Explorer. Windows security might pop up a message like The publisher could not be verified. Are you sure you want to run this software? or Do you want to allow the following program from an unknown publisher to make changes to this computer?. Click on Run or Yes to continue.

The language selection dialog will appear with English already selected. Click on OK to continue. The Welcome screen of the GnuCash setup wizard will appear. Close any other application that may be running and click on Next. The License Agreement will appear. Select I accept the agreement and click on Next.

The location dialog will show that GnuCash will be installed in C:\Program Files\gnucash. It will also tell you how much free space is required on your hard disk for installing the program (about 350 MB). Make sure you have the required free space and click on Next. On Windows 7, the default location will be C:\Program Files (x86)\gnucash.

The next screen will show that a Full Installation will be done. Click on Next to continue. The next screen will show that a GnuCash folder will be created for the menu items. Click on Next to continue. The next screen will show that a desktop icon and a start menu link will be created. Click on Next to continue. The next screen is simply a recap of all the selections made by you so far. Click on Install to start the installation. This may take several minutes, giving you time for a coffee break.

When the installation is completed successfully, you should see a window with the title Information. Click on Next to continue. Next, the Completing the GnuCash Setup Wizard window will appear. The Run GnuCash now box will be checked. Click on Finish to complete the installation. The GnuCash Tip of the Day will pop up. You can close this. You should see the Welcome to GnuCash window with Create a new set of accounts checked. We are going to do that soon. But for now, click on Cancel. Say No to the Display Welcome Dialog Again? question. You should see the Unsaved book – GnuCash window:

What just happened?

Congratulations! You have just installed GnuCash successfully and you are ready to start learning, hands-on, how to use it.
Other operating systems

In addition to Windows, GnuCash runs on Mac OS X (on the newer Intel as well as the older Power PC) and several flavors of Linux. If you have one of those operating systems, you can download the install package and get installation instructions for those operating systems from the GnuCash.org website.

Other download locations

In addition to the GnuCash.org website, you can also download GnuCash from popular open source repositories such as SourceForge. Wherever you download from, be careful that you are downloading from a genuine site and that the download is free of viruses and malware.

But first, a tip to make your life easier with auto-save

Before we start the main show, here is a quick tip to make your life easier. GnuCash has a friendly feature to auto-save changes every few minutes. Some people find this very useful while entering transactions. However, while going through the tutorial, you don't want this auto-save to kick in. Why? You want to have some breathing time to recover from any errors, correct any mistakes, and then save at your convenience. It is even possible, heaven forbid, that you might want to abandon the changes instead of trying to rectify them; to do this, you might want to exit GnuCash without saving the changes. So, let us politely tell GnuCash, "STOP HELPING ME"!

1. Launch the GnuCash Preferences dialog from Edit | Preferences.
2. Select the General tab.
3. Set the Auto-save time interval to 0 minutes. By setting this to 0, the auto-save feature is turned off.
4. Also, uncheck the Show auto-save confirmation question, if it is checked.

As we said, users have found that this ability to auto-save is a big life saver. So, don't forget to turn this back on when you are done with the tutorials and start keeping your business books.

Taking the drudgery out of setting up accounts

Even the smallest of businesses may need as many as a hundred accounts. If your business is somewhat larger, you may need to create a lot more than a hundred accounts. Am I going to make you create that many accounts one by one? No, I am going to show you how you can create the entire set of accounts needed for a typical small business in under a dozen clicks.

Time for action – creating the default business accounts

We are going to create the account hierarchy for our sample business, Mid Atlantic Computer Services (MACS). This will give you the hands-on feel for creating accounts for your business, when you are ready to do that.

1. Select from the menu File | New | New File. This will launch the New Account Hierarchy Setup assistant.

   GnuCash uses the term assistant to describe what you may have seen in other Windows applications called a wizard. Assistants help you perform tasks that are complex or not frequently performed. Assistants present you with a sequence of dialog boxes that lead you through a series of well-defined steps.

2. Click on Forward to go to the Choose Currency screen. You will find that US Dollar is selected by default. You can leave it as it is and click Forward to go to the Choose accounts to create screen.

3. You will find that Common Accounts is checked by default. This option is for users who want to set up personal accounts. We want to set up a business account, so uncheck this, check Business Accounts, and then click on Forward.

4. In the Setup selected accounts screen, click on the Checking Account line and it will become highlighted.
5. Click under the Opening Balance column in this line; a text box will appear allowing you to enter data. Enter an opening balance of 2000, tab out, and click on Forward.

6. In the Finish Account Setup screen, click on Apply. With this step, the New Account Hierarchy Setup assistant has completed its job, and you should now be back in the GnuCash main window showing the freshly minted set of accounts with the title Unsaved Book - Accounts.

7. The Save As dialog should open. If it doesn't, select File | Save As…, change the Save in folder to your desired folder, put in the filename MACS without any extension, and click on Save As. You have now successfully created the default business account hierarchy for MACS.

Most Windows applications require you to save files with a 3 or 4 letter extension. Microsoft Word, for example, requires a .docx or .doc file extension. However, GnuCash uses the longer .gnucash extension. If you fill in the file name, GnuCash will automatically add the .gnucash extension.

What just happened?

There you are. With a small amount of effort, you have not only created a complete set of accounts that would be needed for a typical small business, but you have also learned how to enter opening balances as well. Now that we have that under our belt, let us discuss the key aspects of setting up accounts.

High Availability: Oracle 11g R1 R2 Real Application Clusters (RAC)

Packt
20 May 2011
12 min read
High availability is a discipline within database technology that provides a solution to protect against data loss and against downtime, which is costly to mission-critical database systems. As such, we will provide details on what constitutes high availability and what does not. By having the proper framework, you will understand how to leverage Oracle RAC and auxiliary technologies, including Oracle Data Guard, to maximize the Return On Investment (ROI) for your data center environment.

High availability concepts

High availability provides data center environments that run mission-critical database applications with the resiliency to withstand failures that may occur due to natural, human, or environmental conditions. For example, if a hurricane wipes out the production data center that hosts a financial application's production database, high availability would provide the much-needed protection to avoid data loss, minimize downtime, and maximize the availability of the firm's resources and database applications. Let's now move to the high availability concepts.

Planned versus unplanned downtime

The distinction needs to be made between planned downtime and unplanned downtime. In most cases, planned downtime is the result of maintenance that is disruptive to system operations and cannot be avoided with current system designs for a data center. An example of planned downtime would be a DBA maintenance activity such as patching an Oracle database, which would require an outage to take the system offline for a period of time. From the database administrator's perspective, planned downtime situations are usually the result of management-initiated events.

On the other hand, unplanned downtime frequently occurs due to a physical event caused by a hardware, software, or environmental failure, or caused by human error. A few examples of unplanned downtime events include hardware server component failures such as CPU, disk, or power outages.

Most data centers will exclude planned downtime from the high availability factor when calculating the current total availability percentage. Even so, both planned and unplanned maintenance windows affect high availability. For instance, database upgrades require a few hours of downtime; another example would be a SAN replacement. Such items make comprehensive four-nines solutions nearly impossible to implement without additional considerations. The fact is that implementing true 100% high availability is nearly impossible without exorbitant costs. Complete high availability for all components within the data center requires an architecture for all systems and databases that eliminates any Single Point of Failure (SPOF) and allows for total online availability for all server hardware, network, operating systems, applications, and database systems.

Service Level Agreements for high availability

When it comes to determining high availability ratios, availability is often expressed as the percentage of uptime in a given year. The following table shows the approximate downtime that is allowed for a specific percentage of high availability, granted that the system is required to operate continuously. Service Level Agreements (SLAs) usually refer to monthly downtime or availability in order to calculate service levels to match monthly financial cycles.
The following table from the International Organization for Standardization (ISO) illustrates the correlation between a given availability percentage and the amount of time a system would be unavailable per year, month, or week; for monthly calculations, a 30-day month is used. As a quick example, 99.9% availability allows roughly 8.8 hours of downtime in a non-leap year (0.001 x 8,760 hours), while 99.99% allows only about 53 minutes. It should be noted that availability and uptime are not the same thing. For instance, a database system may be online but not available, as in the case of application outages such as when a user's SQL script cannot be executed. In most cases, the number of nines is not often used by the database or system professional when measuring high availability for data center environments, because it is difficult to extrapolate such hard numbers without a large test environment. For practical purposes, availability is calculated more as a probability, or as an average downtime per year.

High availability interpretations

When it comes to discussing how availability is measured, there is a debate on the correct method of interpretation for high availability ratios. For instance, an Oracle database server that has been online for 365 days in a given non-leap year might have been eclipsed by an application failure that lasted for nine hours during a peak usage period. As a consequence, the users will see the complete system as unavailable, whereas the Oracle database administrator will claim 100% "uptime." However, given the true definition of availability, the Oracle database will be approximately 99.897% available (8,751 hours of available time out of 8,760 hours in a non-leap year). Furthermore, Oracle database systems experiencing performance problems are often deemed partially or entirely unavailable by users, while in the eyes of the database administrator the system is fine and available. Another situation that presents a challenge in terms of what constitutes availability is the scenario in which a mission-critical application goes offline yet is not viewed as unavailable by the Oracle DBA, because the database instance is still online and thus available. However, the application in question is offline to the end user, and therefore unavailable from the end user's perspective. This illustrates the key point that a true availability measure must be taken from a holistic perspective and not strictly from the database's point of view.

Availability should be measured with comprehensive monitoring tools that are themselves highly available and present the proper instrumentation. In the absence of such instrumentation, systems supporting high-volume transaction processing throughout the day and night, such as credit-card-processing database servers, are often inherently better monitored than systems that experience a periodic lull in demand. Currently, custom scripts can be developed in conjunction with third-party tools to provide a measure of availability. One such tool that we recommend for monitoring database, server, and application availability is Oracle Grid Control, which also includes Oracle Enterprise Manager. Oracle Grid Control provides instrumentation via agents and plugin modules to measure availability and performance on a system-wide enterprise level, thereby greatly aiding the Oracle database professional in measuring, tracking, and reporting to management and users on the status of availability for all mission-critical applications and system components.
However, the current version of Oracle Enterprise Manager will not provide a true picture of availability until 11g Grid Control is released in the future.

Recovery time and high availability

Recovery time is closely related to the concept of high availability. Recovery time varies based on system design and the failure experienced, in that a full recovery may well be impossible if the system design prevents such recovery options. For example, if the data center is not designed correctly, with the required system and database backups and a standby disaster recovery site in place, then a major catastrophe such as a fire or earthquake will almost always result in complete unavailability until a complete Maximum Availability Architecture (MAA) solution is implemented. In this case, only a partial recovery may be possible. This drives home the point that for all major data center operations, you should always have a backup plan with an offsite secondary disaster-recovery data center to protect against losing all critical systems and data.

In terms of database administration for Oracle data centers, the concept of data availability is essential when dealing with recovery time and planning for highly available options. Data availability references the degree to which databases such as Oracle record and report transactions. Data management professionals often focus just on data availability in order to judge what constitutes an acceptable data loss with different types of failure events. While application service interruptions are inconvenient and sometimes permitted, data loss is not to be tolerated. As one Chief Information Officer (CIO) and executive once told us while we were working for a large financial brokerage: you can have the system down to perform maintenance, but never ever lose my data!

The next item related to high availability and recovery standards is that of Service Level Agreements (SLAs) for data center operations. The purpose of the Service Level Agreement is to formalize the availability objectives and requirements for a data center environment, per business requirements, into a standard corporate information technology (IT) policy.

System design for high availability

Ironically, by adding further components to the overall system and database architecture design, you may actually undermine your efforts to achieve true high availability for your Oracle data center environment. The reason is that, by their very nature, complex systems have more potential failure points and are more difficult to implement properly. The most highly available systems for Oracle adhere to a simple design pattern that makes use of a single, high quality, multipurpose physical system with comprehensive internal redundancy running all interdependent functions, paired with a second like system at a separate physical location. An example would be a primary Oracle RAC clustered site with a second Disaster Recovery site at another location with Oracle Data Guard, and perhaps dual Oracle RAC clusters at both sites connected by stretch clusters. The best possible way to implement an active standby site with Oracle would be to use Oracle Streams and Oracle Data Guard. Large commercial banking and insurance institutions would benefit from this model of Oracle data center design to maximize system availability.
Business Continuity and high availability

Business Continuity Planning (BCP) refers to the creation and validation of a rehearsed operations plan for the IT organization that explains how the data center and business unit will recover and restore, partially or completely, interrupted business functions within a predetermined time after a major disaster. In its simplest terms, BCP is the foundation for the IT data center operations team to maintain critical systems in the event of a disaster. Major incidents could include events such as fires, earthquakes, or national acts of terrorism.

BCP may also encompass corporate training efforts to help reduce operational risk factors associated with the lack of information technology (IT) management controls. These BCP processes may also be integrated with IT standards and practices to improve security and corporate risk management practices. An example would be to implement BCP controls as part of Sarbanes-Oxley (SOX) compliance requirements for publicly traded corporations.

The origins of BCP standards lie with the British Standards Institution (BSI), which in 2006 released a new independent standard for business continuity named BS 25999-1. Prior to the introduction of this standard, IT professionals had to rely on the previous BSI information security standard, BS 7799, which provided only limited guidance on business continuity compliance procedures. One of the key benefits of the new standard was to extend business continuity practices to a wider variety of organizations, covering the needs of public sector, government, non-profit, and private corporations.

Disaster Recovery

Disaster Recovery (DR) is the process, policies, and procedures related to preparing for recovery or continuation of technology infrastructure critical to an organization after either a natural or human-caused disaster. Disaster Recovery Planning (DRP) is a subset of larger processes such as Business Continuity and should include planning for resumption of applications, databases, hardware, networking, and other IT infrastructure components. A Business Continuity Plan includes planning for non-IT-related aspects, such as staff member activities during a major disaster as well as site facility operations, and it should reference the Disaster Recovery Plan for IT-related infrastructure recovery and business continuity procedures and guidelines.

Business Continuity and Disaster Recovery guidelines

The following recommendations will provide you with a blueprint for formulating the requirements and implementation of a robust Business Continuity and Disaster Recovery plan:

Identifying the scope and boundaries of your Business Continuity Plan: The first step enables you to define the scope of your new Business Continuity Plan. It provides you with an idea of the limitations and boundaries of the plan, and it also includes important audit and risk analysis reports for corporate assets.

Conducting a Business Impact Analysis session: Business Impact Analysis (BIA) is the assessment of the financial losses to an institution that usually result from destructive events such as the loss or unavailability of mission-critical business services.

Obtaining support for your business continuity plans and goals from the executive management team: You will need to convince senior management to approve your business continuity plan, so that you can flawlessly execute your disaster recovery planning.
Once approval is obtained from the corporate executive team, assign stakeholders as representatives on the project planning committee.

Ensuring that each department understands its specific role: In the possible event of a major disaster, each of your departments must be prepared to take immediate action. In order to successfully recover your mission-critical database systems with minimal loss, each team must understand the BCP and DRP plans, as well as follow them correctly. Furthermore, it is also important to maintain your DRP and BCP plans, and to conduct periodic training of your IT staff members on a regular basis so that they can respond quickly in an emergency. Such "smoke tests" to train and keep your IT staff members up to date on the correct procedures and communications will pay major dividends in the event of an unforeseen disaster.

One useful tool for creating and managing BCP plans is available from the National Institute of Standards and Technology (NIST). The NIST documentation can be used to generate templates that serve as an excellent starting point for your Business Continuity and Disaster Recovery planning. We highly recommend that you download and review the NIST publication for creating and evaluating BCP plans, Contingency Planning Guide for Information Technology Systems, which is available online at http://csrc.nist.gov/publications/nistpubs/800-34/sp800-34.pdf. Additional NIST documents may also provide insight into how best to manage new or current BCP or DRP plans. A complete listing of NIST publications is available online at http://csrc.nist.gov/publications/PubsSPs.html.

SQL Query Basics in SAP Business One

Packt
18 May 2011
7 min read
Mastering SQL Queries for SAP Business One
Utilize the power of SQL queries to bring Business Intelligence to your small to medium-sized business

Who can benefit from using SQL Queries in SAP Business One?

There are many different groups of SAP Business One users who may need this tool. To my knowledge, there is no standard organization chart for small and midsized enterprises; most of them are different, and you may often find one person handling more than one role. Check the following list to see if anything applies to you:

- Do you need to check specific sales results over certain time periods, for certain areas or certain customers?
- Do you want to know who the top vendors from certain locations for certain materials are?
- Do you have a dynamically updated view of your sales force performance in real time?
- Do you often check whether approval procedures exactly match your expectations?
- Have you tried to start building your SQL query but could not get it done properly?
- Have you written SQL queries whose results are not always correct or up to your expectations?

Consultant

If you are an SAP Business One consultant, you have probably mastered SQL query already. However, if that is not the case, this will be a great help in extending your consulting power. It will probably become a mandatory skill in the future that any SAP Business One consultant should be able to use SQL query.

Developer

If you are an SAP Business One add-on developer, these skills will be a good addition to your capabilities. You may find them useful even in other development work such as coding or programming, because very often you need to embed SQL queries in your code to complete a Software Development Kit (SDK) project.

SAP Business One end user

If you are simply a normal SAP Business One end user, you may need this even more, because SQL query usage is best applied by companies that have live SAP Business One data. Only you, as the end user, know better than anyone else what you are looking for to make Business Intelligence a daily routine job. It is very important for you to be able to create a query report so that you can map your requirement to a query in a timely manner.

SQL query and related terms

Before going into the details of SQL query, I would like to briefly introduce some basic database concepts, because SQL is a database language for managing data in Relational Database Management Systems (RDBMS).

RDBMS

An RDBMS is a Database Management System based on the relational model. Relational is the key word here: in an RDBMS, data is stored in the form of tables, and the relationships among the data are also stored in the form of tables.

Table

A table is a key component within a database, and one of the most frequently used concepts. One table, or a group of tables, represents one kind of data. For example, table OSLP within SAP Business One holds all Sales Employee data. Tables are two-dimensional data storage placeholders; you need to be familiar with their usage and their relationships with each other. If you are familiar with Microsoft Excel, a worksheet is a kind of two-dimensional table. The relationships between tables may be more important than the tables themselves, because without relations, nothing could be of any value. One important feature of SAP Business One is that it allows User Defined Tables (UDTs). All UDTs start with "@".

Field

A field is the lowest unit holding data within a table. A table can have many fields. A field is also called a column.
Field and column are interchangeable terms. A table is comprised of records, and all records have the same structure with specific fields. One important concept in SAP Business One is the User Defined Field (UDF). All UDFs start with U_.

SQL

SQL stands for Structured Query Language. It is pronounced as S-Q-L or as the word "Sequel". There are many different revisions and extensions of SQL. The current revision is SQL:2008, and the first major revision was SQL-92. Most SQL extensions are built on top of SQL-92.

T-SQL

Since SAP Business One is built on the Microsoft SQL Server database, SQL here means Transact-SQL, or T-SQL in brief. It is Microsoft's/Sybase's extension of standard SQL.

Subsets of SQL

There are three main subsets of the SQL language:

- Data Control Language (DCL)
- Data Definition Language (DDL)
- Data Manipulation Language (DML)

Each subset of the SQL language has a special purpose. DCL is used to control access to data in a database, such as granting or revoking specified users' rights to perform specified tasks. DDL is used to define data structures, such as creating, altering, or dropping tables. DML is used to retrieve and manipulate data in a table, such as inserting, deleting, and updating data. SELECT, however, is a special statement belonging to this subset, even though it is a read-only command that does not manipulate data at all.

Query

Query is the most common operation in SQL, and it could refer to all three SQL subsets. You have to understand the risks of running any add, delete, or update queries that could potentially alter system tables, even if only User Defined Fields are involved. Only SELECT queries are legitimate against SAP Business One system tables.

Data dictionary

In order to create working SQL queries, you not only need to know how to write them, but you also need a clear view of the relationships between tables and where to find the information required. As you know, SAP Business One is built on Microsoft SQL Server, and a good data dictionary is essential for the database and a great tool for creating SQL queries. Fortunately, there is a very good reference called SAP Business One Database Tables Reference, readily available through the SAP Business One SDK Help Centre. You can find the details in the following section.

SAP Business One—Database tables reference

The database tables reference file named REFDB.CHM is the one we are looking for. The SDK is usually installed on the same server as the SAP Business One database server. Normally, the file path is X:\Program Files\SAP\SAP Business One SDK\Help, where "X" is the drive on which your SAP Business One SDK is installed. In this help file, you will find the same categories as the SAP Business One menu, with all 11 modules. The tables related to each module are listed one by one, and there are tree structures in the help file where header tables have row tables. Each table provides a list of all the fields in the table along with their description, type, size, related tables, default value, and constraints.

Naming convention of tables for SAP Business One

To help you understand the previously mentioned data dictionary quickly, we will go through the naming conventions for tables in SAP Business One.

Three letter words

Most tables in SAP Business One have four-letter names. The only exceptions are number-ending tables where the number is greater than nine; those tables have five-letter names.
To understand table names easily, SAP Business One uses three-letter abbreviations. Some of the commonly used abbreviations are listed as follows:

- ADM: Administration
- ATC: Attachments
- CPR: Contact Persons
- CRD: Business Partners
- DLN: Delivery Notes
- HEM: Employees
- INV: Sales Invoices
- ITM: Items
- ITT: Product Trees (Bill of Materials)
- OPR: Sales Opportunities
- PCH: Purchase Invoices
- PDN: Goods Receipt PO
- POR: Purchase Orders
- QUT: Sales Quotations
- RDR: Sales Orders
- RIN: Sales Credit Notes
- RPC: Purchase Credit Notes
- SLP: Sales Employees
- USR: Users
- WOR: Production Orders
- WTR: Stock Transfers
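Putting these conventions to work, here is a small, read-only query sketch. It assumes the standard OSLP (Sales Employees) and ORDR (Sales Orders header) tables and a few of their commonly used columns (SlpCode, SlpName, DocEntry, and DocTotal); verify any table or column name in the REFDB.CHM tables reference described above before relying on it in your own reports. The T0/T1 aliases follow the style used by the SAP Business One query generator.

-- List each sales employee together with the number and total value of
-- the sales orders assigned to them. This is a plain SELECT, which is
-- the only kind of query you should run against system tables.
SELECT T0.SlpCode,
       T0.SlpName,
       COUNT(T1.DocEntry) AS OrderCount,
       SUM(T1.DocTotal)   AS OrderValue
FROM OSLP T0
LEFT JOIN ORDR T1 ON T1.SlpCode = T0.SlpCode
GROUP BY T0.SlpCode, T0.SlpName
ORDER BY OrderValue DESC

-- User Defined Tables start with "@" and need square brackets in T-SQL.
-- For example, a hypothetical UDT named @COMMISSIONS would be queried as:
-- SELECT * FROM [@COMMISSIONS]

Because it is a plain SELECT, a query like this can be pasted into the Query Generator or Query Manager inside SAP Business One, or run directly against the company database in SQL Server Management Studio.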

Sage: Tips and Tricks

Packt
17 May 2011
6 min read
Sage Beginner's Guide
Unlock the full potential of Sage for simplifying and automating mathematical computing

Calling the reset() function
Tip: If you start getting strange results from your calculations, you may have accidentally re-defined a built-in function or constant. Try calling the reset() function and running the calculation again. Remember that reset will delete any variables or functions that you may have defined, so your calculation will have to start over from the beginning.

The value of variable i
Tip: Although the variable i is often used as a loop counter, the default value of i in Sage is the square root of negative one. Remember that you can use the command restore('i') to restore i to its default value.

Calling Maxima directly
Tip: Sage uses Maxima, an open-source computer algebra system, to handle many symbolic calculations. You can interact directly with Maxima from a Sage worksheet or the interactive shell by using the maxima object. For example, the following command will factor an expression using Maxima: F = maxima.factor('x^5 - y^5')

The factor function
Tip: The factor function in Sage is used to factor both polynomials and integers. This behaviour is different from Mathematica, where Factor[] is used to factor polynomials and FactorInteger[] is used to factorize integers.

Logarithms in Sage
Tip: The log function in Sage assumes the base of the logarithm is e. If you want to use a different base (such as 10), use the optional keyword argument base to specify the base. For example: log(x, base=10)

Specifying colors in Sage
Tip: There are several ways to specify a color in Sage. For basic colors, you can use a string containing the name of the color, such as red or blue. You can also use a tuple of three floating-point values between 0 and 1.0. The first value is the amount of red, the second is the amount of green, and the third is the amount of blue. For example, the tuple (0.5, 0.0, 0.5) represents a medium purple color.

Organizing code blocks
Tip: If you find a block of code occurring more than once in your program, stop and move that block of code to a function. Duplicate blocks of code will make your programs harder to read and more prone to bugs.

The for statement
Tip: Don't forget to put a colon at the end of the for statement! Remember to consistently indent every statement in the loop body.

Manipulating the data in an object
Tip: As you start using objects, you may be frustrated by the lack of direct access to the data. You may find yourself tempted to avoid using the methods defined by the object, and directly manipulate the data in the object. This defeats the purpose of using objects! If the methods seem to be hindering your use of the object, you probably aren't using them right. Take another look at the documentation and examples, and re-think your approach.

Items of different types in a list
Tip: The items in a list usually have the same type. Technically, it is possible to mix types in a list, but this is generally not a good idea for keeping your code organized and readable. If the need arises to use items of different types, it may be better to use a dictionary.

Ordered dictionaries
Tip: Python 2.7 (and Python 3.1 and above) contains a new class called OrderedDict, which works just like an ordinary dictionary except that it remembers the order in which items were inserted.
This class is not available in Sage 4.6.1, because Sage is still using Python 2.6, but it should be available soon.

Runtime errors
Tip: if statements are not ideal for catching runtime errors. Exceptions are a much more elegant way to deal with runtime errors.

Using exceptions correctly
Tip: The whole idea of using exceptions is to make it easier to identify and handle specific runtime errors in your programs. You defeat the purpose of using exceptions if you place too many lines of code in a try block, because then it's hard to tell which statement raised the exception. It's also a bad idea to have a bare except: statement that doesn't specify the exception type that is being caught. This syntax will catch any type of exception, including SystemExit and KeyboardInterrupt exceptions, making it hard to terminate a misbehaving program. It's also considered bad practice to catch an exception without properly handling it, as this practice can mask errors.

reload a module after making changes
Tip: Let's say you created a module called tank.py and used import tank to make its names available in a Sage script, or on the Sage command line. During testing, you found and fixed a bug, and saved the module file. However, Sage won't recognize that you changed anything unless you use the command reload(tank) to force it to reload the module. When working with multiple modules in a package, you may need to import a module on the command line (or in a worksheet cell) before reloading it.

SAGE_BROWSER settings
Tip: Sage can be used with LaTeX to typeset complex mathematical formulae and save the results as PDF or DVI files. If you have set the SAGE_BROWSER environment variable to force Sage to use a particular web browser, you might have trouble viewing PDF or DVI files in an external viewer. If this occurs, unset SAGE_BROWSER, and change the default web browser for your operating system so that Sage will use the correct browser.

Optimizing innermost loops
Tip: Many numerical algorithms consist of nested loops. The statements in the innermost loop are executed more often than statements in the outer loops, so when a calculation needs to run fast, you will get the most "bang for your buck" by focusing your optimization efforts on the code in the innermost loop.

Summary
In this article we took a look at some tips and tricks for working with Sage and using Python more effectively.

Further resources on this subject:
- Sage: 3D Data Plotting [Article]
- Plotting Data with Sage [Article]
- Creating Line Graphs in R [Article]
- What Can You Do with Sage Math? [Article]
- Python Multimedia: Enhancing Images [Article]
- Python Multimedia: Fun with Animations using Pyglet [Article]


Foreword by Microsoft Dynamics Sure Step Practitioners

Packt
17 May 2011
1 min read
Microsoft Dynamics Sure Step 2010
The smart guide to the successful delivery of Microsoft Dynamics Business Solutions

"Investing in a business application—be it managing one's customers, tracking inventory, coordinating global resources, or just being able to get real-time visibility to cash flow—has never been so important. Gone are the days when companies invested in business applications, such as CRM and ERP, to simply streamline their supply chain or manage their sales pipeline. And gone are the days when these business applications were selected, implemented, and deployed by the IT organizations alone. Companies, and individuals within them, are relying on these business solutions to provide them a competitive advantage—an advantage that includes not only using the facts and data to generate information, but also to transform it to the knowledge that can be applied to gain a deeper understanding of the environment and provide a reliable business operating system for enabled intuition. This intuition of where to invest, how to plan, and when to execute in a well-planned, analysis-rich, and coordinated manner is what provides a competitive advantage to today's organizations.

The expectations of business transformation that business solutions can provide through product or service innovation, customer delight, and operational efficiency are making it even more critical to "get it right" and "provide the business backbone". Sales, marketing, operations, and services are joining the finance and IT organizations to enable this collaborative change. We need to ask ourselves what we can do to not only provide this competitive advantage to our customers, but also to provide a solution to our customers, for them to be able to manage their own customers and businesses with better decision making.

When Microsoft decided to invest in a methodology for Microsoft Dynamics solutions, there was one goal in mind—provide our customers with a Microsoft Dynamics purchase, implementation, and an ongoing experience that is unparalleled in the business solutions industry. We determined that we needed a Sure Step way to achieve this customer experience—an experience that is predicated on learning from successful implementations, and equally from the ones that went sideways due to a lack of integrated due diligence and execution approach. Sure Step provides our partners, our value-added resellers (VARs), our independent software vendors (ISVs), and Microsoft Consulting Services and field teams, with valuable guidance on people, process, and technology aspects that need to come together in a timely, predictable, and disciplined manner to help our prospects and eventual customers "get it right". Microsoft Dynamics Sure Step is the culmination and ongoing journey to make this vision and experience real. Are we indeed investing in the success of our customers, and through that the success of the Microsoft eco-systems of partners and ISVs, keeping these principles in mind?

I have always believed (and known from first-hand experience!) that getting into college is only the first part of the arduous life-changing experience. Getting through college with the right skills, social temperament, informed career choices, and maybe, having fun through the experience, is often the most critical success factor for sustainable lifestyle.
Investing in a business application such as Microsoft Dynamics CRM or one of the Microsoft Dynamics ERP products is not dissimilar. Making that right license purchase of software or signing up for the subscription of one of our online solutions is the key; making sure that the software indeed helps guide our customers to ensure their business success and meet their business goals is more critical. Understanding whether the solution is being analyzed, designed, developed, deployed, and eventually adopted and operated in context of the specific industry, with the right level of individual empowerment, in a relevant yet scalable manner to grow with the company, and eventually feel enamored and positively transformed by the experience, is what ensures success. Are we thinking about the customer investment and relationship we develop as transactional events, or as a strategic relationship we wish to develop and watch our customers graduate successfully from the implementation of the solution to reaping the rewards of their due diligence and implementation?

For our partners, Microsoft Services, and IT organizations of our customers, understanding the fundamental principles of any methodology, applying that framework to one's business, and driving adoption of a familiar albeit new way of managing customer expectations requires de-mystifying the method behind the perceived madness! It also becomes critical for each of you to understand how you can use the power and persuasion of Sure Step to not only adapt it to the needs of your organization, but also for the specifics of the customer engagement that you are managing, and as a result help provide you a competitive advantage against the other business applications that may provide the capabilities but may not provide the "customer-focused" approach to lifecycle management. Are you willing to invest time and effort in putting more discipline and accountability into the commitment that you are making for your customers' successes?

Chandru Shankar and Vincent Bellefroid have been loyal thought-leaders, advocates, and evangelists of Microsoft Dynamics Sure Step from the day we embarked on this journey of on-time, on-spec, on-budget Microsoft Dynamics engagements. Chandru Shankar has tapped into his extensive experience working in the partner channel implementing business solutions, and through the architecture of Microsoft Dynamics Sure Step, the deep insights, best-practice values, and the easy-to-comprehend guidance on why Microsoft Dynamics Sure Step recommends what to be done by whom, when, and how. He delves into the details and helps understand the value proposition of Sure Step not only from a sales or implementation perspective, but also ensuring that our customers are getting the most out of their investment now, and forever. The "brain behind the brawn" makes it an enjoyable journey (yes, for a methodology read!) through self discovery and relevant research that will hit close to home for many of you.

Vincent Bellefroid has extensive experience dealing with the accolades and brickbats associated with going fearlessly where only the best and bravest readiness, adoption, and training experts can venture. He demystifies how you can embark on a journey of Sure Step adoption, and eventual excellence, within your organizations, by applying some time-tested techniques including Project and Change Management, real-life sales and deployment scenarios, and a roadmap of your success through structured roadmaps.
It is hard for me to think of a more qualified team to land the message, value, and approach of Microsoft Dynamics Sure Step for our business solutions-focused, business-savvy audiences. Business-ready organizations are looking to unleash the power of their Microsoft Dynamics investments as they look to drive better decisions, based on operationally efficient business solutions. These organizations have managed their businesses to date. Can they now measure and improve? Do they have the solutions, people, and processes adopted, deployed, and executed in a manner that helps them drive the shift towards integrated end-to-end business management? This book will provide the understanding and approach you need to measure your success through the success of your customers and their business solutions."

Aditya Mohan - Director, Product Management, Microsoft Dynamics Sure Step

"One of the most important avenues to a partner's business success—both short and long term—is their ability to manage customer expectations and deliver high quality solutions on time, on budget, and on spec. Sure Step encompasses a number of tools and guidance that enable partners to do just that—helping them drive profitable projects along with customer satisfaction and loyalty at the same time. Partners with a proven methodology have a distinct competitive advantage, by offering customers peace-of-mind. We have been observing an increasing number of prospects asking for Sure Step-capable partners, so we absolutely recommend that existing as well as prospective Microsoft Dynamics partners adopt Sure Step. As an added benefit, partners will, instead of spending valuable resources developing and maintaining their own methodology, take full advantage of Microsoft's ongoing investments to make Sure Step even more comprehensive and robust. Partners who want to add their own flavor to Sure Step have the opportunity to do exactly that, by treating Sure Step as a methodology platform and developing "the last mile" themselves, much like ISVs build differentiating solutions on top of our ERP and CRM applications. No matter how a partner plans to leverage Sure Step, this book should help not only explain what Sure Step is about, but also how to get it implemented and adopted within the partner's organization."

Anders Spatzek - Director, Microsoft Dynamics Services & Partner Readiness

"Global organizations are typically geographically dispersed, and possess cross-functional teams with varying skill sets in different regions. Business solutions delivery for such organizations requires the ability to manage requirements and schedules, dictated by multiple forces. Also, influencers and power brokers can easily create scope creep and other issues to derail these important initiatives. A consistent methodology and taxonomy is an absolute must for dealing with the pulls and demands across these organizations, to ensure that the project stays on course. Global delivery typically necessitates the involvement of multiple delivery teams, from the customer, to Microsoft, to partner organizations. Regardless of who owns the delivery of these engagements, it is of paramount importance that all the delivery resources are performing to the "same sheet of music". This is also where it is essential to have a common and consistent framework of delivery. For our practice, Microsoft Dynamics Sure Step is the tool to ensure success not only for our global practice, but more importantly for our customers and partners.
We require that our consulting organization is adept with the methodology, advocating certification on the methodology, and also selecting partners who can work well within these parameters. This book will be an additional asset to help our delivery resources understand the core principles behind the methodology."

Kundan Prakash - Director Business Solutions, Microsoft Services Global Delivery

"Providing Microsoft's entrepreneurial partners and customers with industry best practices is vital for ensuring successful business growth. Microsoft Dynamics Sure Step is one of those tools that save time on implementations with the added benefit of bringing together the communication between a sales team and a consulting practice! Stocked with a multitude of templates aligned to a phased implementation process, you can find the right tools to use at each stage of a customer engagement. In delivering the best knowledge to a global group of partners, Microsoft seeks out top business partners to provide insight and create new content that aligns to Microsoft product releases and industry direction. The result is a tool that brings over 800 pages of project management based guidance along with more than 700 templates, samples, and links to Microsoft resources. As Sure Step can fit to any size of project, product line, a number of industry solutions, as well as both pre- and post-implementation activities, a new Dynamics team will benefit from guidance that will get them started down the right path to adopting Sure Step and applying it to their customer's lifecycles. This book is sure to find its way to the front of many consultants' bookshelves as the go-to reference for optimizing their use of Microsoft Dynamics Sure Step."

Lori Thalmann Pytlik - Sure Step R&D Manager

"Successful ERP and CRM implementations are dependent as much on the product itself, as they are on the people and processes used to implement them. Accordingly, ERP and CRM sales processes are successful when, besides proving ease-of-use and showing relevant product feature sets, they help build confidence in the minds of the customers that a well-defined path exists to get their vision and objectives materialized. Simply put, Microsoft Dynamics Sure Step is the tool that provides the confidence in the pre-sales cycle and assurance during the delivery, which makes a difference. For our Microsoft Dynamics practice in Microsoft Consulting Services (MCS), we require all our consultants and project managers to be fully proficient and certified in Microsoft Dynamics Sure Step methodology. This helps us in maintaining the high rate of customer satisfaction that we have in this business, as well as providing for an agile and responsive workforce that speaks the same language regardless of the project they are on, or at what point in the lifecycle of a project they were introduced. This book does a great job in not only detailing out what Sure Step is, but how to best use it in various pre-sales and delivery situations to provide the confidence, consistency, and predictability in execution, so that it becomes one of the core differentiators."
Muhammad Alam - Dynamics US CTO, Microsoft Consulting Services

Further resources on this subject:
- Installing Microsoft Dynamics NAV [Article]
- Planning: Microsoft Dynamics GP System [Article]
- Microsoft Dynamics GP: Data Management [Article]
- Securing Dynamics NAV Applications [Article]
- Installing the Dynamics AX Base Server Components for Microsoft [Article]
- Fine-tuning the SQL Server database for Dynamics NAV [Article]