How-To Tutorials - Application Development

Creating a Web Application on JBoss AS 5

Packt
31 Dec 2009
7 min read
Ever wondered what the first message sent through the Internet was? At 22:30 hours on October 29, 1969, a message was transmitted using ARPANET (the predecessor of the global Internet) on a host-to-host connection. It was meant to transmit "login". However, it transmitted just "lo" and crashed.

Developing the web layout

The basic component of any Java web application is the servlet. Born in the mid-90s, servlets quickly gained success against their competitors, CGI scripts, because of some innovative features, especially the ability to execute requests concurrently, without the overhead of creating a new process for each request. However, a few things were missing. For example, the servlet API did not provide any API specifically for creating the client GUI. This resulted in multiple ways of creating the presentation tier, generally with tag libraries that differed from project to project and from developer to developer.

The second thing that was missing in the servlet specification was a clear distinction between the presentation tier and the backend. A plethora of web frameworks tried to fill this gap; in particular, the Struts framework effectively realized a clean separation of the model (application logic that interacts with a database) from the view (HTML pages presented to the client) and the controller (instance that passes information between view and model). However, the limitation of these frameworks was that, even though they realized a complete modular abstraction, they still exposed the HttpServletRequest and HttpServletSession objects to their actions. Their actions, in turn, needed to accept interface contracts such as ActionForm, ActionMapping, and so on.

JavaServer Faces (JSF), which emerged on the stage a few years later, pursued a different approach. Unlike request-driven Model-View-Controller (MVC) web frameworks, JSF chose a component-based approach that ties the user interface component to a well-defined request processing lifecycle. This greatly simplifies the development of web applications. The JSF specification allows presentation components to be POJOs. This creates a cleaner separation from the servlet layer and makes testing easier by not requiring the POJOs to be dependent on the servlet classes. In the following sections, we will describe how to create a web layout for our application store using the JSF technology. For an exhaustive explanation of the JSF framework, we suggest you visit the JSF homepage at http://java.sun.com/javaee/javaserverfaces/.

Installing JSF on JBoss AS

JBoss AS already ships with the JSF libraries, so the good news is that you don't need to download or install them in the application server. There are different implementations of the JSF libraries. Earlier JBoss releases adopted the Apache MyFaces library. JBoss AS 4.2 and 5.x ship with the Common Development and Distribution License (CDDL) implementation (now called "Project Mojarra") of the JSF 1.2 specification that is available from the java.net open source community. Switching to another JSF implementation is nevertheless possible. All you have to do is package your JSF libraries with your web application and configure your web.xml to ignore the JBoss built-in implementation:

<context-param>
  <param-name>org.jboss.jbossfaces.WAR_BUNDLES_JSF_IMPL</param-name>
  <param-value>true</param-value>
</context-param>

We will start by creating a new JSF project. From the File menu, select New | Other | JBoss Tools Web | JSF | JSF Web project.
The JSF project wizard will display, requesting the Project Name, the JSF Environment, and the default starting Template. Choose AppStoreWeb as the project name, and check that the JSF Environment used is JSF 1.2. You can leave all other options at their defaults and click Finish. Eclipse will now suggest that you switch to the Web Projects view that logically assembles all JSF components. (It seems that the current release of the plugin doesn't apply your choice, so you have to manually click on the Web Projects tab.)

The key configuration file of a JSF application is faces-config.xml, contained in the Configuration folder. Here you declare all navigation rules of the application and the JSF managed beans. Managed beans are simple POJOs that provide the logic for initializing and controlling JSF components, and for managing data across page requests, user sessions, or the application as a whole. Adding JSF functionality also requires adding some information to your web.xml file so that all requests ending with a certain suffix are intercepted by the Faces Servlet. Let's have a look at the web.xml configuration file:

<?xml version="1.0"?>
<web-app version="2.5"
         xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
  <display-name>AppStoreWeb</display-name>
  <context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>server</param-value>
  </context-param>
  <context-param>                                                   <!-- [1] -->
    <param-name>com.sun.faces.enableRestoreView11Compatibility</param-name>
    <param-value>true</param-value>
  </context-param>
  <listener>
    <listener-class>com.sun.faces.config.ConfigureListener</listener-class>
  </listener>
  <!-- Faces Servlet -->
  <servlet>
    <servlet-name>Faces Servlet</servlet-name>
    <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <!-- Faces Servlet Mapping -->
  <servlet-mapping>
    <servlet-name>Faces Servlet</servlet-name>
    <url-pattern>*.jsf</url-pattern>
  </servlet-mapping>
  <login-config>
    <auth-method>BASIC</auth-method>
  </login-config>
</web-app>

The context-param pointed out here [1] is not added by default when you create a JSF application. However, it needs to be added, or else you'll stumble into an annoying ViewExpiredException when your session expires (JSF 1.2).

Setting up navigation rules

In the first step, we will define the navigation rules for our AppStore. A minimalist approach would require a homepage that displays the orders, along with two additional pages for inserting new customers and new orders respectively. Let's add the following navigation rules to the faces-config.xml:

<faces-config>
  <navigation-rule>
    <from-view-id>/home.jsp</from-view-id>              <!-- [1] -->
    <navigation-case>
      <from-outcome>newCustomer</from-outcome>          <!-- [2] -->
      <to-view-id>/newCustomer.jsp</to-view-id>
    </navigation-case>
    <navigation-case>
      <from-outcome>newOrder</from-outcome>             <!-- [3] -->
      <to-view-id>/newOrder.jsp</to-view-id>
    </navigation-case>
  </navigation-rule>
  <navigation-rule>
    <from-view-id></from-view-id>                       <!-- [4] -->
    <navigation-case>
      <from-outcome>home</from-outcome>
      <to-view-id>/home.jsp</to-view-id>
    </navigation-case>
  </navigation-rule>
</faces-config>

In a navigation rule, you can have one from-view-id, which is the (optional) starting page, and one or more landing pages that are tagged as to-view-id. The from-outcome determines the navigation flow. Think of this parameter as a Struts forward; that is, instead of embedding the landing page in the JSP/servlet, you simply declare a virtual path in your JSF beans.
Therefore, our starting page will be home.jsp [1], which has two possible links: the newCustomer.jsp form [2] and the newOrder.jsp form [3]. At the bottom, there is a navigation rule that is valid across all pages [4]. Every page requesting the home outcome will be redirected to the homepage of the application. The above JSPs will be created in a minute, so don't worry if the Eclipse validator complains about the missing pages. This configuration can also be examined from the Diagram tab of your faces-config.xml.

The next piece of code that we will add to the configuration is the JSF managed bean declaration. You need to declare each bean here that will be referenced by JSF pages. Add the following code snippet at the top of your faces-config.xml (just before the navigation rules):

<managed-bean>
  <managed-bean-name>manager</managed-bean-name>                                <!-- [1] -->
  <managed-bean-class>com.packpub.web.StoreManagerJSFBean</managed-bean-class>  <!-- [2] -->
  <managed-bean-scope>request</managed-bean-scope>                              <!-- [3] -->
</managed-bean>

The <managed-bean-name> [1] element will be used by your JSF pages to reference your beans. The <managed-bean-class> [2] is obviously the corresponding class. The managed beans can then be stored within the request, session, or application scope, depending on the value of the <managed-bean-scope> element [3].
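To make the wiring concrete, here is a minimal sketch of what a request-scoped managed bean class like this could look like. The property and action names below are illustrative assumptions, not the book's actual StoreManagerJSFBean code; the only requirements are a public no-argument constructor, getter/setter pairs for bound properties, and action methods whose return values match the from-outcome strings declared above.

package com.packpub.web;

// Hypothetical sketch of a JSF 1.2 managed bean: a plain POJO with
// no dependency on the servlet classes.
public class StoreManagerJSFBean {

    // Hypothetical property, bound from a page with #{manager.customerName}
    private String customerName;

    public String getCustomerName() {
        return customerName;
    }

    public void setCustomerName(String customerName) {
        this.customerName = customerName;
    }

    // Action method: the returned outcome is matched against the
    // from-outcome elements in faces-config.xml, so returning "home"
    // navigates back to /home.jsp.
    public String saveCustomer() {
        // ... persist the customer through the business tier ...
        return "home";
    }
}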

Oracle Web RowSet - Part1

Packt
22 Oct 2009
6 min read
The ResultSet interface requires a persistent connection with a database to invoke the insert, update, and delete row operations on the database table data. The RowSet interface extends the ResultSet interface and is a container for tabular data that may operate without being connected to the data source. Thus, the RowSet interface reduces the overhead of a persistent connection with the database. In J2SE 5.0, five new implementations of RowSet were introduced: JdbcRowSet, CachedRowSet, WebRowSet, FilteredRowSet, and JoinRowSet. The WebRowSet interface extends the RowSet interface and is the XML document representation of a RowSet object. A WebRowSet object represents a set of fetched database table rows, which may be modified without being connected to the database.

Support for Oracle Web RowSet is a new feature in the Oracle Database 10g JDBC driver. Oracle Web RowSet precludes the requirement for a persistent connection with the database. A connection is required only for retrieving data from the database with a SELECT query and for updating data in the database after all the required row operations on the retrieved data have been performed. Oracle Web RowSet is used for queries and modifications on the data retrieved from the database. Oracle Web RowSet, as an XML document representation of a RowSet, facilitates the transfer of data.

In the Oracle Database 10g and 11g JDBC drivers, Oracle Web RowSet is implemented in the oracle.jdbc.rowset package. The OracleWebRowSet class represents an Oracle Web RowSet. The data in the Web RowSet may be modified without connecting to the database. The database table may be updated with the OracleWebRowSet class after the modifications to the Web RowSet have been made. A database JDBC connection is required only for retrieving data from the database and for updating the database. An XML document representation of the data in a Web RowSet may be obtained for data exchange.

In this article, the Web RowSet feature in the Oracle 10g database JDBC driver is implemented in JDeveloper 10g. An example Web RowSet will be created from a database. The Web RowSet will be modified and stored in the database table. In this article, we will learn the following:

Creating an Oracle Web RowSet object
Adding a row to Oracle Web RowSet
Modifying the database table with Web RowSet

In the second half of the article, we will cover the following:

Reading a row from Oracle Web RowSet
Updating a row in Oracle Web RowSet
Deleting a row from Oracle Web RowSet
Updating the database table with the modified Oracle Web RowSet

Setting the Environment

We will use an Oracle database to generate an updatable OracleWebRowSet object. Therefore, install Oracle database 10g including the sample schemas. Connect to the database with the OE schema:

SQL> CONNECT OE/<password>

Create an example database table, Catalog, with the following SQL script:

CREATE TABLE OE.Catalog(Journal VARCHAR(25), Publisher VARCHAR(25),
  Edition VARCHAR(25), Title VARCHAR(45), Author VARCHAR(25));
INSERT INTO OE.Catalog VALUES('Oracle Magazine', 'Oracle Publishing',
  'July-August 2005', 'Tuning Undo Tablespace', 'Kimberly Floss');
INSERT INTO OE.Catalog VALUES('Oracle Magazine', 'Oracle Publishing',
  'March-April 2005', 'Starting with Oracle ADF', 'Steve Muench');

Configure JDeveloper 10g for the Web RowSet implementation. Create a project in JDeveloper. Select File | New | General | Application. In the Create Application window, specify an Application Name and click on Next. In the Create Project window, specify a Project Name and click on Next.
A project is added in the Applications Navigator. Next, we will set the project libraries. Select Tools | Project Properties and, in the Project Properties window, select Libraries | Add Library to add a library. Add the Oracle JDBC library to the project libraries. If an Oracle JDBC drivers version prior to the Oracle database 10g (R2) JDBC drivers is used, create a library from the Oracle Web RowSet implementation classes JAR file, C:\JDeveloper10.1.3\jdbc\lib\ocrs12.jar. The ocrs12.jar is required only for JDBC drivers prior to the Oracle database 10g (R2) JDBC drivers. In the Oracle database 10g (R2) JDBC drivers, the Oracle RowSet implementation classes are packaged in ojdbc14.jar. In the Oracle database 11g JDBC drivers, the Oracle RowSet implementation classes are packaged in ojdbc5.jar and ojdbc6.jar.

In the Add Library window, select the User node and click on New. In the Create Library window, specify a Library Name, select the Class Path node, and click on Add Entry. Add an entry for ocrs12.jar. As Web RowSet was introduced in J2SE 5.0, if J2SE 1.4 is being used, we also need to add an entry for the RowSet implementations JAR file, rowset.jar. Download the JDBC RowSet Implementations 1.0.1 zip file, jdbc_rowset_tiger-1_0_1-mrel-ri.zip, from http://java.sun.com/products/jdbc/download.html#rowset1_0_1 and extract the JDBC RowSet zip file to a directory. Click on OK in the Create Library window. Click on OK in the Add Library window. A library for the Web RowSet application is added.

Now configure an OC4J data source. Select Tools | Embedded OC4J Server Preferences. A data source may be configured globally or for the current workspace. If a global data source is created using Global | Data Sources, the data source is configured in the C:\JDeveloper10.1.3\jdev\system\oracle.j2ee.10.1.3.36.73\embedded-oc4j\config\data-sources.xml file. If a data source is configured for the current workspace using Current Workspace | Data Sources, the data source is configured in the application's data-sources.xml file; for example, the data source file for the WebRowSetApp application is WebRowSetApp-data-sources.xml. In the Embedded OC4J Server Preferences window, configure either a global data source or a data source in the current workspace. A global data source definition is available to all applications deployed in the OC4J server instance. A managed-data-source element is added to the data-sources.xml file:

<managed-data-source name='OracleDataSource'
                     connection-pool-name='Oracle Connection Pool'
                     jndi-name='jdbc/OracleDataSource'/>
<connection-pool name='Oracle Connection Pool'>
  <connection-factory factory-class='oracle.jdbc.pool.OracleDataSource'
                      user='OE' password='pw'
                      url="jdbc:oracle:thin:@localhost:1521:ORCL">
  </connection-factory>
</connection-pool>

Add a JSP, GenerateWebRowSet.jsp, to the WebRowSet project. Select File | New | Web Tier | JSP | JSP and click on OK. Select J2EE 1.3 or J2EE 1.4 in the Web Application window and click on Next. In the JSP File window, specify a File Name and click on Next. Select the default settings in the Error Page Options page and click on Next. Select the default settings in the Tag Libraries window and click on Next. Select the default options in the HTML Options window and click on Next. Click on Finish in the Finish window.
Next, configure the web.xml deployment descriptor to include a reference to the data source resource configured in the data-sources.xml file, as shown in the following listing:

<resource-ref>
  <res-ref-name>jdbc/OracleDataSource</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>
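With the resource reference in place, application code can look up the data source through JNDI and use it to fill a Web RowSet. The following is a minimal sketch under stated assumptions: the helper class, its name, and the no-argument OracleWebRowSet constructor are illustrative, not the article's GenerateWebRowSet.jsp code, and the code would need to run inside the web container so that the java:comp/env JNDI namespace is available.

package webrowsetapp; // hypothetical package

import java.io.FileWriter;
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import oracle.jdbc.rowset.OracleWebRowSet;

// Hypothetical helper illustrating the connect-fill-disconnect flow.
public class CatalogExporter {

    public static void exportCatalog() throws Exception {
        // Look up the container-managed data source declared in web.xml.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/OracleDataSource");

        // Fill the Web RowSet; a connection is needed only for this step,
        // after which the row set operates disconnected.
        OracleWebRowSet webRowSet = new OracleWebRowSet();
        webRowSet.setCommand("SELECT * FROM OE.Catalog");
        Connection conn = ds.getConnection();
        webRowSet.execute(conn);
        conn.close();

        // Export the XML document representation for data exchange.
        webRowSet.writeXml(new FileWriter("catalog.xml"));
    }
}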

Managing the IT Portfolio using Troux Enterprise Architecture

Packt
12 Aug 2010
16 min read
Almost every company today is totally dependent on IT for day-to-day operations. Large companies literally spend billions on IT-related personnel, software, equipment, and facilities. However, do business leaders really know what they get in return for these investments? Upper management knows that a successful business model depends on information technology. Whether the company is focused on delivery of services or development of products, management depends on its IT team to deliver solutions that meet or exceed customer expectations. However, even though companies continue to invest heavily in various technologies, for most companies, knowing the return on investment in technology is difficult or impossible. When upper management asks where the revenues are for the huge investments in software, servers, networks, and databases, few IT professionals are able to answer. There are questions that are almost impossible to answer without guessing, such as:

Which IT projects in the portfolio of projects will actually generate revenue?
What are we getting for spending millions on vendor software?
When will our data center run out of capacity?

This article will explore how IT professionals can be prepared when management asks the difficult questions. By being prepared, IT professionals can turn conversations with management about IT costs into discussions about the value IT provides. Using consolidated information about the majority of the IT portfolio, IT professionals can work with business leaders to select revenue-generating projects, decrease IT expenses, and develop realistic IT plans. The following sections will describe what IT professionals can do to be ready with accurate information in response to the most challenging questions business leaders might ask.

Management repositories

IT has done a fine job of delivering solutions for years. However, pressure to deliver business projects quickly has created a mentality in most IT organizations of "just put it in and we will go back and do the clean-up later." This has led to a layering effect where older "legacy" technology remains in place while new technology is adopted. With this complex mix of legacy solutions and emerging technology, business leaders have a hard time understanding how everything fits together and what value is provided by IT investments. Gone are the days when the Chief Information Officer (CIO) could say "just trust me" when business people asked questions about IT spending. In addition, new requirements for corporate compliance, combined with the expanding use of web-based solutions, make managing technology more difficult than ever. With the advent of Software-as-a-Service (SaaS) or cloud computing, the technical footprint, or ecosystem, of IT has extended beyond the enterprise itself. Virtualization of platforms and service-orientation adds to the mind-numbing mix of technologies available to IT. There are many systems available to help companies manage their technology portfolio. Unfortunately, multiple teams within the business and within IT see the problem of managing the IT portfolio differently. In many companies, there is no centralized effort to gather and store IT portfolio information. Teams with a need for IT asset information tend to purchase or build a repository specific to their area of responsibility.
Some examples of these include:

Business goals repository
Change management database
Configuration management database
Business process management database
Fixed assets database
Metadata repository
Project portfolio management database
Service catalog
Service registry

While each of these repositories provides valuable information about IT portfolios, they are each optimized to meet a specific set of requirements. The following list shows the main types of information stored in each of these repositories, along with a brief statement about its functional purpose:

Business goals repository: goal statements and assignments; documents business goals and who is responsible for them.
Change management database: change request tickets and application owners; captures change requests and who can authorize a change.
Configuration management database: the actual hardware and software in use across the enterprise; supports Information Technology Infrastructure Library (ITIL) processes.
Business process management database: business processes, information flows, and process owners; used to develop applications and document business processes.
Fixed assets database: asset identifiers for hardware and software, asset life, purchase cost, and depreciation amounts; documents the cost and depreciable life of IT assets.
Metadata repository: data about the company's databases and files; documents the names, definitions, data types, and locations of the company's data.
Project portfolio management database: project names, classifications, assignments, business value, and scope; used to manage the IT workload and assess the value of IT projects to the business.
Service catalog: defines the hardware and compatible software available for project use; used to manage hardware and software implementations assigned to the IT department.
Service registry: names and details of reusable software services; used to manage, control, and report on reusable software.

It is easy to see that while each of these repositories serves a specific purpose, none supports an overarching view across the others. For example, one might ask: How many SQL Server databases do we have installed, and what hardware do they run on? To answer this question, IT managers would have to extract data from the metadata repository and combine it with data from the Configuration Management Database (CMDB). The question could be extended: How much will it cost in early expense write-offs if we retire the SQL Server database servers into a new virtual grid of servers? To answer this question, IT managers need to determine not only how many servers host SQL Server, but how old they are, what they cost at purchase time, and how much depreciation is left on them. Now the query must span at least three systems (CMDB, fixed assets, and metadata repository). The accuracy of the answer will also depend on the relative validity of the data in each repository. There could be overlapping data in some, and outright errors in others.
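To see why such questions are painful, consider a hedged sketch of the kind of cross-repository query involved. All table and column names here are hypothetical assumptions for illustration, imagining extracts of the metadata repository, CMDB, and fixed assets database loaded into one reporting database; none of them come from the text or from any Troux product.

-- Hypothetical cross-repository query: SQL Server hosts and their
-- remaining depreciation, spanning three repository extracts.
SELECT s.server_name,
       a.purchase_cost,
       a.remaining_depreciation
FROM   metadata_databases d                      -- metadata repository extract
JOIN   cmdb_servers s  ON s.server_id = d.host_server_id   -- CMDB extract
JOIN   fixed_assets a  ON a.asset_id  = s.asset_id         -- fixed assets extract
WHERE  d.dbms_type = 'SQL Server';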
Changing the conversation

When upper management asks difficult questions, they are usually interested in cost, risk management, or IT agility. Not knowing a great deal about IT, they are curious about why they need to spend millions on technology and what they get for their investments. The conversation ends up being primarily about cost and how to reduce expenses. This is not a good position to be in if you are running a support function like Enterprise Architecture. How can you explain IT investments in a way that management can understand?

If you are not prepared with facts, management has no choice but to assume that costs are out of control and can be reduced, usually by dramatic amounts. As a good corporate citizen, it is your job to help reduce costs. Like everyone in management, getting the most out of the company's assets is your responsibility. However, as we in IT know, it's just as important to be ready for changes in technology and to be on top of technology trends. As technology leaders, it is our job to help the company stay current through investments that may pay off in the future rather than show an immediate return. The following diagram shows various management functions and technologies that are used to manage the business of IT. The dimensions of these tools and processes span from systems that run the business to systems that change the business, and from operational information to strategic information. The technologies that support data about IT assets include:

Business process analytics and management information
Service-oriented architecture governance
Asset-liability management
Information technology systems management
Financial management information
Project portfolio and management information

The key to changing the conversation about IT is having the ability to bring the information of these disciplines into a single view. The single view provides the ability to discuss IT in a strategic way. Gathering data and reporting on the actual metrics of IT, in a way business leaders can understand, supports strategic planning. The strategic planning process, combined with fact-based metrics, establishes credibility with upper management and promotes improved decision making on a daily basis.

Troux Technologies

Solving the IT-business communication problem was difficult until recently. Troux Technologies (www.troux.com) developed an open-architected repository and software solution, called the Troux Transformation Platform, to help IT manage the vast array of technology deployed within the company. Troux customers use the suite of applications and the advanced integration platform within the product architecture to deliver bottom-line results. By locating where IT expenses are redundant or out of step with business strategy, Troux customers experience significant cost savings. When used properly, the platform also supports improved IT efficiency, quicker response to business requirements, and IT risk reduction. In today's globally connected markets, where shocks and innovations happen at an unprecedented rate, antiquated approaches to Strategic IT Planning and Enterprise Architecture have become a major obstruction. The inability of IT to plan effectively has driven business leaders to seek solutions available outside the enterprise. Using SaaS or Application Service Providers (ASPs) to meet urgent business objectives can be an effective means to meet short-term goals. However, to be complete, even these solutions usually require integration with internal systems. IT finds itself dealing with unspecified service-level requirements, developing integration architectures, and cleaning up after poorly planned activities by business leaders who don't understand what capabilities exist within the software running inside the company. A global leader in Strategic IT Planning and Enterprise Architecture software, Troux has created an Enterprise Architecture repository that IT can use to put itself at the center of strategic planning.
Troux has been successful in implementing its repository at a number of companies; a partial list of Troux's customers can be found on its website. There are other enterprise-level repository vendors on the market. However, leading analysts, such as The Gartner Group and Forrester Research, have published recent studies ranking Troux as a leader in the IT strategy planning tools space.

Troux Transformation Platform

Troux's sophisticated integration and collaboration capabilities support multiple business initiatives, such as handling mergers, aligning business and IT plans, and consolidating IT assets. The business-driven platform provides new levels of visibility into the complex web of IT resources, programs, and business strategy, so the business can see instantly where IT spending and programs are redundant or out of step with business strategy. The business suite of applications helps IT to plan and execute faster with data assimilated from various trusted sources within the company. The platform provides the necessary information to relevant stakeholders such as Business Analysts, Enterprise Architects, the Program Management Office, Solutions Architects, and executives within the business and IT. The transformation platform is not only designed to address today's urgent cost-restructuring agendas, but also introduces an ongoing IT management discipline, allowing EA and business users to drive strategic growth initiatives. The integration platform provides visibility and control to:

Uncover and fix business/IT disconnects: This shows how IT directly supports business strategies and capabilities, and ensures that mismatched spending can be eliminated. Troux Alignment helps IT think like a CFO and demonstrate control and business purpose for the billions that are spent on IT assets, by ensuring that all stakeholders have valid and relevant IT information.

Identify and eliminate redundant IT spending: This uncovers the many untapped opportunities, with Troux Optimization, to free up needless spend and apply it either to the bottom line or to support new business initiatives.

Speed business response and simplify IT: This speeds the creation and deployment of a set of standard, reusable building blocks that are proven to work in agile business cycles. Troux Standards enables the use of IT standards in real time, thereby streamlining the process of IT governance.

Accelerate business transformation for government agencies: This helps federal agencies create an actionable Enterprise Architecture and comply with constantly changing mandates. Troux eaGov automatically identifies opportunities to reduce costs to business and IT risks, while fostering effective initiative planning and execution within or across agencies.

Support EA methodology: Companies adopting The Open Group Architecture Framework (TOGAF™) can use the Troux for TOGAF solution to streamline their efforts.

Unlock the full potential of IT portfolio investment: This unifies Strategic IT Planning, EA, and project portfolio management through a common IT governance process. The Troux CA Clarity Connection enables the first bi-directional integration in the market between CA Clarity Project Portfolio Management (PPM) and the Troux EA repository for enhanced IT investment portfolio planning, analysis, and control.

Understand your deployed IT assets: Using the out-of-the-box connection to HP's Universal Configuration Management Database (uCMDB), link software and hardware with the applications they support.
All of these capabilities are enabled through an open-architected platform that provides uncomplicated data integration tools. The platform provides architecture-modeling capabilities for IT Architects, an extensible database schema (or meta-model), and integration interfaces that are simple to automate and bring online with minimal programming effort.

Enterprise Architecture repository

The Troux Transformation Platform acts as the consolidation point across all the various IT management databases, and even some management systems outside the control of IT. By collecting data from across various areas, new insights are possible, leading to reductions in operating costs and improvements in service levels to the business. While it is possible to combine these using other products on the market, or even to develop a home-grown EA repository, Troux has created a very easy-to-use API for data collection purposes. In addition, Troux provides a database meta-model for the repository that is extensible. Meta-model extensibility makes the product adaptable to the other management systems across the company. Troux also supports a configurable user interface allowing for a customized view into the repository. This capability makes the catalog appear as if it were a part of the other control systems already in place at the company. Additionally, Troux provides an optional set of applications that support a variety of roles out of the box, with no meta-model extensions or user interface configurations required. These include:

Troux Standards: This application supports the IT technology standards and lifecycle governance process usually conducted by the Enterprise Architecture department.

Troux Optimization: This application supports the application portfolio lifecycle management process conducted by the Enterprise Program Management Office (EPMO) and/or Enterprise Architecture.

Troux Alignment: This application supports the business and IT assets and application-planning processes conducted by IT Engineering, Corporate Finance, and Enterprise Architecture.

Even these three out-of-the-box applications can be customized by extending their underlying meta-models and customizing the user interface. The EA repository provides output that is viewable online. Standard reports are provided, or custom reports can be developed per the specific needs of the user community. Departments within or even outside of IT can use the customized views, standard reports, and custom reports to perform analyses. For example, the Enterprise Program Management Office (EPMO) can produce reports that link projects with business goals. The EPMO can review the company's project portfolio to identify projects that do not support company goals. Decisions can be made about these projects: stopping them, slowing them down, or completing them faster. Resources can be moved from the stopped or completed low-value projects to the higher-value projects, leading to increased revenue or reduced costs for the company. In a similar fashion, the Internal Audit department can check on the level of compliance with company IT standards, or use the list of applications stored within the catalog to determine the best audit schedule to follow. Less time can be spent auditing applications with minimal impact on company operations, or on applications and projects targeted as low value.
Application development can use data from the catalog to understand the current capabilities of the company's existing applications. As staff changes or "off-shore" resources are applied to projects, knowing what existing systems do in advance of a new project can save many hours of work. Information can be extracted from the EA repository directly into requirements documentation, which is always the starting point for new applications, as well as for maintenance projects on existing applications. One study performed at a major financial services company showed that over 40% of project development time was spent in the upfront work of documenting and explaining current application capabilities to business sponsors of projects. By supplying development teams with lists of application capabilities early in the project life cycle, the time needed to gather and document requirements can be reduced significantly.

Of course, one of the biggest beneficiaries of the repository is the EA group. In most companies, EA's main charter is to be the steward of information about applications, databases, hardware, software, and network architecture. EA can perform analyses using the data from the repository, leading to recommendations for changes by middle and upper management. In addition, EA is responsible for collecting, setting, and managing the IT standards for the company. The repository supports a single source for IT standards, whether they are internal or external standards. The standards portion of the repository can be used as the centerpiece of IT governance. The function of the Architecture Review Board (ARB) is fully supported by Troux Standards.

Capacity Planning and IT Engineering functions will also gain substantially through the use of an EA repository. The useful life of IT assets can be analyzed to create a master plan for technical refresh or reengineering efforts. The annual spend on IT expenses can be reduced dramatically through increased levels of virtualization of IT assets, consolidation of platforms, and even consolidation of whole data centers. IT Engineering can review what is currently running across the company and recommend changes to reduce software maintenance costs, eliminate underutilized hardware, and consolidate federated databases.

Lastly, IT Operations can benefit from a consolidated view into the technical footprint running at any point in time. Even when system availability service levels call for near-real-time error correction, it may take hours for IT Operations personnel to diagnose problems. They tend not to have a full understanding of what applications run on what servers, which firewalls support which networks, and which databases support which applications. Problem determination time can be reduced by providing accurate technical architecture information to those focused on keeping systems running and meeting business service-level requirements.

Summary

This article identified the problem IT has with understanding what technologies it has under management. While many solutions are in place in many companies to gain a better view into the IT portfolio, none are designed to show the impact of IT assets in the aggregate. Without the capabilities provided by an EA repository, IT management has a difficult time answering tough questions asked by business leaders. Troux Technologies offers a solution to this problem using the Troux Transformation Platform.
The platform acts as a master metadata repository and becomes the focus of many efforts that IT may run to significantly reduce costs and improve business service levels.

Further resources on this subject: Troux Enterprise Architecture: Managing the EA function [article]

Getting Started with Mule

Packt
26 Aug 2013
10 min read
Mule ESB is a lightweight Java-based enterprise service bus and integration platform. Through an ESB, you can integrate or communicate with multiple applications. Mule ESB enables easy integration of existing systems, regardless of the different technologies that the applications use, including JMS, web services, JDBC, and HTTP.

Understanding Mule concepts and terminologies

An Enterprise Service Bus (ESB) is an application that gives access to other applications and services. Its main task is to be the messaging and integration backbone of an enterprise. An ESB is a distributed middleware system for integrating different applications. All these applications communicate through the ESB. It consists of a set of service containers that integrate various types of applications. The containers are interconnected with a reliable messaging bus.

Getting ready

An ESB is used for integration using a service-oriented approach. Its main features are as follows:

Polling JMS
Message transformation and routing services
Tomcat hot deployment
Web service security

We often use the abbreviation VETRO to summarize the ESB functionality:

V - validate (schema validation)
E - enrich
T - transform
R - route (either itinerary-based or content-based)
O - operate (perform operations; they run at the backend)

Before the introduction of an ESB, developers and integrators had to connect different applications in a point-to-point fashion.

How to do it...

After the introduction of an ESB, you just need to connect each application to the ESB so that every application can communicate with the others through the ESB. You can easily connect multiple applications through the ESB, as shown in the following diagram:

Need for the ESB

You can integrate different applications using an ESB. Each application can communicate through the ESB:

To integrate more than two or three services and/or applications
To integrate more applications, services, or technologies in the future
To use different communication protocols
To publish services for composition and consumption
For message transformation and routing

What is Mule ESB?

Mule ESB is a lightweight Java-based enterprise service bus and integration platform that allows developers and integrators to connect applications together quickly and easily, enabling them to exchange data. There are two editions of Mule ESB: Community and Enterprise. Mule ESB Enterprise is the enterprise-class version of Mule ESB, with additional features and capabilities that are ideal for clustering and performance tuning, such as DataMapper and the SAP connector. The Community and Enterprise editions are built on a common code base, so it is easy to upgrade from Mule ESB Community to Mule ESB Enterprise. Mule ESB enables easy integration of existing systems, regardless of the different technologies that the applications use, including JMS, web services, JDBC, and HTTP. The key advantage of an ESB is that it allows different applications to communicate with each other by acting as a transit system for carrying data between applications within your enterprise or across the Internet.
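To make the transit-system idea concrete, the following is a minimal sketch of a Mule 3.x configuration file with a single flow that receives a message over HTTP and logs its payload. The host, port, path, and flow name are illustrative assumptions, the schema locations are abbreviated, and this is not a configuration from the article itself.

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="(core and http schema locations omitted for brevity)">

    <!-- A flow: a message source (HTTP inbound endpoint) followed by
         message processors (here, just a logger). -->
    <flow name="echoFlow">
        <http:inbound-endpoint host="localhost" port="8081" path="echo"/>
        <logger level="INFO" message="Received: #[payload]"/>
    </flow>
</mule>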
Mule ESB includes powerful capabilities, including the following:

Service creation and hosting: It exposes and hosts reusable services, using Mule ESB as a lightweight service container.
Service mediation: It shields services from message formats and protocols, separates business logic from messaging, and enables location-independent service calls.
Message routing: It routes, filters, aggregates, and re-sequences messages based on content and rules.
Data transformation: It exchanges data across varying formats and transport protocols.

Mule ESB is lightweight but highly scalable, allowing you to start small and connect more applications over time. Mule provides a Java-based messaging framework and manages all the interactions between applications and components transparently. Mule provides transformation, routing, filtering, Endpoints, and so on.

How it works...

When you examine how a message flows through Mule ESB, you can see that there are three layers in the architecture, which are listed as follows:

Application Layer
Integration Layer
Transport Layer

Likewise, there are three general types of tasks you can perform to configure and customize your Mule deployment. Refer to the following diagram. The following list talks about Mule and its configuration:

Service component development: This involves developing or reusing existing POJOs (a POJO is a class with attributes and generated get and set methods), Cloud connectors, or Spring Beans that contain the business logic and will consume, process, or enrich messages.

Service orchestration: This involves configuring message processors, routers, transformers, and filters that provide the service mediation and orchestration capabilities required to allow composition of loosely coupled services using a Mule flow. New orchestration elements can also be created and dropped into your deployment.

Integration: A key requirement of service mediation is decoupling services from the underlying protocols. Mule provides transport methods to allow dispatching and receiving messages on different protocol connectors. These connectors are configured in the Mule configuration file and can be referenced from the orchestration layer. Mule supports many existing transport methods and all the popular communication protocols, but you may also develop a custom transport method if you need to extend Mule to support a particular legacy or proprietary system.

Spring beans: You can construct service components from Spring beans and define these Spring components through a configuration file. If you don't have this file, you will need to define it manually in the Mule configuration file.

Agents: An agent is a service that is created in Mule Studio. When you start the server, an agent is created; when you stop the server, this agent is destroyed.

Connectors: A Connector is a software component.

Global configuration: Global configuration is used to set global properties and settings.

Global Endpoints: Global Endpoints can be used in the Global Elements tab. We can use a global property element as many times in a flow as we want; for that, we must pass the global property's reference name.

Global message processor: A global message processor observes a message or modifies either a message or the message flow; examples include transformers and filters.

Transformers: A transformer converts data from one format to another. You can define transformers globally and use them in multiple flows.

Filters: Filters decide which Mule messages should be processed.
Filters specify the conditions that must be met for a message to be routed to a service or to continue progressing through a flow. There are several standard filters that come with Mule ESB, which you can use, or you can create your own filters.

Models: A model is a logical grouping of services, created in Mule Studio. You can start and stop all the services inside a particular model.

Services: You can define one or more services that wrap your components (business logic) and configure Routers, Endpoints, transformers, and filters specifically for that service. Services are connected using Endpoints.

Endpoints: Services are connected using Endpoints. An Endpoint is an object on which the services will receive (inbound) and send (outbound) messages.

Flow: A flow is used by a message processor to define a message flow between a source and a target.

Setting up the Mule IDE

Developers who were using Mule ESB alongside other technologies, such as Liferay Portal, Alfresco ECM, or Activiti BPM, can use the Mule IDE in Eclipse without configuring the standalone Mule Studio in the existing environment. In recent times, MuleSoft (http://www.mulesoft.org/) only provides Mule Studio from Version 3.3 onwards, not the Mule IDE. If you are using an older version of Mule ESB, you can get the Mule IDE separately from http://dist.muleforge.org/mule-ide/releases/.

Getting ready

To set up the Mule IDE, we need Java to be installed on the machine and its execution path set in an environment variable. We will now see how to set up Java on our machine. Firstly, download JDK 1.6 or a higher version from the following URL: http://www.oracle.com/technetwork/java/javase/downloads/jdk6downloads-1902814.html. In your Windows system, go to Start | Control Panel | System | Advanced. Click on Environment Variables under System Variables, find Path, and click on it. In the Edit window, modify the path by adding the location of the class to its value. If you do not have the item Path, you may select the option of adding a new variable, adding Path as the name and the location of the class as its value. Close the window, reopen the command prompt window, and run your Java code.

How to do it...

If you go with Eclipse, you have to download Mule IDE Standalone 3.3:

Download Mule ESB 3.3 Community edition from the following URL: http://www.mulesoft.org/extensions/mule-ide.
Unzip the downloaded file and set MULE_HOME as the environment variable.
Download the latest version of Eclipse from http://www.eclipse.org/downloads/.

After installing Eclipse, you now have to integrate the Mule IDE into Eclipse. If you are using Eclipse Version 3.4 (Galileo), perform the following steps to install the Mule IDE. If you are not using Version 3.4 (Galileo), the URL for downloading will be different:

Open Eclipse IDE.
Go to Help | Install New Software....
Write the URL in the Work with: textbox: http://dist.muleforge.org/muleide/updates/3.4/ and press Enter.
Select the Mule IDE checkbox.
Click on the Next button.
Read and accept the license agreement terms.
Click on the Finish button.

This will take some time. When it prompts for a restart, shut it down and restart Eclipse.

Mule configuration

After installing the Mule IDE, you will now have to configure Mule in Eclipse. Perform the following steps:

Open Eclipse IDE.
Go to Window | Preferences.
Select Mule, add the distribution folder of Mule standalone 3.3, click on the Apply button, and then on the OK button.

This way you can configure Mule with Eclipse.
Installing Mule Studio

Mule Studio is a powerful, user-friendly Eclipse-based tool. Mule Studio has three main components: a package tree, a palette, and a canvas. With Mule Studio, you can easily create flows, as well as edit and test them, in a few minutes. Mule Studio is currently in public beta. It is based on drag-and-drop elements and supports two-way editing.

Getting ready

To install Mule Studio, download it from http://www.mulesoft.org/download-mule-esb-community-edition.

How to do it...

Unzip the Mule Studio folder and set the environment variable for Mule Studio. When you first start Mule Studio, the config.xml file will be created automatically. The three main components of Mule Studio are as follows:

A package tree
A palette
A canvas

A package tree

A package tree contains the entire structure of your project. In the following screenshot, you can see the package explorer tree. In this package explorer tree, under src/main/java, you can store your custom Java classes. You can create a graphical flow from src/main/resources. In the app folder, you can store the mule-deploy.properties file. The src/main/app folder contains the flow XML files, and the src/main/test folder contains flow-related test files. The mule-project.xml file contains the project's metadata; you can edit the name, description, and server runtime version used for a specific project. JRE System Library contains the Java runtime libraries, and Mule Runtime contains the Mule runtime libraries.

A palette

The second component is the palette. The palette is the source for accessing Endpoints, components, transformers, and Cloud connectors. You can drag them from the palette and drop them onto the canvas in order to create flows. The palette typically displays buttons indicating the different types of Mule elements. You can view the content of each button by clicking on it. If you do not want to expand elements, click on the button again to hide the content.

A canvas

The third component is the canvas; the canvas is a graphical editor in which you can create flows. The canvas provides a space that facilitates the arrangement of Studio components into Mule flows. In the canvas area, you can configure each and every component, and you can add or remove components on the canvas.

Push your data to the Web

Packt
22 Feb 2016
27 min read
This article covers the following topics:

An introduction to the Shiny app framework
Creating your first Shiny app
The connection between the server file and the user interface
The concept of reactive programming
Different types of interface layouts, widgets, and Shiny tags
How to create a dynamic user interface
Ways to share your Shiny applications with others
How to deploy Shiny apps to the web

Introducing Shiny – the app framework

The Shiny package delivers a powerful framework to build fully featured interactive web applications just with R and RStudio. Basic Shiny applications typically consist of two components:

~/shinyapp
|-- ui.R
|-- server.R

While the ui.R file defines the appearance of the user interface, the server.R file contains all the code for the execution of the app. The look of the user interface is based on the famous Twitter Bootstrap framework, which makes the look and layout highly customizable and fully responsive. In fact, you only need to know R and how to use the shiny package to build a pretty web application; also, a little knowledge of HTML, CSS, and JavaScript may help. If you want to check what is possible with the Shiny package, it is advisable to take a look at the inbuilt examples. Just load the library and enter the example name:

library(shiny)
runExample("01_hello")

As you can see, running the first example opens the Shiny app in a new window. This app creates a simple histogram plot where you can interactively change the number of bins. Further, this example allows you to inspect the corresponding ui.R and server.R code files. There are currently eleven inbuilt example apps:

01_hello
02_text
03_reactivity
04_mpg
05_sliders
06_tabsets
07_widgets
08_html
09_upload
10_download
11_timer

These examples focus mainly on the user interface possibilities and elements that you can create with Shiny.

Creating a new Shiny web app with RStudio

RStudio offers a fast and easy way to create the basis of every new Shiny app. Just click on New Project and select the New Directory option in the newly opened window. After that, click on the Shiny Web Application field. Give your new app a name in the next step, and click on Create Project. RStudio will then open a ready-to-use Shiny app by opening prefilled ui.R and server.R files. You can click on the now visible Run App button in the right corner of the file pane to display the prefilled example application.

Creating your first Shiny application

In your effort to create your first Shiny application, you should first create or consider rough sketches for your app. Questions that you might ask in this context are: What do I want to show? How do I want to show it? And so on. Let's say we want to create an application that allows users to explore some of the variables of the mtcars dataset. The data was extracted from the 1974 Motor Trend US magazine, and comprises fuel consumption and 10 aspects of automobile design and performance for 32 automobiles (1973-74 models).

Sketching the final app

We want the user of the app to be able to select one out of the three variables of the dataset that gets displayed in a histogram. Furthermore, we want users to get a summary of the dataset under the main plot. So, the following figure could be a rough project sketch.

Constructing the user interface for your app

We will reuse the already opened ui.R file from the RStudio example, and adapt it to our needs.
The layout of the ui.R file for your first app is controlled by nested Shiny functions and looks like the following lines:

library(shiny)

shinyUI(pageWithSidebar(
  headerPanel("My First Shiny App"),
  sidebarPanel(
    selectInput(inputId = "variable",
                label = "Variable:",
                choices = c("Horsepower" = "hp",
                            "Miles per Gallon" = "mpg",
                            "Number of Carburetors" = "carb"),
                selected = "hp")
  ),
  mainPanel(
    plotOutput("carsPlot"),
    verbatimTextOutput("carsSummary")
  )
))

Creating the server file

The server file holds all the code for the execution of the application:

library(shiny)
library(datasets)

shinyServer(function(input, output) {
  output$carsPlot <- renderPlot({
    hist(mtcars[, input$variable],
         main = "Histogram of mtcars variables",
         xlab = input$variable)
  })
  output$carsSummary <- renderPrint({
    summary(mtcars[, input$variable])
  })
})

The final application

After changing the ui.R and the server.R files according to our needs, just hit the Run App button and the final app opens in a new window. As planned in the app sketch, the app offers the user a drop-down menu to choose the desired variable on the left side, and shows a histogram and data summary of the selected variable on the right side.

Deconstructing the final app into its components

For a better understanding of the Shiny application logic and the interplay of the two main files, ui.R and server.R, we will disassemble your first app again into its individual parts.

The components of the user interface

We have divided the user interface into three parts. After loading the Shiny library, the complete look of the app gets defined by the shinyUI() function. In our app sketch, we chose a sidebar look; therefore, the shinyUI function holds the argument pageWithSidebar():

library(shiny)
shinyUI(pageWithSidebar(
...

The headerPanel() argument is certainly the simplest component, since usually only the title of the app will be stored in it. In our ui.R file, it is just a single line of code:

library(shiny)
shinyUI(pageWithSidebar(
  headerPanel("My First Shiny App"),
...

The sidebarPanel() function defines the look of the sidebar and, most importantly, handles the input of the variables of the chosen mtcars dataset:

library(shiny)
shinyUI(pageWithSidebar(
  headerPanel("My First Shiny App"),
  sidebarPanel(
    selectInput(inputId = "variable",
                label = "Variable:",
                choices = c("Horsepower" = "hp",
                            "Miles per Gallon" = "mpg",
                            "Number of Carburetors" = "carb"),
                selected = "hp")
  ),
...

Finally, the mainPanel() function ensures that the output is displayed. In our case, this is the histogram and the data summary for the selected variables:

library(shiny)
shinyUI(pageWithSidebar(
  headerPanel("My First Shiny App"),
  sidebarPanel(
    selectInput(inputId = "variable",
                label = "Variable:",
                choices = c("Horsepower" = "hp",
                            "Miles per Gallon" = "mpg",
                            "Number of Carburetors" = "carb"),
                selected = "hp")
  ),
  mainPanel(
    plotOutput("carsPlot"),
    verbatimTextOutput("carsSummary")
  )
))

The server file in detail

While the ui.R file defines the look of the app, the server.R file holds the instructions for the execution of the R code. Again, we use our first app to deconstruct the related server.R file into its most important parts. After loading the needed libraries, datasets, and further scripts, the function shinyServer(function(input, output) {}) defines the server logic:

library(shiny)
library(datasets)

shinyServer(function(input, output) {

The marked lines of code that follow translate the inputs of the ui.R file into matching outputs.
In our case, the server-side output$ object is assigned to carsPlot, which in turn was called in the mainPanel() function of the ui.R file via plotOutput(). Moreover, the render* function, in our example renderPlot(), reflects the type of output; here, of course, it is the histogram plot. Within the renderPlot() function, you can recognize the input$ object assigned to the variables that were defined in the user interface file:

library(shiny)
library(datasets)
shinyServer(function(input, output) {
  output$carsPlot <- renderPlot({
    hist(mtcars[, input$variable],
         main = "Histogram of mtcars variables",
         xlab = input$variable)
  })
...

In the following lines, you will see another type of render function, renderPrint(), and within the curly braces, the actual R function, summary(), with the defined input variable:

library(shiny)
library(datasets)
shinyServer(function(input, output) {
  output$carsPlot <- renderPlot({
    hist(mtcars[, input$variable],
         main = "Histogram of mtcars variables",
         xlab = input$variable)
  })
  output$carsSummary <- renderPrint({
    summary(mtcars[, input$variable])
  })
})

There are plenty of different render functions. The most used are as follows:

renderPlot: This creates normal plots
renderPrint: This gives printed output types
renderUI: This gives HTML or Shiny tag objects
renderTable: This gives tables, data frames, and matrices
renderText: This creates character strings

All code outside the shinyServer() function runs only once, on the first launch of the app, while all the code in between the brackets and before the output functions runs as often as a user visits or refreshes the application. The code within the output functions runs every time a user changes the widget that belongs to the corresponding output.

The connection between the server and the ui file
As already inspected in our decomposed Shiny app, the input functions of the ui.R file are linked with the output functions of the server file. The following figure illustrates this again:

The concept of reactivity
Shiny uses a reactive programming model, and this is a big deal: by applying reactive programming, the framework is able to be fast, efficient, and robust. Briefly, when you change an input in the user interface, Shiny rebuilds the related output. Shiny uses three reactive objects:

Reactive source
Reactive conductor
Reactive endpoint

For simplicity, we use the formal terms of the RStudio documentation: the implementation of a reactive source is the reactive value, that of a reactive conductor is the reactive expression, and the reactive endpoint is also called the observer.

The source and endpoint structure
As taught in the previous section, the inputs defined in the ui.R file are linked to the outputs of the server.R file. For simplicity, we use the code from our first Shiny app again, along with the introduced formal terms:

...
output$carsPlot <- renderPlot({
  hist(mtcars[, input$variable],
       main = "Histogram of mtcars variables",
       xlab = input$variable)
})
...

The input variable, which in our app offers the Horsepower, Miles per Gallon, and Number of Carburetors choices, represents the reactive source. The histogram called carsPlot stands for the reactive endpoint. In fact, it is possible to link one reactive source to numerous reactive endpoints, and vice versa. In our Shiny app, we also connected the input variable to our second output, carsSummary, in addition to carsPlot:
...
output$carsPlot <- renderPlot({
  hist(mtcars[, input$variable],
       main = "Histogram of mtcars variables",
       xlab = input$variable)
})
output$carsSummary <- renderPrint({
  summary(mtcars[, input$variable])
})
...

To sum it up, this structure ensures that every time a user changes the input, the output refreshes automatically and accordingly.

The purpose of the reactive conductor
The reactive conductor differs from the reactive source and the reactive endpoint in that it can both be dependent and have dependents. Therefore, it can be placed between the source, which can only have dependents, and the endpoint, which in turn can only be dependent. The primary function of a reactive conductor is the encapsulation of heavy and difficult computations; in fact, reactive expressions cache the results of these computations. The following graph displays a possible connection of the three reactive types:

In general, reactivity gives the impression of a directional system in which, after an input, the output occurs; you get the feeling that an input pushes information to an output. But this isn't the case: in reality, it works the other way around, and the output pulls the information from the input. This all works due to sophisticated server logic. The input sends a callback to the server, which in turn informs the output, and the output pulls the needed value from the input and shows the result to the user. For the user, of course, this all feels like an instant update on any input change and, overall, like responsive app behavior. We have just touched upon the main aspects of reactivity, but now you know what's really going on under the hood of Shiny.

Discovering the scope of the Shiny user interface
Now that you know how to build a simple Shiny application, as well as how reactivity works, let us take a look at the next step: the various resources for creating a custom user interface. There are nearly endless possibilities to shape the look and feel of the layout. As already mentioned, the entire HTML, CSS, and JavaScript logic and the functions of the layout options are based on the highly flexible Bootstrap framework. And, of course, everything is responsive by default, which makes it possible for the final application layout to adapt to the screen of any device.

Exploring the Shiny interface layouts
Currently, there are four common shinyUI() page layouts:

pageWithSidebar()
fluidPage()
navbarPage()
fixedPage()

These page layouts can, in turn, be structured with different functions for a custom inner arrangement of the page. In the following sections, we introduce the most useful inner layout functions, using our first Shiny application as the example.

The sidebar layout
In the sidebar layout, the sidebarPanel() function is used as the input area and the mainPanel() function as the output area, just as in our first Shiny app. The sidebar layout uses the pageWithSidebar() function:

library(shiny)
shinyUI(pageWithSidebar(
  headerPanel("The Sidebar Layout"),
  sidebarPanel(
    selectInput(inputId = "variable", label = "This is the sidebarPanel",
                choices = c("Horsepower" = "hp",
                            "Miles per Gallon" = "mpg",
                            "Number of Carburetors" = "carb"),
                selected = "hp")
  ),
  mainPanel(
    tags$h2("This is the mainPanel"),
    plotOutput("carsPlot"),
    verbatimTextOutput("carsSummary")
  )
))

By changing only the first three functions, you can create exactly the same look with the fluidPage() layout.
This is the sidebar layout with the fluidPage() function:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Sidebar Layout"),
  sidebarLayout(
    sidebarPanel(
      selectInput(inputId = "variable", label = "This is the sidebarPanel",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    mainPanel(
      tags$h2("This is the mainPanel"),
      plotOutput("carsPlot"),
      verbatimTextOutput("carsSummary")
    )
  )
))

The grid layout
In the grid layout, rows are created with the fluidRow() function, and the input and output areas are placed within freely customizable columns. Naturally, the maximum of 12 columns from the Bootstrap grid system must be respected. This is the grid layout with the fluidPage() function and a 4-8 grid:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Grid Layout"),
  fluidRow(
    column(4,
      selectInput(inputId = "variable", label = "Four-column input area",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    column(8,
      tags$h3("Eight-column output area"),
      plotOutput("carsPlot"),
      verbatimTextOutput("carsSummary")
    )
  )
))

As you can see from inspecting the previous ui.R file, the width of each column is defined within the fluidRow() function, and the widths of the two columns add up to 12. Since the allocation of the columns is completely flexible, you can also create something like the grid layout with the fluidPage() function and a 4-4-4 grid:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Grid Layout"),
  fluidRow(
    column(4,
      selectInput(inputId = "variable", label = "Four-column input area",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    column(4,
      tags$h5("Four-column output area"),
      plotOutput("carsPlot")
    ),
    column(4,
      tags$h5("Another four-column output area"),
      verbatimTextOutput("carsSummary")
    )
  )
))

The tabset panel layout
The tabsetPanel() function can be built into the mainPanel() function of the aforementioned sidebar layout page. By applying this function, you can integrate several tabbed outputs into one view. This is the tabset layout with the fluidPage() function and three tab panels:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Tabset Layout"),
  sidebarLayout(
    sidebarPanel(
      selectInput(inputId = "variable", label = "Select a variable",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    mainPanel(
      tabsetPanel(
        tabPanel("Plot", plotOutput("carsPlot")),
        tabPanel("Summary", verbatimTextOutput("carsSummary")),
        tabPanel("Raw Data", dataTableOutput("tableData"))
      )
    )
  )
))

After changing the code to include the tabsetPanel() function, the three tabs created with the tabPanel() function display their respective outputs. With the help of this layout, you no longer have to stack several outputs below one another; instead, you can display each output in its own tab, while the sidebar does not change. The position of the tabs is flexible and can be set to above, below, left, or right. For example, in the following code detail, the position of the tabsetPanel() function was assigned as follows:

...
mainPanel(
  tabsetPanel(position = "below",
    tabPanel("Plot", plotOutput("carsPlot")),
    tabPanel("Summary", verbatimTextOutput("carsSummary")),
    tabPanel("Raw Data", tableOutput("tableData"))
  )
)
...
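The Raw Data tab introduces a new output, tableData, which also needs a matching definition in the server file. The following server.R sketch shows how this could look; pairing dataTableOutput() with renderDataTable() (or tableOutput() with renderTable()) is standard Shiny, but the original server file for this example is not shown, so displaying the full mtcars data frame here is our assumption:

library(shiny)
library(datasets)

shinyServer(function(input, output) {
  # The same histogram and summary outputs as in our first app
  output$carsPlot <- renderPlot({
    hist(mtcars[, input$variable],
         main = "Histogram of mtcars variables",
         xlab = input$variable)
  })
  output$carsSummary <- renderPrint({
    summary(mtcars[, input$variable])
  })
  # renderDataTable() fills the dataTableOutput("tableData") slot in the UI;
  # showing the complete mtcars data frame is illustrative only
  output$tableData <- renderDataTable({
    mtcars
  })
})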
The navlist panel layout
The navlistPanel() function is similar to the tabsetPanel() function, and can be seen as an alternative if you need to integrate a large number of tabs. The navlistPanel() function also uses the tabPanel() function to include outputs; plain strings, such as "Discovering The Dataset", act as section labels:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Navlist Layout"),
  navlistPanel(
    "Discovering The Dataset",
    tabPanel("Plot", plotOutput("carsPlot")),
    tabPanel("Summary", verbatimTextOutput("carsSummary")),
    tabPanel("Another Plot", plotOutput("barPlot")),
    tabPanel("Even A Third Plot", plotOutput("thirdPlot")),
    "More Information",
    tabPanel("Raw Data", tableOutput("tableData")),
    tabPanel("More Datatables", tableOutput("moreData"))
  )
))

The navbar page as the page layout
In the previous examples, we have used the page layouts fluidPage() and pageWithSidebar() in the first line. However, especially when you want to create an application with a variety of tabs, sidebars, and various input and output areas, it is recommended that you use the navbarPage() layout. This function makes use of the standard top navigation of the Bootstrap framework:

library(shiny)
shinyUI(navbarPage("The Navbar Page Layout",
  tabPanel("Data Analysis",
    sidebarPanel(
      selectInput(inputId = "variable", label = "Select a variable",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    mainPanel(
      plotOutput("carsPlot"),
      verbatimTextOutput("carsSummary")
    )
  ),
  tabPanel("Calculations"
    …
  ),
  tabPanel("Some Notes"
    …
  )
))

Adding widgets to your application
After inspecting the most important page layouts in detail, we now look at the different interface input and output elements. By adding these widgets, panels, and other interface elements to an application, we can further customize each page layout.

Shiny input elements
Already, in our first Shiny application, we got to know a typical Shiny input element: the selection box widget. But, of course, there are many more widgets for different types of use. All widgets can have several arguments; the minimum setup is to assign an inputId, which instructs the input slot to communicate with the server file, and a label, which is the text displayed alongside the widget. Each widget can also have its own specific arguments. As an example, let's look at the code of a range slider widget:

sliderInput(inputId = "sliderExample", label = "Slider range",
            min = 0, max = 100, value = c(25, 75))

Besides the mandatory arguments inputId and label, three more values have been added to the slider widget. The min and max arguments specify the minimum and maximum values that can be selected; in our example, these are 0 and 100. A numeric vector was assigned to the value argument, which creates a double-ended range slider. This vector must logically lie within the set minimum and maximum values. There are currently more than twenty different input widgets, all of which are individually configurable through their own sets of arguments.

A brief overview of the output elements
As we have seen, the output elements in the ui.R file are connected to the rendering functions in the server file. The mainly used output elements are:

htmlOutput
imageOutput
plotOutput
tableOutput
textOutput
verbatimTextOutput
downloadButton

Due to their unambiguous naming, the purpose of these elements should be clear.
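To make the pairing between output elements and render functions concrete, here is a small self-contained sketch; the inputId and outputId names are purely illustrative and not taken from the example app:

# ui.R: textOutput() and verbatimTextOutput() declare where output will go
library(shiny)
shinyUI(fluidPage(
  sliderInput(inputId = "num", label = "Pick a number:",
              min = 1, max = 100, value = 25),
  textOutput("squaredText"),
  verbatimTextOutput("numSummary")
))

# server.R: renderText() fills textOutput(), and renderPrint() fills
# verbatimTextOutput()
library(shiny)
shinyServer(function(input, output) {
  output$squaredText <- renderText({
    paste("The square of", input$num, "is", input$num^2)
  })
  output$numSummary <- renderPrint({
    summary(seq_len(input$num))
  })
})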
Individualizing your app even further with Shiny tags
Although you don't need to know HTML to create stunning Shiny applications, you have the option to create highly customized apps with the use of raw HTML or so-called Shiny tags. To add raw HTML, you can use the HTML() function. In the following list, we will focus on Shiny tags. Currently, there are over 100 different Shiny tag objects, which can be used to add text styling, colors, different headers, video and audio, lists, and many more things. You can use these tags by writing tags$tagname. Following is a brief list of useful tags:

tags$h1: This is a first-level header; of course, you can also use the well-known h1 to h6 hierarchy
tags$hr: This makes a horizontal line, also known as a thematic break
tags$br: This makes a line break, a popular way to add some space
tags$strong: This makes the text bold
tags$div: This makes a division of text with a uniform style
tags$a: This links to a webpage
tags$iframe: This makes an inline frame for embedding possibilities

The following ui.R file shows the usage of Shiny tags by example:

shinyUI(fluidPage(
  fluidRow(
    column(6,
      tags$h3("Customize your app with Shiny tags!"),
      tags$hr(),
      tags$a(href = "http://www.rstudio.com", "Click me"),
      tags$hr()
    ),
    column(6,
      tags$br(),
      tags$em("Look - the R project logo"),
      tags$br(),
      tags$img(src = "http://www.r-project.org/Rlogo.png")
    )
  ),
  fluidRow(
    column(6,
      tags$strong("We can even add a video"),
      tags$video(src = "video.mp4", type = "video/mp4",
                 autoplay = NA, controls = NA)
    ),
    column(6,
      tags$br(),
      tags$ol(
        tags$li("One"),
        tags$li("Two"),
        tags$li("Three"))
    )
  )
))

Creating dynamic user interface elements
We now know how to build completely custom user interfaces with all the bells and whistles, but all the introduced types of interface elements are fixed and static. If you need to create dynamic interface elements, Shiny offers three ways to achieve this:

The conditionalPanel() function
The renderUI() function
The use of directly injected JavaScript code

In the following section, we only show how to use the first two ways, because they are built into the Shiny package, while the JavaScript method is still marked as experimental.

Using conditionalPanel
The conditionalPanel() function allows you to show or hide interface elements dynamically, and is set in the ui.R file. The dynamic behavior of this function is achieved by JavaScript expressions but, as usual in the Shiny package, all you need to know is R programming. The following example ui.R file shows how this function works:

library(shiny)
shinyUI(fluidPage(
  titlePanel("Dynamic Interface With Conditional Panels"),
  column(4, wellPanel(
    sliderInput(inputId = "n", label = "Number of points:",
                min = 10, max = 200, value = 50, step = 10)
  )),
  column(5,
    "The plot below will be not displayed when the slider value",
    "is less than 50.",
    conditionalPanel("input.n >= 50",
      plotOutput("scatterPlot", height = 300)
    )
  )
))

And here is the related server.R file:

library(shiny)
shinyServer(function(input, output) {
  output$scatterPlot <- renderPlot({
    x <- rnorm(input$n)
    y <- rnorm(input$n)
    plot(x, y)
  })
})

The code for this example application was taken from the Shiny gallery of RStudio (http://shiny.rstudio.com/gallery/conditionalpanel-demo.html). As you can read in both code files, the condition expression input.n is the linchpin for the dynamic behavior of the example app.
In the conditionalPanel() function, it is defined that the input with inputId = "n" must have a value of 50 or higher for the panel to be shown, while the input and the plot output work as already defined.

Taking advantage of the renderUI function
Unlike the previous approach, the renderUI() function is hooked into the server file to create a dynamic user interface. We have already introduced the different render output functions in this article. The following example code shows the basic functionality in the ui.R file:

# Partial example taken from the Shiny documentation
numericInput("lat", "Latitude"),
numericInput("long", "Longitude"),
uiOutput("cityControls")

And in the related server.R file:

# Partial example
output$cityControls <- renderUI({
  cities <- getNearestCities(input$lat, input$long)
  checkboxGroupInput("cities", "Choose Cities", cities)
})

As described, the dynamic part of this method is defined in the renderUI() process as an output, which then gets displayed through the uiOutput() function in the ui.R file.

Sharing your Shiny application with others
Typically, you create a Shiny application not only for yourself, but also for other users. There are two main ways to distribute your app: either you let users download your application, or you deploy it on the web.

Offering a download of your Shiny app
By offering the option to download your final Shiny application, other users can run your app locally. There are four ways to deliver your app for download. No matter which way you choose, it is important that the user has R and the Shiny package installed on their computer.

Gist
Gist is a public code-sharing pasteboard from GitHub. To share your app this way, it is important that both the ui.R file and the server.R file are in the same Gist and have been named correctly. There are two options to run apps via Gist: first, just enter runGist("Gist_URL") in the console of RStudio; or second, just use the Gist ID and place it in the shiny::runGist("Gist_ID") function. Gist is a very easy way to share your application, but you need to keep in mind that your code is published on a third-party server.

GitHub
The next way to enable users to download your app is through a GitHub repository. To run an application from GitHub, you need to enter the command shiny::runGitHub("Repository_Name", "GitHub_Account_Name") in the console.

Zip file
There are two ways to share a Shiny application as a zip file: you can either let the user download the zip file over the web, or you can share it via email, USB stick, memory card, or any other such device. To run a zip file downloaded via the web, you need to type runUrl("Zip_File_URL") in the console.

Package
Certainly, a much more labor-intensive but also publicly effective way is to create a complete R package for your Shiny application. This especially makes sense if you have built an extensive application that may help many other users. Another advantage is the fact that you can also publish your application on CRAN. Later in the book, we will show you how to create an R package.

Deploying your app to the web
After showing you the ways users can download your app and run it on their local machines, we will now check the options for deploying Shiny apps to the web.

Shinyapps.io
http://www.shinyapps.io/ is a Shiny app-hosting service by RStudio.
There is a free-to-use account package, but it is limited to a maximum of five applications, 25 so-called active hours, and apps branded with the RStudio logo. Nevertheless, this service is a great way to publish your own applications quickly and easily to the web. To use http://www.shinyapps.io/ with RStudio, a few R packages and some additional operating system software need to be installed:

RTools (if you use Windows)
GCC (if you use Linux)
Xcode Command Line Tools (if you use Mac OS X)
The devtools R package
The shinyapps package

Since the shinyapps package is not on CRAN, you need to install it from GitHub by using the devtools package:

if (!require("devtools")) install.packages("devtools")
devtools::install_github("rstudio/shinyapps")
library(shinyapps)

When everything that is needed is installed, you are ready to publish your Shiny apps directly from the RStudio IDE. Just click on the Publish icon; in the new window, you will need to log in to your http://www.shinyapps.io/ account once, if you are using it for the first time. From then on, you can directly create a new Shiny app or update an existing app. After clicking on Publish, a new tab called Deploy opens in the console pane, showing you the progress of the deployment process. If something is set incorrectly, you can use the deployment log to find the error. When the deployment is successful, your app will be publicly reachable under its own web address on http://www.shinyapps.io/.

Setting up a self-hosted Shiny server
There are two editions of the Shiny Server software: an open source edition and a professional edition. The open source edition can be downloaded for free, and you can use it on your own server. The professional edition offers a lot more features and support by RStudio, but is also priced accordingly.

Diving into the Shiny ecosystem
Since the Shiny framework is such an awesome and powerful tool, a lot of people, and of course the creators of RStudio and Shiny, have built several packages around it that greatly extend its existing functionality. The almost infinite possibilities for technical and visual individualization that open up when you dig deeply into the Shiny ecosystem would certainly go beyond the scope of this article, so we present only a few important directions to give a first impression.

Creating apps with more files
In this article, you have learned how to build a Shiny app consisting of two files: server.R and ui.R. For completeness, we first want to point out that it is also possible to create a single-file Shiny app. To do so, create a file called app.R, in which you can include both the server.R and the ui.R code, as well as global variables, data, and more. However, if you build larger Shiny apps with multiple functions, datasets, and options, it can become very confusing to do all of it in just one file. Single-file Shiny apps are therefore a good idea for simple and small demonstration apps with a minimal setup. Especially for large Shiny apps, it is recommended that you move extensive custom functions, datasets, images, and more into separate files, but put them into the same directory as the app. An example file setup could look like this:

~/shinyapp
|-- ui.R
|-- server.R
|-- helpers.R
|-- data
|-- www
|-- js
|-- etc

To access the helper file, you just need to add source("helpers.R") to the code of your server.R file. The same logic applies to any other R files.
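As an illustration, the head of such a server.R file could look like the following sketch; the plotHistogram() function is hypothetical and assumed to be defined in helpers.R:

# server.R (sketch): packages and helper files are loaded once at startup
library(shiny)
source("helpers.R")  # assumed to define plotHistogram(data, column)

shinyServer(function(input, output) {
  output$carsPlot <- renderPlot({
    # plotHistogram() is a hypothetical helper, not from the original app
    plotHistogram(mtcars, input$variable)
  })
})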
If you want to read in some data from your data folder, you store it in a variable, again at the head of your server.R file, like this:

myData <- readRDS("data/myDataset.rds")

Expanding the Shiny package
As said earlier, you can expand the functionality of Shiny with several add-on packages. There are currently ten packages available on CRAN with different inbuilt functions to add some extra magic to your Shiny app:

shinyAce: This package makes Ace editor bindings available to enable a rich text-editing environment within Shiny.
shinybootstrap2: The latest Shiny package uses Bootstrap 3, so if you built your app with Bootstrap 2 features, you need to use this package.
shinyBS: This package adds additional features of the original Twitter Bootstrap theme, such as tooltips, modals, and others, to Shiny.
shinydashboard: This package comes from the folks at RStudio and enables the user to create stunning and multifunctional dashboards on top of Shiny.
shinyFiles: This provides functionality for client-side navigation of the server-side file system in Shiny apps.
shinyjs: By using this package, you can perform common JavaScript operations in Shiny applications without having to know any JavaScript.
shinyRGL: This package provides Shiny wrappers for the RGL package. It exposes RGL's ability to export WebGL visualizations in a Shiny-friendly format.
shinystan: This package is, in fact, not a real add-on; shinystan is a fantastic full-blown Shiny application that gives users a graphical interface for Markov chain Monte Carlo simulations.
shinythemes: This package gives you the option of changing the whole look and feel of your application by using different inbuilt Bootstrap themes.
shinyTree: This exposes bindings to jsTree, a JavaScript library that supports interactive trees, to enable rich, editable trees in Shiny.

Of course, you can find a bunch of other packages with similar or even more functionality, extensions, and also comprehensive Shiny apps on GitHub.

Summary
To learn more about Shiny, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Learning Shiny (https://www.packtpub.com/application-development/learning-shiny)
Mastering Machine Learning with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-machine-learning-r)
Mastering Data Analysis with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-data-analysis-r)

Cross-validation in R for predictive models

Pravin Dhandre
17 Apr 2018
8 min read
In today's tutorial, we will efficiently train our first predictive model, using cross-validation in R as the basis of our modeling process, and we will build the corresponding confusion matrix. Most of the functionality comes from the excellent caret package, which has many more features than we can explore in this tutorial.

Before moving on to the training itself, let's understand what a confusion matrix is. A confusion matrix is a summary of prediction results on a classification problem. The numbers of correct and incorrect predictions are summarized with count values and broken down by each class; this is the key to the confusion matrix. The confusion matrix shows the ways in which your classification model is confused when it makes predictions. It gives you insight not only into the errors being made by your classifier but, more importantly, into the types of errors that are being made.

Training our first predictive model
Following best practices, we will use cross-validation (CV) as the basis of our modeling process. Using CV, we can create estimates of how well our model will do with unseen data. CV is powerful, but the downside is that it requires more processing and therefore more time. If you can take the computational complexity, you should definitely take advantage of it in your projects.

Going into the mathematics behind CV is outside of the scope of this tutorial; if interested, you can find more information on cross-validation on Wikipedia. The basic idea is that the training data will be split into various parts, and each of these parts will be taken out of the rest of the training data one at a time, keeping all the remaining parts together. The parts that are kept together will be used to train the model, while the part that was taken out will be used for testing, and this will be repeated by rotating the parts such that every part is taken out once. This allows you to test the training procedure more thoroughly before doing the final testing with the testing data.

We use the trainControl() function to set up our repeated CV mechanism with five splits and two repeats. This object will be passed to our predictive models, created with the caret package, to automatically apply this control mechanism within them:

cv.control <- trainControl(method = "repeatedcv", number = 5, repeats = 2)

Our predictive model pick for this example is Random Forests (RF). We will only very briefly explain what RF are, but the interested reader is encouraged to look into James, Witten, Hastie, and Tibshirani's excellent "Statistical Learning" (Springer, 2013). RF are a non-linear model used to generate predictions. A tree is a structure that provides a clear path from inputs to specific outputs through a branching model. In predictive modeling, trees are used to find limited input-space areas that perform well when providing predictions. RF create many such trees and use a mechanism to aggregate the predictions provided by these trees into a single prediction:

Random forests aggregate trees

To train our model, we use the train() function, passing a formula that signals R to use MULT_PURCHASES as the dependent variable and everything else (~ .) as the independent variables, which are the token frequencies.
It also specifies the data, the method ("rf" stands for random forests), the control mechanism we just created, and the number of tuning scenarios to use:

model.1 <- train(
  MULT_PURCHASES ~ .,
  data = train.dfm.df,
  method = "rf",
  trControl = cv.control,
  tuneLength = 5
)

Improving speed with parallelization
If you actually executed the previous code on your computer before reading this, you may have found that it took a long time to finish (8.41 minutes in our case). As we mentioned earlier, text analysis suffers from very high-dimensional structures, which take a long time to process, and using CV runs makes training take even longer. To cut down on the total execution time, use the doParallel package to let multi-core computers do the training in parallel.

We proceed to create the train_model() function, which takes the data and the control mechanism as parameters. It then makes a cluster object with the makeCluster() function, with a number of available cores (processors) equal to the number of cores in the computer, detected with the detectCores() function. Note that if you're planning on using your computer to do other tasks while you train your models, you should leave one or two cores free to avoid choking your system (you can use makeCluster(detectCores() - 2) to accomplish this). After that, we start our time-measuring mechanism, train our model, print the total time, stop the cluster, and return the resulting model:

train_model <- function(data, cv.control) {
  cluster <- makeCluster(detectCores())
  registerDoParallel(cluster)
  start.time <- Sys.time()
  model <- train(
    MULT_PURCHASES ~ .,
    data = data,
    method = "rf",
    trControl = cv.control,
    tuneLength = 5
  )
  print(Sys.time() - start.time)
  stopCluster(cluster)
  return(model)
}

Now we can retrain the same model much faster. The time reduction will depend on your computer's available resources. In the case of an 8-core system with 32 GB of memory available, the total time was 3.34 minutes instead of the previous 8.41 minutes, which means that, with parallelization, it only took 39% of the original time. Not bad, right? Let's have a look at how the model is trained:

model.1 <- train_model(train.dfm.df, cv.control)

Computing predictive accuracy and confusion matrices
Now that we have our trained model, we can see its results and ask it to compute some predictive accuracy metrics. We start by simply printing the object we get back from the train() function. As can be seen, we have some useful metadata, but what we are concerned with right now is the predictive accuracy, shown in the Accuracy column. From the five values we told the function to use as testing scenarios, the best model was reached when we used 356 out of the 2,007 available features (tokens). In that case, our predictive accuracy was 65.36%.

If we take into account the fact that the proportion of cases with multiple purchases in our data was around 63%, we have made an improvement: if we just guessed the class with the most observations (MULT_PURCHASES being true) for all the observations, we would only have 63% accuracy, while our model achieves about 65%, an improvement of over two percentage points. Keep in mind that this is a randomized process, and the results will be different every time you train these models.
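If you need repeatable results, you can fix the random seed before training. The lines below are a minimal sketch; note that with parallel backends, full reproducibility may additionally require the seeds argument of trainControl():

library(caret)

# Fixing the seed makes the CV splits and random forest runs repeatable;
# train.dfm.df and cv.control are the objects created earlier in this tutorial
set.seed(12345)
model.reproducible <- train(
  MULT_PURCHASES ~ .,
  data = train.dfm.df,
  method = "rf",
  trControl = cv.control,
  tuneLength = 5
)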
This randomness is also why we want repeated CV, as well as various testing scenarios, to make sure that our results are robust:

model.1
#> Random Forest
#>
#> 212 samples
#> 2007 predictors
#> 2 classes: 'FALSE', 'TRUE'
#>
#> No pre-processing
#> Resampling: Cross-Validated (5 fold, repeated 2 times)
#> Summary of sample sizes: 170, 169, 170, 169, 170, 169, ...
#> Resampling results across tuning parameters:
#>
#> mtry Accuracy Kappa
#> 2 0.6368771 0.00000000
#> 11 0.6439092 0.03436849
#> 63 0.6462901 0.07827322
#> 356 0.6536545 0.16160573
#> 2006 0.6512735 0.16892126
#>
#> Accuracy was used to select the optimal model using the largest value.
#> The final value used for the model was mtry = 356.

To create a confusion matrix, we can use the confusionMatrix() function and send it the model's predictions first and the real values second. This will not only create the confusion matrix for us, but also compute some useful metrics such as sensitivity and specificity. We won't go deep into what these metrics mean or how to interpret them, since that's outside the scope of this tutorial, but we highly encourage the reader to study them using the resources cited in this tutorial:

confusionMatrix(model.1$finalModel$predicted, train$MULT_PURCHASES)
#> Confusion Matrix and Statistics
#>
#> Reference
#> Prediction FALSE TRUE
#> FALSE 18 19
#> TRUE 59 116
#>
#> Accuracy : 0.6321
#> 95% CI : (0.5633, 0.6971)
#> No Information Rate : 0.6368
#> P-Value [Acc > NIR] : 0.5872
#>
#> Kappa : 0.1047
#> Mcnemar's Test P-Value : 1.006e-05
#>
#> Sensitivity : 0.23377
#> Specificity : 0.85926
#> Pos Pred Value : 0.48649
#> Neg Pred Value : 0.66286
#> Prevalence : 0.36321
#> Detection Rate : 0.08491
#> Detection Prevalence : 0.17453
#> Balanced Accuracy : 0.54651
#>
#> 'Positive' Class : FALSE

You read an excerpt from R Programming By Example, authored by Omar Trejo Navarro. This book gets you familiar with R's fundamentals and its advanced features, giving you hands-on experience with R's cutting-edge tools for software development.

Getting Started with Predictive Analytics
Here's how you can handle the bias-variance trade-off in your ML models

Introducing Weld, a runtime written in Rust and LLVM for cross-library optimizations

Bhagyashree R
24 Sep 2019
5 min read
Weld is an open-source Rust project for improving the performance of data-intensive applications. It is an interface and runtime that can be integrated into existing frameworks, including Spark, TensorFlow, Pandas, and NumPy, without changing their user-facing APIs.

The motivation behind Weld
Data analytics applications today often require developers to combine various functions from different libraries and frameworks to accomplish a particular task. For instance, a typical Python ecosystem application selects some data using Spark SQL, transforms it using NumPy and Pandas, and trains a model with TensorFlow. This improves developers' productivity, as they are taking advantage of functions from high-quality libraries. However, these functions are usually optimized in isolation, which is not enough to achieve the best application performance.

Weld aims to solve this problem by providing an interface and runtime that can optimize across data-intensive libraries and frameworks while preserving their user-facing APIs. In an interview with Federico Carrone, a Tech Lead at LambdaClass, Weld's main contributor, Shoumik Palkar, shared, "The motivation behind Weld is to provide bare-metal performance for applications that rely on existing high-level APIs such as NumPy and Pandas. The main problem it solves is enabling cross-function and cross-library optimizations that other libraries today don't provide."

How Weld works
Weld serves as a common runtime that allows libraries from different domains, like SQL and machine learning, to represent their computations in a common functional intermediate representation (IR). This IR is then optimized by a compiler optimizer and JIT'd to efficient machine code for diverse parallel hardware. It performs a wide range of optimizations on the IR, including loop fusion, loop tiling, and vectorization. "Weld's IR is natively parallel, so programs expressed in it can always be trivially parallelized," said Palkar.

When Weld was first introduced, it was mainly used for cross-library optimizations. However, over time, people have started to use it for other applications as well. It can be used to build JITs or new physical execution engines for databases or analytics frameworks, to speed up individual libraries, to target new kinds of parallel hardware using the IR, and more.

To evaluate Weld's performance, the team integrated it with popular data analytics frameworks including Spark, NumPy, and TensorFlow. This prototype showed up to 30x improvements over the native framework implementations, while cross-library optimizations between Pandas and NumPy improved performance by up to two orders of magnitude.

Source: Weld

Why Rust and LLVM were chosen for its implementation
The first iteration of Weld was implemented in Scala because of its algebraic data types, powerful pattern matching, and large ecosystem. However, it did have some shortcomings. Palkar shared in the interview, "We moved away from Scala because it was too difficult to embed a JVM-based language into other runtimes and languages." Scala had a managed runtime and a clunky build system, and its JIT compilations were quite slow for larger programs. Because of these shortcomings, the team wanted to redesign the JIT compiler, core API, and runtime from the ground up. They were searching for a language that was fast and safe, didn't have a managed runtime, and provided a rich standard library, functional paradigms, a good package manager, and a great community, and they zeroed in on Rust, which happens to meet all these requirements.
Rust provides a very minimal runtime that requires no setup, and it can be easily embedded into other languages such as Java and Python. To make development easier, it has high-quality packages, known as crates, and functional paradigms such as pattern matching. Lastly, it is backed by a great community.

Read also: "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett

Explaining the reason why they chose LLVM, Palkar said in the interview, "We chose LLVM because its an open-source compiler framework that has wide use and support; we generate LLVM directly instead of C/C++ so we don't need to rely on the existence of a C compiler, and because it improves compilation times (we don't need to parse C/C++ code)."

In a discussion on Hacker News, many users listed other Weld-like projects that developers may find useful. One user commented, "Also worth checking out OmniSci (formerly MapD), which features an LLVM query compiler to gain large speedups executing SQL on both CPU and GPU." Users also talked about Numba, an open-source JIT compiler that translates Python functions to optimized machine code at runtime with the help of the LLVM compiler library. "Very bizarre there is no discussion of numba here, which has been around and used widely for many years, achieves faster speedups than this, and also emits an LLVM IR that is likely a much better starting point for developing a "universal" scientific computing IR than doing yet another thing that further complicates it with fairly needless involvement of Rust," a user added.

To know more about Weld, check out the full interview on Medium. Also, watch this RustConf 2019 talk by Shoumik Palkar:

https://www.youtube.com/watch?v=AZsgdCEQjFo&t

Other news in Programming
Darklang available in private beta
GNU community announces 'Parallel GCC' for parallelism in real-world compilers
TextMate 2.0, the text editor for macOS releases

NetBeans IDE 7: Building an EJB Application

Packt
01 Jun 2011
10 min read
NetBeans IDE 7 Cookbook
Over 70 highly focused practical recipes to maximize your output with NetBeans

Introduction
Enterprise Java Beans (EJB) is a framework of server-side components that encapsulates business logic. These components adhere to strict specifications on how they should behave, ensuring that vendors who wish to implement EJB-compliant code follow common conventions, protocols, and classes, which in turn ensures portability. The EJB components are then deployed in EJB containers, also called application servers, which manage persistence, transactions, and security on behalf of the developer. If you wish to learn more about EJBs, visit http://jcp.org/en/jsr/detail?id=318 or https://www.packtpub.com/developer-guide-for-ejb3/book.

For our EJB application to run, we will need an application server. Application servers are responsible for implementing the EJB specifications and creating the perfect environment for our EJBs to run in. Some of the capabilities supported by EJB and enforced by application servers are:

Remote access
Transactions
Security
Scalability

NetBeans 6.9 or higher supports the new Java EE 6 platform, making it the only IDE so far to bring the full power of EJB 3.1 to a simple IDE interface for easy development. NetBeans makes it easy to develop an EJB application and deploy it on different application servers without the need to over-configure and mess with different configuration files. It's as easy as a right-click on the project node.

Creating an EJB project
In this recipe, we will see how to create an EJB project using the wizards provided by NetBeans.

Getting ready
It is required to have NetBeans with Java EE support installed to continue with this recipe. If this particular NetBeans version is not available on your machine, you can download it from http://download.netbeans.org. There are two application servers in this installation package, Apache Tomcat and GlassFish, and either one can be chosen, but at least one is necessary. In this recipe, we will use the GlassFish version that comes together with the NetBeans 7.0 installation package.

How to do it...
Let's create a new project by either clicking File and then New Project, or by pressing Ctrl+Shift+N. In the New Project window, on the Categories side, choose Java Web, and on the Projects side, select Web Application; then click Next. In Name and Location, under Project Name, enter EJBApplication. Tick the Use Dedicated Folder for Storing Libraries option box. Now either type the folder path or select one by clicking on Browse. After choosing the folder, we can proceed by clicking Next. In Server and Settings, under Server, choose GlassFish Server 3.1. Tick Enable Contexts and Dependency Injection. Leave the other values with their default values and click Finish. The new project structure is created.

How it works...
NetBeans creates a complete file structure for our project. It automatically configures the compiler and test libraries and creates the GlassFish deployment descriptor. The deployment descriptor filename specific to the GlassFish web server is glassfish-web.xml.

Adding JPA support
The Java Persistence API (JPA) is one of the frameworks that equips Java with object/relational mapping. Within JPA, a query language is provided that lets developers abstract away the underlying database.
With the release of JPA 2.0, many areas were improved, among them:

Domain modeling
The EntityManager
Query interfaces
The JPA query language

We are not going to study the inner workings of JPA in this recipe. If you wish to know more about JPA, visit http://jcp.org/en/jsr/detail?id=317 or http://download.oracle.com/javaee/5/tutorial/doc/bnbqa.html. NetBeans provides very good support for enabling your application to quickly create entities annotated with JPA. In this recipe, we will see how to configure your application to use JPA. We will continue to expand the previously-created project.

Getting ready
We will use GlassFish Server in this recipe since it is the only server that supports Java EE 6 at the moment. We also need to have Java DB configured. GlassFish already includes a copy of Java DB in its installation folder; another source of an installed Java DB is the JDK installation directory. It is not necessary to build on top of the previous recipe, but it is imperative to have a database schema. Feel free to create your own entities by following the steps presented in this recipe.

How to do it...
Right-click on the EJBApplication node and select New Entity Classes from Database.... In Database Tables: under Data Source, select jdbc/sample and let the IDE initialize Java DB. When Available Tables is populated, select MANUFACTURER, click Add, and then click Next. In Entity Classes: leave all the fields with their default values, enter entities in Package only, and click Finish.

How it works...
NetBeans then imports and creates our Java class from the database schema, in our case the Manufacturer.java file placed under the entities package. Besides that, NetBeans makes it easy to import and start using the entity straightaway. Many of the most common queries, for example find by name, find by zip, and find all, are already built into the class itself. The JPA queries, which are akin to normal SQL queries, are defined in the entity class itself. Listed below are some of the queries defined in the entity class Manufacturer.java:

@Entity
@Table(name = "MANUFACTURER")
@NamedQueries({
    @NamedQuery(name = "Manufacturer.findAll",
        query = "SELECT m FROM Manufacturer m"),
    @NamedQuery(name = "Manufacturer.findByManufacturerId",
        query = "SELECT m FROM Manufacturer m WHERE m.manufacturerId = :manufacturerId"),

The @Entity annotation defines that this class, Manufacturer.java, is an entity, and the @Table annotation that follows it, with its name parameter, points out the table in the database where the information is stored. The @NamedQueries annotation is the place where all the NetBeans-generated JPA queries are stored; there can be as many named queries as the developer feels necessary. One of the named queries we are using in our example is Manufacturer.findAll, which is a simple select query. When invoked, the query is translated to:

SELECT m FROM Manufacturer m

On top of that, NetBeans implements the equals, hashCode, and toString methods, which are very useful if the entities need to be used straight away with collections such as HashMap. Below is the NetBeans-generated code for the hashCode and equals methods:

@Override
public int hashCode() {
    int hash = 0;
    hash += (manufacturerId != null ?
            manufacturerId.hashCode() : 0);
    return hash;
}

@Override
public boolean equals(Object object) {
    // TODO: Warning - this method won't work in the case the id fields are not set
    if (!(object instanceof Manufacturer)) {
        return false;
    }
    Manufacturer other = (Manufacturer) object;
    if ((this.manufacturerId == null && other.manufacturerId != null)
            || (this.manufacturerId != null
                && !this.manufacturerId.equals(other.manufacturerId))) {
        return false;
    }
    return true;
}

NetBeans also creates a persistence.xml file and provides a visual editor that simplifies the management of different persistence units (in case our project needs to use more than one), making it possible to manage the persistence.xml without even touching the XML code. A persistence unit is defined in persistence.xml, the JPA configuration file, which is placed under Configuration Files when the NetBeans view is in Projects mode. This file defines the data source and the name of the persistence unit. In our example, our persistence unit is named EJBApplicationPU and uses jdbc/sample as the data source:

<persistence-unit name="EJBApplicationPU" transaction-type="JTA">
  <jta-data-source>jdbc/sample</jta-data-source>
  <properties/>
</persistence-unit>

To add more persistence units, click on the Add button placed in the uppermost right corner of the Persistence visual editor.

Creating a Stateless Session Bean
A Session Bean encapsulates business logic in methods, which in turn are executed by a client. This way, the business logic is separated from the client. Stateless Session Beans do not maintain state: when a client invokes a method in a stateless bean, the bean is ready to be reused by another client, and the information stored in the bean is generally discarded when the client stops accessing it. This type of bean is mainly used for persistence purposes, since persistence does not require a conversation with the client. It is not in the scope of this recipe to learn how stateless beans work in detail; if you wish to learn more, please visit http://jcp.org/en/jsr/detail?id=318 or https://www.packtpub.com/developer-guide-for-ejb3/book.

In this recipe, we will see how to use NetBeans to create a Stateless Session Bean that retrieves information from the database, passes it through a servlet, and prints the information on a page that is created on the fly by our servlet.

Getting ready
It is required to have NetBeans with Java EE support installed to continue with this recipe. If this particular NetBeans version is not available on your machine, please visit http://download.netbeans.org. We will use the GlassFish Server in this recipe since it is the only server that supports Java EE 6 at the moment. We also need to have Java DB configured; GlassFish already includes a copy of Java DB in its installation folder. It is possible to follow the steps in this recipe without the previous code, but for better understanding we will continue to build on top of the source code of the previous recipes.

How to do it...
Right-click on the EJBApplication node and select New and Session Bean.... For Name and Location: name the EJB ManufacturerEJB, enter beans under Package, leave Session Type as Stateless, leave Create Interface with nothing marked, and click Finish.
Here are the steps for us to create the business methods. Open ManufacturerEJB and, inside the class body, enter:

@PersistenceUnit
EntityManagerFactory emf;

public List findAll() {
    return emf.createEntityManager()
              .createNamedQuery("Manufacturer.findAll")
              .getResultList();
}

Press Ctrl+Shift+I to resolve the following imports:

java.util.List;
javax.persistence.EntityManagerFactory;
javax.persistence.PersistenceUnit;

Creating the servlet: right-click on the EJBApplication node and select New and Servlet.... For Name and Location: name the servlet ManufacturerServlet and enter servlets under Package; leave all the other fields with their default values and click Next. For Configure Servlet Deployment: leave all the default values and click Finish.

With the ManufacturerServlet open, after the class declaration and before the processRequest method, add:

@EJB
ManufacturerEJB manufacturerEJB;

Then, inside the processRequest method, as the first line after the try statement, add:

List<Manufacturer> l = manufacturerEJB.findAll();

Remove the /* TODO output your page here */ comment markers, and finally replace:

out.println("<h1>Servlet ManufacturerServlet at " + request.getContextPath() + "</h1>");

with:

for (int i = 0; i < 10; i++)
    out.println("<b>City</b> " + l.get(i).getCity() + ", <b>State</b> " + l.get(i).getState() + "<br>");

Resolve all the import errors and save the file.

How it works...
To execute the code produced in this recipe, right-click on the EJBApplication node and select Run. When the browser launches, append /ManufacturerServlet to the end of the URL and hit Enter. Our application will return city and state names.

One of the coolest features in Java EE 6 is that the use of web.xml can be avoided by annotating the servlet. The following code does exactly that:

@WebServlet(name = "ManufacturerServlet", urlPatterns = {"/ManufacturerServlet"})

Since we are working with Java EE 6, our stateless bean does not need the daunting work of creating interfaces; the @Stateless annotation takes care of that, making it easier to develop EJBs. We then add the persistence unit, represented by the EntityManagerFactory and injected by the @PersistenceUnit annotation. Finally, we have our business method that is used from the servlet. The findAll method uses one of the named queries from our entity to fetch information from the database.

Build your own Application to access Twitter using Java and NetBeans: Part 1

Packt
05 Feb 2010
6 min read
Due to the fact that writing a Java app to control your Twitter account is quite a long process and requires several features, I intend to divide this article into several sections, so you can see in extreme detail all the bells and whistles involved in writing Java applications.

Downloading and installing NetBeans for your development platform
To download NetBeans, open a web browser window and go to the NetBeans website. Then click on the Download button and select the All IDE download bundle. After downloading NetBeans, install it with the default options.

Creating your SwingAndTweet project
Open NetBeans and select File | New Project to open the New Project dialog. Now select Java from the Categories panel and Java Application from the Projects panel. Click on Next to continue. The New Java Application dialog will show up next. Type SwingAndTweet in the Project Name field, mark the Use Dedicated Folder for Storing Libraries option, deselect the Create Main Class box (we'll deal with that later), make sure the Set as Main Project box is enabled, and click on Next to continue:

NetBeans will create the SwingAndTweet project and will show it under the Projects tab, in the NetBeans main window. Right-click on the project's name and select JFrame Form... in the pop-up menu: The New JFrame Form window will appear next. Type SwingAndTweetUI in the Class Name field, type swingandtweet in the Package field, and click on Finish to continue: NetBeans will open the SwingAndTweetUI frame in the center panel of the main screen. Now you're ready to assemble your Twitter Java application!

Now let me explain a little bit about what we did in the previous exercise: first, we created a new Java application called SwingAndTweet. Then we created a Swing JFrame component and named it SwingAndTweetUI, because this is going to act as the foundation, where we're going to put all the other Swing components required to interact with Twitter. Now I'm going to show you how to download the Twitter4J API and integrate it into your SwingAndTweet Java application.

Downloading and integrating the Twitter4J API into your NetBeans environment
For us to be able to use the powerful classes and methods from the Twitter4J API, we need to tell NetBeans where to find them and integrate them into our Java applications. Open a web browser window, go to http://repo1.maven.org/maven2/net/homeip/yusuke/twitter4j/ and search for the latest twitter4j-2.X.X.jar file, or download the most recent version at the time of this writing from here: http://repo1.maven.org/maven2/net/homeip/yusuke/twitter4j/2.0.9/twitter4j-2.0.9.jar.

Once you download it to your computer, go to NetBeans, right-click on the SwingAndTweet project, and select Properties from the context menu. Once at the project properties screen, select the Libraries category under the Categories panel, click on the Add JAR/Folder... button at the middle-right part of the screen to open the Add JAR/Folder dialog, navigate to the directory where you downloaded the twitter4j-2.X.X.jar file, and double-click on it to add it to your project's library path: Click on OK to close the Project Properties dialog and return to the NetBeans main screen.

OK, you have integrated the Twitter4J API into your SwingAndTweet application. Now, let's see how to log into your Twitter account from our Java application...
Logging into Twitter from Java and seeing your last tweet

In the following exercise, I'll show you how easy it is to start communicating with Twitter from a Java application, thanks to the Twitter class from the Twitter4J API. You'll also learn how to check your last tweet through your Java application. Let's see how to log into a Twitter account.

Go to the Palette window and locate the JLabel component under the Swing Controls section; then drag and drop it into the SwingAndTweetUI JFrame component. Now drag a Button and a Text Field, too. Once you have the three controls inside the SwingAndTweetUI JFrame control, arrange them as shown below.

The next step is to change their names and captions to make our application look more professional. Right-click on the jLabel1 control, select Edit from the context menu, type My Last Tweet and hit Enter. Do the same procedure with the other two controls: erase the text in the jTextField1 control and type Login in the jButton1 control. Rearrange the jLabel1 and jTextField1 controls, and drag one of the ends of jTextField1 to increase its length as much as you can.

And now, let's inject some life into our application! Double-click on the jButton1 control to open your application's code window. You'll be inside a Java method called jButton1ActionPerformed. This method will execute every time you click on the Login button, and this is where we're going to put all the code for logging into your Twitter account. Delete the // TODO add your handling code here: line and type the login code inside the jButton1ActionPerformed method (a sketch of this code appears at the end of this section). Remember to replace username and password with your real Twitter username and password. If you look closely at the line numbers, you'll notice there are five error icons on lines 82, 84, 85, 88 and 89. That's because we need to add some import lines at the beginning of the code to tell NetBeans where to find the Twitter and JOptionPane classes and the TwitterException. Scroll up until you locate the package swingandtweet; line, then add the required import lines. All the errors will then disappear from your code.

To see your Java application in action, press F6 or select Run | Run Main Project from the NetBeans main menu. The Run Project window will pop up, asking you to select the main class for your project. The swingandtweet.SwingAndTweetUI class will already be selected, so just click on OK to continue. Your SwingAndTweetUI application window will appear next, showing the three controls you created. Click on the Login button and wait for the SwingAndTweet application to validate your Twitter username and password. If they're correct, a confirmation dialog will pop up. Click on OK to return to your SwingAndTweet application. Now you will see your last tweet in the textbox control.

If you want to be really sure it's working, go to your Twitter account and update your status through the web interface; for example, type Testing my Java app. Then return to your SwingAndTweet application and click on the Login button again to see your last tweet. The textbox control will now reflect your latest tweet. As you can see, your SwingAndTweet Java application can now communicate with your Twitter account! Click on the X button to close the window and exit your SwingAndTweet application.
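The code listings for this exercise appeared as screenshots in the original article. As a rough reconstruction, here is a minimal sketch of what the import lines and the jButton1ActionPerformed method could look like; it assumes the old Twitter4J 2.0.x API, where the Twitter class still accepted a username and password in its constructor, and it uses the jTextField1 control created above:

package swingandtweet;

import java.util.List;
import javax.swing.JOptionPane;
import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterException;

// ... inside the SwingAndTweetUI class generated by NetBeans:
private void jButton1ActionPerformed(java.awt.event.ActionEvent evt) {
    try {
        // Log in with your real Twitter credentials
        Twitter twitter = new Twitter("username", "password");
        twitter.verifyCredentials();
        JOptionPane.showMessageDialog(this, "Login successful!");
        // Fetch your timeline and show the most recent tweet
        List<Status> statuses = twitter.getUserTimeline();
        jTextField1.setText(statuses.get(0).getText());
    } catch (TwitterException e) {
        JOptionPane.showMessageDialog(this, "Login failed: " + e.getMessage());
    }
}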


Introduction to cloud computing with Microsoft Azure

Packt
13 Jan 2011
6 min read
What is an enterprise application?

Before we hop into the cloud, let's talk about who this book is for. Who are "enterprise developers"? In the United States, over half of the economy consists of small businesses, usually privately owned, with a couple dozen employees and revenues up to the millions of dollars. The applications that run these businesses have lower requirements because of smaller data volumes and a low number of application users, and a single server may host several applications. Many of the business needs for these companies can be met with off-the-shelf software requiring little to no modification.

A smaller part of the United States economy is made up of huge publicly owned corporations: think Microsoft, Apple, McDonald's, Coca-Cola, Best Buy, and so on. These companies have thousands of employees and revenues in the billions of dollars. Because these companies are publicly owned, they are subject to tight regulatory scrutiny. The applications utilized by these companies must faithfully keep track of an immense amount of data for use by hundreds or thousands of users, and must comply with all manner of regulations. The infrastructure for a single application may involve dozens of servers. A team of consultants is often retained to install and maintain the critical systems of a business, and there is often an ecosystem of internal applications built around the enterprise systems that are just as critical. These are the applications we consider to be "enterprise applications", and the people who develop and extend them are "enterprise developers". The high availability of cloud platforms makes them attractive for hosting these critical applications, and there are many options available to the enterprise developer.

What is cloud computing?

At its most basic, cloud computing is moving applications accessible from our internal network onto an internet (cloud)-accessible space. We're essentially renting virtual machines in someone else's data center, with the capabilities for immediate scale-out, failover, and data synchronization. In the past, having an Internet-accessible application meant we were building a website with a hosted database. Cloud computing changes that paradigm: our application could be a website, or it could be a client installed on a local PC accessing a common data store from anywhere in the world. The data store could be internal to our network or itself hosted in the cloud.

The following diagram outlines three ways in which cloud computing can be utilized for an application. In option 1, both data and application are hosted in the cloud; in option 2, the application is hosted in the cloud and the data locally; and in option 3, the data is hosted in the cloud and the application locally.

The expense (or cost) model is also very different. In our local network, we have to buy the hardware and software licenses, install and configure the servers, and finally maintain them. All this comes in addition to building and maintaining the application! In cloud computing, the host usually handles all the installation, configuration, and maintenance of the servers, allowing us to focus mostly on the application. The direct costs of running our application in the cloud are only for each machine-hour of use and for storage utilization.

The individual pieces of cloud computing have all been around for some time. Shared mainframes and supercomputers have long billed their end users based on resource consumption.
Space for websites can be rented on a monthly basis. Providers offer specialized application hosting and, relatively recently, leased virtual machines have also become available. If there is anything revolutionary about cloud computing, it is its ability to combine all the best features of these different components into a single affordable service offering.

Some benefits of cloud computing

Cloud computing sounds great so far, right? So, what are some of the tangible benefits of cloud computing? Does cloud computing merit all the attention? Let's have a look at some of the advantages:

Low up-front cost: At the top of the benefits list is probably the low up-front cost. With cloud computing, someone else is buying and installing the servers, switches, and firewalls, among other things. In addition to the hardware, software licenses and assurance plans are expensive on the enterprise level, even with a purchasing agreement. In most cloud services, including Microsoft's Azure platform, we do not need to purchase separate licenses for operating systems or databases; in Azure, the costs include licenses for the Windows Azure OS and SQL Azure. As a corollary, someone else is responsible for the maintenance and upkeep of the servers: no more tape backups that must be rotated and sent to off-site storage, no extensive strategies and lost weekends bringing servers up to the current release level, and no more counting the minutes until the early morning delivery of a hot-swap fan to replace the one that burned out the previous afternoon.

Easier disaster recovery and storage management: With synchronized storage across multiple data centers, located in different regions of the same country or even in different countries, disaster recovery planning becomes significantly easier. If capacity needs to be increased, it can be done quite easily by logging into a control panel and turning on an additional VM. It would be a rare instance indeed when our provider doesn't sell us additional capacity. When the need for capacity passes, we can simply turn off the VMs we no longer need and pay only for the uptime and storage utilization.

Simplified migration: Migration from a test to a production environment is greatly simplified. In Windows Azure, we can test an updated version of our application in a local sandbox environment. When we're ready to go live, we deploy our application to a staged environment in the cloud and, with a few mouse clicks in the control panel, we turn off the live virtual machine and activate the staging environment as the live machine; we barely miss a beat! The migration can be performed well in advance of the cut-over, so daytime migrations and midnight cut-overs can become routine. Should something go wrong, the environments can be easily reversed and the issues analyzed the following day.

Familiar environment: Finally, the environment we're working in is very familiar. In Azure's case, the environment can include the capabilities of IIS and .NET (or Java or PHP and Apache), with Windows and SQL Server or MySQL. One of the great features of Windows is that it can be configured in so many ways, and to an extent, Azure can also be configured in many ways, supporting a rich and familiar application environment.

Welcome to the Spring Framework

Packt
30 Apr 2015
17 min read
In this article by Ravi Kant Soni, author of the book Learning Spring Application Development, you will be closely acquainted with the Spring Framework. Spring is an open source framework created by Rod Johnson to address the complexity of enterprise application development, and it has long been the de facto standard for Java enterprise software development. The framework was designed with developer productivity in mind, which makes it easier to work with the existing Java and JEE APIs. Using Spring, we can develop standalone applications, desktop applications, two-tier applications, web applications, distributed applications, enterprise applications, and so on.

Features of the Spring Framework

Lightweight: Spring is described as a lightweight framework when it comes to size and transparency. Lightweight frameworks reduce complexity in application code and also avoid unnecessary complexity in their own functioning.
Non-intrusive: Non-intrusive means that your domain logic code has no dependencies on the framework itself. Spring is designed to be non-intrusive.
Container: Spring's container is a lightweight container that contains and manages the life cycle and configuration of application objects.
Inversion of Control (IoC): Inversion of Control is an architectural pattern. It describes Dependency Injection performed by external entities instead of each component creating its own dependencies.
Aspect-oriented programming (AOP): Aspect-oriented programming refers to the programming paradigm that isolates supporting functions from the main program's business logic. It allows developers to build the core functionality of a system without making it aware of the secondary requirements of that system.
JDBC exception handling: The JDBC abstraction layer of the Spring Framework offers an exception hierarchy that simplifies the error handling strategy.
Spring MVC Framework: Spring comes with an MVC web application framework to build robust and maintainable web applications.
Spring Security: Spring Security offers a declarative security mechanism for Spring-based applications, which is a critical aspect of many applications.

ApplicationContext

ApplicationContext is defined by the org.springframework.context.ApplicationContext interface. BeanFactory provides basic functionality, while ApplicationContext provides the advanced features that make our Spring applications enterprise-level applications. Create an ApplicationContext by using the ClassPathXmlApplicationContext framework API. This API loads the beans configuration file and takes care of creating and initializing all the beans mentioned in the configuration file:

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MainApp {

  public static void main(String[] args) {
    ApplicationContext context =
        new ClassPathXmlApplicationContext("beans.xml");
    HelloWorld helloWorld =
        (HelloWorld) context.getBean("helloworld");
    helloWorld.getMessage();
  }
}
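The beans.xml file loaded above is not shown in the article; a minimal sketch of what it could contain follows (the HelloWorld class, its package, and its message property are assumptions):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

  <!-- The bean id "helloworld" matches the name passed to getBean() -->
  <bean id="helloworld" class="com.example.HelloWorld">
    <property name="message" value="Hello World!" />
  </bean>

</beans>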
Autowiring modes

There are five modes of autowiring that can be used to instruct the Spring container to use autowiring for Dependency Injection. You use the autowire attribute of the <bean/> element to specify the autowire mode for a bean definition. The different modes of autowire are:

no: By default, Spring bean autowiring is turned off, meaning no autowiring is performed. You should use the explicit bean reference ref for wiring purposes.
byName: This autowires by the property name. If a bean property has the same name as another bean, it is autowired. The setter method is used for this type of autowiring to inject the dependency.
byType: The data type is used for this type of autowiring. If the data type of a bean property is compatible with the data type of another bean, it is autowired. Only one bean of the type should be configured in the configuration file; otherwise, a fatal exception will be thrown.
constructor: This is similar to byType autowiring, but here a constructor is used to inject dependencies.
autodetect: Spring first tries to autowire by constructor; if this does not work, it tries byType. This option is deprecated.

Stereotype annotations

Generally, @Component, the parent stereotype annotation, can define all beans. The different stereotype annotations (all applied at the type level) are:

@Component: This is a generic stereotype annotation for any Spring-managed component.
@Service: This stereotypes a component as a service and is used when defining a class that handles the business logic.
@Controller: This stereotypes a component as a Spring MVC controller. It is used when defining a controller class, which composes the presentation layer and is available only in Spring MVC.
@Repository: This stereotypes a component as a repository and is used when defining a class that handles the data access logic and provides translations for exceptions occurring at the persistence layer.

Annotation-based container configuration

For a Spring IoC container to recognize annotations, the following definition must be added to the configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.2.xsd">

  <context:annotation-config />

</beans>

Aspect-oriented programming (AOP) support in Spring

AOP is used in Spring to provide declarative enterprise services, especially as a replacement for EJB declarative services. Application objects do what they're supposed to do (perform business logic) and nothing more. They are not responsible for, or even aware of, other system concerns, such as logging, security, auditing, locking, and event handling. AOP is a methodology for applying middleware services, such as security services, transaction management services, and so on, to a Spring application.

Declaring an aspect

An aspect can be declared by annotating a POJO class with the @Aspect annotation, which requires importing org.aspectj.lang.annotation.Aspect. The following code snippet represents an aspect declaration in the @AspectJ form:

import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component("myAspect")
public class AspectModule {
  // ...
}

JDBC with the Spring Framework

The DriverManagerDataSource class is used to configure the DataSource for the application, which is defined in the Spring.xml configuration file.
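A minimal sketch of creating this DataSource and handing it to the JdbcTemplate described next (the driver class, connection URL, credentials, and employee table are placeholder assumptions):

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class JdbcDemo {

  public static void main(String[] args) {
    // Configure the DataSource programmatically; the same properties
    // can instead be set on a <bean/> definition in Spring.xml
    DriverManagerDataSource dataSource = new DriverManagerDataSource();
    dataSource.setDriverClassName("com.mysql.jdbc.Driver");
    dataSource.setUrl("jdbc:mysql://localhost:3306/test");
    dataSource.setUsername("root");
    dataSource.setPassword("secret");

    // JdbcTemplate takes care of connections, statements, and cleanup
    JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    Integer count = jdbcTemplate.queryForObject(
        "SELECT COUNT(*) FROM employee", Integer.class);
    System.out.println("Employees: " + count);
  }
}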
The central class of Spring JDBC's abstraction framework is the JdbcTemplate class, which includes the most common logic in using the JDBC API to access data (such as handling the creation of connections, creation of statements, execution of statements, and release of resources). The JdbcTemplate class resides in the org.springframework.jdbc.core package. JdbcTemplate can be used to execute different types of SQL statements. DML is an abbreviation of data manipulation language and is used to retrieve, modify, insert, update, and delete data in a database; examples of DML are SELECT, INSERT, and UPDATE statements. DDL is an abbreviation of data definition language and is used to create or modify the structure of database objects in a database; examples of DDL are CREATE, ALTER, and DROP statements.

The JDBC batch operation in Spring

The JDBC batch operation allows you to submit multiple SQL statements to the DataSource for processing at once. Submitting multiple statements together instead of separately improves performance.

Hibernate with the Spring Framework

Data persistence is the ability of an object to save its state so that it can regain the same state later. Hibernate is one of the ORM libraries available to the open source community, and a major option for Java developers, with features such as a POJO-based approach and support for relationship definitions. The object query language used by Hibernate is called Hibernate Query Language (HQL). HQL is an SQL-like textual query language working at a class level or a field level. Let's start learning the architecture of Hibernate.

Hibernate annotations are a powerful way to provide the metadata for object and relational table mapping. Hibernate provides an implementation of the Java Persistence API, so we can use JPA annotations with model beans, and Hibernate will take care of configuring them for use in CRUD operations. The JPA annotations are:

@Entity: The javax.persistence.Entity annotation is used to mark a class as an entity bean that can be persisted by Hibernate, as Hibernate provides the JPA implementation.
@Table: The javax.persistence.Table annotation is used to define table mapping and unique constraints for various columns. The @Table annotation provides four attributes, which allow you to override the name of the table, its catalogue, and its schema, and to enforce unique constraints on columns in the table. For now, we will just use the table name Employee.
@Id: Each entity bean has a primary key, which you annotate on the class with the @Id annotation. The javax.persistence.Id annotation is used to define the primary key for the table. By default, the @Id annotation will automatically determine the most appropriate primary key generation strategy to be used.
@GeneratedValue: javax.persistence.GeneratedValue is used to define the field that will be autogenerated. It takes two parameters, strategy and generator. The GenerationType.IDENTITY strategy is used so that the generated id value is mapped to the bean and can be retrieved in the Java program.
@Column: javax.persistence.Column is used to map the field to the table column. We can also specify the length, nullability, and uniqueness of the bean's properties.
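Putting those annotations together, a minimal sketch of the Employee entity might look like this (the column names and field types are assumptions):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "Employee")
public class Employee {

  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  private int id;

  @Column(name = "name", length = 50, nullable = false)
  private String name;

  // Getters and setters omitted for brevity
}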
Object-relational mapping (ORM, O/RM, and O/R mapping)

ORM stands for Object-relational Mapping. ORM is the process of persisting objects in a relational database such as an RDBMS. ORM bridges the gap between object and relational schemas, allowing an object-oriented application to persist objects directly, without having to convert objects to and from a relational format.

Hibernate Query Language (HQL)

Hibernate Query Language (HQL) is an object-oriented query language that works on persistent objects and their properties instead of operating on tables and columns. To use HQL, we need a query object. The Query interface is an object-oriented representation of HQL and provides many methods; let's take a look at a few of them:

public int executeUpdate(): This is used to execute update or delete queries.
public List list(): This returns the result of the query as a list.
public Query setFirstResult(int rowno): This specifies the row number from which records will be retrieved.
public Query setMaxResults(int rowno): This specifies the number of records to be retrieved from the relation (table).
public Query setParameter(int position, Object value): This sets the value of a JDBC-style query parameter.
public Query setParameter(String name, Object value): This sets the value of a named query parameter.
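A short sketch of the Query interface in use, assuming an open Hibernate Session supplied by the caller and the Employee entity from earlier (the HQL string and parameter values are illustrative):

import java.util.List;
import org.hibernate.Query;
import org.hibernate.Session;

public class HqlDemo {

  // Assumes an open Hibernate Session is supplied by the caller
  public List findEmployees(Session session) {
    // HQL works on the entity and its properties, not on table columns
    Query query = session.createQuery("from Employee e where e.id > :minId");
    query.setParameter("minId", 10);  // bind the named parameter
    query.setMaxResults(5);           // retrieve at most five records
    return query.list();              // execute and return the results
  }
}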
The Spring Web MVC Framework

The Spring Framework supports web application development by providing comprehensive and intensive support. The Spring MVC Framework is a robust, flexible, and well-designed framework that cleanly separates the development of a web application into Model, View, and Controller. In the MVC design pattern, the Model represents the data of a web application; the View represents the UI, that is, the user interface components, such as checkboxes, textboxes, and so on, that are used to display web pages; and the Controller processes the user request. The Spring MVC Framework supports the integration of other frameworks, such as Struts and WebWork, in a Spring application. It also helps in integrating view technologies, such as Java Server Pages (JSP), Velocity, Tiles, and FreeMarker, in a Spring application. The Spring MVC Framework is designed around a DispatcherServlet, which dispatches the HTTP request to a handler, which is a very simple controller interface. The Spring MVC Framework provides the following web support features:

Powerful configuration of framework and application classes: The Spring MVC Framework provides a powerful and straightforward configuration of framework and application classes (such as JavaBeans).
Easier testing: Most of the Spring classes are designed as JavaBeans, which enable you to inject test data using their setter methods. The Spring MVC Framework also provides classes to handle Hyper Text Transfer Protocol (HTTP) requests (HttpServletRequest), which makes unit testing of a web application much simpler.
Separation of roles: Each component of the Spring MVC Framework performs a different role during request handling. A request is handled by components such as the controller, validator, model object, view resolver, and the HandlerMapping interface. The whole task is distributed across these components, providing a clear separation of roles.
No duplication of code: In the Spring MVC Framework, we can use existing business code in any component of the Spring MVC application, so no duplication of code arises.
Specific validation and binding: Validation errors are displayed when mismatched data is entered in a form.

DispatcherServlet in Spring MVC

The DispatcherServlet of the Spring MVC Framework is an implementation of the front controller pattern and is a Java Servlet component for Spring MVC applications. DispatcherServlet is a front controller class that receives all incoming HTTP client requests for the Spring MVC application. It is also responsible for initializing the framework components that will be used to process the request at various stages. The following code snippet declares the DispatcherServlet in the web.xml deployment descriptor:

<servlet>
  <servlet-name>SpringDispatcher</servlet-name>
  <servlet-class>
    org.springframework.web.servlet.DispatcherServlet
  </servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
  <servlet-name>SpringDispatcher</servlet-name>
  <url-pattern>/</url-pattern>
</servlet-mapping>

In the preceding code snippet, the user-defined name of the DispatcherServlet is SpringDispatcher, enclosed in the <servlet-name> element. When our newly created SpringDispatcher servlet is loaded in a web application, it loads an application context from an XML file. DispatcherServlet will try to load the application context from a file named SpringDispatcher-servlet.xml, located in the application's WEB-INF directory:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:mvc="http://www.springframework.org/schema/mvc"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.0.xsd
        http://www.springframework.org/schema/mvc
        http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd">

  <mvc:annotation-driven />

  <context:component-scan base-package="org.packt.Spring.chapter7.springmvc" />

  <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix" value="/WEB-INF/views/" />
    <property name="suffix" value=".jsp" />
  </bean>

</beans>
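With component scanning and the view resolver configured above, a minimal annotated controller might look like this sketch (the mapping path, attribute name, and view name are assumptions):

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class HelloController {

  @RequestMapping("/hello")
  public String hello(Model model) {
    model.addAttribute("message", "Welcome to Spring MVC");
    // "hello" is resolved to /WEB-INF/views/hello.jsp by the view resolver
    return "hello";
  }
}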
Spring Security

The Spring Security framework is the de facto standard for securing Spring-based applications. It provides security services for enterprise Java software applications by handling authentication and authorization at both the web request level and the method invocation level. The two major operations provided by Spring Security are as follows:

Authentication: Authentication is the process of assuring that users are who they claim to be. It's a combination of identification and verification. The identification process can be performed in a number of different ways, that is, with a username and password that can be stored in a database, LDAP, or CAS (a single sign-on protocol), and so on. Spring Security provides a password encoder interface to make sure that the user's password is hashed.
Authorization: Authorization provides access control to an authenticated user. It's the process of assurance that the authenticated user is allowed to access only those resources that he/she is authorized to use. Take the example of an HR payroll application, where some parts of the application are accessible only to HR while all employees have access to other parts. The access rights given to users of the system determine the access rules.

In a web-based application, this is often done with URL-based security, implemented using filters that play a primary role in securing a Spring web application. Sometimes, URL-based security is not enough, because URLs can be manipulated and can have relative paths. So, Spring Security also provides method-level security: an authorized user will only be able to invoke those methods that he is granted access to.

Securing a web application's URL access

HttpServletRequest is the starting point of a Java web application. To configure web security, it's required to set up a filter that provides various security features. In order to enable Spring Security, add the filter and its mapping in the web.xml file:

<!-- Spring Security -->
<filter>
  <filter-name>springSecurityFilterChain</filter-name>
  <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>

<filter-mapping>
  <filter-name>springSecurityFilterChain</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

Logging in to a web application

Spring Security supports multiple ways for users to log in to a web application:

HTTP basic authentication: This is supported by Spring Security by processing the basic credentials presented in the header of the HTTP request. It's generally used with stateless clients, which pass their credentials on each request.
Form-based login service: Spring Security supports the form-based login service by providing a default login form page for users to log in to the web application.
Logout service: Spring Security supports logout services that allow users to log out of the application.
Anonymous login: This service grants authority to an anonymous user, much like a normal user.
Remember-me support: This remembers the identity of a user across multiple browser sessions.

Encrypting passwords

Spring Security supports several hashing algorithms, such as MD5 (Md5PasswordEncoder), SHA (ShaPasswordEncoder), and BCrypt (BCryptPasswordEncoder), for password encryption. To enable the password encoder, use the <password-encoder/> element and set the hash attribute, as shown in the following code snippet:

<authentication-manager>
  <authentication-provider>
    <password-encoder hash="md5" />
    <jdbc-user-service data-source-ref="dataSource"
    . . .
  </authentication-provider>
</authentication-manager>

Mail support in the Spring Framework

The Spring Framework provides a simplified API and plugin for full e-mail support, which minimizes the effect of the underlying e-mailing system specifics. Spring's e-mail support provides an abstract, easy, and implementation-independent API for sending e-mails. The Spring Framework provides an API to simplify the use of the JavaMail API; its classes handle initialization, cleanup operations, and exceptions. The packages for the JavaMail API support provided by the Spring Framework are:

org.springframework.mail: This defines the basic set of classes and interfaces for sending e-mails.
org.springframework.mail.javamail: This defines JavaMail API-specific classes and interfaces for sending e-mails.
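A minimal sketch of sending a message through this abstraction (the SMTP host and addresses are placeholders):

import org.springframework.mail.SimpleMailMessage;
import org.springframework.mail.javamail.JavaMailSenderImpl;

public class MailDemo {

  public static void main(String[] args) {
    // JavaMailSenderImpl wraps the JavaMail API behind Spring's MailSender abstraction
    JavaMailSenderImpl mailSender = new JavaMailSenderImpl();
    mailSender.setHost("smtp.example.com");

    SimpleMailMessage message = new SimpleMailMessage();
    message.setFrom("noreply@example.com");
    message.setTo("someone@example.com");
    message.setSubject("Hello from Spring");
    message.setText("Spring's mail abstraction at work.");

    mailSender.send(message);
  }
}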
Spring's Java Messaging Service (JMS)

Java Message Service is a Java Message-Oriented Middleware (MOM) API responsible for sending messages between two or more clients, and is a part of the Java Enterprise Edition. A JMS broker is similar to a postman, acting as a middleware between the message sender and the receiver. A message is nothing but bytes of data or information exchanged between two parties; different specifications describe messages in various ways, but a message is simply a unit of communication. A message can be used to transfer a piece of information from one application to another, which may or may not run on the same platform.

The JMS application

Let's look at the sample JMS application shown in the following diagram. We have a Sender and a Receiver. The Sender is responsible for sending a message and the Receiver is responsible for receiving it. We need a broker, or MOM, between the Sender and Receiver, which takes the sender's message and passes it across the network to the receiver. Message-oriented middleware (MOM) is basically an MQ application such as ActiveMQ or IBM MQ, two different message providers. The arrangement promises loose coupling: the sender can be a .NET or mainframe-based application, the receiver can be a Java or Spring-based application, and the receiver can send a message back to the sender as well. This is two-way, loosely coupled communication.

Summary

This article covered the architecture of the Spring Framework and how to set up the key components of the Spring application development environment.


Process Driven SOA Development

Packt
13 Sep 2010
9 min read
Business Process Management and SOA

One of the major benefits of a Service-Oriented Architecture is its ability to align IT with business processes. Business processes are important because they define the way business activities are performed, and they change as the company evolves, improves its operations, and works to stay competitive. Today, IT is an essential part of business operations; companies are simply unable to do business without IT support. However, this places a high level of responsibility on IT, and an important part of this responsibility is the ability to react to changes in a quick and efficient manner. Ideally, IT should respond instantly to business process changes. In most cases, however, IT is not flexible enough to adapt application architecture to changes in business processes quickly. Software developers require time to modify application behavior, and in the meantime the company is stuck with old processes. In a highly competitive marketplace such delays are dangerous, and the threat is exacerbated by a reliance on traditional software development to make quick changes within an increasingly complex IT architecture.

The major problem with traditional approaches to software development is the huge semantic gap between IT and the process models. The traditional approach to software development has been focused on functionalities rather than on end-to-end support for business processes. It usually requires the definition of use cases, sequence diagrams, class diagrams, and other artifacts, which bring us to the actual code in a programming language such as Java, C#, C++, and so on.

SOA reduces the semantic gap by introducing a development model that aligns the IT development cycle with the business process lifecycle. In SOA, business processes can be executed directly and integrated with existing applications through services. To understand this better, let's look at the four phases of the SOA lifecycle:

Process modeling: This is the phase in which process analysts work with process owners to analyze the business process and define the process model. They define the activity flow, information flow, roles, and business documents. They also define business policies and constraints, business rules, and performance measures. Performance measures are often called Key Performance Indicators (KPIs); examples include activity turnaround time and activity cost. Usually Business Process Modeling Notation (BPMN) is used in this phase.
Process implementation: This is the phase in which developers work with process analysts to implement the business process, with the objective of providing end-to-end support for the process. In an SOA approach, this phase includes process implementation with the Business Process Execution Language (BPEL), process decomposition to services, implementation or reuse of services, and integration.
Process execution and control: This is the actual execution phase, in which the process participants execute the various activities of the process. For end-to-end support of business processes, it is very important that IT drives the process and directs process participants to execute activities, and not vice versa, where the actual process drivers are employees. In SOA, processes execute on a process server.
Process control is an important part of this phase, during which process supervisors or process managers check whether the process is executing optimally. If delays occur, exceptions arise, resources are unavailable, or other problems develop, process supervisors or managers can take corrective actions.
Process monitoring and optimization: This is the phase in which process owners monitor the KPIs of the process using Business Activity Monitoring (BAM). Process analysts, process owners, process supervisors, and key users examine the process and analyze the KPIs while taking into account changing business conditions. They examine business issues and make optimizations to the business process.

The following figure shows how a process enters this cycle and goes through the various stages. Once optimizations have been identified and selected, the process returns to the modeling phase, where the optimizations are applied. The process is then re-implemented and the whole lifecycle is repeated. This is referred to as an iterative-incremental lifecycle, because the process is improved at each stage.

Organizational aspects of SOA development

SOA development, as described in the previous section, differs considerably from traditional development. SOA development is process-centric and keeps the modeler and the developer focused on the business process and on end-to-end support for the process, thereby efficiently reducing the gap between business and IT. The success of the SOA development cycle relies on correct process modeling; only when processes are modeled in detail can we develop end-to-end support that will work. Exceptional process flows also have to be considered. This can be a difficult task, one that is beyond the scope of the IT department (particularly when viewed from the traditional perspective).

To make process-centric SOA projects successful, some organizational changes are required. Business users with a good understanding of the process must be motivated to actively participate in the process modeling. Their active participation must not be taken for granted, lest they find other work "more useful," particularly if they do not see the added value of process modeling. Therefore, a concise explanation of why process modeling makes sense can be a very valuable time investment. A good strategy is to gain top management support. It makes enormous sense to explain two key factors to top management: first, why a process-centric approach and end-to-end support for processes makes sense, and second, why the IT department cannot successfully complete the task without the participation of business users. Usually top management will understand the situation rather quickly and will instruct business users to participate.

Obviously, the proposed process-centric development approach must become an ongoing activity, which will require the formalization of certain organizational structures; otherwise, it will be necessary to seek approval for each and every project. We have already seen that the proposed approach outgrows the organizational limits of the IT department. Many organizations establish a BPM/SOA Competency Center, which includes business users and all the other profiles required for SOA development: the process analyst, process implementation, service development, and presentation layer groups, as well as SOA governance. Perhaps the greatest responsibility of SOA development is to orchestrate these groups so that they work towards a common goal.
This is the responsibility of the project manager, who must work in close connection with the governance group. Only in this way can SOA development be successful, both in the short term (developing end-to-end applications for business processes) and in the long term (developing a flexible, agile IT architecture that is aligned with business needs).

Technology aspects of SOA development

SOA introduces technologies and languages that enable the SOA development approach. Particularly important are BPMN, which is used for business process modeling, and BPEL, which is used for business process execution.

BPMN is the key technology for process modeling, and the process analyst group must have in-depth knowledge of BPMN and process modeling concepts. When modeling processes for SOA, they must be modeled in detail, because we model business processes with the objective of implementing them in BPEL and executing them on the process server. Process models can be made executable only if all the information relevant to the actual execution is captured. We must identify the individual activities that are atomic from the perspective of execution, and we must model exceptional scenarios too. Exceptional scenarios define how the process behaves when something goes wrong, and in the real world business processes can and do go wrong. We must model how to react to exceptional situations and how to recover appropriately.

Next, we automate the process. This requires mapping the BPMN process model into an executable representation in BPEL, which is the responsibility of the process implementation group. BPMN can be converted to BPEL almost automatically, and vice versa, which guarantees that the process map is always in sync with the executable code. However, the executable BPEL process also has to be connected with the business services: each process activity is connected with the corresponding business service, and the business services are responsible for fulfilling the individual process activities.

SOA development is most efficient if you have a portfolio of business services that can be reused, including lower-level and intermediate technical services. Business services can be developed from scratch, exposed from existing systems, or outsourced. This task is the responsibility of the service development group. In theory, it makes sense for the service development group to first develop all business services; only then would the process implementation group start to compose those services into processes. However, in the real world this is often not the case, because you will probably not have the luxury of time to develop the services first and only then start on the processes. Even if you do have enough time, it would be difficult to know in advance which business services will be required by the processes. Therefore, both groups usually work in parallel, which is a great challenge. It requires interaction between them and strict, concise supervision by the SOA governance group and the project manager; otherwise, the results of the two groups will be incompatible.

Once you have successfully implemented the process, it can be deployed on the process server. In addition to executing processes, a process server provides other valuable information, including a process audit trail, lists of successfully completed processes, and a list of terminated or failed processes.
This information is helpful in controlling the process execution and in taking any necessary corrective measures. The services and processes communicate using the Enterprise Service Bus (ESB). The services and processes are registered in the UDDI-compliant service registry. Another part of the architecture is the rule engine, which serves as a central place for business rules. For processes with human tasks, user interaction is obviously important, and is connected to identity management. The SOA platform also provides BAM. BAM helps to measure the key performance indicators of the process, and provides valuable data that can be used to optimize processes. The ultimate goal of each BAM user is to optimize process execution, to improve process efficiency, and to sense and react to important events. BAM ensures that we start optimizing processes where it makes most sense. Traditionally, process optimization has been based on simulation results, or even worse, by guessing where bottlenecks might be. BAM, on the other hand, gives more reliable and accurate data, which leads to better decisions about where to start with optimizations. The following figure illustrates the SOA layers:


Microsoft LightSwitch: Querying Multiple Entities

Packt
16 Sep 2011
4 min read
Microsoft LightSwitch makes it easy to query multiple entities, and with queries you can fine-tune the results using multiple parameters. In the following, we will be considering the Orders and the Shippers tables from the Northwind database, shown next. What we would like to achieve is to fashion a query in LightSwitch that finds orders later than a specified date (OrderDate) carried by a specified shipping company (CompanyName). In the previous example we created a single parameter, and here we extend it to two parameters, OrderDate and CompanyName. The following stored procedure in SQL Server 2008 would produce the rows that satisfy the above conditions:

Use Northwind
Go
Create Procedure ByDateAndShprName @ordDate datetime, @shprName nvarchar(30)
as
SELECT Orders.OrderID, Orders.CustomerID, Orders.EmployeeID,
       Orders.OrderDate, Orders.RequiredDate, Orders.ShippedDate,
       Orders.ShipVia, Orders.Freight, Orders.ShipName, Orders.ShipAddress,
       Shippers.ShipperID, Shippers.CompanyName, Shippers.Phone
FROM Orders INNER JOIN Shippers ON Orders.ShipVia = Shippers.ShipperID
WHERE Orders.OrderDate > @ordDate and Shippers.CompanyName = @shprName

The stored procedure ByDateAndShprName can be executed by providing the two parameters (variables), @ordDate and @shprName, as shown below:

Exec ByDateAndShprName '5/1/1998 12:00:00','United Package'

The result returned by the previous command is shown next, copied from SQL Server Management Studio (only the first few columns are shown). Note that the mm/dd/yyyy date appears in the result as yyyy-mm-dd. The same result can be achieved in LightSwitch using two parameters after attaching these two tables to the LightSwitch application. As the details of creating screens and queries have already been described in detail, only the details specific to the present section are described here:

Create a Microsoft LightSwitch application (VB or C#). Here, the project Using Combo6 was created.
Attach a database using SQL Server 2008 Express and bring in the two tables, Orders and Shippers, to create two entities, Order and Shipper, as shown in the next screenshot.
Create a query (click on Add Screen in the query designer shown in the next screenshot). Here the query is called ByDate. Note that the CompanyName in the Shippers table is distinct. The completed query with two parameters appears as shown.
Create a new screen and choose the Editable Grid Screen template. Here the screen created is named EditableGridByDate.
Click on Add Data Item… and add the query NorthwindData.ByDate. The designer changes as shown next.
Click on the OrderDate parameter on the left-hand side navigation of the screen and drag and drop it just below the Screen Command Bar as shown. In a similar manner, drag and drop the query parameter CompanyName below the OrderDate of the earlier step. This will display two controls for the two parameters on the screen.
With the mouse, drag and drop ByDate below the CompanyName you added in the previous step.

The completed screen design should appear as shown (some fields are not shown in the display). The previous image shows the two parameters, and the DataGrid rows show the rows returned by the query. As is, this screen would return no data if the parameters were not specified; the OrderDate defaults to the current date. Press F5 to display the screen. Enter the date 5/1/1998 directly, enter United Package in the CompanyName textbox, and click on the Refresh button on the screen.
The screen is displayed as shown here. This is an editable screen, so you should be able to add, delete, and edit the fields, and the changes should update the backend database when you save the data. Also note that the LightSwitch application returned 11 rows of data while the stored procedure in SQL Server returned 10 rows. This may look weird, but the time in the SQL Server call, 12:00:00, is interpreted as PM (noon), whereas the LightSwitch order date is a datetime that defaults to AM. Entering PM instead of AM returns the correct number of rows.

Preparing Your Forms Conversion Using Oracle Application Express (APEX)

Packt
09 Oct 2009
9 min read
When we are participating in a Forms Conversion project, it means we take the source files of our application, turn them into XML files, and upload them into the Forms Conversion part of APEX. This article describes what we do before uploading the XML files and starting our actual Forms Conversion project.

Get your stuff!

When we talk about source files, it comes in very handy if we have the right versions of these files. In order to do the conversion project, we need the same components that are used in the production environment, so we have to get the source files of the components we want to convert. This means we have no use for the runtime files (Oracle Forms runtime files have the FMX extension). In other words, for Forms components we don't need the FMX files, but the FMB source files. These are a few ground rules we have to take into account:

We need to make sure that there's no more development on the components we are about to use in our conversion project. This is because we are now going to freeze our sources, and new developments won't be taken into the conversion project at all. So there will be no changes in our project.
Put all the source files in a safe place. In other words, copy the latest version of your files into a new directory to which only you, and perhaps your teammates, have access.
If the development team of your organization is using Oracle Designer for the development of its applications, it would be a good idea to generate all the modules from scratch. You would want to use the sources from which the runtime files were created only if post-generation adjustments have been made in the modules.

We need the following files for our conversion project:

Forms Modules: With the FMB extension
Object Libraries: With the OLB extension
Forms Menus: With the MMB extension
PL/SQL Libraries: With the PLL extension
Report Files: With the RDF, REX, or JSP extensions

When we take these source files, we will be able to create all the necessary XML files that we need for the Forms Conversion project.

Creating XML files

To create XML files, we need three parts of the Oracle Developer Suite. All of these parts come with a normal 10g or 9i installation of the Developer Suite: the Forms Builder, the Reports Builder, and the Forms2XML conversion tool. The Forms2XML conversion tool is the most extensive to understand and is used to create XML files from Forms Modules, Object Libraries, and Forms Menus, so we will discuss the possibilities of this tool first.

The Forms2XML conversion tool

This tool can be used both from the command line and as a Java applet. As the command line gives us all the possibilities we need and is as easy as the Java applet, we will use only the command-line possibilities. The frmf2xml command comes with some options. The following syntax is used while converting Forms Modules, Object Libraries, and Forms Menus to an XML structure:

frmf2xml [option] file [file]

In other words, we follow these steps:

We first type frmf2xml.
Optionally, we give one of the options with it.
We tell the command which file we want to convert, with the option to address more than one file in the same conversion.

We probably want to give the OVERWRITE=YES option with our command. This property ensures that the newly created XML file will overwrite the one with the same name in the directory in which we are working.
If another file with the same name already exists in this directory and we don't give the OVERWRITE option the value YES (the default is NO), the file will not be generated, as we see in the following screenshot.

If there are any images used in modules (Forms or Object Libraries), the Forms2XML tool will refer to the image in the XML file created, and will create a TIF file of the image in the directory. The XML files that are created will be stored in the same directory from which we call the command, using the following naming convention:

formname.fmb will become formname_fmb.xml
libraryname.olb will become libraryname_olb.xml
menuname.mmb will become menuname_mmb.xml

To convert the .FMB, .OLB, and .MMB files to XML, we perform the following steps in the command prompt.

Forms Modules

The following steps are done in order to convert the .FMB file to XML:

We change the working directory to the directory that has the FMB file. In my example, I have stored all the files in a directory called summit directly under the C drive, like this:

C:\>cd C:\summit

Now, we can call the frmf2xml command to convert one of our Forms Modules to an XML file. In this example, we convert the orders.fmb module:

C:\summit>frmf2xml OVERWRITE=YES orders.fmb

As we see in the following screenshot, this command creates an XML file called orders_fmb.xml in the working directory.

Object Libraries

To convert the .OLB file to XML, the following steps are needed:

We first change the working directory to the directory that the OLB file is in. It's done like this:

C:\>cd C:\summit

Now we can call the frmf2xml command to convert one of our Object Libraries to an XML file. In this example, we convert the Form_Builder_II.olb library as follows:

C:\summit>frmf2xml OVERWRITE=YES Form_Builder_II.olb

As we see in the following screenshot, the command creates an XML file called Form_Builder_II_olb.xml and two images as .tif files in the working directory.

Forms Menus

To convert the MMB file to XML, we follow these steps:

We change the working directory to the directory that the .MMB file is in, like this:

C:\>cd C:\summit

Now we can call the frmf2xml command to convert one of our Forms Menus to an XML file. In this example we convert the customers.mmb menu:

C:\summit>frmf2xml OVERWRITE=YES customers.mmb

As we can see in the following screenshot, the command creates an XML file called customers_mmb.xml in the working directory.

Report Files

In our example, we will convert the Customers Report from an RDF file to an XML file. To do this, we follow the steps given here:

We need to open the Employees.rdf file with Reports Builder. Open Reports Builder from your Start menu. If Reports Builder is opened, we need to cancel the wizard that asks us if we want to create a new report. After this we use Ctrl+O (or File | Open) to open the Report File which we want to convert to XML, as we see in the following screenshot.
After this we use Shift+Ctrl+S (or File | Save As) to save the report. We choose to save the report as a Reports XML (*.xml) file and we click on the Save button, as shown in the following screenshot.

PL/SQL Libraries

To convert PL/SQL Libraries to an XML format, it's easiest to use the convert command that comes with the Reports Builder. With this command, called rwconverter, we define the source type, call the source, and define the destination type and the destination.
In this way, we have control over the way we convert the original .pll file to a .pld flat file that we can upload into the APEX Forms converter. It is possible to convert PL/SQL Libraries with the convert option in Forms Builder but, personally, I think this option works better. The rwconverter command takes a few parameters:

stype: This is the type of source file we need to convert. In our situation, this will be a .pll file, so the value we need to set is pllfile.
source: This is the name of the source file, including the extension. In our case, it is wizard.pll.
dtype: This is the file type we want to convert our source file to. In our case, it is a .pld file, so the value becomes pldfile.
dest: This is the name, including the extension, of the destination file. In our case, it is wizard.pld.

In our example, we use the wizard.pll file that's in our summit files directory. This PL/SQL Library is normally used to create a PL/SQL Library in the Oracle Database, but this time we will use it to create a .pld flat file that we will upload to APEX. First, we change to the working directory that has the original .pll file; in our case, the summit directory directly under the C drive:

C:\>cd C:\summit

After this, we call rwconverter in the command prompt as shown here:

C:\summit>rwconverter stype=pllfile source=wizard.pll dtype=pldfile dest=wizard.pld

When you press the Enter key, a screen will open that is used to do the conversion. We will see that the types and names of the files are the same as we entered them on the command line. We need to click on the OK button to convert the file from .pll to .pld. The conversion may take a few seconds, but when the file has been converted we will see a confirmation that the conversion was successful. After this, we can look in the C:\summit directory and we will see that a file wizard.pld has been created.
Getting Started with NetBeans

Packt
04 Aug 2011
6 min read
Java EE 6 Development with NetBeans 7
Develop professional enterprise Java EE applications quickly and easily with this popular IDE

In addition to being an IDE, NetBeans is also a platform. Developers can use NetBeans' APIs to create both NetBeans plugins and standalone applications. For a brief history of NetBeans, see http://netbeans.org/about/history.html.

Although the NetBeans IDE supports several programming languages, because of its roots as a Java-only IDE it is most popular with that language. As a Java IDE, NetBeans has built-in support for Java SE (Standard Edition) applications, which typically run on the user's desktop or notebook computer; Java ME (Micro Edition) applications, which typically run on small devices such as cell phones or PDAs; and Java EE (Enterprise Edition) applications, which typically run on "big iron" servers and can support thousands of concurrent users.

Obtaining NetBeans

NetBeans can be obtained by downloading it from http://www.netbeans.org. To download NetBeans, we need to click on the button labeled Download Free NetBeans IDE 7.0 (the exact name of the button may vary depending on the current version of NetBeans). Clicking on this button takes us to a page displaying all of the NetBeans download bundles, which provide different levels of functionality. The following table summarizes the available NetBeans bundles and the functionality each provides:

Java SE: Allows development of Java desktop applications.
Java EE: Allows development of Java Standard Edition (typically desktop) applications and Java Enterprise Edition (enterprise applications running on "big iron" servers) applications.
C/C++: Allows development of applications written in the C or C++ languages.
PHP: Allows development of web applications using the popular open source PHP programming language.
All: Includes the functionality of all NetBeans bundles.

To follow the examples, either the Java EE or the All bundle is needed. The screenshots were taken with the Java EE bundle. NetBeans may look slightly different if the All bundle is used; in particular, some additional menu items may be seen.

The following platforms are officially supported:

Windows 7/Vista/XP/2000
Linux x86
Linux x64
Solaris x86
Solaris x64
Mac OS X

Additionally, NetBeans can be executed on any platform containing Java 6 or newer; an OS-independent version of NetBeans is available for download for this purpose. Although the OS-independent version can be executed on all of the supported platforms, it is recommended to obtain the platform-specific version of NetBeans for your platform.

The NetBeans download page should detect the operating system being used to access it and select the appropriate platform by default. If this is not the case, or if you are downloading NetBeans with the intention of installing it on a workstation with another platform, the correct platform can be selected from the drop down labeled, appropriately enough, Platform.

Once the correct platform has been selected, we need to click on the appropriate Download button for the NetBeans bundle we wish to install. For Java EE development, we need either the Java EE or the All bundle. NetBeans will then be downloaded to a directory of our choice. Java EE applications need to be deployed to an application server.
Several application servers exist in the market; both the Java EE and the All NetBeans bundles come with GlassFish and Tomcat. Tomcat is a popular open source servlet container. It can be used to deploy applications that use Servlets, JSP, and JSF; however, it does not support other Java EE technologies such as EJBs or JPA. GlassFish is a 100 percent Java EE-compliant application server. We will be using the bundled GlassFish application server to deploy and execute our examples.

Installing NetBeans

NetBeans requires a Java Development Kit (JDK) version 6.0 or newer to be available before it can be installed. NetBeans installation varies slightly between the supported platforms. In the following few sections, we explain how to install NetBeans on each supported platform.

Microsoft Windows

For Microsoft Windows platforms, NetBeans is downloaded as an executable file named something like netbeans-7.0-ml-java-windows.exe (the exact name depends on the version of NetBeans and the NetBeans bundle that was selected for download). To install NetBeans on Windows platforms, simply navigate to the folder where NetBeans was downloaded and double-click on the executable file.

Mac OS X

For Mac OS X, the downloaded file is called something like netbeans-7.0-ml-java-macosx.dmg (the exact name depends on the NetBeans version and the NetBeans bundle that was selected for download). In order to install NetBeans, navigate to the location where the file was downloaded and double-click on it. The Mac OS X installer contains four packages: NetBeans, GlassFish, Tomcat, and OpenESB. These four packages need to be installed individually; they can be installed by simply double-clicking on each one of them. Please note that GlassFish must be installed before OpenESB.

Linux and Solaris

For Linux and Solaris, NetBeans is downloaded in the form of a shell script. The name of the file will be similar to netbeans-7.0-ml-java-linux.sh, netbeans-7.0-ml-java-solaris-x86.sh, or netbeans-7.0-ml-java-solaris-sparc.sh, depending on the version of NetBeans, the selected platform, and the selected NetBeans bundle. Before NetBeans can be installed on these platforms, the downloaded file needs to be made executable. This can be done in the command line by navigating to the directory where the NetBeans installer was downloaded and executing the following command:

chmod +x ./filename.sh

Substitute filename.sh with the appropriate file name for the platform and the NetBeans bundle. Once the file is executable, it can be installed from the command line:

./filename.sh

Again, substitute filename.sh with the appropriate file name for the platform and the NetBeans bundle.

Other platforms

For other platforms, NetBeans can be downloaded as a platform-independent zip file. The name of the zip file will be something like netbeans-7.0-201007282301-ml-java.zip (the exact file name may vary, depending on the exact version of NetBeans downloaded and the NetBeans bundle that was selected). To install NetBeans on one of these platforms, simply extract the zip file to any suitable directory.
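Since NetBeans requires a JDK before it can be installed, it is worth confirming, on any platform, that one is available before starting. A minimal check from a terminal or command prompt (the exact version string varies by JDK vendor and release):

java -version
javac -version

If the javac command is not found, only a Java Runtime Environment is present, and a full JDK (version 6.0 or newer) must be installed first.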