DWR Java AJAX User Interface: Basic Elements (Part 1)

Packt
20 Oct 2009
16 min read
Creating a Dynamic User Interface

The idea behind a dynamic user interface is to have a common "framework" for all samples. We will create a new web application and then add new features to it as we go. The user interface has three main areas: the title/logo, which is static; the tabs, which are dynamic; and the content area, which shows the actual content. The idea behind this implementation is to use DWR functionality to generate the tabs and to fetch the content for the tab pages. The tabbed user interface is created using a CSS template from the Dynamic Drive CSS Library (http://dynamicdrive.com/style/csslibrary/item/css-tabs-menu). Tabs are read from a properties file, so it is possible to add new tabs to the web page dynamically.

From a logical perspective, the application flow is simple: the page loads, asks the server for the list of tabs, and then fetches the content of whichever tab the user clicks. Because of the built-in DWR features, we don't need to worry very much about how the asynchronous AJAX "stuff" works. This is, of course, a Good Thing. We will develop the application using the Eclipse IDE and the Geronimo test environment.

Creating a New Web Project

First, we create a new web project. In the Eclipse IDE, select File | New | Dynamic Web Project. This opens the New Dynamic Web Project dialog; enter the project name DWREasyAjax, click Next, and accept the defaults on all pages until the last one, where the Geronimo Deployment Plan is created. Enter easyajax as the Group Id and DWREasyAjax as the Artifact Id. On clicking Finish, Eclipse creates the new web project and its directory hierarchy. Before doing anything else, we need to copy DWR into our web application: all DWR functionality is in the dwr.jar file, so we simply copy it to the WEB-INF/lib directory.
A couple of files are noteworthy: web.xml and geronimo-web.xml. The latter is generated for the Geronimo application server, and we can leave it as it is; Eclipse opens an editor showing its contents when we double-click the file.

Configuring the Web Application

The context root defined in geronimo-web.xml is worth noting, because we will need it when we test the application. The other XML file, web.xml, is very important, as we all know. It holds the DWR servlet definition and other initialization parameters. The following is the full content of the web.xml file that we will use:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         id="WebApp_ID" version="2.5">
  <display-name>DWREasyAjax</display-name>
  <servlet>
    <display-name>DWR Servlet</display-name>
    <servlet-name>dwr-invoker</servlet-name>
    <servlet-class>org.directwebremoting.servlet.DwrServlet</servlet-class>
    <init-param>
      <param-name>debug</param-name>
      <param-value>true</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>dwr-invoker</servlet-name>
    <url-pattern>/dwr/*</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>default.html</welcome-file>
    <welcome-file>default.htm</welcome-file>
    <welcome-file>default.jsp</welcome-file>
  </welcome-file-list>
</web-app>
```

DWR cannot function without the dwr.xml configuration file, so we need to create it next. We use Eclipse to create a new XML file in the WEB-INF directory. The following is all that is required for the user interface skeleton; it already includes the allow element for our DWR-based menu.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dwr PUBLIC "-//GetAhead Limited//DTD Direct Web Remoting 2.0//EN"
    "http://getahead.org/dwr/dwr20.dtd">
<dwr>
  <allow>
    <create creator="new" javascript="HorizontalMenu">
      <param name="class" value="samples.HorizontalMenu" />
    </create>
  </allow>
</dwr>
```

In the allow element there is a creator for the horizontal-menu Java class that we are going to implement next. The creator used here is the new creator, which means that DWR uses the empty constructor to create Java objects for clients. The parameter named class holds the fully qualified class name.

Developing the Web Application

Since we have already defined the name of the Java class that will create the menu, the next thing to do is implement it. The idea behind the HorizontalMenu class is to read a properties file that lists the menus to be shown on the web page. We add the properties to a file named dwrapplication.properties, created in the same samples package as the HorizontalMenu class. The properties file for the menu items is as follows:

```properties
menu.1=Tables and lists,TablesAndLists
menu.2=Field completion,FieldCompletion
```

Each menu property value contains two elements separated by a comma. The first element is the name of the menu item, visible to the user. The second is the name of the HTML template file that holds the page content for that menu item. The class contains just one method, which is called from JavaScript via DWR to retrieve the menu items.
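The comma-split convention is easy to get wrong, so here is a minimal, hypothetical helper (not part of the article's code; the class name MenuEntryParser is ours) showing how one property value maps to a menu label and a template file name:

```java
public class MenuEntryParser {

    // Splits a "label,template" property value such as
    // "Tables and lists,TablesAndLists" into its two parts.
    public static String[] parse(String propertyValue) {
        int comma = propertyValue.indexOf(',');
        return new String[] {
            propertyValue.substring(0, comma),     // menu text shown to the user
            propertyValue.substring(comma + 1)     // HTML template file name
        };
    }

    public static void main(String[] args) {
        String[] parts = parse("Tables and lists,TablesAndLists");
        // the second part plus ".html" names the content file to load
        System.out.println(parts[0] + " -> " + parts[1] + ".html");
    }
}
```

The article's client-side code performs the same split in JavaScript with item.split(',').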
The full class implementation is shown here:

```java
package samples;

import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import java.util.Properties;
import java.util.Vector;

public class HorizontalMenu {

    public HorizontalMenu() {
    }

    public List<String> getMenuItems() throws IOException {
        List<String> menuItems = new Vector<String>();
        InputStream is = this.getClass().getClassLoader().getResourceAsStream(
                "samples/dwrapplication.properties");
        Properties appProps = new Properties();
        appProps.load(is);
        is.close();
        for (int menuCount = 1; true; menuCount++) {
            String menuItem = appProps.getProperty("menu." + menuCount);
            if (menuItem == null) {
                break;
            }
            menuItems.add(menuItem);
        }
        return menuItems;
    }
}
```

The implementation is straightforward. The getMenuItems() method loads the properties using ClassLoader.getResourceAsStream(), which searches the class path for the specified resource. After loading the properties, a for loop collects the menu items, and a List of String objects is returned to the client. The client is the JavaScript callback function that we will see later. DWR automatically converts the List of String objects to a JavaScript array, so we don't have to worry about that.

Testing the Web Application

We haven't written any client-side code yet, but let's test the code anyway, using the Geronimo test environment. The project's context menu has a Run As | Run on Server entry, which opens a wizard for defining a new server runtime. If the Geronimo test environment has already been set up, we just click Finish to run the application; if not, we can manually define a new server in this dialog. After we click Finish, Eclipse starts the Geronimo test environment and our application with it. When the server starts, the Console tab in Eclipse informs us that it has started.
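One detail worth noting is that the loop stops at the first missing key, so the menu.N numbering must be contiguous. The following standalone sketch (the class name and in-memory Properties object are ours, for illustration only) shows the same loop without the classpath resource:

```java
import java.util.List;
import java.util.Properties;
import java.util.Vector;

public class MenuLoopDemo {

    // Same numbered-property loop as getMenuItems(), but fed from
    // an in-memory Properties object instead of a classpath resource.
    public static List<String> readMenuItems(Properties props) {
        List<String> items = new Vector<String>();
        for (int menuCount = 1; true; menuCount++) {
            String item = props.getProperty("menu." + menuCount);
            if (item == null) {
                break;    // the first gap in the numbering ends the menu
            }
            items.add(item);
        }
        return items;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("menu.1", "Tables and lists,TablesAndLists");
        props.setProperty("menu.2", "Field completion,FieldCompletion");
        props.setProperty("menu.4", "Unreachable: menu.3 is missing");
        // prints only the first two items, because menu.3 is absent
        System.out.println(readMenuItems(props));
    }
}
```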
The Servers tab shows that the server is started and all the code has been synchronized, that is, the deployed code is the most recent (synchronization happens whenever we save changes to a deployed file). The Servers tab also lists the deployed applications under the server; here it shows just the one application that we are testing.

Now comes the interesting part: what are we going to test if we haven't really implemented anything? Looking at the web.xml file, we defined one initialization parameter. The debug parameter is true, which means that DWR generates test pages for our remoted Java classes. We just point the browser (Firefox in our case) to http://127.0.0.1:8080/DWREasyAjax/dwr, and a page opens listing all the classes that we allow to be remoted. Clicking a class name opens a test page for that class.

This is an interesting page. We see all the allowed methods, in this case all public class methods, since we didn't specifically include or exclude anything. The most important parts are the script elements, which we need to include in our HTML pages. DWR does not automatically know what we want in our web pages, so we must add the script includes to each page that uses DWR and a remoted functionality. The page also lets us test the remoted methods: when we invoke our own getMenuItems() method, the response appears in an alert box as a JavaScript array, the very array DWR returns from our Java method.

Developing Web Pages

The next step is to add the web pages. Note that we can leave the test environment running: whenever we change the application code, it is automatically published to the test environment, so we don't need to stop and start the server each time we make a change and want to test it. The CSS style sheet is from the Dynamic Drive CSS Library.
The file is named styles.css and is in the WebContent directory in the Eclipse IDE. The CSS code is as follows:

```css
/* URL: http://www.dynamicdrive.com/style/ */
.basictab{
  padding: 3px 0;
  margin-left: 0;
  font: bold 12px Verdana;
  border-bottom: 1px solid gray;
  list-style-type: none;
  text-align: left; /* set to left, center, or right to align the menu as desired */
}
.basictab li{
  display: inline;
  margin: 0;
}
.basictab li a{
  text-decoration: none;
  padding: 3px 7px;
  margin-right: 3px;
  border: 1px solid gray;
  border-bottom: none;
  background-color: #f6ffd5;
  color: #2d2b2b;
}
.basictab li a:visited{
  color: #2d2b2b;
}
.basictab li a:hover{
  background-color: #DBFF6C;
  color: black;
}
.basictab li a:active{
  color: black;
}
.basictab li.selected a{ /* selected tab effect */
  position: relative;
  top: 1px;
  padding-top: 4px;
  background-color: #DBFF6C;
  color: black;
}
```

This CSS is shown for the sake of completeness, and we will not go into the details of CSS style sheets; it is sufficient to say that CSS provides an excellent way to build websites with good presentation. The next step is the actual web page. We create an index.jsp page in the WebContent directory, which will hold the menu and also the JavaScript functions for our samples. Although all the JavaScript code is added to a single JSP page in this sample, in "real" projects it would probably be better to put the JavaScript functions in a separate file and include that file in the HTML/JSP page with a snippet such as:

```html
<script type="text/javascript" src="myjavascriptcode/HorizontalMenu.js"></script>
```

We will add JavaScript functions later for each sample. The following JSP code shows the menu using the remoted HorizontalMenu class.
```jsp
<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
    pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<link href="styles.css" rel="stylesheet" type="text/css"/>
<script type='text/javascript' src='/DWREasyAjax/dwr/engine.js'></script>
<script type='text/javascript' src='/DWREasyAjax/dwr/util.js'></script>
<script type='text/javascript' src='/DWREasyAjax/dwr/interface/HorizontalMenu.js'></script>
<title>DWR samples</title>
<script type="text/javascript">
function loadMenuItems() {
  HorizontalMenu.getMenuItems(setMenuItems);
}

function getContent(contentId) {
  AppContent.getContent(contentId, setContent);
}

function menuItemFormatter(item) {
  elements = item.split(',');
  return '<li><a href="#" onclick="getContent(\'' + elements[1]
      + '\');return false;">' + elements[0] + '</a></li>';
}

function setMenuItems(menuItems) {
  menu = dwr.util.byId("dwrMenu");
  menuItemsHtml = '';
  for (var i = 0; i < menuItems.length; i++) {
    menuItemsHtml = menuItemsHtml + menuItemFormatter(menuItems[i]);
  }
  menu.innerHTML = menuItemsHtml;
}

function setContent(htmlArray) {
  var contentFunctions = '';
  var scriptToBeEvaled = '';
  var contentHtml = '';
  for (var i = 0; i < htmlArray.length; i++) {
    var html = htmlArray[i];
    if (html.toLowerCase().indexOf('<script') > -1) {
      if (html.indexOf('TO BE EVALED') > -1) {
        scriptToBeEvaled = html.substring(html.indexOf('>') + 1, html.indexOf('</'));
      } else {
        eval(html.substring(html.indexOf('>') + 1, html.indexOf('</')));
        contentFunctions += html;
      }
    } else {
      contentHtml += html;
    }
  }
  contentScriptArea = dwr.util.byId("contentAreaFunctions");
  contentScriptArea.innerHTML = contentFunctions;
  contentArea = dwr.util.byId("contentArea");
  contentArea.innerHTML = contentHtml;
  if (scriptToBeEvaled != '') {
    eval(scriptToBeEvaled);
  }
}
</script>
</head>
<body onload="loadMenuItems()">
<h1>DWR Easy Java Ajax Applications</h1>
<ul class="basictab" id="dwrMenu">
</ul>
```
```jsp
<div id="contentAreaFunctions">
</div>
<div id="contentArea">
</div>
</body>
</html>
```

This JSP is our user interface. The HTML is just normal HTML with a head element and a body element. The head includes references to the style sheet and to the DWR JavaScript files: engine.js, util.js, and our own HorizontalMenu.js. The util.js file is optional, but as it contains very useful functions, it could be included in every web page that uses them. The body element has a contentArea placeholder for the content pages just below the menu, and a contentAreaFunctions area for the JavaScript functions belonging to a particular content page. The body element's onload event executes the loadMenuItems() function when the page is loaded.

The loadMenuItems() function calls the remoted method of the HorizontalMenu Java class. The parameter of the HorizontalMenu.getMenuItems() JavaScript function is the callback function, which DWR calls once the Java method has executed and returned the menu items.

The setMenuItems() function is that callback: the HorizontalMenu.getMenuItems() remoted method returns the menu items as a List of Strings, which arrives as the parameter of setMenuItems(). The menu items are formatted using the menuItemFormatter() helper function, which creates an li element for each menu text. Menus are formatted as links (a href) with an onclick event that calls the getContent() function, which in turn calls AppContent.getContent(). AppContent is a remoted Java class, which we haven't implemented yet; its purpose is to read the HTML for the menu item the user clicked from a file. The implementation of AppContent and the content pages is described in the next section.
The setContent() function sets the HTML content into the content area and also evaluates any JavaScript that is embedded in the content to be inserted (this is not used very much, but it is there for those who need it). When we load the page, the Firebug console shows a single POST request to our HorizontalMenu.getMenuItems() method. Firebug's other features are extremely useful during development work as well, and we recommend keeping Firebug enabled throughout development.

Callback Functions

We saw our first callback function as a parameter in the HorizontalMenu.getMenuItems(setMenuItems) call. Since callbacks are an important concept in DWR, it is worth discussing them a little more now that we have seen their first usage. Callbacks are used to operate on the data returned from a remoted method. Because DWR and AJAX are asynchronous, ordinary return values, as in plain Java calls or RPCs (Remote Procedure Calls), do not work. DWR hides the details: from the moment the remoted Java method returns a value to the moment the callback function receives it, everything is handled internally.

There are two recommended ways to use callback functions. We have already seen the first in the HorizontalMenu.getMenuItems(setMenuItems) call: the getMenuItems() Java method takes no parameters, but in the JavaScript call we append the callback function name to the end of the parameter list. If the Java method has parameters, the JavaScript call looks like CountryDB.getCountries(selectedLetters, setCountryRows), where selectedLetters is the input parameter for the Java method and setCountryRows is the name of the callback function (we will see the implementation later on).
The second way to use callbacks is a metadata object in the remote JavaScript call. An example (the full implementation appears later in this article) is shown here:

```javascript
CountryDB.saveCountryNotes(ccode, newNotes, {
  callback:function(newNotes) {
    // function body here
  }
});
```

Here the function is anonymous and its implementation is included directly in the JavaScript call to the remoted Java method. One advantage is that the code is easy to read, and it executes immediately after we get the return value from the Java method. The other advantage is that we can add extra options to the call, such as a timeout and an error handler, as in the following example:

```javascript
CountryDB.saveCountryNotes(ccode, newNotes, {
  callback:function(newNotes) {
    // function body here
  },
  timeout:10000,
  errorHandler:function(errorMsg) { alert(errorMsg); }
});
```

It is also possible to add a callback function to Java methods that do not return a value; this is useful for getting a notification when the remote call has completed.

Afterword

Our first sample is ready, and it is also the basis for the samples that follow. We also looked at how applications are tested in the Eclipse environment. Using DWR, we can look at JavaScript code in the browser and Java code on the server as one. It may take a while to get used to, but it changes the way we develop web applications: logically there is no longer a client and a server, just a single runtime platform that happens to be physically separated. In practice, of course, applications using DWR, with JavaScript on the client and Java on the server, still use the typical client-server interaction, and this should be remembered when writing applications for the logically single runtime platform.
DWR Java AJAX User Interface: Basic Elements (Part 2)

Packt
20 Oct 2009
21 min read
Implementing Tables and Lists

The first actual sample is very common in applications: tables and lists. In this sample, a table is populated using the DWR utility functions and a remoted Java class. The sample code also shows how DWR is used for inline table editing: when a table cell is double-clicked, an edit box opens, and it is used to save new cell data. The sample uses country data in a CSV file: the country name, long name, two-letter code, capital, and user-defined notes.

Server Code for Tables and Lists

The first thing to do is to get the country data. The data is in a CSV file named countries.csv, located in the samples Java package. The following is an excerpt of the file's content (the data is from http://www.state.gov):

```csv
Short-form name,Long-form name,FIPS Code,Capital
Afghanistan,Islamic Republic of Afghanistan,AF,Kabul
Albania,Republic of Albania,AL,Tirana
Algeria,People's Democratic Republic of Algeria,AG,Algiers
Andorra,Principality of Andorra,AN,Andorra la Vella
Angola,Republic of Angola,AO,Luanda
Antigua and Barbuda,(no long-form name),AC,Saint John's
Argentina,Argentine Republic,AR,Buenos Aires
Armenia,Republic of Armenia,AM,Yerevan
```

The CSV file is read each time a client requests country data. Although this is not very efficient, it is good enough here; alternatives include an in-memory cache or a real database such as Apache Derby or IBM DB2. We have created a CountryDB class that reads and writes the country CSV, and another class, DBUtils, with some helper methods.
The DBUtils code is as follows:

```java
package samples;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.util.List;
import java.util.Vector;

public class DBUtils {

    private String fileName = null;

    public void initFileDB(String fileName) {
        this.fileName = fileName;
        // copy the csv file to the bin directory, for easy file access
        File countriesFile = new File(fileName);
        if (!countriesFile.exists()) {
            try {
                List<String> countries = getCSVStrings(null);
                PrintWriter pw;
                pw = new PrintWriter(new FileWriter(countriesFile));
                for (String country : countries) {
                    pw.println(country);
                }
                pw.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    protected List<String> getCSVStrings(String letter) {
        List<String> csvData = new Vector<String>();
        try {
            File csvFile = new File(fileName);
            BufferedReader br = null;
            if (csvFile.exists()) {
                br = new BufferedReader(new FileReader(csvFile));
            } else {
                InputStream is = this.getClass().getClassLoader()
                        .getResourceAsStream("samples/" + fileName);
                br = new BufferedReader(new InputStreamReader(is));
                br.readLine(); // skip the header row
            }
            for (String line = br.readLine(); line != null; line = br.readLine()) {
                if (letter == null || (letter != null && line.startsWith(letter))) {
                    csvData.add(line);
                }
            }
            br.close();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
        return csvData;
    }
}
```

The DBUtils class is a straightforward utility class that returns the CSV content as a List of Strings. It also copies the original CSV file to the runtime directory of whichever application server we are running. This may not be the best practice, but it makes the CSV file easier to manipulate, and the original CSV file stays untouched if and when we need to go back to it.
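The letter parameter is what later powers the "countries starting with" list. Here is a minimal sketch of just that filtering core (the class and method names are ours, not from the article):

```java
import java.util.List;
import java.util.Vector;

public class LetterFilterDemo {

    // The filtering rule inside DBUtils.getCSVStrings(): keep lines that
    // start with the given letter, or every line when letter is null.
    public static List<String> filter(List<String> lines, String letter) {
        List<String> out = new Vector<String>();
        for (String line : lines) {
            if (letter == null || line.startsWith(letter)) {
                out.add(line);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> rows = new Vector<String>();
        rows.add("Iceland,Republic of Iceland,IC,Reykjavik");
        rows.add("India,Republic of India,IN,New Delhi");
        rows.add("Albania,Republic of Albania,AL,Tirana");
        System.out.println(filter(rows, "I"));   // the two "I" countries
        System.out.println(filter(rows, null));  // all three rows
    }
}
```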
The code for CountryDB is given here:

```java
package samples;

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Arrays;
import java.util.List;
import java.util.Vector;

public class CountryDB {

    private DBUtils dbUtils = new DBUtils();
    private String fileName = "countries.csv";

    public CountryDB() {
        dbUtils.initFileDB(fileName);
    }

    public String[] getCountryData(String ccode) {
        List<String> countries = dbUtils.getCSVStrings(null);
        for (String country : countries) {
            if (country.indexOf("," + ccode + ",") > -1) {
                return country.split(",");
            }
        }
        return new String[0];
    }

    public List<List<String>> getCountries(String startLetter) {
        List<List<String>> allCountryData = new Vector<List<String>>();
        List<String> countryData = dbUtils.getCSVStrings(startLetter);
        for (String country : countryData) {
            String[] data = country.split(",");
            allCountryData.add(Arrays.asList(data));
        }
        return allCountryData;
    }

    public String[] saveCountryNotes(String ccode, String notes) {
        List<String> countries = dbUtils.getCSVStrings(null);
        try {
            PrintWriter pw = new PrintWriter(new FileWriter(fileName));
            for (String country : countries) {
                if (country.indexOf("," + ccode + ",") > -1) {
                    if (country.split(",").length == 4) {
                        // no existing notes
                        country = country + "," + notes;
                    } else {
                        if (notes.length() == 0) {
                            country = country.substring(0, country.lastIndexOf(","));
                        } else {
                            country = country.substring(0, country.lastIndexOf(","))
                                    + "," + notes;
                        }
                    }
                }
                pw.println(country);
            }
            pw.close();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
        String[] rv = new String[2];
        rv[0] = ccode;
        rv[1] = notes;
        return rv;
    }
}
```

The CountryDB class is a remoted class. The getCountryData() method returns one country's data as an array of strings, looked up by country code. The getCountries() method returns all the countries whose names start with the specified letter, and saveCountryNotes() saves user-added notes to the country specified by the country code.
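The subtle part of saveCountryNotes() is the field arithmetic: a row with four fields has no notes yet, so the notes are appended as a fifth field; a row that already has five fields gets its last field replaced, or dropped when the new notes are empty. Here is that rule in isolation (the helper class is ours; like the original, it assumes that names and notes contain no commas):

```java
public class NotesSplice {

    // Mirrors the per-row logic of CountryDB.saveCountryNotes().
    public static String withNotes(String csvRow, String notes) {
        if (csvRow.split(",").length == 4) {
            // no existing notes column: append one
            return csvRow + "," + notes;
        }
        // replace (or, for empty notes, drop) the existing notes column
        String base = csvRow.substring(0, csvRow.lastIndexOf(","));
        return notes.length() == 0 ? base : base + "," + notes;
    }

    public static void main(String[] args) {
        String row = "Iceland,Republic of Iceland,IC,Reykjavik";
        String annotated = withNotes(row, "visited 2009");
        System.out.println(annotated);
        // saving empty notes removes the notes column again
        System.out.println(withNotes(annotated, ""));
    }
}
```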
In order to use CountryDB, the following script element must be added to the index.jsp file together with the other JavaScript includes:

```html
<script type='text/javascript' src='/DWREasyAjax/dwr/interface/CountryDB.js'></script>
```

There is one other Java class that we need to create and remote: the AppContent class that already appeared in the JavaScript functions of the home page. AppContent is responsible for reading the content of an HTML file and parsing any JavaScript out of it, so that the content can be used by the existing JavaScript functions in the index.jsp file.

```java
package samples;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import java.util.Vector;

public class AppContent {

    public AppContent() {
    }

    public List<String> getContent(String contentId) {
        InputStream is = this.getClass().getClassLoader().getResourceAsStream(
                "samples/" + contentId + ".html");
        String content = streamToString(is);
        List<String> contentList = new Vector<String>();
        // JavaScript within script tags is extracted and sent
        // separately to the client
        for (String script = getScript(content); !script.equals("");
                script = getScript(content)) {
            contentList.add(script);
            content = removeScript(content);
        }
        // the content list holds all the JavaScript functions;
        // the last element is executed last, and the HTML content
        // is inserted just before it
        if (contentList.size() > 1) {
            contentList.add(contentList.size() - 1, content);
        } else {
            contentList.add(content);
        }
        return contentList;
    }

    public List<String> getLetters() {
        List<String> letters = new Vector<String>();
        char[] l = new char[1];
        for (int i = 65; i < 91; i++) {
            l[0] = (char) i;
            letters.add(new String(l));
        }
        return letters;
    }

    public String removeScript(String html) {
        // removes the first script element
        int sIndex = html.toLowerCase().indexOf("<script ");
        if (sIndex == -1) {
            return html;
        }
        int eIndex = html.toLowerCase().indexOf("</script>") + 9;
        return html.substring(0, sIndex) + html.substring(eIndex);
    }

    public String getScript(String html) {
        // returns the first script element
        int sIndex = html.toLowerCase().indexOf("<script ");
        if (sIndex == -1) {
            return "";
        }
        int eIndex = html.toLowerCase().indexOf("</script>") + 9;
        return html.substring(sIndex, eIndex);
    }

    public String streamToString(InputStream is) {
        String content = "";
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            for (int b = is.read(); b != -1; b = is.read()) {
                baos.write(b);
            }
            content = baos.toString();
        } catch (IOException ioe) {
            content = ioe.toString();
        }
        return content;
    }
}
```

The getContent() method reads the HTML code from a file based on the contentId. The contentId was specified in the dwrapplication.properties file, and the file name is just the contentId plus the extension .html in the package directory. There is also a getLetters() method, which simply returns a list of the letters from A to Z to the browser.

If we test the application now, we get an "AppContent is not defined" error. We know why it occurs, so let's fix it by adding AppContent to the allow element in the dwr.xml file; we add CountryDB at the same time. We add the following creators within the allow element in the dwr.xml file:

```xml
<create creator="new" javascript="AppContent">
  <param name="class" value="samples.AppContent" />
  <include method="getContent" />
  <include method="getLetters" />
</create>
<create creator="new" javascript="CountryDB">
  <param name="class" value="samples.CountryDB" />
  <include method="getCountries" />
  <include method="saveCountryNotes" />
  <include method="getCountryData" />
</create>
```

We explicitly define the remoted methods using include elements. This is good practice, as it ensures that we don't accidentally allow access to methods that are not meant to be remoted.

Client Code for Tables and Lists

We also need to add the corresponding JavaScript interface to the index.jsp page.
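Because getScript() and removeScript() do plain indexOf() scans, the split into script and HTML parts is easy to trace by hand. This small demo (the class name is ours; the method bodies are copied from AppContent) shows one pass of the extract-then-strip loop:

```java
public class ScriptExtractDemo {

    // Copied from AppContent.getScript(): returns the first script element.
    public static String getScript(String html) {
        int sIndex = html.toLowerCase().indexOf("<script ");
        if (sIndex == -1) {
            return "";
        }
        int eIndex = html.toLowerCase().indexOf("</script>") + 9;
        return html.substring(sIndex, eIndex);
    }

    // Copied from AppContent.removeScript(): strips the first script element.
    public static String removeScript(String html) {
        int sIndex = html.toLowerCase().indexOf("<script ");
        if (sIndex == -1) {
            return html;
        }
        int eIndex = html.toLowerCase().indexOf("</script>") + 9;
        return html.substring(0, sIndex) + html.substring(eIndex);
    }

    public static void main(String[] args) {
        String html = "<h3>Hi</h3><script type='text/javascript'>"
                + "alert(1);</script><p>Body</p>";
        System.out.println(getScript(html));     // the script element
        System.out.println(removeScript(html));  // the remaining HTML
    }
}
```

Note that the scan only matches "&lt;script " with a trailing space, so a bare script tag without attributes would not be extracted; the article's content files always include a type attribute.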
Add the following with the rest of the scripts in the index.jsp file:

```html
<script type='text/javascript' src='/DWREasyAjax/dwr/interface/AppContent.js'></script>
```

Before testing, we need the sample HTML for the content area. The following HTML is in the TablesAndLists.html file under the samples directory:

```html
<h3>Countries</h3>
<p>Show countries starting with
<select id="letters" onchange="selectLetter(this);return false;">
</select><br/>
Doubleclick "Notes"-cell to add notes to country.</p>
<table border="1">
  <thead>
    <tr>
      <th>Name</th>
      <th>Long name</th>
      <th>Code</th>
      <th>Capital</th>
      <th>Notes</th>
    </tr>
  </thead>
  <tbody id="countryData">
  </tbody>
</table>
<script type='text/javascript'>
//TO BE EVALED
AppContent.getLetters(addLetters);
</script>
```

The script element at the end is extracted by our Java class and is then evaluated by the browser when the client-side JavaScript receives the HTML. The select element's onchange event calls the selectLetter() JavaScript function, which we will implement shortly. The JavaScript functions are added in the index.jsp file, within the head element. The functions could live in separate JavaScript files, but the embedded script is fine here.
```javascript
function selectLetter(selectElement) {
  var selectedIndex = selectElement.selectedIndex;
  var selectedLetter = selectElement.options[selectedIndex].value;
  CountryDB.getCountries(selectedLetter, setCountryRows);
}

function addLetters(letters) {
  dwr.util.addOptions('letters', ['letter...']);
  dwr.util.addOptions('letters', letters);
}

function setCountryRows(countryData) {
  var cellFuncs = [
    function(data) { return data[0]; },
    function(data) { return data[1]; },
    function(data) { return data[2]; },
    function(data) { return data[3]; },
    function(data) { return data[4]; }
  ];
  dwr.util.removeAllRows('countryData');
  dwr.util.addRows('countryData', countryData, cellFuncs, {
    cellCreator:function(options) {
      var td = document.createElement("td");
      if (options.cellNum == 4) {
        var notes = options.rowData[4];
        if (notes == undefined) {
          notes = '&nbsp;';
        }
        var ccode = options.rowData[2];
        var divId = ccode + '_Notes';
        var tdId = divId + 'Cell';
        td.setAttribute('id', tdId);
        var html = getNotesHtml(ccode, notes);
        td.innerHTML = html;
        options.data = html;
      }
      return td;
    },
    escapeHtml:false
  });
}

function getNotesHtml(ccode, notes) {
  var divId = ccode + '_Notes';
  return "<div onDblClick=\"editCountryNotes('" + divId + "','" + ccode
      + "');\" id=\"" + divId + "\">" + notes + "</div>";
}

function editCountryNotes(id, ccode) {
  var notesElement = dwr.util.byId(id);
  var tdId = id + 'Cell';
  var notes = notesElement.innerHTML;
  if (notes == '&nbsp;') {
    notes = '';
  }
  var editBox = '<input id="' + ccode + 'NotesEditBox" type="text" value="'
      + notes + '"/><br/>';
  editBox += "<input type='button' id='" + ccode
      + "SaveNotesButton' value='Save' onclick='saveCountryNotes(\""
      + ccode + "\");'/>";
  editBox += "<input type='button' id='" + ccode
      + "CancelNotesButton' value='Cancel' onclick='cancelEditNotes(\""
      + ccode + "\");'/>";
  tdElement = dwr.util.byId(tdId);
  tdElement.innerHTML = editBox;
  dwr.util.byId(ccode + 'NotesEditBox').focus();
}

function cancelEditNotes(ccode) {
  CountryDB.getCountryData(ccode, {
    callback:function(data) {
      var notes = data[4];
      if (notes == undefined) {
        notes = '&nbsp;';
      }
      var html = getNotesHtml(ccode, notes);
      var tdId = ccode + '_NotesCell';
      var td = dwr.util.byId(tdId);
      td.innerHTML = html;
    }
  });
}

function saveCountryNotes(ccode) {
  var editBox = dwr.util.byId(ccode + 'NotesEditBox');
  var newNotes = editBox.value;
  CountryDB.saveCountryNotes(ccode, newNotes, {
    callback:function(newNotes) {
      var ccode = newNotes[0];
      var notes = newNotes[1];
      var notesHtml = getNotesHtml(ccode, notes);
      var td = dwr.util.byId(ccode + "_NotesCell");
      td.innerHTML = notesHtml;
    }
  });
}
```

There are quite a few functions for the table sample, so let's go through them one by one. The first is selectLetter(). This function gets the selected letter from the select element and calls the remoted CountryDB.getCountries() Java method, with setCountryRows as the callback.

The second function is addLetters(letters); it is the callback for the AppContent.getLetters() method, which simply returns the letters from A to Z, and it uses the DWR utility functions to populate the letter list.

Then there is setCountryRows(), the callback for CountryDB.getCountries(). Its parameter is an array of the countries that begin with the specified letter; each element has the format Name, Long name, Code (the country code), Capital, Notes. The purpose of this function is to populate the table with the country data, so let's see how it is done. The cellFuncs variable holds one function per column for retrieving that cell's data; the parameter named data is the array of country data returned from the Java class. The table is populated using the DWR utility function addRows(), with cellFuncs supplying the correct data for each table cell. The cellCreator function creates custom HTML for the table cell.
The default implementation generates just a td element, but our custom implementation generates the td element with a div placeholder for the user notes. The getNotesHtml() function is used to generate the div element with an event listener for double-clicks. The editCountryNotes() function is called when the table cell is double-clicked. The function creates an input field for editing notes, with Save and Cancel buttons. The cancelEditNotes() and saveCountryNotes() functions cancel the editing of notes or save them by calling the CountryDB.saveCountryNotes() Java method. The following screenshot shows what the sample looks like with the populated table:

Now that we have added the necessary functions to the web page, we can test the application.

Testing Tables and Lists

The application should be ready for testing if we have had the test environment running during development. Eclipse automatically deploys our new code to the server whenever something changes, so we can go right away to the test page http://127.0.0.1:8080/DWREasyAjax. On clicking Tables and lists, we can see the page we have developed. By selecting some letter, for example "I", we get a list of all the countries that start with the letter "I" (as shown in the previous screenshot). Now we can add notes to countries. We can double-click any table cell under Notes. For example, if we want to enter notes for Iceland, we double-click the Notes cell in Iceland's table row, and we get the edit box for the notes as shown in the following screenshot:

The edit box is a simple text input field; we didn't use any forms. Saving and canceling are done using JavaScript and DWR. If we press Cancel, we get the original notes from the CountryDB Java class using DWR, and saving also uses DWR to save the data. CountryDB.saveCountryNotes() takes the country code and the notes that the user entered in the edit box and saves them to the CSV file. 
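The CountryDB class itself is not listed in this excerpt. As a rough sketch of the server side (the class and method names come from the text, but the body below is an assumption, with the CSV persistence replaced by an in-memory map for illustration), saveCountryNotes() might look like this, returning the country code and notes as an array so that the client-side callback can update the right table cell:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the remoted CountryDB.saveCountryNotes() method.
// The real class persists the notes back to the CSV file; here an
// in-memory map stands in for that storage.
public class CountryNotesSketch {
    private final Map<String, String> notesByCountry = new HashMap<>();

    // Returns [countryCode, notes], matching what the client-side
    // saveCountryNotes callback reads from newNotes[0] and newNotes[1].
    public String[] saveCountryNotes(String ccode, String newNotes) {
        notesByCountry.put(ccode, newNotes);
        return new String[] { ccode, newNotes };
    }
}
```

Returning both values lets the callback rebuild the notes div for exactly the row that was edited, without any extra lookup on the client.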
When notes are available, the application will show them in the country table together with the other country information, as shown in the following screenshot:

Afterword

The sample in this section uses DWR features to get data for the table and list from the server. We developed the application so that most of the application logic is written in JavaScript, with remoted Java beans on the server. In principle, the application logic can be thought of as being fully browser based, with some extensions on the server.

Implementing Field Completion

Nowadays, field completion is a typical feature of many web pages. A common use case is getting a stock quote: field completion shows matching symbols as the user types letters. Many Internet sites use this feature. Our sample here is a simple license text finder. We enter a license name in the input text field, and we use DWR to show the license names that start with the typed text. A list of possible completions is shown below the input field. The following is a screenshot of the field completion in action:

The selected license content is shown in an iframe element from http://www.opensource.org.

Server Code for Field Completion

We will re-use some of the classes we developed in the last section. AppContent is used to load the sample page, and the DBUtils class is used in the LicenseDB class. 
The LicenseDB class is shown here:

package samples;

import java.util.List;
import java.util.Vector;

public class LicenseDB
{
  private DBUtils dbUtils=new DBUtils();

  public LicenseDB()
  {
    dbUtils.initFileDB("licenses.csv");
  }

  public List<String> getLicensesStartingWith(String startLetters)
  {
    List<String> list=new Vector<String>();
    List<String> licenses=dbUtils.getCSVStrings(startLetters);
    for(String license : licenses)
    {
      list.add(license.split(",")[0]);
    }
    return list;
  }

  public String getLicenseContentUrl(String licenseName)
  {
    List<String> licenses=dbUtils.getCSVStrings(licenseName);
    if(licenses.size()>0)
    {
      return licenses.get(0).split(",")[1];
    }
    return "";
  }
}

The getLicensesStartingWith() method goes through the license data and returns the names of the licenses that start with the given letters, while getLicenseContentUrl() returns the URL for a given license name. Similar to the data in the previous section, the license data is in a CSV file named licenses.csv in the package directory. The following is an excerpt of the file content:

Academic Free License, http://opensource.org/licenses/afl-3.0.php
Adaptive Public License, http://opensource.org/licenses/apl1.0.php
Apache Software License, http://opensource.org/licenses/apachepl-1.1.php
Apache License, http://opensource.org/licenses/apache2.0.php
Apple Public Source License, http://opensource.org/licenses/apsl-2.0.php
Artistic license, http://opensource.org/licenses/artistic-license-1.0.php
...

There are quite a few open-source licenses. Some are more popular than others (like the Apache Software License) and some cannot be re-used (like the IBM Public License). We want to remote the LicenseDB class, so we add the following to the dwr.xml file:

<create creator="new" javascript="LicenseDB">
  <param name="class" value="samples.LicenseDB"/>
  <include method="getLicensesStartingWith"/>
  <include method="getLicenseContentUrl"/>
</create>

Client Code for Field Completion

The following script element will go in the index.jsp page. 
<script type='text/javascript' src='/DWREasyAjax/dwr/interface/LicenseDB.js'></script> The HTML for the field completion is as follows: <h3>Field completion</h3><p>Enter Open Source license name to see it's contents.</p><input type="text" id="licenseNameEditBox" value="" onkeyup="showPopupMenu()" size="40"/><input type="button" id="showLicenseTextButton" value="Show license text" onclick="showLicenseText()"/><div id="completionMenuPopup"></div><div id="licenseContent"></div> The input element, where we enter the license name, listens to the onkeyup event which calls the showPopupMenu() JavaScript function. Clicking the Input button calls the showLicenseText() function (the JavaScript functions are explained shortly). Finally, the two div elements are place holders for the pop-up menu and the iframe element that shows license content. For the pop-up box functionality, we use existing code and modify it for our purpose (many thanks to http://www.jtricks.com). The following is the popup.js file, which is located under the WebContent | js directory. //<script type="text/javascript"><!--/* Original script by: www.jtricks.com * Version: 20070301 * Latest version: * www.jtricks.com/javascript/window/box.html * * Modified by Sami Salkosuo. */// Moves the box object to be directly beneath an object.function move_box(an, box){ var cleft = 0; var ctop = 0; var obj = an; while (obj.offsetParent) { cleft += obj.offsetLeft; ctop += obj.offsetTop; obj = obj.offsetParent; } box.style.left = cleft + 'px'; ctop += an.offsetHeight + 8; // Handle Internet Explorer body margins, // which affect normal document, but not // absolute-positioned stuff. 
if (document.body.currentStyle && document.body.currentStyle['marginTop']) { ctop += parseInt( document.body.currentStyle['marginTop']); } box.style.top = ctop + 'px';}var popupMenuInitialised=false;// Shows a box if it wasn't shown yet or is hidden// or hides it if it is currently shownfunction show_box(html, width, height, borderStyle,id){ // Create box object through DOM var boxdiv = document.getElementById(id); boxdiv.style.display='block'; if(popupMenuInitialised==false) { //boxdiv = document.createElement('div'); boxdiv.setAttribute('id', id); boxdiv.style.display = 'block'; boxdiv.style.position = 'absolute'; boxdiv.style.width = width + 'px'; boxdiv.style.height = height + 'px'; boxdiv.style.border = borderStyle; boxdiv.style.textAlign = 'right'; boxdiv.style.padding = '4px'; boxdiv.style.background = '#FFFFFF'; boxdiv.style.zIndex='99'; popupMenuInitialised=true; //document.body.appendChild(boxdiv); } var contentId=id+'Content'; var contents = document.getElementById(contentId); if(contents==null) { contents = document.createElement('div'); contents.setAttribute('id', id+'Content'); contents.style.textAlign= 'left'; boxdiv.contents = contents; boxdiv.appendChild(contents); } move_box(html, boxdiv); contents.innerHTML= html; return false;}function hide_box(id){ document.getElementById(id).style.display='none'; var boxdiv = document.getElementById(id+'Content'); if(boxdiv!=null) { boxdiv.parentNode.removeChild(boxdiv); } return false;}//--></script> Functions in the popup.js file are used as menu options directly below the edit box. The show_box() function takes the following arguments: HTML code for the pop-up, position of the pop-up window, and the "parent" element (to which the pop-up box is related). The function then creates a pop-up window using DOM. The move_box() function is used to move the pop-up window to its correct place under the edit box and the hide_box() function hides the pop-up window by removing the pop-up window from the DOM tree. 
In order to use the functions in popup.js, we need to add the following script element to the index.jsp file:

<script type='text/javascript' src='js/popup.js'></script>

Our own JavaScript code for the field completion is in the index.jsp file. The following are the JavaScript functions, and an explanation follows the code:

function showPopupMenu()
{
  var licenseNameEditBox=dwr.util.byId('licenseNameEditBox');
  var startLetters=licenseNameEditBox.value;
  LicenseDB.getLicensesStartingWith(startLetters, {
    callback:function(licenses)
    {
      var html="";
      if(licenses.length==0)
      {
        return;
      }
      if(licenses.length==1)
      {
        hidePopupMenu();
        licenseNameEditBox.value=licenses[0];
      }
      else
      {
        for (index in licenses)
        {
          var licenseName=licenses[index];//.split(",")[0];
          licenseName=licenseName.replace(/"/g,"&quot;");
          html+="<div style=\"border:1px solid #777777;margin-bottom:5px;\" onclick=\"completeEditBox('"+licenseName+"');\">"+licenseName+"</div>";
        }
        show_box(html, 200, 270, '1px solid','completionMenuPopup');
      }
    }
  });
}
function hidePopupMenu()
{
  hide_box('completionMenuPopup');
}
function completeEditBox(licenseName)
{
  var licenseNameEditBox=dwr.util.byId('licenseNameEditBox');
  licenseNameEditBox.value=licenseName;
  hidePopupMenu();
  dwr.util.byId('showLicenseTextButton').focus();
}
function showLicenseText()
{
  var licenseNameEditBox=dwr.util.byId('licenseNameEditBox');
  licenseName=licenseNameEditBox.value;
  LicenseDB.getLicenseContentUrl(licenseName, {
    callback:function(licenseUrl)
    {
      var html='<iframe src="'+licenseUrl+'" width="100%" height="600"></iframe>';
      var content=dwr.util.byId('licenseContent');
      content.style.zIndex="1";
      content.innerHTML=html;
    }
  });
}

The showPopupMenu() function is called each time a user enters a letter in the input box. The function gets the value of the input field and calls the LicenseDB.getLicensesStartingWith() method. The callback function is specified in the function parameters. 
The callback function gets all the licenses that match the parameter and, based on the length of the parameter (which is an array), it either shows a pop-up box with all the matching license names or, if the array length is one, hides the pop-up box and inserts the full license name in the text field. In the pop-up box, the license names are wrapped within div elements that have an onclick event listener that calls the completeEditBox() function. The hidePopupMenu() function just closes the pop-up menu, and the completeEditBox() function inserts the clicked license text in the input box and moves the focus to the button. The showLicenseText() function is called when we click the Show license text button. The function calls the LicenseDB.getLicenseContentUrl() method, and the callback function creates an iframe element to show the license content directly from http://www.opensource.org, as shown in the following screenshot:

Afterword

Field completion improves the user experience in web pages, and the sample code in this section showed one way of doing it using DWR. It should be noted that the sample for field completion presented here is only for demonstration purposes.
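Both samples lean on the DBUtils helper, whose implementation is not shown in this excerpt. As a minimal in-memory sketch (the real initFileDB() reads the CSV file from the package directory, and the prefix-matching behavior below is an assumption based on how getCountries() and getLicensesStartingWith() use it):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the DBUtils helper used by CountryDB and LicenseDB.
// The real initFileDB() loads a CSV file from the package directory; here
// the lines are passed in directly for illustration.
public class DBUtilsSketch {
    private final List<String> lines = new ArrayList<>();

    public void initFileDB(List<String> csvLines) {
        lines.addAll(csvLines);
    }

    // Returns every CSV line that starts with the given prefix, which is
    // how both samples appear to use getCSVStrings().
    public List<String> getCSVStrings(String prefix) {
        List<String> matches = new ArrayList<>();
        for (String line : lines) {
            if (line.startsWith(prefix)) {
                matches.add(line);
            }
        }
        return matches;
    }
}
```

With this view of the helper, getLicensesStartingWith() is simply a prefix lookup over the raw CSV lines followed by a split on the first comma.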
SOA—Service Oriented Architecture

Packt
20 Oct 2009
17 min read
What is SOA? SOA is the acronym for Service Oriented Architecture. As it has come to be known, SOA is an architectural design pattern by which several guiding principles determine the nature of the design. Basically, SOA states that every component of a system should be a service, and the system should be composed of several loosely-coupled services. A service here means a unit of a program that serves a business process. "Loosely-coupled" here means that these services should be independent of each other, so that changing one of them should not affect any other services. SOA is not a specific technology, nor a specific language. It is just a blueprint, or a system design approach. It is an architecture model that aims to enhance the efficiency, agility, and productivity of an enterprise system. The key concepts of SOA are services, high interoperability and loose coupling. Several other architecture/technologies such as RPC, DCOM, and CORBA have existed for a long time, and attempted to address the client/server communication problems. The difference between SOA and these other approaches is that SOA is trying to address the problem from the client side, and not from the server side. It tries to decouple the client side from the server side, instead of bundling them, to make the client side application much easier to develop and maintain. This is exactly what happened when object-oriented programming (OOP) came into play 20 years ago. Prior to object-oriented programming, most designs were procedure-oriented, meaning the developer had to control the process of an application. Without OOP, in order to finish a block of work, the developer had to be aware of the sequence that the code would follow. This sequence was then hard-coded into the program, and any change to this sequence would result in a code change. With OOP, an object simply supplied certain operations; it was up to the caller of the object to decide the sequence of those operations. 
The caller could mash up all of the operations, and finish the job in whatever order needed. There was a paradigm shift from the object side to the caller side. This same paradigm shift is happening today. Without SOA, every application is a bundled, tightly coupled solution. The client-side application is often compiled and deployed along with the server-side applications, making it impossible to quickly change anything on the server side. DCOM and CORBA were on the right track to ease this problem by making the server-side components reside on remote machines. The client application could directly call a method on a remote object, without knowing that this object was actually far away, just like calling a method on a local object. However, the client-side applications continue to remain tightly coupled with these remote objects, and any change to the remote object will still result in a recompiling or redeploying of the client application. Now, with SOA, the remote objects are truly treated as remote objects. To the client applications, they are no longer objects; they are services. The client application is unaware of how the service is implemented, or of the signature that should be used when interacting with those services. The client application interacts with these services by exchanging messages. What a client application knows now is only the interfaces, or protocols of the services, such as the format of the messages to be passed in to the service, and the format of the expected returning messages from the service. Historically, there have been many other architectural design approaches, technologies, and methodologies to integrate existing applications. EAI (Enterprise Application Integration) is just one of them. Often, organizations have many different applications, such as order management systems, accounts receivable systems, and customer relationship management systems. 
Each application has been designed and developed by different people using different tools and technologies at different times, and to serve different purposes. However, between these applications, there are no standard common ways to communicate. EAI is the process of linking these applications and others in order to realize financial and operational competitive advantages. It may seem that SOA is just an extension of EAI. The similarity is that they are both designed to connect different pieces of applications in order to build an enterprise-level system for business. But fundamentally, they are quite different. EAI attempts to connect legacy applications without modifying any of the applications, while SOA is a fresh approach to solve the same problem. Why SOA? So why do we need SOA now? The answer is in one word—agility. Business requirements change frequently, as they always have. The IT department has to respond more quickly and cost-effectively to those changes. With a traditional architecture, all components are bundled together with each other. Thus, even a small change to one component will require a large number of other components to be recompiled and redeployed. Quality assurance (QA) effort is also huge for any code changes. The processes of gathering requirements, designing, development, QA, and deployment are too long for businesses to wait for, and become actual bottlenecks. To complicate matters further, some business processes are no longer static. Requirements change on an ad-hoc basis, and a business needs to be able to dynamically define its own processes whenever it wants. A business needs a system that is agile enough for its day-to-day work. This is very hard, if not impossible, with existing traditional infrastructure and systems. This is where SOA comes into play. SOA's basic unit is a service. These services are building blocks that business users can use to define their own processes. 
Services are designed and implemented so that they can serve different purposes or processes, and not just specific ones. No matter what new processes a business needs to build, or what existing processes it needs to modify, the business users should always be able to use the existing service blocks in order to compete with others according to current market conditions. Also, if necessary, some new service blocks can be used. These services are also designed and implemented so that they are loosely coupled and independent of one another. A change to one service does not affect any other service. Also, the deployment of a new service does not affect any existing service. This greatly eases release management and makes agility possible. For example, a GetBalance service can be designed to retrieve the balance of a loan. When a borrower calls in to query the status of a specific loan, this GetBalance service may be called by the application that is used by the customer service representatives. When a borrower makes a payment online, this service can also be called to get the balance of the loan, so that the borrower will know the balance of his or her loan after the payment. Yet in the payment posting process, this service can still be used to calculate the accrued interest for a loan, by multiplying the balance by the interest rate. Even further, a new process can be created by business users to utilize this service whenever a loan balance needs to be retrieved. The GetBalance service is developed and deployed independently of all of the above processes. In fact, the service exists without even knowing who its clients will be, or how many clients there will be. All of the client applications communicate with this service through its interface, and its interface will remain stable once it is in production. 
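To make the GetBalance example concrete, here is a minimal, hypothetical sketch of such a service contract in plain Java (the names are illustrative only; in a real SOA deployment the contract would be exposed through a web service technology such as WCF). Every consumer, whether the customer service application, the online payment page, or the payment posting process, depends only on this small, stable interface:

```java
// Hypothetical GetBalance service contract; all names are illustrative.
public interface LoanService {
    double getBalance(String loanId);
}

// One of many possible consumers: the payment posting process reusing
// the same contract to compute accrued interest from the balance.
class InterestCalculator {
    static double accruedInterest(LoanService service, String loanId, double rate) {
        return service.getBalance(loanId) * rate;
    }
}
```

The implementation behind the interface can change freely (a bug fix, a new algorithm) without any consumer being recompiled, which is exactly the decoupling the text describes.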
If we have to change the implementation of this service, for example by fixing a bug, or changing an algorithm inside a method of the service, all of the client applications can still work without any change. When combined with the more mature Business Process Management (BPM) technology, SOA plays an even more important role in an organization's efforts to achieve agility. Business users can create and maintain processes within BPM, and through SOA they can plug a service into any of the processes. The front-end BPM application is loosely coupled to the back-end SOA system. This combination of BPM and SOA will give an organization much greater flexibility in order to achieve agility. How do we implement SOA? Now that we've established why SOA is needed by the business, the question becomes—how do we implement SOA? To implement SOA in an organization, three key elements have to be evaluated—people, process, and technology. Firstly, the people in the organization must be ready to adopt SOA. Secondly, the organization must know the processes that the SOA approach will include, including the definition, scope, and priority. Finally, the organization should choose the right technology to implement it. Note that people and processes take precedence over technology in an SOA implementation, but they are out of the scope of this article. In this article, we will assume people and processes are all ready for an organization to adopt SOA. Technically, there are many SOA approaches. At certain degrees, traditional technologies such as RPC, DCOM, CORBA, or some modern technologies such as IBM WebSphere MQ, Java RMI, and .NET Remoting could all be categorized as service-oriented, and can be used to implement SOA for one organization. However, all of these technologies have limitations, such as language or platform specifications, complexity of implementation, or the ability to support binary transports only. 
The most important shortcoming of these approaches is that the server-side applications are tightly coupled with the client-side applications, which is against the SOA principle. Today, with the emergence of web service technologies, SOA becomes a reality. Thanks to the dramatic increase in network bandwidth, and given the maturity of web service standards such as WS-Security, and WS-AtomicTransaction, an SOA back-end can now be implemented as a real system. SOA from different users' perspectives However, as we said earlier, SOA is not a technology, but only a style of architecture, or an approach to building software products. Different people view SOA in different ways. In fact, many companies now have their own definitions for SOA. Many companies claim they can offer an SOA solution, while they are really just trying to sell their products. The key point here is—SOA is not a solution. SOA alone can't solve any problem. It has to be implemented with a specific approach to become a real solution. You can't buy an SOA solution. You may be able to buy some kinds of products to help you realize your own SOA, but this SOA should be customized to your specific environment, for your specific needs. Even within the same organization, different players will think about SOA in quite different ways. What follows are just some examples of how different players in an organization judge the success of an SOA initiative using different criteria. [Gartner, Twelve Common SOA Mistakes and How to Avoid Them, Publication Date: 26 October 2007 ID Number: G00152446] To a programmer, SOA is a form of distributed computing in which the building blocks (services) may come from other applications or be offered to them. SOA increases the scope of a programmer's product and adds to his or her resources, while also closely resembling familiar modular software design principles. To a software architect, SOA translates to the disappearance of fences between applications. 
Architects turn to the design of business functions rather than to self-contained and isolated applications. The software architect becomes interested in collaboration with a business analyst to get a clear picture of the business functionality and scope of the application. SOA turns software architects into integration architects and business experts. For Chief Information Officers (CIOs), SOA is an investment in the future. Expensive in the short term, its long-term promises are lower costs and greater flexibility in meeting new business requirements. Re-use is the primary benefit anticipated as a means to reduce the cost and time of new application development. For business analysts, SOA is the bridge between them and the IT organization. It carries the promise that IT designers will understand them better, because the services in SOA reflect the business functions in business process models. For CEOs, SOA is expected to help IT become more responsive to business needs and facilitate competitive business change.

Complexities in SOA implementation

Although SOA will make it possible for business parties to achieve agility, SOA itself is technically not simple to implement. In some cases, it even makes software development more complex than ever, because with SOA you are building for unknown problems. On one hand, you have to make sure that the SOA blocks you are building are useful blocks. On the other, you need a framework within which you can assemble those blocks to perform business activities. The technology issues associated with SOA are more challenging than vendors would like users to believe. Web services technology has turned SOA into an affordable proposition for most large organizations by providing a universally-accepted, standard foundation. However, web services play a technology role only for the SOA backplane, which is the software infrastructure that enables SOA-related interoperability and integration. 
The following figure shows the technical complexity of SOA. It has been taken from Gartner, Twelve Common SOA Mistakes and How to Avoid Them, Publication Date: 26 October 2007 ID Number: G00152446. As Gartner says, users must understand the complex world of middleware, and use point-to-point web service connections only for small-scale, experimental SOA projects. If the number of services deployed grows to more than 20 or 30, then use a middleware-based intermediary: the SOA backplane. The SOA backplane could be an Enterprise Service Bus (ESB), a Message-Oriented Middleware (MOM), or an Object Request Broker (ORB). However, in this article, we will not cover it. We will build only point-to-point services using WCF.

Web services

There are many approaches to realizing SOA, but the most popular and practical one is using web services.

What is a web service?

A web service is a software system designed to support interoperable machine-to-machine interaction over a network. A web service is typically hosted on a remote machine (provider), and called by a client application (consumer) over a network. After the provider of a web service publishes the service, the client can discover it and invoke it. The communication between a web service and a client application uses XML messages. A web service is hosted within a web server, and HTTP is used as the transport protocol between the server and the client applications. The following diagram shows the interaction of web services:

Web services were invented to solve the interoperability problem between applications. In the early 90s, along with the development of LANs, WANs, and the Internet, it became a big problem to integrate different applications. An application might have been developed using C++ or Java, and run on a Unix box, a Windows PC, or even a mainframe computer. There was no easy way for it to communicate with other applications. 
It was the development of XML that made it possible to share data between applications across hardware boundaries and networks, or even over the Internet. For example, a Windows application might need to display the price of a particular stock. With a web service, this application can make a request to a URL and pass an XML string such as <QuoteRequest><GetPrice Symbol='XYZ'/></QuoteRequest>. The requested URL is actually the Internet address of a web service which, upon receiving the above quote request, gives a response, <QuoteResponse><QuotePrice Symbol='XYZ'>51.22</QuotePrice></QuoteResponse>. The Windows application then uses an XML parser to interpret the response package and display the price on the screen. The reason it is called a web service is that it is designed to be hosted in a web server, such as Microsoft Internet Information Server, and called over the Internet, typically via the HTTP or HTTPS protocols. This ensures that a web service can be called by any application, using any programming language, and under any operating system, as long as there is an active Internet connection and, of course, an open HTTP/HTTPS port, which is true for almost every computer on the Internet. Each web service has a unique URL and contains various methods. When calling a web service, you have to specify which method you want to call, and pass the required parameters to the web service method. Each web service method will also give a response package to tell the caller the execution results. Besides new applications being developed specifically as web services, legacy applications can also be wrapped up and exposed as web services. So, an IBM mainframe accounting system might be able to provide external customers with a link to check the balance of an account.

Web service WSDL

In order to be called by other applications, each web service has to supply a description of itself, so that other applications will know how to call it. 
This description is provided in a language called a WSDL. WSDL stands for Web Services Description Language. It is an XML format that defines and describes the functionalities of the web service, including the method names, parameter names, and types, and returning data types of the web service. For a Microsoft ASMX web service, you can get the WSDL by adding ?WSDL to the end of the web service URL, say http://localhost/MyService/MyService.asmx?WSDL. Web service proxy A client application calls a web service through a proxy. A web service proxy is a stub class between a web service and a client. It is normally auto-generated by a tool such as Visual Studio IDE, according to the WSDL of the web service. It can be re-used by any client application. The proxy contains stub methods mimicking all of methods of the web service so that a client application can call each method of the web service through these stub methods. It also contains other necessary information required by the client to call the web service such as custom exceptions, custom data and class types, and so on. The address of the web service can be embedded within the proxy class, or it can be placed inside a configuration file. A proxy class is always for a specific language. For each web service, there could be a proxy class for Java clients, a proxy class for C# clients, and yet another proxy class for COBOL clients. To call a web service from a client application, the proper proxy class first has to be added to the client project. Then, with an optional configuration file, the address of the web service can be defined. Within the client application, a web service object can be instantiated, and its methods can be called just as for any other normal method. SOAP There are many standards for web services. SOAP is one of them. SOAP was originally an acronym for Simple Object Access Protocol, and was designed by Microsoft. 
As this protocol became popular with the spread of web services, and its original meaning was misleading, the original acronym was dropped with version 1.2 of the standard. It is now merely a protocol, maintained by the W3C. SOAP, now, is a protocol for exchanging XML-based messages over computer networks. It is widely used by web services and has become their de facto protocol. With SOAP, the client application can send a request in XML format to a server application, and the server application will send back a response in XML format. The transport for SOAP is normally HTTP/HTTPS, and the wide acceptance of HTTP is one of the reasons why SOAP is so widely accepted today.
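Tying the stock-quote example together: the sketch below parses a quote response of the kind shown above and extracts the price, using the XML parser bundled with the JDK. The class and method names (QuoteClient, extractPrice) are illustrative, and the response is hard-coded rather than fetched from a live service:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class QuoteClient {

    // Pull the text content of the <QuotePrice> element out of a quote response.
    static String extractPrice(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return doc.getElementsByTagName("QuotePrice").item(0).getTextContent();
        } catch (Exception e) {
            throw new RuntimeException("Malformed quote response", e);
        }
    }

    public static void main(String[] args) {
        // In a real client this string would be the body of an HTTP response.
        String response =
            "<QuoteResponse><QuotePrice Symbol='XYZ'>51.22</QuotePrice></QuoteResponse>";
        System.out.println(extractPrice(response)); // prints 51.22
    }
}
```

A generated proxy class, as described in the proxy section above, hides exactly this kind of plumbing from the client.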


Getting Started with Scratch 1.4 (Part 2)

Packt
16 Oct 2009
7 min read
Add sprites to the stage

In the first part, we learned that if we want something done in Scratch, we tell a sprite by using blocks in the scripts area. A single sprite can't be responsible for carrying out all our actions, which means we'll often need to add sprites to accomplish our goals. We can add sprites to the stage in one of the following four ways: paint new sprite, choose new sprite from file, get a surprise sprite, or by duplicating a sprite. Duplicating a sprite is not in the scope of this article. The buttons to insert a new sprite using the other three methods are directly above the sprites list. Let's be surprised. Click on get surprise sprite (the button with the "?" on it). If the second sprite covers up the first sprite, grab one of them with your mouse and drag it around the screen to reposition it. If you don't like the sprite that popped up, delete it by selecting the scissors from the tool bar and clicking on the sprite. Then click on get surprise sprite again. Each sprite has a name that displays beneath the icon. See the previous screenshot for an example. Right now, our sprites are cleverly named Sprite1 and Sprite2.

Get new sprites

The paint new sprite option allows you to draw a sprite using the Paint Editor when you need a sprite that you can't find anywhere else. You can also create sprites using third-party graphics programs, such as Adobe Photoshop, GIMP, and Tux Paint. If you create a sprite in a different program, then you need to import the sprite using the choose new sprite from file option. Scratch also bundles many sprites with the installation, and the choose new sprite from file option will allow you to select one of the included files. The bundled sprites are categorized into Animals, Fantasy, Letters, People, Things, and Transportation, as seen in the following screenshot: If you look at the screenshot carefully, you'll notice the folder path lists Costumes, not sprites. A costume is really a sprite.
If you want to be surprised, then use the get surprise sprite option to add a sprite to the project. This option picks a random entry from the gallery of bundled sprites. We can also add a new sprite by duplicating a sprite that's already in the project by right-clicking on the sprite in the sprites list and choosing duplicate (command C on Mac). As the name implies, this creates a clone of the sprite. The method we use to add a new sprite depends on what we are trying to do and what we need for our project. Time for action – spin sprite spin Let's get our sprites spinning. To start, click on Sprite1 from the sprites list. This will let us edit the script for Sprite1. From the Motion palette, drag the turn clockwise 15 degrees block into the script for Sprite1 and snap it in place after the if on edge, bounce block. Change the value on the turn block to 5. From the sprites list, click on Sprite2. From the Motion palette, drag the turn clockwise 15 degrees block into the scripts area. Find the repeat 10 block from the Control palette and snap it around the turn clockwise 15 degrees block. Wrap the script in the forever block. Place the when space key pressed block on top of the entire stack of blocks. From the Looks palette, snap the say hello for 2 secs block onto the bottom of the repeat block and above the forever block. Change the value on the repeat block to 100. Change the value on the turn clockwise 15 degrees block to 270. Change the value on the say block to I'm getting dizzy! Press the Space bar and watch the second sprite spin. Click the flag and set the second sprite on a trip around the stage. What just happened? We have two sprites on the screen acting independently of each other. It seems simple enough, but let's step through our script. Our cat got bored bouncing in a straight line across the stage, so we introduced some rotation. Now as the cat walked, it turned five degrees each time the blocks in the forever loop ran. 
This caused the cat to walk in an arc. As the cat bounced off the stage, it got a new trajectory. We told Sprite2 to turn 270 degrees for 100 consecutive times. Then the sprite stopped for two seconds and displayed a message, "I'm getting dizzy!" Because the script was wrapped in a forever block, Sprite2 started tumbling again. We used the space bar as the control to set Sprite2 in motion. However, you noticed that Sprite1 did not start until we clicked the flag. That's because we programmed Sprite1 to start when the flag was clicked. Have a go hero Make Sprite2 less spastic. Instead of turning 270 degrees, try a smaller value, such as 5. Sometimes we need inspiration So far, we've had a cursory introduction to Scratch, and we've created a few animations to illustrate some basic concepts. However, now is a good time to pause and talk about inspiration. Sometimes we learn by examining the work of other people and adapting that work to create something new that leads to creative solutions. When we want to see what other people are doing with Scratch, we have two places to turn. First, our Scratch installation contains dozens of sample projects. Second, the Scratch web site at http://scratch.mit.edu maintains a thriving community of Scratchers. Browse Scratch's projects Scratch includes several categories of projects for Animation, Games, Greetings, Interactive Art, Lists, Music and Dance, Names, Simulations, Speak up, and Stories. Time for action – spinner Let's dive right in. From the Scratch interface, click the Open button to display the Open Project dialog box, as seen in the following screenshot. Click on the Examples button. Select Simulations and click OK. Select Spinner and click OK to load the Spinner project. Follow the instructions on the screen and spin the arrow by clicking on the arrow. We're going to edit the spinner wheel. From the sprites list, click on Stage. From the scripts area, click the Backgrounds tab. 
Click Edit on background number 1 to open the Paint Editor. Select a unique color from the color palette, such as purple. Click on the paint bucket from the toolbar, then click on one of the triangles in the circle to change its color. The paint bucket is highlighted in the following screenshot. Click OK to return to our project. What just happened? We opened a community project called Spinner that came bundled with Scratch. When we clicked on the arrow, it spun and randomly selected a color from the wheel. We got our first look at a project that uses a background for the stage and modified the background using Scratch's built-in image editor. The Paint Editor in Scratch provides a basic but functional image editing environment. Using the Paint Editor, we can create a new sprite/background and modify a sprite/background. This can be useful if we are working with a sprite or background that someone else has created. Costume versus background A costume defines the look of a sprite while a background defines the look of the stage. A sprite may have multiple costumes just as the stage can have multiple backgrounds. When we want to work with the backgrounds on the stage, we use the switch to background and next background blocks. We use the switch to costume and next costume blocks when we want to manipulate a sprite's costume. Actually, if you look closely at the available looks blocks when you're working with a sprite, you'll realize that you can't select the backgrounds. Likewise, if you're working with the stage, you can't select costumes.


EJB 3 Entities

Packt
16 Oct 2009
6 min read
The JPA can be regarded as a higher level of abstraction sitting on top of JDBC. Under the covers, the persistence engine converts JPA statements into lower level JDBC statements.

EJB 3 Entities

In JPA, any class or POJO (Plain Old Java Object) can be converted to an entity with very few modifications. The following listing shows an entity Customer.java with attributes id, which is unique for a Customer instance, and firstName and lastName.

package ejb30.entity;

import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Customer implements java.io.Serializable {
    private int id;
    private String firstName;
    private String lastName;

    public Customer() {}

    @Id
    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public String getFirstname() { return firstName; }
    public void setFirstname(String firstName) { this.firstName = firstName; }

    public String getLastname() { return lastName; }
    public void setLastname(String lastName) { this.lastName = lastName; }

    public String toString() {
        return "[Customer Id =" + id + ",first name=" + firstName +
               ",last name=" + lastName + "]";
    }
}

The class follows the usual JavaBean rules. The instance variables are non-public and are accessed by clients through appropriately named getter and setter accessor methods. Only a couple of annotations have been added to distinguish this entity from a POJO. Annotations specify entity metadata. They are not an intrinsic part of an entity, but describe how an entity is persisted. The @Entity annotation indicates to the persistence engine that the annotated class, in this case Customer, is an entity. The annotation is placed immediately before the class definition and is an example of a class level annotation. We can also have property-based and field-based annotations, as we shall see. The @Id annotation specifies the primary key of the entity. The id attribute is a primary key candidate.
Note that we have placed the annotation immediately before the corresponding getter method, getId(). This is an example of a property-based annotation. A property-based annotation must be placed immediately before the corresponding getter method, and not the setter method. Where property-based annotations are used, the persistence engine uses the getter and setter methods to access and set the entity state. An alternative to property-based annotations is field-based annotations. An example of this is shown later. Note that all annotations within an entity, other than class level annotations, must be all property-based or all field-based. The final requirement for an entity is the presence of a no-arg constructor. Our Customer entity also implements the java.io.Serializable interface. This is not essential, but it is good practice because the Customer entity has the potential of becoming a detached entity. Detached entities must implement the Serializable interface. At this point we remind the reader that, as throughout EJB 3, XML deployment descriptors are an alternative to entity metadata annotations.

Comparison with EJB 2.x Entity Beans

An EJB 3 entity is a POJO and not a component, so it is referred to as an entity and not an entity bean. In EJB 2.x the corresponding construct is an entity bean component with the same artifacts as session beans, namely an XML deployment descriptor file, a remote or local interface, a home or localhome interface, and the bean class itself. The remote or local interface contains getter and setter method definitions. The home or local interface contains definitions for the create() and findByPrimaryKey() methods, and optionally other finder method definitions. As with session beans, the entity bean class contains callback methods such as ejbCreate(), ejbLoad(), ejbStore(), ejbRemove(), ejbActivate(), ejbPassivate(), and setEntityContext(). The EJB 3 entity, being a POJO, can run outside a container.
Its clients are always local to the JVM. The EJB 2.x entity bean is a distributed object that needs a container to run, but can have clients from outside its JVM. Consequently, EJB 3 entities are more reusable and easier to test than EJB 2.x entity beans. In EJB 2.x we need to decide whether the persistence aspects of an entity bean are handled by the container (Container Managed Persistence or CMP) or by the application (Bean Managed Persistence or BMP). In the case of CMP, the entity bean is defined as an abstract class with abstract getter and setter method definitions. At deployment the container creates a concrete implementation of this abstract entity bean class. In the case of BMP, the entity bean is defined as a class. The getter and setter methods need to be coded. In addition, the ejbCreate(), ejbLoad(), ejbStore(), ejbFindByPrimaryKey(), and any other finder methods need to be coded using JDBC.

Mapping an Entity to a Database Table

We can map entities onto just about any relational database. GlassFish includes an embedded Derby relational database. If we want GlassFish to access another relational database, Oracle say, then we need to use the GlassFish admin console to set up an Oracle data source. We also need to refer to this Oracle data source in the persistence.xml file. We will describe the persistence.xml file later in this article. These steps are not required if we use the GlassFish default Derby data source. All the examples in this article will use the Derby database. EJB 3 makes heavy use of defaulting for describing entity metadata. In this section we describe a few of these defaults. First, by default, the persistence engine maps the entity name to a relational table name. So in our example the table name is CUSTOMER. If we want to map the Customer entity to another table, we will need to use the @Table annotation, which we shall see later. By default, property or field names are mapped to a column name.
So ID, FIRSTNAME, and LASTNAME are the column names corresponding to the id, firstName, and lastName entity attributes. If we want to change this default behavior, we will need to use the @Column annotation, which we shall see later. JDBC rules are used for mapping Java primitives to relational datatypes. So a String will be mapped to VARCHAR for a Derby database and VARCHAR2 for an Oracle database. An int will be mapped to INTEGER for a Derby database and NUMBER for an Oracle database. The size of a column mapped from a String defaults to 255, for example VARCHAR(255) for Derby or VARCHAR2(255) for Oracle. If we want to change this column size then we need to use the length element of the @Column annotation, which we shall see later. To summarize, if we are using the GlassFish container with the embedded Derby database, the Customer entity will map onto the following table:

CUSTOMER
    ID          INTEGER   PRIMARY KEY
    FIRSTNAME   VARCHAR(255)
    LASTNAME    VARCHAR(255)

Most persistence engines, including the GlassFish default persistence engine, TopLink, have a schema generation option, although this is not required by the JPA specification. In the case of GlassFish, if a flag is set when the application is deployed to the container, then the container will create the mapped table in the database. Otherwise, the table is assumed to exist in the database.
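The default mapping just described corresponds, on Derby, to DDL along the following lines (an illustrative sketch; the schema a persistence engine actually generates may differ in detail):

```sql
-- Default table generated for the Customer entity on Derby (illustrative)
CREATE TABLE CUSTOMER (
    ID        INTEGER NOT NULL PRIMARY KEY,
    FIRSTNAME VARCHAR(255),
    LASTNAME  VARCHAR(255)
);
```

If the entity were annotated with, say, @Table(name = "CUST") or @Column(length = 40), the table name and column definitions above would change accordingly.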


Asterisk Gateway Interface Scripting with PHP

Packt
16 Oct 2009
4 min read
PHP-CLI vs PHP-CGI

Most Linux distributions include both versions of PHP when installed, especially if you are using a modern distribution such as CentOS or Mandriva. When writing AGI scripts with PHP, it is imperative that you use PHP-CLI, and not PHP-CGI. Why is this so important? The main issue is that PHP-CLI and PHP-CGI handle their STDIN (standard input) slightly differently, which makes the reading of channel variables via PHP-CGI slightly more problematic.

The php.ini configuration file

The PHP interpreter includes a configuration file that defines a set of defaults for the interpreter. For your scripts to work in an efficient manner, the following must be set, either via the php.ini file or by your PHP script:

ob_implicit_flush(false);
set_time_limit(5);
error_log = filename;
error_reporting(0);

The above settings perform the following:

ob_implicit_flush(false);
    Sets your PHP output buffering behavior, in order to make sure that output from your AGI script to Asterisk is not buffered, which would make it take longer to reach Asterisk.

set_time_limit(5);
    Sets a time limit on your AGI scripts to verify that they don't extend beyond a reasonable time of execution. There is no rule of thumb relating to the actual value; it is highly dependent on your implementation. Depending on your system and applications, your maximum time limit may be set to any value; however, we suggest that you verify your scripts, and are able to work with a maximum limit of 30 seconds.

error_log = filename;
    Excellent for debugging purposes; always creates a log file.

error_reporting(0);
    Does not report errors to the error_log; change the value to enable different logging parameters; check the PHP website for additional information about this.

AGI script permissions

All AGI scripts must be located in the directory /var/lib/asterisk/agi-bin, which is Asterisk's default directory for AGI scripts.
All AGI scripts should have the execute permission, and should be owned by the user running Asterisk. If you are unfamiliar with these, consult your system administrator for additional information.

The structure of a PHP based AGI script

Every PHP based AGI script takes the following form:

#!/usr/bin/php -q
<?
$stdin = fopen('php://stdin', 'r');
$stdout = fopen('php://stdout', 'w');
$stdlog = fopen('my_agi.log', 'w');

/* Operational Code starts here */
..
..
..
?>

Upon execution, Asterisk transmits a set of information to our AGI script via STDIN. Handling of that input is best performed in the following manner:

#!/usr/bin/php -q
<?
$stdin = fopen('php://stdin', 'r');
$stdout = fopen('php://stdout', 'w');
$stdlog = fopen('my_agi.log', 'w');

/* Handling execution input from Asterisk */
while (!feof($stdin)) {
    $temp = trim(fgets($stdin));
    if ($temp == "") {
        break;
    }
    $s = explode(":", $temp, 2);
    $agivar[$s[0]] = trim($s[1]);
}

/* Operational Code starts here */
..
..
..
?>

Once we have handled our inbound information from the Asterisk server, we can start our actual operational flow.

Communication between Asterisk and AGI

The communication between Asterisk and an AGI script is performed via STDIN and STDOUT (standard output). Let's examine the following diagram:

In the above diagram, ASC refers to our AGI script, while AST refers to Asterisk itself. As you can see, the entire flow is fairly simple: it is just a set of I/O queries and responses carried through the STDIN/STDOUT data streams. Let's now examine a slightly more complicated example:

The above figure shows an example that includes two new elements in our AGI logic: access to a database, and to information provided via a web service. For example, the above image illustrates something that may be used as a connection between the telephony world and a dating service.
This leads to an immediate conclusion: just as AGI is capable of connecting to almost any type of information source, depending solely on the implementation of the AGI script and not on Asterisk, so Asterisk is capable of interfacing with almost any type of information source via these out-of-band facilities. Enough talking! Let's write our first AGI script.
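Before doing so, it is worth seeing what the initial input from Asterisk actually looks like. On startup, Asterisk writes a series of agi_ variables to the script's STDIN, one name: value pair per line, terminated by a blank line; it is this block that the while loop shown earlier parses into the $agivar array. The values below are illustrative:

```text
agi_request: my_agi.php
agi_channel: SIP/1000-0823a400
agi_language: en
agi_type: SIP
agi_uniqueid: 1191188461.2
agi_callerid: 1000
agi_context: default
agi_extension: 500
agi_priority: 1

```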

A Primer to AGI: Asterisk Gateway Interface

Packt
16 Oct 2009
2 min read
How does AGI work

Let's examine the following diagram:

As the previous diagram illustrates, an AGI script communicates with Asterisk via two standard data streams: STDIN (Standard Input) and STDOUT (Standard Output). From the AGI script's point of view, any input coming in from Asterisk is considered STDIN, while output to Asterisk is considered STDOUT. The idea of using STDIN/STDOUT data streams with applications isn't a new one, even if you're a junior level programmer. Think of it as reading any input from Asterisk with a read directive and writing output to Asterisk with a print or echo directive. When thinking about it in such a simplistic manner, it is clear that AGI scripts can be written in any scripting or programming language, ranging from BASH scripting, through PERL/PHP scripting, to even writing C/C++ programs to perform the same task.

Let's now examine how an AGI script is invoked from within the Asterisk dialplan:

exten => _X.,1,AGI(some_script_name.agi,param1,param2,param3)

As you can see, the invocation is similar to the invocation of any other Asterisk dialplan application. However, there is one major difference between a regular dialplan application and an AGI script: the resources an AGI script consumes. While an internal application consumes a well-known set of resources from Asterisk, an AGI script simply hands over control to an external process. Thus, the resources required to execute the external AGI script are now unknown, while at the same time, Asterisk consumes the resources for managing the execution of the AGI script. OK, so BASH isn't much of a resource hog, but what about Java? This means that the choice of programming language for your AGI scripts is important. Choosing the wrong programming language can often lead to slow systems and, in most cases, non-operational systems.
While one may argue that the underlying programming language has a direct impact on the performance of your AGI application, it is imperative to understand the impact of each. To be more exact, it's not the language itself, but the technology of the programming language's runtime, that is important. The following table tries to distinguish between three programming language families and their applicability to AGI development.


JBI Binding Components in NetBeans IDE 6

Packt
16 Oct 2009
4 min read
Binding Components

Service Engines are pluggable components which connect to the Normalized Message Router (NMR) to perform business logic for clients. Binding components are also standard JSR 208 components that plug in to the NMR and provide transport independence to the NMR and Service Engines. The role of binding components is to isolate communication protocols from the JBI container so that Service Engines are completely decoupled from the communication infrastructure. For example, the BPEL Service Engine can receive requests to initiate a BPEL process while reading files on the local file system. It can receive these requests from SOAP messages, from a JMS message, or from any of the other binding components installed into the JBI container. A binding component is a JSR 208 component that provides protocol independent transport services to other JBI components. The following figure shows how binding components fit into the JBI container architecture: In this figure, we can see that the role of a binding component is to send and receive messages both internally and externally from the Normalized Message Router, using protocols specific to the binding component. We can also see that any number of binding components can be installed into the JBI container. This figure shows that, like Service Engines (SE), binding components do not communicate directly with other binding components or with Service Engines. All communication between individual binding components, and between binding components and Service Engines, is performed via sending standard messages through the Normalized Message Router.

NetBeans Support for Binding Components

The following table lists which binding components are installed into the JBI container with NetBeans 5.5 and NetBeans 6.0:

As is the case with Service Engines, binding components can be managed within the NetBeans IDE.
The list of Binding Components installed into the JBI container can be displayed by expanding the Servers | Sun Java System Application Server 9 | JBI | Binding Components node within the Services explorer. The lifecycle of binding components can be managed by right-clicking on a binding component and selecting a lifecycle process—Start, Stop, Shutdown, or Uninstall. The properties of an individual binding component can also be obtained by selecting the Properties menu option from the context menu as shown in the following figure. Now that we've discussed what binding components are, and how they communicate both internally and externally to the Normalized Message Router, let's take a closer look at some of the more common binding components and how they are accessed and managed from within the NetBeans IDE. File Binding Component The file binding component provides a communications mechanism for JBI components to interact with the file system. It can act as both a Provider by checking for new files to process, or as a Consumer by outputting files for other processes or components. The figure above shows the file binding component acting as a Provider of messages. In this scenario, a message has been sent to the JBI container, and picked up by a protocol-specific binding component (for example, a SOAP message has been received). A JBI Process then occurs within the JBI container which may include routing the message between many different binding components and Service Engines depending upon the process. Finally, after the JBI Process has completed, the results of the process are sent to File Binding Component which writes out the result to a file. The figure above shows the file binding component acting as a Consumer of messages. In this situation, the File Binding Component is periodically polling the file system looking for files with a specified filename pattern in a specified directory. 
When the binding component finds a file that matches its criteria, it reads in the file and starts the JBI Process, which may again cause the input message to be routed between many different binding components and Service Engines. Finally, in this example, the results of the JBI Process are output via a Binding Component. Of course, it is possible that a binding component can act as both a provider and a consumer within the same JBI process. In this case, the file binding component would be initially responsible for reading an input message from the file system. After any JBI processing has occurred, the file binding component would then write out the results of the process to a file. Within the NetBeans Enterprise Pack, the entire set of properties for the file binding component can be edited within the Properties window. The properties for the binding component are displayed when either the input or output messages are selected from the WSDL in a composite application as shown in the following figure.


WCF – Windows Communication Foundation

Packt
16 Oct 2009
17 min read
What is WCF?

WCF is the acronym for Windows Communication Foundation. It is Microsoft's latest technology that enables applications in a distributed environment to communicate with each other. WCF is Microsoft's unified programming model for building service-oriented applications. It enables developers to build secure, reliable, transacted solutions that integrate across platforms and interoperate with existing investments. WCF is built on the Microsoft .NET Framework and simplifies the development of connected systems. It unifies a broad array of distributed systems capabilities in a composable, extensible architecture that supports multiple transports, messaging patterns, encodings, network topologies, and hosting models. It is the next version of several existing products: ASP.NET's web methods (ASMX) and Microsoft Web Services Enhancements (WSE) for Microsoft .NET, .NET Remoting, Enterprise Services, and System.Messaging. The purpose of WCF is to provide a single programming model that can be used to create services on the .NET platform for organizations.

Why is WCF used for SOA?

As we have seen in the previous section, WCF is an umbrella technology that covers ASMX web services, .NET Remoting, WSE, Enterprise Services, and System.Messaging. It is designed to offer a manageable approach to distributed computing, broad interoperability, and direct support for service orientation. WCF supports many styles of distributed application development by providing a layered architecture. At its base, the WCF channel architecture provides asynchronous, untyped message-passing primitives. Built on top of this base are protocol facilities for secure, reliable, transacted data exchange and a broad choice of transport and encoding options. Let us take an example to see why WCF is a good approach for SOA. Suppose a company is designing a service to get loan information.
This service could be used by the internal call center application, an Internet web application, and a third-party Java J2EE application such as a banking system. For interactions with the call center client application, performance is important. For communication with the J2EE-based application, however, interoperability becomes the highest goal. The security requirements are also quite different between the local Windows-based application and the J2EE-based application running on another operating system. Even transactional requirements might vary, with only the internal application being allowed to make transactional requests. With these complex requirements, it is not easy to build the desired service with any single existing technology. For example, the ASMX technology may serve well for interoperability, but its performance may not be ideal. .NET Remoting would be a good choice from the performance perspective, but it is not good at interoperability. Enterprise Services could be used for managing object lifetimes and defining distributed transactions, but Enterprise Services supports only a limited set of communication options. Now with WCF, it is much easier to implement this service. As WCF has unified a broad array of distributed systems capabilities, the get loan service can be built with WCF for all of its application-to-application communication. The following shows how WCF addresses each of these requirements: Because WCF can communicate using web service standards, interoperability with other platforms that also support SOAP, such as the leading J2EE-based application servers, is straightforward. You can also configure and extend WCF to communicate with web services using messages not based on SOAP, for example, simple XML formats such as RSS. Performance is of paramount concern for most businesses. WCF was developed with the goal of being one of the fastest distributed application platforms developed by Microsoft.
To allow for optimal performance when both parties in a communication are built on WCF, the wire encoding used in this case is an optimized binary version of an XML Information Set. Using this option makes sense for communication with the call center client application, because it is also built on WCF, and performance is an important concern. Managing object lifetimes, defining distributed transactions, and other aspects of Enterprise Services, are now provided by WCF. They are available to any WCF-based application, which means that the get loan service can use them with any of the other applications that it communicates with. Because it supports a large set of the WS-* specifications, WCF helps to provide reliability, security, and transactions when communicating with any platform that supports these specifications. The WCF option for queued messaging, built on Message Queuing, allows applications to use persistent queuing without using another set of application programming interfaces. The result of this unification is greater functionality, and significantly reduced complexity. WCF architecture The following diagram illustrates the major layers of the Windows Communication Foundation (WCF) architecture. This diagram is taken from the Microsoft web site (http://msdn.microsoft.com/en-us/library/ms733128.aspx): The Contracts layer defines various aspects of the message system. For example, the Data Contract describes every parameter that makes up every message that a service can create or consume. The Service runtime layer contains the behaviors that occur only during the actual operation of the service, that is, the runtime behaviors of the service. The Messaging layer is composed of channels. A channel is a component that processes a message in some way, for example, authenticating a message. In its final form, a service is a program. Like other programs, a service must be run in an executable format. This is known as the hosting application. 
In the next section, we will explain these concepts in detail.

Basic WCF concepts—WCF ABCs

There are many terms and concepts around WCF, such as address, binding, contract, endpoint, behavior, hosting, and channels. Understanding these terms is very helpful when using WCF.

Address

A WCF address is the specific location of a service—the place to which messages will be sent. All WCF services are deployed at a specific address and listen at that address for incoming requests. A WCF address is normally specified as a URI, with the first part specifying the transport mechanism and the hierarchical part specifying the unique location of the service. For example, http://www.myweb.com/myWCFServices/SampleService could be the address of a WCF service. This service uses HTTP as its transport protocol, is located on the server www.myweb.com, and has the unique service path myWCFServices/SampleService. The following diagram illustrates the three parts of a WCF service address.

Binding

Bindings specify the transport, encoding, and protocol details required for clients and services to communicate with each other. WCF uses bindings to generate the underlying wire representation of the endpoint, so most of the details of a binding must be agreed upon by the parties that are communicating. The easiest way to achieve this is for clients of a service to use the same binding that the service uses.

A binding is made up of a collection of binding elements. Each element describes some aspect of how the service communicates with clients. A binding must include at least one transport binding element, at least one message encoding binding element (which can be provided by the transport binding element by default), and any number of other protocol binding elements. The process that builds a runtime out of this description allows each binding element to contribute code to that runtime.
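To make the idea of binding elements concrete, here is a sketch of how a custom binding could be composed from individual elements in a service configuration file. The binding name myCustomBinding is an illustrative placeholder, not taken from the text:

```xml
<bindings>
  <customBinding>
    <!-- A custom binding is an ordered collection of binding elements. -->
    <binding name="myCustomBinding">
      <!-- Optional protocol binding elements come first, e.g. reliable messaging. -->
      <reliableSession ordered="true" />
      <!-- Exactly one message encoding binding element. -->
      <textMessageEncoding messageVersion="Soap12" />
      <!-- Exactly one transport binding element, always last in the stack. -->
      <httpTransport />
    </binding>
  </customBinding>
</bindings>
```

An endpoint would then reference this configuration with binding="customBinding" bindingConfiguration="myCustomBinding".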
WCF provides bindings that contain common selections of binding elements. These can either be used with their default settings, or the default values can be modified according to user requirements. These system-provided bindings have properties that allow direct control over the binding elements and their settings. The following are some examples of the system-provided bindings: BasicHttpBinding, WSHttpBinding, WSDualHttpBinding, WSFederationHttpBinding, NetTcpBinding, NetNamedPipeBinding, NetMsmqBinding, NetPeerTcpBinding, and MsmqIntegrationBinding. Each of these built-in bindings has predefined required elements for a common task and is ready to be used in your project. For instance, BasicHttpBinding uses HTTP as the transport for sending SOAP 1.1 messages, and it has attributes and elements such as receiveTimeout, sendTimeout, maxMessageSize, and maxBufferSize. You can accept the default settings of its attributes and elements, or overwrite them as needed.

Contract

A WCF contract is a set of specifications that define the interfaces of a WCF service. A WCF service communicates with other applications according to its contracts. There are several types of WCF contracts, such as the service contract, operation contract, data contract, message contract, and fault contract.

Service contract

A service contract is the interface of the WCF service. Basically, it tells others what the service can do. It may include service-level settings, such as the name of the service, the namespace of the service, and the corresponding callback contracts of the service. Inside the interface, it can define a number of methods, or service operations, for specific tasks. Normally, a WCF service has at least one service contract.

Operation contract

An operation contract is defined within a service contract. It defines the parameters and return type of an operation.
An operation can take data of a primitive (native) data type, such as an integer, as a parameter, or it can take a message, which should be defined as a message contract type. Just as a service contract is an interface, an operation contract is the definition of an operation. It has to be implemented in order for the service to function as a WCF service. An operation contract also defines operation-level settings, such as the transaction flow of the operation, the direction of the operation (one-way, two-way, or both ways), and the fault contract of the operation.

The following is an example of an operation contract:

[WCF::FaultContract(typeof(MyWCF.EasyNorthwind.FaultContracts.ProductFault))]
MyWCF.EasyNorthwind.MessageContracts.GetProductResponse
GetProduct(MyWCF.EasyNorthwind.MessageContracts.GetProductRequest request);

In this example, the operation contract's name is GetProduct. It takes one input parameter, of type GetProductRequest (a message contract), and has one return value, of type GetProductResponse (another message contract). It may return a fault message, of type ProductFault (a fault contract), to the client applications. We will cover message contracts and fault contracts in the following sections.

Message contract

If an operation contract needs to pass a message as a parameter or return a message, the type of these messages will be defined as a message contract. A message contract defines the elements of the message, as well as any message-related settings, such as the level of message security, and whether an element should go to the header or to the body.
The following is a message contract example:

namespace MyWCF.EasyNorthwind.MessageContracts
{
    /// <summary>
    /// Service Contract Class - GetProductResponse
    /// </summary>
    [WCF::MessageContract(IsWrapped = false)]
    public partial class GetProductResponse
    {
        private MyWCF.EasyNorthwind.DataContracts.Product product;

        [WCF::MessageBodyMember(Name = "Product")]
        public MyWCF.EasyNorthwind.DataContracts.Product Product
        {
            get { return product; }
            set { product = value; }
        }
    }
}

In this example, the namespace of the message contract is MyWCF.EasyNorthwind.MessageContracts, and the message contract's name is GetProductResponse. This message contract has one member, which is of type Product.

Data contract

Data contracts define the data types of the WCF service. All data types used by the WCF service must be described in metadata to enable other applications to interoperate with the service. A data contract can be used by an operation contract as a parameter or return type, or it can be used by a message contract to define elements. If a WCF service uses only primitive (native) data types, it is not necessary to define any data contract.
The following is an example of a data contract:

namespace MyWCF.EasyNorthwind.DataContracts
{
    /// <summary>
    /// Data Contract Class - Product
    /// </summary>
    [WcfSerialization::DataContract(
        Namespace = "http://MyCompany.com/ProductService/EasyWCF/2008/05",
        Name = "Product")]
    public partial class Product
    {
        private int productID;
        private string productName;

        [WcfSerialization::DataMember(Name = "ProductID", IsRequired = false, Order = 0)]
        public int ProductID
        {
            get { return productID; }
            set { productID = value; }
        }

        [WcfSerialization::DataMember(Name = "ProductName", IsRequired = false, Order = 1)]
        public string ProductName
        {
            get { return productName; }
            set { productName = value; }
        }
    }
}

In this example, the namespace of the data contract is MyWCF.EasyNorthwind.DataContracts, the name of the data contract is Product, and this data contract has two members (ProductID and ProductName).

Fault contract

In any WCF service operation contract, if an error can be returned to the caller, the caller should be warned of that error. Such error types are defined as fault contracts. An operation can have zero or more fault contracts associated with it. The following is a fault contract example:

namespace MyWCF.EasyNorthwind.FaultContracts
{
    /// <summary>
    /// Data Contract Class - ProductFault
    /// </summary>
    [WcfSerialization::DataContract(
        Namespace = "http://MyCompany.com/ProductService/EasyWCF/2008/05",
        Name = "ProductFault")]
    public partial class ProductFault
    {
        private string faultMessage;

        [WcfSerialization::DataMember(Name = "FaultMessage", IsRequired = false, Order = 0)]
        public string FaultMessage
        {
            get { return faultMessage; }
            set { faultMessage = value; }
        }
    }
}

In this example, the namespace of the fault contract is MyWCF.EasyNorthwind.FaultContracts, the name of the fault contract is ProductFault, and the fault contract has only one member (FaultMessage).

Endpoint

Messages are sent between endpoints.
Endpoints are places where messages are sent or received (or both), and they define all of the information required for the message exchange. A service exposes one or more application endpoints (as well as zero or more infrastructure endpoints). A service can expose this information as metadata that clients can process to generate appropriate WCF clients and communication stacks. When needed, the client generates an endpoint that is compatible with one of the service's endpoints.

A WCF service endpoint has an address, a binding, and a service contract (the WCF ABCs):

The endpoint's address is a network address where the endpoint resides. It describes, in a standards-based way, where messages should be sent. Each endpoint normally has one unique address, but sometimes two or more endpoints can share the same address.

The endpoint's binding specifies how the endpoint communicates with the world, including things such as the transport protocol (TCP, HTTP), encoding (text, binary), and security requirements (SSL, SOAP message security).

The endpoint's contract specifies what the endpoint communicates, and is essentially a collection of messages organized into operations that have basic Message Exchange Patterns (MEPs) such as one-way, duplex, or request/reply.

The following diagram shows the components of a WCF service endpoint.

Behavior

A WCF behavior is a type, or a set of settings, that extends the functionality of the original type. There are many types of behaviors in WCF, such as service behaviors, binding behaviors, contract behaviors, security behaviors, and channel behaviors. For example, a new service behavior can be defined to specify the transaction timeout of the service, the maximum number of concurrent instances of the service, and whether the service publishes metadata. Behaviors are configured in the WCF service configuration file.

Hosting

A WCF service is a component that can be called by other applications.
It must be hosted in an environment in order to be discovered and used by others. The WCF host is an application that controls the lifetime of the service. With .NET 3.0 and beyond, there are several ways to host a service.

Self hosting

A WCF service can be self-hosted, which means that the service runs as a standalone application and controls its own lifetime. This is the most flexible and easiest way of hosting a WCF service, but its availability and features are limited.

Windows services hosting

A WCF service can also be hosted as a Windows service. A Windows service is a process managed by the operating system, and it is automatically started when Windows starts (if it is configured to do so). However, this option lacks some critical features (such as versioning) for WCF services.

IIS hosting

A better way of hosting a WCF service is to use IIS. This is the traditional way of hosting a web service. IIS, by nature, has many useful features, such as process recycling, idle shutdown, process health monitoring, message-based activation, high availability, easy manageability, versioning, and deployment scenarios. All of these features are required for enterprise-level WCF services.

Windows Activation Services hosting

The IIS hosting method, however, comes with several limitations in the service-orientation world; the dependency on HTTP is the main culprit. With IIS hosting, many of WCF's flexible options can't be utilized. This is the reason why Microsoft developed a new method, called Windows Activation Services, specifically to host WCF services.

Windows Process Activation Service (WAS) is the new process activation mechanism for Windows Server 2008 that is also available on Windows Vista. It retains the familiar IIS 6.0 process model (application pools and message-based process activation) and hosting features (such as rapid failure protection, health monitoring, and recycling), but it removes the dependency on HTTP from the activation architecture.
IIS 7.0 uses WAS to accomplish message-based activation over HTTP. Additional WCF components also plug into WAS to provide message-based activation over the other protocols that WCF supports, such as TCP, MSMQ, and named pipes. This allows applications that use non-HTTP communication protocols to use IIS features such as process recycling, rapid fail protection, and the common configuration systems that were previously available only to HTTP-based applications. This hosting option requires that WAS be properly configured, but it does not require you to write any hosting code as part of the application. [Microsoft MSDN, Hosting Services, retrieved on 3/6/2008 from http://msdn2.microsoft.com/en-us/library/ms730158.aspx]

Channels

As we have seen in the previous sections, a WCF service has to be hosted in an application on the server side, while on the client side, client applications have to specify bindings to connect to WCF services. Binding elements are interfaces, and they have to be implemented in concrete classes. The concrete implementation of a binding element is called a channel. The binding represents the configuration, and the channel is the implementation associated with that configuration; therefore, there is a channel associated with each binding element. Channels stack on top of one another to create the concrete implementation of the binding—the channel stack.

The WCF channel stack is a layered communication stack with one or more channels that process messages. At the bottom of the stack is a transport channel that is responsible for adapting the channel stack to the underlying transport (for example, TCP, HTTP, SMTP, and other types of transport). Channels provide a low-level programming model for sending and receiving messages. This programming model relies on several interfaces and other types collectively known as the WCF channel model.
The following diagram shows a simple channel stack:

Metadata

The metadata of a service describes the characteristics of the service that an external entity needs to understand in order to communicate with the service. Metadata can be consumed by the ServiceModel Metadata Utility Tool (Svcutil.exe) to generate a WCF client and the accompanying configuration that a client application can use to interact with the service.

The metadata exposed by the service includes XML schema documents, which define the data contracts of the service, and WSDL documents, which describe the methods of the service. Though WCF services always have metadata, it is possible to hide the metadata from outsiders. If you do so, you have to pass the metadata to the client side by other means. This practice is not common, but it gives your services an extra layer of security.

When enabled via the metadata behavior in the configuration settings, metadata for the service can be retrieved by inspecting the service and its endpoints. The following setting in a WCF service configuration file enables metadata publishing over the HTTP transport protocol:

<serviceMetadata httpGetEnabled="true" />
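Pulling the address, binding, contract, behavior, and metadata pieces together, a minimal service configuration could look like the following sketch. The service and contract names (ProductService, IProductService) are illustrative placeholders, not taken from the text above:

```xml
<system.serviceModel>
  <services>
    <!-- The service exposes one endpoint: its Address, Binding, and Contract. -->
    <service name="MyWCF.EasyNorthwind.ProductService"
             behaviorConfiguration="metadataBehavior">
      <endpoint address="http://www.myweb.com/myWCFServices/SampleService"
                binding="basicHttpBinding"
                contract="MyWCF.EasyNorthwind.IProductService" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <!-- A service behavior that publishes metadata over HTTP GET. -->
      <behavior name="metadataBehavior">
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```

With this in place, Svcutil.exe can be pointed at the service address to generate a client proxy and the accompanying client configuration.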

Getting Started with Scratch 1.4 (Part 1)

Packt, 16 Oct 2009
Before we create any code, let's make sure we speak the same language.

The interface at a glance

When we encounter software that's unfamiliar to us, we often wonder, "Where do I begin?" Together, we'll answer that question and click through some important sections of the Scratch interface so that we can quickly start creating our own projects. Now, open Scratch and let's begin.

Time for action – first step

When we open Scratch, we notice that the development environment roughly divides into three distinct sections, as seen in the following screenshot. Moving from left to right, we have the following sections in sequential order:

Blocks palette
Script editor
Stage

Let's see if we can get our cat moving:

In the blocks palette, click on the Looks button. Drag the switch to costume block onto the scripts area. Now, in the blocks palette, click on the Control button. Drag the when flag clicked block to the scripts area and snap it on top of the switch to costume block, as illustrated in the following screenshot.

How to snap two blocks together? As you drag a block onto another block, a white line displays to indicate that the block you are dragging can be added to the script. When you see the white line, release your mouse to snap the block in place.

In the scripts area, click on the Costumes tab to display the sprite's costumes. Click on costume2 to change the sprite on the stage. Now, click back on costume1 to change how the sprite displays on the stage. Directly beneath the stage is a sprites list. The current list displays Sprite1 and Stage. Click on the sprite named Stage and notice that the scripts area changes. Click back on Sprite1 in the sprites list and again note the change to the scripts area. Click on the flag above the stage to set our first Scratch program in motion. Watch closely, or you might miss it.

What just happened?

Congratulations! You created your first Scratch project. Let's take a closer look at what we did just now.
As we clicked through the blocks palette, we saw that the available blocks changed depending on whether we chose Motion, Looks, or Control. Each set of blocks is color-coded to help us easily identify them in our scripts. The first block we added to the script instructed the sprite to display costume2. The second block provided a way to control our script by clicking on the flag. Blocks with a smooth top are called hats in Scratch terminology because they can be placed only at the top of a stack of blocks.

Did you look closely at the blocks as you snapped the control block into the looks block? The bottom of the when flag clicked block had a protrusion like a puzzle piece that fits the indent on the top of the switch to costume block. As children, most of us probably played a game where we needed to put the round peg into the round hole. Building a Scratch program is just that simple. We see instantly how one block may or may not fit into another block. Stack blocks have indents on top and bumps on the bottom that allow blocks to lock together to form a sequence of actions that we call a script. A block depicting its indent and bump can be seen in the following screenshot:

When we clicked on the Costumes tab, we learned that our cat had two costumes, or appearances. Clicking on a costume caused the cat on the stage to change its appearance. As we clicked around the sprites list, we discovered our project had two sprites: a cat and a stage. And the script we created for the cat didn't transfer to the stage. We finished the exercise by clicking on the flag. The change was subtle, but our cat appeared to take its first step when it switched to costume2.

Basics of a Scratch project

Inside every Scratch project, we find the following ingredients: sprites, costumes, blocks, scripts, and a stage. It's how we mix the ingredients with our imagination that creates captivating stories, animations, and games.
Sprites bring our program to life, and every project has at least one. Throughout the book, we'll learn how to add and customize sprites. A sprite wears a costume; change the costume and you change the way the sprite looks. If the sprite happens to be the stage, the costume is known as a background. Blocks are categories of instructions that include motion, looks, sound, pen, control, sensing, operators, and variables. Scripts define a set of blocks that tell a sprite exactly what to do. Each block represents an instruction or piece of information that affects the sprite in some way.

We're all actors on Scratch's stage

Think of each sprite in a Scratch program as an actor. Each actor walks onto the stage and recites a set of lines from the script. How each actor interacts with another actor depends on the words the director chooses. On Scratch's stage, every object, even the stone in the corner, is a sprite capable of contributing to the story. As directors, we have full creative control.

Time for action – save your work

It's a good practice to get in the habit of saving your work. Save your work early, and save it often. To save your new project, click the disk icon at the top of the Scratch window or click File | Save As. A Save Project dialog box opens and asks you for a location and a new filename. Enter some descriptive information for your project by supplying the Project author and notes About this project in the fields provided.

Set the cat in motion

Even though our script contains only two blocks, we have a problem. When we click on the flag, the sprite switches to a different costume and stops. If we try to click on the flag again, nothing appears to happen, and we can't get back to the first costume unless we go to the Costumes tab and select costume1. That's not fun. In our next exercise, we're going to switch between both costumes and create a lively animation.
Flex 101 with Flash Builder 4: Part 2

Packt, 16 Oct 2009
Using Flash Builder Data Services

In this section, we will build the same application that we wrote in Part 1. However, this time we will not write the code ourselves; instead, we'll let Flash Builder generate it for us. Flash Builder comes with powerful new features that ease integration with external services. It can auto-generate client code that invokes external services and binds the results to existing user interface components. Let us look at building the same application that we developed in the previous section via the new data services and binding wizardry that Flash Builder provides.

Let us create a new Flex project as shown below. Name this Flex project YahooNewsWithDataServices as shown below. We will go with the default settings for the other fields. Click on the Finish button. You will get the standard boilerplate code that is produced for the main application MXML file—YahooNewsWithDataServices.mxml.

Switch to the Design view and the Properties tab as shown below. Modify the value of Layout to spark.layouts.VerticalLayout and the Width and Height to 100%. This means that the main application window will occupy the maximum area available on your screen, and by choosing a vertical layout, all visual components dragged to the main application canvas will be arranged in a vertical fashion.

Stay in Design view. From the Components tab shown below, select Data Controls → DataGrid and drag it onto the canvas on the right side. It will appear as shown below. Keep the DataGrid selected and go to the Properties tab. Give it an ID value of dgYahooNews. Set its Width and Height to 100% so that it will occupy the entire parent container, like the application window. Then click on the Configure Columns button.

Clicking on the Configure Columns button will bring up the current columns that are added by default, as shown in the screenshot below. We do not need any of these columns at this point.
So select each of the columns and click on the Delete button. Finally, click on the OK button. Save your work at this point with the Ctrl+S key combination. At this point, we have simply created an application screen with a DataGrid on it. Now we need to connect it to the Yahoo news service and bind the DataGrid rows/columns to the data being returned from the service. In the previous section, we saw how to do this by writing code. In this case, we will let Flash Builder generate the connectivity code and also do the binding for us.

The first step is to select the Connect to Data/Service option from the main menu as shown below. This will bring up the wizard shown in the following screenshot. Flash Builder allows us to connect to several types of backend systems. In our case, we wish to connect to the Yahoo Most Emailed News RSS service that is available at http://rss.news.yahoo.com/rss/mostemailed, so we will choose HTTP Service as shown below and click on the Next button.

The next step is a configuration screen, where we provide the following information:

Service Name: Give the service a name that you can refer to later. We name it YahooNewsService.

Operations: We give an operation name of getNews. The HTTP method is GET, and in the field for URL, you need to provide the RSS feed URL http://rss.news.yahoo.com/rss/mostemailed. You can enter more than one operation name here. Alternatively, if you were invoking an HTTP service using the POST method, you could also specify the parameters via the Add button.

Click on Finish. This will display a message window, as shown below, which tells you that there are still a few steps remaining before you can complete the definition of the service and bind it to a UI component (in our case, the DataGrid). Click on OK to dismiss the message. Once again, switch to Design view and click on the button for the Data Provider shown below.
What we are going to do is associate/bind the result of the service invocation to the DataGrid via the data provider. This will bring up the Bind To Data window, as shown below. It should be clear by now that we can select the service and the operation whose result we wish to bind to our DataGrid. Flash Builder automatically shows us a service name and an operation name. If more services are defined, it will allow us to select the appropriate service and its respective operations.

Note that it asks us to Configure Return Type, as shown. This is because Flash Builder needs to understand the data that is going to be returned by the service invocation, and also to map the result data types to appropriate ActionScript data structures. Click on the Configure Return Type button. This will bring up the Configure Operation Return Type window. Give a name for the custom data type, as we have done below, and click on the Next button.

The next step is to specify the kind of data that will be returned. Flash Builder simplifies this by allowing us to give the complete URL so that it can invoke it and determine the structure that is returned. If you already have a sample response, you could even choose the third option. In our case, we will go with the second option of entering a complete URL, as shown below, and we specify the RSS feed URL. Click on the Next button.

This will invoke the RSS feed and retrieve the HTTP response. Since the response returned is XML, Flash Builder is able to generically map it to a tree-like structure, as shown below. The structure is a typical RSS XML structure: the root node, rss, is shown in the Select Node field, and all its child nodes are shown in the grid below it. Since we are only interested in the RSS items, and in extracting the title and category, we first navigate to and choose the item field in the Select Node field.
This will show all the child nodes of the item field, as shown below. Delete all the fields except title and category. To delete a field, select it and click on the Delete button. Click on the Finish button. This will bring up the original Bind To Data form. Click on the OK button.

We are all set now. Save your work and click on the Run button on the main toolbar. Flash Builder will launch the application. The DataGrid, on creation, will invoke the HTTP service, and the items will be retrieved. The title and category fields are shown for each news item, as shown below.

Summary

This article provided an introduction to Flex 4 via its development environment, Flash Builder. While the applications covered in the article are not too practical, they should give you a glimpse of the power of the Flex framework and its tools, and of the developer productivity that is one of its key strengths. Interested readers are encouraged to use this as a starting point in the world of developing Rich Internet Applications with Flex.
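For readers curious about what the wizard does under the hood: conceptually, the generated service and binding are similar to hand-wiring an HTTPService to a DataGrid in MXML. The following is a simplified sketch of that idea, not the actual code Flash Builder generates (the generated code uses service wrapper classes and call responders instead of a bare HTTPService):

```xml
<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx"
               creationComplete="yahooNewsService.send()">
    <fx:Declarations>
        <!-- Invoke the Yahoo RSS feed; resultFormat="e4x" keeps the raw XML. -->
        <mx:HTTPService id="yahooNewsService"
                        url="http://rss.news.yahoo.com/rss/mostemailed"
                        resultFormat="e4x" />
    </fx:Declarations>
    <!-- Bind the item nodes of the RSS response to the grid columns. -->
    <mx:DataGrid id="dgYahooNews" width="100%" height="100%"
                 dataProvider="{yahooNewsService.lastResult..item}">
        <mx:columns>
            <mx:DataGridColumn dataField="title" headerText="Title" />
            <mx:DataGridColumn dataField="category" headerText="Category" />
        </mx:columns>
    </mx:DataGrid>
</s:Application>
```

The wizard adds value on top of this pattern by generating typed value objects for the response, so you work with ActionScript classes rather than raw E4X XML.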

Flex 101 with Flash Builder 4: Part 1

Packt, 16 Oct 2009
The article is intended for developers who have never used Flex before and would like to work through a "Hello World" kind of tutorial. The article does not aim to cover Flex and FB4 in detail, but rather focuses on the mechanics of FB4 and getting an application running with minimal effort. For developers familiar with Flex and the predecessor to Flash Builder 4 (Flex Builder 2 or 3), it contains an introduction to FB4 and some differences in the way you go about building Flex applications using FB4. Even if you have not programmed before and are looking to understand how to start developing applications, this will serve as a good starting point.

The Flex Ecosystem

The Flex ecosystem is a set of libraries, tools, languages, and a deployment runtime that provides an end-to-end framework for designing, developing, and deploying RIAs. All of these together are branded as part of the Flash platform. In its latest release, Flex 4, special effort has been put into addressing the designer-to-developer workflow by letting graphic designers address layout, skinning, effects, and the general look and feel of your application, and then having the developers take over to address the application logic, events, and so on. To understand this at a high level, take a look at the diagram shown below. This is a very simplified diagram, and the intention is to project a 10,000 ft view of the development, compilation, and execution process.

Let us walk through the diagram now:

The developer will typically work in the Flash Builder application. Flash Builder is the integrated development environment (IDE) that provides an environment for coding, compiling, and running/debugging your Flex-based applications.

Your Flex application will typically consist of MXML and ActionScript code. ActionScript is an ECMAScript-compatible object-oriented language, whereas MXML is an XML-based markup language. Using MXML you can define and lay out your visual components, such as buttons, combo boxes, data grids, and others.
Your application logic will typically be coded inside ActionScript classes and methods. While coding your Flex application, you will make use of the Flex framework classes, which provide most of the core functionality. Additional libraries, such as the Flex charting libraries and third-party components, can be used in your application too. Flash Builder compiles all of this into object byte code that can be executed inside the Flash Player. Flash Player is the runtime host that executes your application. This is a high-level introduction to the ecosystem, and as we work through the samples later in the article, things will start falling into place.

Flash Builder 4

Flash Builder is the new name for the development IDE previously known as Flex Builder. The latest release is 4, and it is currently in public beta. Flash Builder 4 is based on the Eclipse IDE, so if you are familiar with Eclipse-based tools, you will be able to navigate your way quite easily. Flash Builder 4, like Flex Builder 3 before it, is a commercial product, and you need to purchase a development license. FB4 is currently in public beta and is available as a 30-day evaluation. Through the rest of the article, we will make use of FB4 and will be focused completely on using it to build and run the sample applications. Let us now take a look at setting up FB4.

Setting up your Development Environment

To set up Flash Builder 4, follow these steps:

The first step should be installing Flash Player 10 on your system. We will be developing with the Flex 4 SDK that comes along with Flash Builder 4, and it requires Flash Player 10. You can download the latest version of Flash Player from here: http://www.adobe.com/products/flashplayer/

Download Flash Builder 4 Public Beta from http://labs.adobe.com/technologies/flashbuilder4/. The page is shown below:

After the download, run the installer program and proceed with the rest of the installation.

Launch the Adobe Flash Builder Beta.
It will first prompt with a message that it is a trial version, as shown below:

To continue in evaluation mode, select the option highlighted above and click Next. This will launch the Flash Builder IDE.

Let us start coding with the Flash Builder 4 IDE. We will stick to tradition and write the "Hello World" application.

Hello World using Flash Builder 4

In this section, we will develop a basic Hello World application. While the application does not do much, it will help you get comfortable with the Flash Builder IDE.

Launch the Flash Builder IDE. We will be creating a Flex Project, and Flash Builder will help us create the project that will contain all our files. To create a new Flex Project, click File → New → Flex Project as shown below:

This will bring up a dialog in which you will need to specify more details about the Flex Project that you plan to develop. The dialog is shown below:

You will need to provide at least the following information:

- Project Name: This is the name of your project. Enter a name that you want over here. In our case, we have named our project MyFirstFB4App.
- Application Type: We can develop both a web version and a desktop version of our application using Flash Builder. The web application will run inside a web browser and execute within the Flash Player plugin. We will go with the Web option here. The desktop application runs inside the Adobe Integrated Runtime environment and can have more desktop-like features. We will skip that option for now.

We will let the other options remain as they are. We will use the Flex 4.0 SDK, and since we are not integrating with any server-side layer at the moment, we will leave that option as None/Other. Click on Finish at this point to create your Flex Project.

This will create a main application file called MyFirstFB4App.mxml, as shown below. We will come back to our coding a little later, but first we must familiarize ourselves with the Flash Builder IDE.
Let us first look at the Package Explorer to understand the files that have been created for the Flex Project. The screenshot is shown below:

It consists of the main source file, MyFirstFB4App.mxml. This is the main application file or, in other words, the bootstrap. All your source files (MXML and ActionScript code, along with assets such as images) should go under the src folder. They can optionally be placed in packages too.

The Flex 4.0 framework consists of several libraries that you compile your code against. You will end up using its framework code, components (visual and non-visual), and other classes. These classes are packaged in library files with a .swc extension. A list of library files is shown above; you do not typically need to do anything with them. Optionally, you can also use third-party components written by other companies and developers that are not part of the Flex framework. These libraries are packaged as .swc files too, and they can be placed in the libs folder as shown in the previous screenshot.

The typical next step is to write and compile your code, that is, build your project. If the build is successful, the object code is generated in the bin-debug folder. When you deploy your application to a web server, you will need to pick up the contents of this folder. We will come to that a little later.

The html-template folder contains some boilerplate code: the container HTML page from which your object code will be referenced. It is possible to customize this, but for now we will not discuss that.

Double-click the MyFirstFB4App.mxml file. This is our main application file. The code listing is given below (the namespace declarations are part of the file Flash Builder generates):

    <?xml version="1.0" encoding="utf-8"?>
    <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                   xmlns:s="library://ns.adobe.com/flex/spark"
                   xmlns:mx="library://ns.adobe.com/flex/mx"
                   minWidth="1024" minHeight="768">
    </s:Application>

As discussed before, you will typically write one or more MXML files that will contain your visual components (although there can be non-visual components also).
By visual components, we mean controls like buttons, combo boxes, lists, trees, and others. An MXML file can also contain layout components and containers that help you lay out your design as per the application screen design.

To view the components you can place on the main application canvas, select the Design View as shown below:

Have a look at the lower half of the left pane. You will see the Components tab as shown below, which addresses most needs of your application's visual design. Click on the Controls tree node as shown below. You will see several controls that you can use; from these, we will use the Button control for this application. Simply select the Button control and drag it to the Design View canvas as shown below:

This will drop an instance of the Button control on the Design View as shown below:

Select the Button to see its Properties panel, as shown below. The Properties panel is where you can set several attributes of the control at design time. In case the Properties panel is not visible, you can get to it by selecting Window → Properties from the main menu.

In the Properties panel, we can change several key attributes. All controls can be uniquely identified and addressed in your code via the ID attribute. This is a unique name that you need to provide. Go ahead and give it a meaningful name; in our case, we name it btnSayHello. Next, we can change the label so that instead of Button, it displays a message, for example, Say Hello.

Finally, we want to wire some code so that when the button is clicked, we can perform some action, such as displaying a message box saying Hello World. To do that, click the icon next to the On click edit field as shown below. It will present two options; select the option for Generate Event Handler. This will generate the code and switch to the Source view. The code is listed below for your reference.
    <?xml version="1.0" encoding="utf-8"?>
    <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                   xmlns:s="library://ns.adobe.com/flex/spark"
                   xmlns:mx="library://ns.adobe.com/flex/mx"
                   minWidth="1024" minHeight="768">
        <fx:Script>
            <![CDATA[
                protected function btnSayHello_clickHandler(event:MouseEvent):void
                {
                    // TODO Auto-generated method stub
                }
            ]]>
        </fx:Script>
        <s:Button x="17" y="14" label="Say Hello" id="btnSayHello"
                  click="btnSayHello_clickHandler(event)"/>
    </s:Application>

There are a few things to note here. As mentioned, most of your application logic will be written in ActionScript, and that is exactly what Flash Builder has generated for you. All such code is typically added inside a scripting block marked with the <fx:Script> tag. You can place your ActionScript methods here so that they can be used by the rest of the application.

When we clicked on Generate Event Handler, Flash Builder generated the event handler code. This code is in ActionScript, and it was appropriately placed inside the <fx:Script> block for us. If you look at the code, you can see that it has added a function that is invoked when the click event is fired on the button. The method is btnSayHello_clickHandler, and if you notice, it has an empty body, that is, no implementation.

Let us run the application now to see what it looks like. To run the application, click on the Run icon in the main toolbar of Flash Builder. This will launch the web application as shown below. Clicking the Say Hello button will not do anything at this point, since there is no code written inside the handler, as we saw above.

To display the message box, we add the code shown below (only the Script section is shown):

    <fx:Script>
        <![CDATA[
            import mx.controls.Alert;

            protected function btnSayHello_clickHandler(event:MouseEvent):void
            {
                Alert.show("Hello World");
            }
        ]]>
    </fx:Script>

We use one of the classes (called Alert) from the Flex framework. As in any other language, we need to specify which package the class comes from so that the compiler can understand it.
The Alert class belongs to the mx.controls package, and it has a static method called show() that can be called with a single String parameter. This String parameter is the message to be displayed, in our case "Hello World". To run this, press Ctrl+S to save your file (or select File → Save from the main menu), and click the Run icon in the main toolbar. This will launch the application, and on clicking the Say Hello button, you will see the Hello World alert window as shown below.
User Input Validation in Tapestry 5

Packt
16 Oct 2009
9 min read
Adding Validation to Components

The Start page of the web application Celebrity Collector has a login form that expects the user to enter some values into its two fields. But what if the user didn't enter anything and still clicked on the Log In button? Currently, the application will decide that the credentials are wrong, and the user will be redirected to the Registration page and receive an invitation to register. This logic does make some sense, but it isn't the best line of action, as the button might have been pressed by mistake. These two fields, User Name and Password, are actually mandatory, and if no value was entered into them, it should be considered an error. All we need to do is add a required validator to every field, as seen in the following code:

    <tr>
        <td>
            <t:label t:for="userName">Label for the first text box</t:label>:
        </td>
        <td>
            <input type="text" t:id="userName" t:type="TextField"
                   t:label="User Name" t:validate="required"/>
        </td>
    </tr>
    <tr>
        <td>
            <t:label t:for="password">The second label</t:label>:
        </td>
        <td>
            <input type="text" t:id="password" t:label="Password"
                   t:type="PasswordField" t:validate="required"/>
        </td>
    </tr>

Just one additional attribute for each component; let's see how this works now. Run the application, leave both fields empty, and click on the Log In button. Here is what you should see:

Both fields, including their labels, are now clearly marked as an error. We even have a graphical marker for the problematic fields. However, one thing is missing: a clear explanation of what exactly went wrong. To display such a message, one more component needs to be added to the page. Modify the page template as done here:

    <t:form t:id="loginForm">
        <t:errors/>
        <table>

The Errors component is very simple, but one important thing to remember is that it should be placed inside the Form component, which in turn surrounds the validated components. Let's run the application again and try to submit an empty form.
Now the result should look like this:

This kind of feedback doesn't leave any room for doubt, does it? If you see that the error messages are strongly misplaced to the left, it means that an error in the default.css file that comes with the Tapestry distribution still hasn't been fixed. To override the faulty style, define it in our application's styles.css file like this:

    DIV.t-error LI {
        margin-left: 20px;
    }

Do not forget to make the stylesheet available to the page. I hope you will agree that the effort we had to make to get user input validated is close to zero. But let's see what Tapestry has done in response:

- Every form component has a ValidationTracker object associated with it. It is provided automatically; we do not need to care about it. Basically, ValidationTracker is the place where any validation problems, if they happen, are recorded.
- As soon as we use the t:validate attribute for a component in the form, Tapestry will assign one or more validators to that component; their number and type will depend on the value of the t:validate attribute (more about this later).
- As soon as a validator decides that the value entered into the component is not valid, it records an error in the ValidationTracker. Again, this happens automatically.
- If there are any errors recorded in the ValidationTracker, Tapestry will redisplay the form, decorating the fields with erroneous input and their labels appropriately.
- If there is an Errors component in the form, it will automatically display error messages for all the errors in the ValidationTracker. The error messages for standard validators are provided by Tapestry, while the name of the component mentioned in the message is taken from its label.

A lot of very useful functionality comes with the framework and works for us "out of the box", without any configuration or setup! Tapestry comes with a set of validators that should be sufficient for most needs.
Let's have a more detailed look at how to use them.

Validators

The following validators come with the current distribution of Tapestry 5:

- Required: checks that the value of the validated component is not null or an empty string.
- MinLength: checks that the string (the value of the validated component) is not shorter than the specified length. You will see how to pass the length parameter to this validator shortly.
- MaxLength: same as above, but checks that the string is not too long.
- Min: ensures that the numeric value of the validated component is not less than the specified value, passed to the validator as a parameter.
- Max: as above, but ensures that the value does not exceed the specified limit.
- Regexp: checks that the string value fits the specified pattern.

We can use several validators for one component. Let's see how all this works together. First of all, let's add another component to the Registration page template:

    <tr>
        <td><t:label t:for="age"/>:</td>
        <td><input type="text" t:type="textfield" t:id="age"/></td>
    </tr>

Also, add the corresponding property to the Registration page class: age, of type double. It could indeed be an int, but I want to show that the Min and Max validators can work with fractional numbers too. Besides, someone might decide to enter their age as 23.4567. This would be weird, but not against the laws. Finally, add an Errors component to the form at the Registration page, so that we can see error messages:

    <t:form t:id="registrationForm">
        <t:errors/>
        <table>

Now we can test all the available validators on one page. Let's specify the validation rules first:

- Both User Name and Password are required. Also, they should not be shorter than three characters and not longer than eight characters.
- Age is required, and it should not be less than five (change this number if you've got a prodigy in your family) and not more than 120 (as that would probably be a mistake).
- Email address is not required, but if entered, it should match a common pattern.
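Incidentally, before handing a pattern to the Regexp validator, it is worth exercising it against sample input with plain java.util.regex, outside the web application. The following is a standalone sketch, not part of Tapestry; the class name is arbitrary, and the pattern is the common email expression used for this application's email field (the dot before each domain segment is escaped here for strictness, whereas the .properties entry shown later leaves it unescaped, where an unescaped dot simply matches any character):

```java
import java.util.regex.Pattern;

public class EmailPatternCheck {
    public static void main(String[] args) {
        // A common email pattern, precompiled once.
        Pattern email = Pattern.compile(
            "^([a-zA-Z0-9_.-])+@(([a-zA-Z0-9-])+\\.)+([a-zA-Z0-9]{2,4})+$");

        // A well-formed address matches; a string without '@' does not.
        System.out.println(email.matcher("reader@example.com").matches()); // true
        System.out.println(email.matcher("not-an-address").matches());     // false
    }
}
```

If a candidate pattern rejects addresses you consider valid, it is much cheaper to find that out here than through the deployed form.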
Here are the changes to the Registration page template that will implement the specified validation rules:

    <td>
        <input type="text" t:type="textfield" t:id="userName"
               t:validate="required,minlength=3,maxlength=8"/>
    </td>
    ...
    <td>
        <input type="text" t:type="passwordfield" t:id="password"
               t:validate="required,minlength=3,maxlength=8"/>
    </td>
    ...
    <td>
        <input type="text" t:type="textfield" t:id="age"
               t:validate="required,min=5,max=120"/>
    </td>
    ...
    <input type="text" t:type="textfield" t:id="email" t:validate="regexp"/>

As you see, it is very easy to pass a parameter to a validator, like min=5 or maxlength=8. But where do we specify a pattern for the Regexp validator? The answer is: in the message catalog. Let's add the following line to the app.properties file:

    email-regexp=^([a-zA-Z0-9_.-])+@(([a-zA-Z0-9-])+.)+([a-zA-Z0-9]{2,4})+$

This will serve as the regular expression for all Regexp validators applied to components with the ID email throughout the application. Run the application, go to the Registration page, and try to submit the empty form. Here is what you should see:

Looks all right, but the message for the age could be more sensible, something like "You are too young! You should be at least 5 years old." We'll deal with this later. For now, enter a very long username, only two characters for the password, and an age that is more than the upper limit, and see how the messages change:

Again, this looks good, except for the message about age. Next, enter some valid values for User Name, Password, and Age. Then click on the check box to subscribe to the newsletter. In the text box for email, enter some invalid value and click on Submit. Here is the result:

Yes! The validation worked properly, but the error message is absolutely unacceptable. Let's deal with this, but first make sure that any valid email address will pass the validation.

Providing Custom Error Messages

We can provide custom messages for validators in the application's (or page's) message catalog.
For such messages we use keys that are made up of the validated component's ID, the name of the validator, and the "message" suffix. Here is an example of what we could add to the app.properties file to change the error messages for the Min and Max validators of the Age component, as well as the message used for the email validation:

    email-regexp-message=Email address is not valid.
    age-min-message=You are too young! You should be at least 5 years old.
    age-max-message=People do not live that long!

Still better, instead of hard-coding the required minimal age into the message, we could insert into the message the parameter that was passed to the Min validator (following the rules of Java string formatting), like this:

    age-min-message=You are too young! You should be at least %s years old.

If you run the application now and submit an invalid value for age, the error message will be much better:

You might want to make sure that the other error messages have changed too. We can now successfully validate values entered into separate fields, but what if the validity of the input depends on how two or more different values relate to each other? For example, at the Registration page we want the two versions of the password to be the same, and if they are not, this should be considered invalid input and reported appropriately. Before dealing with this problem, however, we need to look more thoroughly at the different events generated by the Form component.
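The %s placeholder in the age-min-message entry above is filled in with the validator's constraint value using standard Java string formatting. As a quick standalone illustration of the substitution step itself (plain Java, outside Tapestry, which performs the equivalent substitution internally):

```java
public class MessageFormatDemo {
    public static void main(String[] args) {
        // The message template from age-min-message, with the Min
        // validator's constraint (5) substituted for %s.
        String template = "You are too young! You should be at least %s years old.";
        String message = String.format(template, 5);
        System.out.println(message);
        // You are too young! You should be at least 5 years old.
    }
}
```

This is why the catalog entry stays readable: the template carries the wording, and the constraint value is injected at display time.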
Introduction to Legacy Modernization in Oracle

Packt
16 Oct 2009
13 min read
IT organizations are under increasing pressure to improve the business's ability to innovate while controlling, and often reducing, costs. Legacy modernization is a real opportunity for these goals to be achieved. To attain these goals, the organization needs to take full advantage of emerging advances in platform and software innovation, while leveraging the investment that has been made in the business processes within the legacy environment. To make good choices for a specific roadmap to modernization, the decision makers should work to gain a good understanding of what these modernization options are and how to get there.

Overview of the Modernization Options

There are five primary approaches to legacy modernization:

- Re-architecting to a new environment
- SOA integration and enablement
- Replatforming through re-hosting and automated migration
- Replacement with COTS solutions
- Data modernization

Other organizations may have different nomenclature for each type of modernization, but any of these options can generally fit into one of these five categories. Each of the options can be carried out in concert with the others, or as a standalone effort. They are not mutually exclusive endeavors. Further, in a large modernization project, multiple approaches are often used for parts of the larger modernization initiative.

The right mix of approaches is determined by the business needs driving the modernization, the organization's risk tolerance and time constraints, and the nature of the source environment and legacy applications. Where the applications no longer meet business needs and require significant changes, re-architecture might be the best way forward. On the other hand, for very large applications that mostly meet the business needs, SOA enablement or re-platforming might be lower-risk options.

You will notice that the first thing we talk about in this section, the Legacy Understanding phase, isn't listed as one of the modernization options.
It is mentioned at this stage because it is a critical step that is done as a precursor to any option your organization chooses.

Legacy Understanding

Once we have identified our business drivers and the first steps in this process, we must understand what we have before we go ahead and modernize it. Legacy environments are very complex and quite often have little or no current documentation. This introduces a concept of analysis and discovery that is valuable for any modernization technique.

Application Portfolio Analysis (APA)

In order to make use of any modernization approach, the first step an organization must take is to carry out an APA of the current applications and their environment. This process has many names; you may hear terms such as Legacy Understanding, Application Re-learn, or Portfolio Understanding. All these activities provide a clear view of the current state of the computing environment. This process equips the organization with the information it needs to identify the best areas for modernization. For example, it can reveal process flows, data flows, how screens interact with transactions and programs, and program complexity and maintainability metrics, and it can even generate pseudocode to re-document candidate business rules. Additionally, the physical repositories created as a result of the analysis can be used in the next stages of modernization, be it SOA enablement, re-architecture, or re-platforming. Efforts are currently underway at the Object Management Group (OMG) to create a standard method to exchange this data between applications. The following screenshot shows the Legacy Portfolio Analysis:

APA Macroanalysis

The first form of APA analysis is a very high-level, abstract view of the application environment. This level of analytics looks at the application in the context of the overall IT organization. Systems information is collected at a very high level.
The key here is to understand which applications exist, how they interact, and what the identified value of the desired function is. With this type of analysis, organizations can manage overall modernization strategies and identify key applications that are good candidates for SOA integration, re-architecture, or re-platforming versus replacement with Commercial Off-the-Shelf (COTS) applications. Data structures, program code, and technical characteristics are not analyzed here.

The following macro-level process flow diagram was automatically generated by the Relativity Technologies Modernization Workbench tool. Using this, the user can automatically get a view of the screen flows within a COBOL application. This is used to help identify candidate areas for modernization, areas of complexity, transfer of knowledge, or legacy system documentation. The key thing about these types of reports is that they are dynamic and automatically generated.

The previous flow diagram illustrates some interesting points about the system that can be understood quickly by the analyst. Remember, this type of diagram is generated automatically and can provide instant insight into the system with no prior knowledge. For example, we now have some basic information, such as:

- MENSAT1.MENMAP1 is the main driver and is most likely a menu program.
- There are four called programs.
- Two programs have database interfaces.

This is a simplistic view, but if you imagine hundreds of programs in a visual perspective, we can quickly identify clusters of complexity, define potential subsystems, and do much more, all from an automated tool with visual navigation and powerful cross-referencing capabilities. This type of tool can also help to re-document existing legacy assets.

APA Microanalysis

The second type of portfolio analysis is APA microanalysis. This examines applications at the program level.
This level of analysis can be used to understand things like program logic or candidate business rules for enablement or business rule transformation. This process will also reveal things such as code complexity, data exchange schemas, and the specific interactions within a screen flow. These are all critical when considering SOA integration, re-architecture, or a re-platforming project.

The following are more models generated from the Relativity Technologies Modernization Workbench tool. The first is a COBOL transaction taken from a COBOL process. We are able to take a low-level view of a business rule slice taken from a COBOL program and understand how this process flows. The particulars of this flow map diagram are not important; rather, the point is that this model can be generated automatically and is dynamic, based on the current state of the code.

The second model shows how a COBOL program interacts with a screen conversation. In this example, we are able to look at specific paragraphs within a particular program. We can identify specific CICS transactions and understand which paragraphs (or subroutines) are interacting with the database. These models can be used to further refine our drive for a more re-architected system, helping us to identify business rules and populate a rules engine. This example is just another COBOL program that interacts with screens (shown in gray) and paragraphs that execute CICS transactions (shown in white). So, with these color-coded boxes, we can quickly identify paragraphs, screens, databases, and CICS transactions.

Application Portfolio Management (APM)

APA is only a part of an IT approach known as Application Portfolio Management.
While APA analysis is critical for any modernization project, APM provides guideposts on how to combine the APA results, the business assessment of the applications' strategic value and future needs, and IT infrastructure directions to come up with a long-term application portfolio strategy and the related technology targets to support it. It is often said that you cannot modernize that which you do not know. With APM, you can effectively manage change within an organization, understand the impact of change, and also manage its compliance. APM is a constant process, be it part of a modernization project or of an organization's portfolio management and change control strategy.

All applications are in a constant state of change, and during any modernization, things are always in a state of flux. In a modernization project, legacy code is changed, new development is done (often in parallel), and data schemas are changed. When looking into APM tool offerings, consider products that provide facilities to capture these kinds of changes and offer an active repository, rather than a static view. Ideally, these tools should adhere to emerging technical standards, like those being pioneered by the OMG.

Re-Architecting

Re-architecting is based on the concept that all legacy applications contain invaluable business logic and data relevant to the business, and that these assets should be leveraged in the new system rather than thrown out to rebuild from scratch. Since the modern IT environment elevates a lot of this logic above the code, using declarative models supported by BPM tools, ESBs, business rules engines, and data integration and access solutions, some of the original technical code can be replaced by these middleware tools to achieve greater agility. The following screenshot shows an example of a system after re-architecture.

The previous example shows what a system would look like, from a high level, after re-architecture.
We see that this isn't a simple one-to-one transformation of one code base to another. It is also much more than remediation and refactoring of the legacy code to standard Java code. It is a system that fully leverages technologies suited for the required task, for example, Identity Management for security, business rules for core business logic, and BPEL for process flow. Thus, re-architecting focuses on recovering and reassembling the business-relevant processes from a legacy application, while eliminating the technology-specific code. Here, we want to capture the value of the business process that is independent of the legacy code base and move it into a different paradigm.

Re-architecting is typically used to handle modernizations that involve changes in architecture, such as the introduction of object orientation and process-driven services. The advantage that re-architecting has over greenfield development is that it recognizes that there is information in the application code and surrounding artifacts (for example, DDLs, COPYBOOKS, and user training manuals) that is useful as a source for the re-architecting process, such as application process interaction, data models, and workflow. Re-architecting will usually go outside the source code of the legacy application to incorporate concepts like workflow and new functionality that were never part of the legacy application. However, it also recognizes that this legacy application contains key business rules and processes that need to be harvested and brought forward.

Some of the important considerations for maximizing re-use by extracting business rules from legacy applications as part of a re-architecture project include:

- Eliminate dead code and environmental specifics; resolve mutually exclusive logic.
- Identify key input/output data (parameters, screen input, DB and file records, and so on).
- Keep in mind that many rules live outside the code (for example, a screen flow described in a training manual).
- Populate a data dictionary specific to the application/industry context.
- Identify and tag rules based on transaction types and key data, policy parameters, and key results (output data).
- Isolate rules into a tracking repository.
- Combine automation and human review to track relationships, eliminate redundancies, classify and consolidate, and add annotations.

A parallel method of extracting knowledge from legacy applications uses modeling techniques, often based on UML. This method attempts to mine UML artifacts from the application code and related materials, and then create full-fledged models representing the complete application. Key considerations for mining models include:

- A convenient code representation helps to quickly filter out technical details.
- Allow user-selected artifacts to be quickly represented as UML entities.
- Allow the user to add relationships and annotate the objects to assemble a more complete UML model.
- Use external information, if possible, to refine use cases (screen flows) and activity diagrams; remember that some actors, flows, and so on may not appear in the code.
- Export to an XML-based standard notation to facilitate refinement and forward re-engineering through UML-based tools.

Modernization with this method leverages the years of investment in the legacy code base; it is much less costly and less risky than starting a new application from ground zero. However, since it does involve change, it does have its risks. As a result, a number of other modernization options have been developed that involve less risk. The next set of modernization options provides a different set of benefits with respect to a fully re-architected SOA environment. The important thing is that these other techniques allow an organization to break the process of reaching the optimal modernization target into a series of phases that lower the overall risk of modernization for the organization.
In the following figure, we can see that re-architecture takes a monolithic legacy system and applies technology and process to deliver a highly adaptable modern architecture.

SOA Integration and Enablement

Since SOA integration is the least invasive approach to legacy application modernization, this technique allows legacy components to be used as part of an SOA infrastructure very quickly and with little risk. Further, it is often the first step in the larger modernization process. In this method, the source code remains mostly unchanged (we will talk more about that later), and the application is wrapped using SOA components, thus creating services that can be exposed and registered to an SOA management facility on a new platform, but are implemented via the existing legacy code. The exposed services can then be re-used and combined with the results of other, more invasive modernization techniques such as re-architecting.

Using SOA integration, an organization can begin to make use of SOA concepts, including the orchestration of services into business processes, while leaving the legacy application intact. Of course, the appropriate interfaces into the legacy application must exist, and the code behind these interfaces must perform useful functions in a manner that can be packaged as services. An SOA readiness assessment involves analysis of service granularity, exception handling, transaction integrity and reliability requirements, considerations of response time, message sizes, and scalability, issues of end-to-end messaging security, and requirements for services orchestration and SLA management. Following an assessment, any issues discovered need to be rectified before exposing components as services, and appropriate run-time and lifecycle governance policies must be created and implemented.

It is important to note that there are three tiers where integration can be done: Data, Screen, and Code. Each of these tiers, depending upon the state and structure of the code, can be extended with this technique.
As mentioned before, this is often the first step in modernization. In this example, we can see that the legacy systems still stay on the legacy platform. Here, we isolate and expose this information as a business service using legacy adapters. The list below covers the important considerations in SOA integration and enablement projects.

Criteria for identifying well-defined services:
- Represent a core enterprise function re-usable by many client applications
- Present a coarse-grained interface
- Single interaction vs. multi-screen flows
- UI, business logic, and data access layers
- Exception handling: returning results without branching to another screen
- Discovering "services" beyond screen flows
- Conversational vs. sync/async calls
- COMMAREA transactions (re-factored to use a reasonable message size)

Security policies and their enforcement:
- RACF vs. an LDAP-based or SSO mechanism
- End-to-end messaging security, and Authentication, Authorization, and Auditing

Services integration and orchestration:
- Wrapping and proxying via a middle-tier gateway vs. mainframe-based services
- Who is responsible for input validation?
- Orchestrating "composite" mainframe services
- Supporting bidirectional integration

Quality of Service (QoS) requirements:
- Response time, throughput, and scalability
- End-to-end monitoring and SLA management
- Transaction integrity and global transaction coordination
- End-to-end monitoring and tracing

Services lifecycle governance:
- Ownership of service interfaces and the change control process
- Service discovery (repository, tools)
- Orchestration and extension
- BPM integration
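Two of the considerations above, reasonable message sizes and response-time SLAs, are typically enforced in the middle-tier gateway that proxies the legacy service. The following Java sketch is a hypothetical illustration of that idea (the class name, limits, and logging are invented, not drawn from any real gateway product):

```java
import java.util.function.Supplier;

// Hypothetical middle-tier proxy enforcing two QoS policies from the list
// above: a maximum request size (COMMAREA-style payloads should stay small)
// and a response-time SLA. All names and limits are illustrative.
public class QosProxy {
    private final int maxRequestBytes;
    private final long slaMillis;

    public QosProxy(int maxRequestBytes, long slaMillis) {
        this.maxRequestBytes = maxRequestBytes;
        this.slaMillis = slaMillis;
    }

    /** Rejects oversized requests, then times the backing call against the SLA. */
    public String call(String request, Supplier<String> backend) {
        if (request.getBytes().length > maxRequestBytes) {
            throw new IllegalArgumentException(
                "request exceeds " + maxRequestBytes + " bytes");
        }
        long start = System.nanoTime();
        String reply = backend.get();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMs > slaMillis) {
            // A real gateway would feed its SLA monitoring facility here.
            System.err.println("SLA breach: " + elapsedMs + " ms > " + slaMillis + " ms");
        }
        return reply;
    }
}
```

A production gateway would also handle the table's other concerns (authentication, transaction coordination, tracing); the sketch only shows why these policies belong in the proxy rather than in the legacy code itself.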
Packt
15 Oct 2009
6 min read

trixbox CE Functions and Features

Standard features

The following sections will break down the list of available features by category. While the codes listed are the default settings, they can be modified in the PBX Configuration tool using the Feature Codes module. These features are invoked by dialing the code from a registered SIP or IAX endpoint, or via an analog extension plugged into an FXS port. Some of the following features require the appropriate PBX Configuration tool module to be installed.

Call forwarding

The call forwarding mechanism is both powerful and flexible. With the different options, you can perform a number of different functions, or even create a basic find-me/follow-me setup when using a feature like call forward on no answer, or send callers to your assistant if you are on a call using call forward on busy.

- Call Forward All Activate: *72
- Call Forward All Deactivate: *73
- Call Forward All Prompting: *74
- Call Forward Busy Activate: *90
- Call Forward Busy Deactivate: *91
- Call Forward Busy Prompting Deactivate: *92
- Call Forward No Answer/Unavailable Activate: *52
- Call Forward No Answer/Unavailable Deactivate: *53

Call waiting

The call waiting setting determines whether a call will be put through to your phone if you are already on a call. This can be useful in some call center environments where you don't want agents to be disturbed by other calls when they are working with clients.

- Call Waiting Activate: *70
- Call Waiting Deactivate: *71

Core features

The core features control basic functions such as transfers and testing inbound calls. Simulating an inbound call is useful for testing a system without having to call into it. If you don't have any trunks hooked up, it is the easiest way to check your call flow. Once you have telephone circuits connected, you can still use the function to test your call flow without having to take up any of your circuits.
- Call Pickup: **
- Dial System FAX: 666
- Simulate Incoming Call: 7777

Active call codes

These codes are active during a call for features like transferring and recording calls. While some phones have some of these features built into the device itself, others are only available via feature codes. For example, you can easily do call transfers using most modern SIP phones, like Aastra's or Polycom's, by hitting the transfer button during a call.

- In-Call Asterisk Attended Transfer: *2
- In-Call Asterisk Blind Transfer: ##
- Transfer call directly to extension's mailbox: * + extension
- Begin recording current call: *1
- End recording current call: *2
- Park current call: #70

Agent features

The agent features are used most often in a call center environment to monitor different calls and for agents to log in and log out of queues.

- Agent Logoff: *12
- Agent Logon: *11
- ChanSpy (monitor different channels): 555
- ZapBarge (monitor Zap channels): 888

Blacklisting

If you have the PBX Configuration tool Blacklist module installed, then you have the ability to blacklist callers from being able to call into the system. This is great for blocking telemarketers, bill collectors, ex-girl/boyfriends, and your mother-in-law.

- Blacklist a number: *30
- Blacklist the last caller: *32
- Remove a number from the blacklist: *31

Day / Night mode

If you have the PBX Configuration tool Day/Night mode module installed, then you can use a simple key command to switch between day and night IVR recordings. This is great for companies that don't work off a set schedule every day but want to manually turn an off-hours greeting on and off.

- Toggle Day / Night Mode: *28

Do not disturb

Usually, do-not-disturb functions are handled at the phone level. If you do not have phones with a DND button on them, then you can install this module to enable key commands to toggle Do Not Disturb on and off.
- DND Activate: *78
- DND Deactivate: *79

Info services

The info services are some basic functions that provide information back to you without changing any settings. These are most often used for testing and debugging purposes.

- Call Trace: *69
- Directory: #
- Echo Test: *43
- Speak your extension number: *65
- Speaking Clock: *60

Intercom

If you have a supported model of phone, then you can install the PBX Configuration tool module to enable paging and intercom via the telephones' speakerphones.

- Intercom Prefix: *80
- User Allow Intercom: *54
- User Disallow Intercom: *55

Voicemail

If you want to access your voicemail from any extension, then you need to choose 'Dial Voicemail System'; otherwise, 'Dial My Voicemail' will use the extension number you are calling from and only prompt for the password.

- Dial Voicemail System: *98
- Dial My Voicemail: *97

Adding new features

The ability to add new features is built into the system. One common thing to do is to redirect 411 calls to a free service like Google's. The following steps will walk you through how to add a custom feature like this to your system.

Begin by going to the Misc Destination module and enter a Description of the destination you want to create. Next, go to Misc Application to create the application. Here we will enter another Description and the number we want to use to dial the application, make sure the feature is enabled, and then point it to the destination that we created in the previous step.

As you can see, any code can be assigned to any destination, and a custom destination can consist of anything you can dial. This allows you to create many different types of custom features within your system.

Voicemail features

trixbox CE comes with the Asterisk Mail voicemail system. Asterisk Mail is a fairly robust and useful voicemail system, and it can be accessed by any internal extension or by dialing into the main IVR system.
As we saw earlier in this article, there are two ways of accessing the voicemail system: 'Dial Voicemail System' and 'Dial My Voicemail'. To access the main voicemail system, we can dial *98 from any extension; we will then be prompted for our extension and our voicemail password. If we dial *97 for the 'My Voicemail' feature, the system will use the extension number you dialed in from and only prompt you for your voicemail password. The following lists show the basic structure of the voicemail menu system:

Voicemail main menu options. Press:
- 1 to Listen to (New) Messages
- 2 to Change Folders
- 0 for Mailbox Options
- * for Help
- # to Exit

Listen to messages. Press:
- 5 to Repeat Message
- 6 to Play Next Message
- 7 to Delete Message
- 8 to Forward to another user (enter the extension and press #), then:
  - 1 to Prepend a Message to the forwarded message
  - 2 to Forward without prepending
- 9 to Save Message, then choose a folder:
  - 0 for New Messages
  - 1 for Old Messages
  - 2 for Work Messages
  - 3 for Family Messages
  - 4 for Friends Messages
- * for Help
- # to Cancel/Exit to Main Menu

Change folders. Press:
- 0 for New Messages
- 1 for Old Messages
- 2 for Work Messages
- 3 for Family Messages
- 4 for Friends' Messages
- # to Cancel/Exit to Main Menu

Mailbox options. Press:
- 1 to Record your Unavailable Message
- 2 to Record your Busy Message
- 3 to Record your Name
- 4 to Change your Password
- # to Cancel/Exit to Main Menu