Class-less Objects in JavaScript

Packt
23 Oct 2009
7 min read
JavaScript Objects

When you think about a JavaScript object, think of a hash. That's all there is to objects - they are just collections of name-value pairs, where the values can be anything, including other objects and functions. When an object's property is a function, you can also call it a method. This is an empty object:

var myobj = {};

Now you can start adding some meaningful functionality to this object:

myobj.name = "My precious";
myobj.getName = function() { return this.name; };

Note a few things here:

this inside a method refers to the current object, as expected
you can add/tweak/remove properties at any time, not only during creation

Another way to create an object and add properties/methods to it at the same time is like this:

var another = {
    name: 'My other precious',
    getName: function() {
        return this.name;
    }
};

This syntax is the so-called object literal notation - you wrap everything in curly braces { and } and separate the properties inside the object with commas. Key:value pairs are separated by colons. This syntax is not the only way to create objects though.

Constructor Functions

Another way to create a JavaScript object is by using a constructor function. Here's an example of a constructor function:

function ShinyObject(name) {
    this.name = name;
    this.getName = function() {
        return this.name;
    };
}

Now creating an object is much more Java-like:

var my = new ShinyObject('ring');
var myname = my.getName(); // "ring"

There is no difference in the syntax for creating a constructor function as opposed to any other function; the difference is in the usage. If you invoke a function with new, it creates and returns an object and, via this, you have access to modifying the object before you return it. By convention, though, constructor functions are named with a capital letter to distinguish them visually from normal functions and methods. So which way is better - object literal or constructor function? Well, that depends on your specific task. For example, if you need to create many different, yet similar objects, then the class-like constructors may be the right choice. But if your object is more of a one-off singleton, then an object literal is definitely simpler and shorter. OK then, since there are no classes, how about inheritance? Before we get there, here comes a little surprise - in JavaScript, functions are actually objects. (Actually, in JavaScript pretty much everything is an object, with the exception of the few primitive data types - string, boolean, number, and undefined. Functions are objects, arrays are objects, even null is an object. Furthermore, the primitive data types can also be converted and used as objects, so for example "string".length is valid.)

Function Objects and Prototype Property

In JavaScript, functions are objects. They can be assigned to variables, you can add properties and methods to them, and so on. Here's an example of a function:

var myfunc = function(param) {
    alert(param);
};

This is pretty much the same as:

function myfunc(param) {
    alert(param);
}

No matter how you create the function, you end up with a myfunc object and you can access its properties and methods.

alert(myfunc.length); // alerts 1, the number of parameters
alert(myfunc.toString()); // alerts the source code of the function

One of the interesting properties that every function object has is the prototype property. As soon as you create a function, it automatically gets a prototype property which points to an empty object. Of course, you can modify the properties of that empty object.
alert(typeof myfunc.prototype); // alerts "object"myfunc.prototype.test = 1; // completely OK to do so The question is, how is this prototype thing useful? It's used only when you invoke a function as a constructor to create objects. When you do so, the objects automatically get a secret link to the prototype's properties and can access them as their own properties. Confusing? Let's see an example. A new function: function ShinyObject(name) { this.name = name;} Augmenting the prototype property of the function with some functionality: ShinyObject.prototype.getName = function() { return this.name;}; Using the function as a constructor function to create an object: var iphone = new ShinyObject('my precious');iphone.getName(); // returns "my precious" As you can see the new objects automatically get access to the prototype's properties. And when something is getting functionality "for free", this starts to smell like code reusability and inheritance.   Inheritance via Prototype Now let's see how you can use the prototype to implement inheritance. Here's a constructor function which will be the parent: function NormalObject() { this.name = 'normal'; this.getName = function() { return this.name; };} Now a second constructor: function PreciousObject(){ this.shiny = true; this.round = true;} Now the inheritance part: PreciousObject.prototype = new NormalObject(); Voila! Now you can create precious objects and they'll get all the functionality of the normal objects: var crystal_ball = new PreciousObject();crystal_ball.name = 'Ball, Crystal Ball.';alert(crystal_ball.round); // truealert(crystal_ball.getName()); // "Ball, Crystal Ball." Notice how we needed to create an object with new and assign it to the prototype, because the prototype is just an object. It's not like one constructor function inherited from another, in essence we inherited from an object. JavaScript doesn't have classes that inherit from other classes, here objects inherit from other objects. If you have several constructor functions that will inherit NormalObject objects, you may create new NormalObject() every time, but it's not necessary. Even the whole NormalObject constructor may not be needed. Another way to do the same would be to create one (singleton) normal object and use it as a base for the other objects. var normal = { name: 'normal', getName: function() { return this.name; }}; Then the PreciousObject can inherit like this: PreciousObject.prototype = normal; Inheritance by Copying Properties Since inheritance is all about reusing code, yet another way to implement it is to simply copy properties. Imagine you have these objects: var shiny = { shiny: true, round: true};var normal = { name: 'name me', getName: function() { return this.name; }}; How can shiny get normal's properties? Here's a simple extend() function that loops through and copies properties: function extend(parent, child) { for (var i in parent) { child[i] = parent[i]; }}extend(normal, shiny); // inheritshiny.getName(); // "name me" Now this property copying may look like overhead and not performing too well, but truth is, for many tasks it's just fine. You can also see that this is an easy way to implement mixins and multiple inheritance. 
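To make the mixin idea a bit more concrete, here is a small sketch that reuses the extend() helper defined above; the two mixin objects and their names are made up for illustration and are not part of the original article.

// illustrative mixin objects
var serializableMixin = {
    toJSONString: function() {
        return '{"name": "' + this.name + '"}';
    }
};

var comparableMixin = {
    equals: function(other) {
        return this.name === other.name;
    }
};

// copy both sets of methods onto one target object using extend() from above
var gadget = { name: 'shiny gadget' };
extend(serializableMixin, gadget);
extend(comparableMixin, gadget);

gadget.toJSONString();                   // '{"name": "shiny gadget"}'
gadget.equals({ name: 'shiny gadget' }); // true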
Crockford's beget Object Douglas Crockford, a JavaScript guru and creator of JSON, suggests this interesting begetObject() way of implementing inheritance: function begetObject(o) { function F() {} F.prototype = o; return new F();} Here you create a temp constructor so you can use the prototype functionality, the idea is that you create a new object, but instead of starting fresh, you inherit some functionality from another, already existing, object. Parent object: var normal = { name: 'name me', getName: function() { return this.name; }}; A new object inheriting from the parent: var shiny = begetObject(normal); Augment the new object with more functionality: shiny.round = true;shiny.preciousness = true; YUI's extend() Let's wrap up with yet another way to implement inheritance, which is probably the closest to Java, because in this method, it looks like a constructor function inherits from another constructor function, hence it looks a bit like a class inheriting from a class. This method is used in the popular YUI JavaScript library (Yahoo! User Interface) and here's a little simplified version: function extend(Child, Parent) { var F = function(){}; F.prototype = Parent.prototype; Child.prototype = new F();} With this method you pass two constructor functions and the first (the child) gets all the properties and methods of the second (the parent) via the prototype property. Summary Let's quickly summarize what we just learned about JavaScript: there are no classes objects inherit from objects object literal notation var o = {}; constructor functions provide Java-like syntax var o = new Object(); functions are objects all function objects have a prototype property And finally, there are dozens of ways to implement inheritance, you can pick and choose depending on your task at hand, personal preferences, team preferences, mood or the current phase of the Moon.  
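As a closing illustration, here is a short usage sketch of the simplified YUI-style extend() above. The Parent and Child constructors are made up for the example; note that with this approach only members placed on the parent's prototype are shared, unlike the property-copying extend() shown earlier.

// illustrative constructors; the shared method lives on the parent's prototype
function Parent() {}
Parent.prototype.getName = function() {
    return this.name;
};

function Child(name) {
    this.name = name;
}

extend(Child, Parent); // Child.prototype now chains to Parent.prototype

var kid = new Child('my precious child');
kid.getName();          // "my precious child"
kid instanceof Parent;  // true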

Writing a Package in Python

Packt
23 Oct 2009
18 min read
Writing a Package Its intents are: To shorten the time needed to set up everything before starting the real work, in other words the boiler-plate code To provide a standardized way to write packages To ease the use of a test-driven development approach To facilitate the releasing process It is organized in the following four parts: A common pattern for all packages that describes the similarities between all Python packages, and how distutils and setuptools play a central role How generative programming (http://en.wikipedia.org/wiki/Generative_programming) can help this through the template-based approach The package template creation, where everything needed to work is set Setting up a development cycle A Common Pattern for All Packages The easiest way to organize the code of an application is to split it into several packages using eggs. This makes the code simpler, and easier to understand, maintain, and change. It also maximizes the reusability of each package. They act like components. Applications for a given company can have a set of eggs glued together with a master egg. Therefore, all packages can be built using egg structures. This section presents how a namespaced package is organized, released, and distributed to the world through distutils and setuptools. Writing an egg is done by layering the code in a nested folder that provides a common prefix namespace. For instance, for the Acme company, the common namespace can be acme. The result is a namespaced package. For example, a package whose code relates to SQL can be called acme.sql. The best way to work with such a package is to create an acme.sql folder that contains the acme and then the sql folder: setup.py, the Script That Controls Everything The root folder contains a setup.py script, which defines all metadata as described in the distutils module, combined as arguments in a call to the standard setup function. This function was extended by the third-party library setuptools that provides most of the egg infrastructure. The boundary between distutils and setuptools is getting fuzzy, and they might merge one day. Therefore, the minimum content for this file is: from setuptools import setupsetup(name='acme.sql') name gives the full name of the egg. From there, the script provides several commands that can be listed with the -help-commands option. $ python setup.py --help-commands Standard commands:  build             build everything needed to install  ...  install           install everything from build directory  sdist         create a source distribution  register      register the distribution  bdist         create a built (binary) distributionExtra commands:  develop       install package in 'development mode'  ...  test          run unit tests after in-place build alias         define a shortcut  bdist_egg     create an "egg" distribution The most important commands are the ones left in the preceding listing. Standard commands are the built-in commands provided by distutils, whereas Extra commands are the ones created by third-party packages such as setuptools or any other package that defines and registers a new command. sdist The sdist command is the simplest command available. It creates a release tree where everything needed to run the package is copied. This tree is then archived in one or many archived files (often, it just creates one tar ball). The archive is basically a copy of the source tree. This command is the easiest way to distribute a package from the target system independently. 
It creates a dist folder with the archives in it that can be distributed. To be able to use it, an extra argument has to be passed to setup to provide a version number. If you don't give it a version value, it will use version = 0.0.0:

from setuptools import setup
setup(name='acme.sql', version='0.1.1')

This number is useful to upgrade an installation. Every time a package is released, the number is raised so that the target system knows it has changed. Let's run the sdist command with this extra argument:

$ python setup.py sdist
running sdist
...
creating dist
tar -cf dist/acme.sql-0.1.1.tar acme.sql-0.1.1
gzip -f9 dist/acme.sql-0.1.1.tar
removing 'acme.sql-0.1.1' (and everything under it)
$ ls dist/
acme.sql-0.1.1.tar.gz

Under Windows, the archive will be a ZIP file. The version is used to mark the name of the archive, which can be distributed and installed on any system having Python. In the sdist distribution, if the package contains C libraries or extensions, the target system is responsible for compiling them. This is very common for Linux-based systems or Mac OS because they commonly provide a compiler. But it is less usual to have one under Windows. That's why a package should always be distributed with a pre-built distribution as well, when it is intended to run under several platforms.

The MANIFEST.in File

When building a distribution with sdist, distutils browses the package directory looking for files to include in the archive. distutils will include:

All Python source files implied by the py_modules, packages, and scripts options
All C source files listed in the ext_modules option
Files that match the glob pattern test/test*.py
README, README.txt, setup.py, and setup.cfg files

Besides, if your package is under Subversion or CVS, sdist will browse folders such as .svn to look for files to include. sdist builds a MANIFEST file that lists all files and includes them into the archive. Let's say you are not using these version control systems, and need to include more files. Now, you can define a template called MANIFEST.in in the same directory as that of setup.py for the MANIFEST file, where you indicate to sdist which files to include. This template defines one inclusion or exclusion rule per line, for example:

include HISTORY.txt
include README.txt
include CHANGES.txt
include CONTRIBUTORS.txt
include LICENSE
recursive-include *.txt *.py

The full list of commands is available at http://docs.python.org/dist/sdist-cmd.html#sdist-cmd.

build and bdist

To be able to distribute a pre-built distribution, distutils provide the build command, which compiles the package in four steps:

build_py: Builds pure Python modules by byte-compiling them and copying them into the build folder.
build_clib: Builds C libraries, when the package contains any, using Python compiler and creating a static library in the build folder.
build_ext: Builds C extensions and puts the result in the build folder like build_clib.
build_scripts: Builds the modules that are marked as scripts. It also changes the interpreter path when the first line was set (#!) and fixes the file mode so that it is executable.

Each of these steps is a command that can be called independently. The result of the compilation process is a build folder that contains everything needed for the package to be installed. There's no cross-compiler option yet in the distutils package. This means that the result of the command is always specific to the system it was built on.
Some people have recently proposed patches in the Python tracker to make distutils able to cross-compile the C parts. So this feature might be available in the future. When some C extensions have to be created, the build process uses the system compiler and the Python header file (Python.h). This include file is available from the time Python was built from the sources. For a packaged distribution, an extra package called python-dev often contains it, and has to be installed as well. The C compiler used is the system compiler. For Linux-based system or Mac OS X, this would be gcc. For Windows, Microsoft Visual C++ can be used (there's a free command-line version available) and the open-source project MinGW as well. This can be configured in distutils. The build command is used by the bdist command to build a binary distribution. It calls build and all dependent commands, and then creates an archive in the same was as sdist does. Let's create a binary distribution for acme.sql under Mac OS X: $ python setup.py bdistrunning bdistrunning bdist_dumbrunning build...running install_scriptstar -cf dist/acme.sql-0.1.1.macosx-10.3-fat.tar .gzip -f9 acme.sql-0.1.1.macosx-10.3-fat.tarremoving 'build/bdist.macosx-10.3-fat/dumb' (and everything under it)$ ls dist/acme.sql-0.1.1.macosx-10.3-fat.tar.gz    acme.sql-0.1.1.tar.gz Notice that the newly created archive's name contains the name of the system and the distribution it was built under (Mac OS X 10.3). The same command called under Windows will create a specific distribution archive: C:acme.sql> python.exe setup.py bdist...C:acme.sql> dir dist25/02/2008  08:18     <DIR>          .25/02/2008  08:18     <DIR>          ..25/02/2008  08:24             16 055 acme.sql-0.1.win32.zip               1 File(s)          16 055 bytes               2 Dir(s)   22 239 752 192 bytes free If a package contains C code, apart from a source distribution, it's important to release as many different binary distributions as possible. At the very least, a Windows binary distribution is important for those who don't have a C compiler installed. A binary release contains a tree that can be copied directly into the Python tree. It mainly contains a folder that is copied into Python's site-packages folder. bdist_egg The bdist_egg command is an extra command provided by setuptools. It basically creates a binary distribution like bdist, but with a tree comparable to the one found in the source distribution. In other words, the archive can be downloaded, uncompressed, and used as it is by adding the folder to the Python search path (sys.path). These days, this distribution mode should be used instead of the bdist-generated one. install The install command installs the package into Python. It will try to build the package if no previous build was made and then inject the result into the Python tree. When a source distribution is provided, it can be uncompressed in a temporary folder and then installed with this command. The install command will also install dependencies that are defined in the install_requires metadata. This is done by looking at the packages in the Python Package Index (PyPI). For instance, to install pysqlite and SQLAlchemy together with acme.sql, the setup call can be changed to: from setuptools import setupsetup(name='acme.sql', version='0.1.1',      install_requires=['pysqlite', 'SQLAlchemy']) When we run the command, both dependencies will be installed. 
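setuptools also understands version specifiers inside install_requires, which is useful when a package only works with certain releases of its dependencies. A small sketch follows; the version pins are illustrative, not a recommendation from the article:

from setuptools import setup

setup(name='acme.sql',
      version='0.1.1',
      # hypothetical pins for illustration; adjust to the versions you really support
      install_requires=['pysqlite>=2.0', 'SQLAlchemy>=0.4,<0.5'])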
How to Uninstall a Package The command to uninstall a previously installed package is missing in setup.py. This feature was proposed earlier too. This is not trivial at all because an installer might change files that are used by other elements of the system. The best way would be to create a snapshot of all elements that are being changed, and a record of all files and directories created. A record option exists in install to record all files that have been created in a text file: $ python setup.py install --record installation.txtrunning install...writing list of installed files to 'installation.txt' This will not create any backup on any existing file, so removing the file mentioned might break the system. There are platform-specific solutions to deal with this. For example, distutils allow you to distribute the package as an RPM package. But there's no universal way to handle it as yet. The simplest way to remove a package at this time is to erase the files created, and then remove any reference in the easy-install.pth file that is located in the sitepackages folder. develop setuptools added a useful command to work with the package. The develop command builds and installs the package in place, and then adds a simple link into the Python site-packages folder. This allows the user to work with a local copy of the code, even though it's available within Python's site-packages folder. All packages that are being created are linked with the develop command to the interpreter. When a package is installed this way, it can be removed specifically with the -u option, unlike the regular install: $ sudo python setup.py developrunning develop...Adding iw.recipe.fss 0.1.3dev-r7606 to easy-install.pth fileInstalled /Users/repos/ingeniweb.sourceforge.net/iw.recipe.fss/trunkProcessing dependencies ...$ sudo python setup.py develop -urunning developRemoving...Removing iw.recipe.fss 0.1.3dev-r7606 from easy-install.pth file Notice that a package installed with develop will always prevail over other versions of the same package installed. test Another useful command is test. It provides a way to run all tests contained in the package. It scans the folder and aggregates the test suites it finds. The test runner tries to collect tests in the package but is quite limited. A good practice is to hook an extended test runner such as zope.testing or Nose that provides more options. To hook Nose transparently to the test command, the test_suite metadata can be set to 'nose.collector' and Nose added in the test_requires list: setup(...test_suite='nose.collector',test_requires=['Nose'],...) register and upload To distribute a package to the world, two commands are available: register: This will upload all metadata to a server. upload: This will upload to the server all archives previously built in the dist folder. The main PyPI server, previously named the Cheeseshop, is located at http://pypi.python.org/pypi and contains over 3000 packages from the community. It is a default server used by the distutils package, and an initial call to the register command will generate a .pypirc file in your home directory. Since the PyPI server authenticates people, when changes are made to a package, you will be asked to create a user over there. This can also be done at the prompt: $ python setup.py registerrunning register...We need to know who you are, so please choose either: 1. use your existing login, 2. register as a new user, 3. have the server generate a new password for you (and email it toyou), or 4. 
quitYour selection [default 1]: Now, a .pypirc file will appear in your home directory containing the user and password you have entered. These will be used every time register or upload is called: [server-index]username: tarekpassword: secret There is a bug on Windows with Python 2.4 and 2.5. The home directory is not found by distutils unless a HOME environment variable is added. But, this has been fixed in 2.6. To add it, use the technique where we modify the PATH variable. Then add a HOME variable for your user that points to the directory returned by os.path.expanduser('~'). When the download_url metadata or the url is specified, and is a valid URL, the PyPI server will make it available to the users on the project web page as well. Using the upload command will make the archive directly available at PyPI, so the download_url can be omitted: Distutils defines a Trove categorization (see PEP 301: http://www.python.org/dev/peps/pep-0301/#distutils-trove-classification) to classify the packages, such as the one defined at Sourceforge. The trove is a static list that can be found at http://pypi.python.org/pypi?%3Aaction=list_classifiers, and that is augmented from time to time with a new entry. Each line is composed of levels separated by "::": ...Topic :: TerminalsTopic :: Terminals :: SerialTopic :: Terminals :: TelnetTopic :: Terminals :: Terminal Emulators/X TerminalsTopic :: Text Editors Topic :: Text Editors :: DocumentationTopic :: Text Editors :: Emacs... A package can be classified in several categories, which can be listed in the classifiers meta-data. A GPL package that deals with low-level Python code (for instance) can use: Programming Language :: PythonTopic :: Software Development :: Libraries :: Python ModulesLicense :: OSI Approved :: GNU General Public License (GPL) Python 2.6 .pypirc Format The .pypirc file has evolved under Python 2.6, so several users and their passwords can be managed along with several PyPI-like servers. A Python 2.6 configuration file will look somewhat like this: [distutils]index-servers =    pypi    alternative-server    alternative-account-on-pypi[pypi]username:tarekpassword:secret[alternative-server]username:tarekpassword:secretrepository:http://example.com/pypi The register and upload commands can pick a server with the help of the -r option, using the repository full URL or the section name: # upload to http://example.com/pypi$ python setup.py sdist upload -r   alternative-server#  registers with default account (tarek at pypi)$ python setup.py register#  registers to http://example.com$ python setup.py register -r http://example.com/pypi This feature allows interaction with servers other than PyPI. When dealing with a lot of packages that are not to be published at PyPI, a good practice is to run your own PyPI-like server. The Plone Software Center (see http://plone.org/products/plonesoftwarecenter) can be used, for example, to deploy a web server that can interact with distutils upload and register commands. Creating a New Command distutils allows you to create new commands, as described in http://docs.python.org/dist/node84.html. A new command can be registered with an entry point, which was introduced by setuptools as a simple way to define packages as plug-ins. An entry point is a named link to a class or a function that is made available through some APIs in setuptools. Any application can scan for all registered packages and use the linked code as a plug-in. 
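To illustrate the consuming side of this plug-in mechanism, here is a minimal sketch using pkg_resources (shipped with setuptools) to scan a group and load whatever has been registered in it; the group name is only an example.

import pkg_resources

# iterate over every entry point registered under a given group name;
# 'distutils.commands' is the group distutils itself scans for new commands
for entry_point in pkg_resources.iter_entry_points('distutils.commands'):
    plugin = entry_point.load()  # imports the module and returns the linked class or function
    print("%s -> %r" % (entry_point.name, plugin))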
To link the new command, the entry_points metadata can be used in the setup call: setup(name="my.command",          entry_points="""             [distutils.commands]             my_command = my.command.module.Class          """) All named links are gathered in named sections. When distutils is loaded, it scans for links that were registered under distutils.commands. This mechanism is used by numerous Python applications that provide extensibility. setup.py Usage Summary There are three main actions to take with setup.py: Build a package. Install it, possibly in develop mode. Register and upload it to PyPI. Since all the commands can be combined in the same call, some typical usage patterns are: # register the package with PyPI, creates a source and# an egg distribution, then upload them$ python setup.py register sdist bdist_egg upload# installs it in-place, for development purpose$ python setup.py develop# installs it$ python setup.py install The alias Command To make the command line work easily, a new command has been introduced by setuptools called alias. In a file called setup.cfg, it creates an alias for a given combination of commands. For instance, a release command can be created to perform all actions needed to upload a source and a binary distribution to PyPI: $ python setup.py alias release register sdist bdist_egg uploadrunning aliasWriting setup.cfg$ python setup.py release... Other Important Metadata Besides the name and the version of the package being distributed, the most important arguments setup can receive are: description: A few sentences to describe the package long_description: A full description that can be in reStructuredText keywords: A list of keywords that define the package author: The author's name or organization author_email: The contact email address url: The URL of the project license: The license (GPL, LGPL, and so on) packages: A list of all names in the package; setuptools provides a small function called find_packages that calculates this namespace_packages: A list of namespaced packages A completed setup.py file for acme.sql would be: import osfrom setuptools import setup, find_packagesversion = '0.1.0'README = os.path.join(os.path.dirname(__file__), 'README.txt')long_description = open(README).read() + 'nn'setup(name='acme.sql',      version=version,      description=("A package that deals with SQL, "                    "from ACME inc"),      long_description=long_description,      classifiers=[        "Programming Language :: Python",        ("Topic :: Software Development :: Libraries ::          "Python Modules"),        ],      keywords='acme sql',      author='Tarek',      author_email='[email protected]',      url='http://ziade.org',      license='GPL',      packages=find_packages(),      namespace_packages=['acme'],      install_requires=['pysqlite','SQLAchemy']      ) The two comprehensive guides to keep under your pillow are: The distutils guide at http://docs.python.org/dist/dist.html The setuptools guide at http://peak.telecommunity.com/DevCenter/setuptools 
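For reference, here is a sketch of the directory layout that a completed acme.sql package such as the one above typically assumes; the file names are illustrative, and the one-line namespace declaration is the usual setuptools idiom.

acme.sql/
    setup.py            the completed setup script shown above
    README.txt          read into long_description
    acme/
        __init__.py     contains only the namespace declaration below
        sql/
            __init__.py
            ...         the actual code of the package

# content of acme/__init__.py:
__import__('pkg_resources').declare_namespace(__name__)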

EJB 3 Security

Packt
23 Oct 2009
15 min read
Authentication and authorization in Java EE Container Security There are two aspects covered by Java EE container security: authentication and authorization. Authentication is the process of verifying that users are who they claim to be. Typically this is performed by the user providing credentials such as a password. Authorization, or access control, is the process of restricting operations to specific users or categories of users. The EJB specification provides two kinds of authorization: declarative and programmatic, as we shall see later in the article. The Java EE security model introduces a few concepts common to both authentication and authorization. A principal is an entity that we wish to authenticate. The format of a principal is application-specific but an example is a username. A role is a logical grouping of principals. For example, we can have administrator, manager, and employee roles. The scope over which a common security policy applies is known as a security domain, or realm. Authentication For authentication, every Java EE compliant application server provides the Java Authentication and Authorization Service (JAAS) API. JAAS supports any underlying security system. So we have a common API regardless of whether authentication is username/password verification against a database, iris or fingerprint recognition for example. The JAAS API is fairly low level and most application servers provide authentication mechanisms at a higher level of abstraction. These authentication mechanisms are application-server specific however. We will not cover JAAS any further here, but look at authentication as provided by the GlassFish application server. GlassFish Authentication There are three actors we need to define on the GlassFish application server for authentication purposes: users, groups, and realms. A user is an entity that we wish to authenticate. A user is synonymous with a principal. A group is a logical grouping of users and is not the same as a role. A group's scope is global to the application server. A role is a logical grouping of users whose scope is limited to a specific application. Of course for some applications we may decide that roles are identical to groups. For other applications we need some mechanism for mapping the roles onto groups. We shall see how this is done later. A realm, as we have seen, is the scope over which a common security policy applies. GlassFish provides three kinds of realms: file, certificate, and admin-realm. The file realm stores user, group, and realm credentials in a file named keyfile. This file is stored within the application server file system. A file realm is used by web clients using http or EJB application clients. The certificate realm stores a digital certificate and is used for authenticating web clients using https. The admin-realm is similar to the file realm and is used for storing administrator credentials. GlassFish comes pre-configured with a default file realm named file. We can add, edit, and delete users, groups, and realms using the GlassFish administrator console. We can also use the create-file-user option of the asadmin command line utility. 
To add a user named scott to a group named bankemployee, in the file realm, we would use the command: <target name="create-file-user"> <exec executable="${glassfish.home}/bin/asadmin" failonerror="true" vmlauncher="false"> <arg line="create-file-user --user admin --passwordfile userpassword --groups bankemployee scott"/> </exec> </target> --user specifies the GlassFish administrator username, admin in our example. --passwordfile specifies the name of the file containing password entries. In our example this file is userpassword. Users, other than GlassFish administrators, are identified by AS_ADMIN_USERPASSWORD. In our example the content of the userpassword file is: AS_ADMIN_USERPASSWORD=xyz This indicates that the user's password is xyz. --groups specifies the groups associated with this user (there may be more than one group). In our example there is just one group, named bankemployee. Multiple groups are colon delineated. For example if the user belongs to both the bankemployee and bankcustomer groups, we would specify: --groups bankemployee:bankcustomer The final entry is the operand which specifies the name of the user to be created. In our example this is scott. There is a corresponding asadmin delete-file-user option to remove a user from the file realm. Mapping Roles to Groups The Java EE specification specifies that there must be a mechanism for mapping local application specific roles to global roles on the application server. Local roles are used by an EJB for authorization purposes. The actual mapping mechanism is application server specific. As we have seen in the case of GlassFish, the global application server roles are called groups. In GlassFish, local roles are referred to simply as roles. Suppose we want to map an employee role to the bankemployee group. We would need to create a GlassFish specific deployment descriptor, sun-ejb-jar.xml, with the following element: <security-role-mapping> <role-name>employee</role-name> <group-name>bankemployee</group-name> </security-role-mapping> We also need to access the configuration-security screen in the administrator console. We then disable the Default Principal To Role Mapping flag. If the flag is enabled then the default is to map a group onto a role with the same name. So the bankemployee group will be mapped to the bankemployee role. We can leave the default values for the other properties on the configuration-security screen. Many of these features are for advanced use where third party security products can be plugged in or security properties customized. Consequently we will give only a brief description of these properties here. Security Manager: This refers to the JVM security manager which performs code-based security checks. If the security manager is disabled GlassFish will have better performance. However, even if the security manager is disabled, GlassFish still enforces standard Java EE authentication/authorization. Audit Logging: If this is enabled, GlassFish will provide an audit trail of all authentication and authorization decisions through audit modules. Audit modules provide information on incoming requests, outgoing responses and whether authorization was granted or denied. Audit logging applies for web-tier and ejb-tier authentication and authorization. A default audit module is provided but custom audit modules can also be created. Default Realm: This is the default realm used for authentication. Applications use this realm unless they specify a different realm in their deployment descriptor. 
The default value is file. Other possible values are admin-realm and certificate. We discussed GlassFish realms in the previous section. Default Principal: This is the user name used by GlassFish at run time if no principal is provided. Normally this is not required so the property can be left blank. Default Principal Password: This is the password of the default principal. JACC: This is the class name of a JACC (Java Authorization Contract for Containers) provider. This enables the GlassFish administrator to set up third-party plug in modules conforming to the JACC standard to perform authorization. Audit Modules: If we have created custom modules to perform audit logging, we would select from this list. Mapped Principal Class: This is only applicable when Default Principal to Role Mapping is enabled. The mapped principal class is used to customize the java.security.Principal implementation class used in the default principal to role mapping. If no value is entered, the com.sun.enterprise.deployment.Group implementation of java.security.Principal is used. Authenticating an EJB Application Client Suppose we want to invoke an EJB, BankServiceBean, from an application client. We also want the application client container to authenticate the client. There are a number of steps we first need to take which are application server specific. We will assume that all roles will have the same name as the corresponding application server groups. In the case of GlassFish we need to use the administrator console and enable Default Principal To Role Mapping. Next we need to define a group named bankemployee with one or more associated users. An EJB application client needs to use IOR (Interoperable Object Reference) authentication. The IOR protocol was originally created for CORBA (Common Object Request Broker Architecture) but all Java EE compliant containers support IOR. An EJB deployed on one Java EE compliant vendor may be invoked by a client deployed on another Java EE compliant vendor. Security interoperability between these vendors is achieved using the IOR protocol. In our case the client and target EJB both happen to be deployed on the same vendor, but we still use IOR for propagating security details from the application client container to the EJB container. IORs are configured in vendor specific XML files rather than the standard ejb-jar.xml file. In the case of GlassFish, this is done within the <ior-security-config> element within the sun-ejb-jar.xml deployment descriptor file. We also need to specify the invoked EJB, BankServiceBean, in the deployment descriptor. An example of the sun-ejb-jar.xml deployment descriptor is shown below: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE sun-ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD       Application Server 9.0 EJB 3.0//EN"       "http://www.sun.com/software/appserver/dtds/sun-ejb-jar_3_0-0.dtd"> <sun-ejb-jar>   <enterprise-beans>     <ejb>       <ejb-name>BankServiceBean</ejb-name>         <ior-security-config>           <as-context>              <auth-method>USERNAME_PASSWORD</auth-method>              <realm>default</realm>              <required>true</required>           </as-context>         </ior-security-config>     </ejb>   </enterprise-beans> </sun-ejb-jar> The as in <as-context> stands for the IOR authentication service. This specifies authentication mechanism details. The <auth-method> element specifies the authentication method. This is set to USERNAME_PASSWORD which is the only value for an application client. 
The <realm> element specifies the realm in which the client is authenticated. The <required> element specifies whether the above authentication method is required to be used for client authentication. When creating the corresponding EJB JAR file, the sun-ejb-jar.xml file should be included in the META-INF directory, as follows: <target name="package-ejb" depends="compile">     <jar jarfile="${build.dir}/BankService.jar">         <fileset dir="${build.dir}">              <include name="ejb30/session/**" />                           <include name="ejb30/entity/**" />               </fileset>               <metainf dir="${config.dir}">             <include name="persistence.xml" />                          <include name="sun-ejb-jar.xml" />         </metainf>     </jar> </target> As soon as we run the application client, GlassFish will prompt with a username and password form, as follows: If we reply with the username scott and password xyz the program will run. If we run the application with an invalid username or password we will get the following error message: javax.ejb.EJBException: nested exception is: java.rmi.AccessException: CORBA NO_PERMISSION 9998 ..... EJB Authorization Authorization, or access control, is the process of restricting operations to specific roles. In contrast with authentication, EJB authorization is completely application server independent. The EJB specification provides two kinds of authorization: declarative and programmatic. With declarative authorization all security checks are performed by the container. An EJB's security requirements are declared using annotations or deployment descriptors. With programmatic authorization security checks are hard-coded in the EJBs code using API calls. However, even with programmatic authorization the container is still responsible for authentication and for assigning roles to principals. Declarative Authorization As an example, consider the BankServiceBean stateless session bean with methods findCustomer(), addCustomer() and updateCustomer(): package ejb30.session; import javax.ejb.Stateless; import javax.persistence.EntityManager; import ejb30.entity.Customer; import javax.persistence.PersistenceContext; import javax.annotation.security.RolesAllowed; import javax.annotation.security.PermitAll; import java.util.*; @Stateless @RolesAllowed("bankemployee") public class BankServiceBean implements BankService { @PersistenceContext(unitName="BankService") private EntityManager em; private Customer cust; @PermitAll public Customer findCustomer(int custId) { return ((Customer) em.find(Customer.class, custId)); } public void addCustomer(int custId, String firstName, String lastName) { cust = new Customer(); cust.setId(custId); cust.setFirstName(firstName); cust.setLastName(lastName); em.persist(cust); } public void updateCustomer(Customer cust) { Customer mergedCust = em.merge(cust); } } We have prefixed the bean class with the annotation: @RolesAllowed("bankemployee") This specifies the roles allowed to access any of the bean's method. So only users belonging to the bankemployee role may access the addCustomer() and updateCustomer() methods. More than one role can be specified by means of a brace delineated list, as follows: @RolesAllowed({"bankemployee", "bankcustomer"}) We can also prefix a method with @RolesAllowed, in which case the method annotation will override the class annotation. The @PermitAll annotation allows unrestricted access to a method, overriding any class level @RolesAllowed annotation. 
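As a minimal sketch of how a method-level annotation overrides the class-level default (the bean, interface, and method names below are made up for illustration and are not part of the article's BankServiceBean):

import javax.ejb.Stateless;
import javax.annotation.security.PermitAll;
import javax.annotation.security.RolesAllowed;

// ReportService is an assumed local business interface declaring the three methods below
@Stateless
@RolesAllowed("bankemployee") // class-level default for every business method
public class ReportServiceBean implements ReportService {

    // inherits the class-level rule: only bankemployee may call this
    public void generateReport() {
        // ...
    }

    // overrides the class-level list for this method only
    @RolesAllowed({"bankemployee", "bankcustomer"})
    public void viewReport() {
        // ...
    }

    // overrides the class-level rule: any authenticated caller may invoke it
    @PermitAll
    public String serviceVersion() {
        return "1.0";
    }
}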
As with EJB 3 in general, we can use deployment descriptors as alternatives to the @RolesAllowed and @PermitAll annotations. Denying Authorization Suppose we want to deny all users access to the BankServiceBean.updateCustomer() method. We can do this using the @DenyAll annotation: @DenyAll public void updateCustomer(Customer cust) { Customer mergedCust = em.merge(cust); } Of course if you have access to source code you could simply delete the method in question rather than using @DenyAll. However suppose you do not have access to the source code and have received the EJB from a third party. If you in turn do not want your clients accessing a given method then you would need to use the <exclude-list> element in the ejb-jar.xml deployment descriptor: <?xml version="1.0" encoding="UTF-8"?> <ejb-jar version="3.0"                         xsi_schemaLocation="http://java.sun.com/xml/ns/javaee             http://java.sun.com/xml/ns/javaee/ejb-jar_3_0.xsd"> <enterprise-beans> <session> <ejb-name>BankServiceBean</ejb-name> </session> </enterprise-beans> <assembly-descriptor> <exclude-list><method> <ejb-name>BankServiceBean</ejb-name> <method-name>updateCustomer</method-name></method></exclude-list> </assembly-descriptor> </ejb-jar> EJB Security Propagation Suppose a client with an associated role invokes, for example, EJB A. If EJB A then invokes, for example, EJB B then by default the client's role is propagated to EJB B. However, you can specify with the @RunAs annotation that all methods of an EJB execute under a specific role. For example, suppose the addCustomer() method in the BankServiceBean EJB invokes the addAuditMessage() method of the AuditServiceBean EJB: @Stateless @RolesAllowed("bankemployee") public class BankServiceBean implements BankService { private @EJB AuditService audit; ....      public void addCustomer(int custId, String firstName,                                                          String lastName) {              cust = new Customer();              cust.setId(custId);              cust.setFirstName(firstName);              cust.setLastName(lastName);              em.persist(cust);              audit.addAuditMessage(1, "customer add attempt");      }      ... } Note that only a client with an associated role of bankemployee can invoke addCustomer(). If we prefix the AuditServiceBean class declaration with @RunAs("bankauditor") then the container will run any method in AuditServiceBean as the bankauditor role, regardless of the role which invokes the method. Note that the @RunAs annotation is applied only at the class level, @RunAs cannot be applied at the method level. @Stateless @RunAs("bankauditor") public class AuditServiceBean implements AuditService { @PersistenceContext(unitName="BankService") private EntityManager em; @TransactionAttribute( TransactionAttributeType.REQUIRES_NEW) public void addAuditMessage (int auditId, String message) { Audit audit = new Audit(); audit.setId(auditId); audit.setMessage(message); em.persist(audit); } } Programmatic Authorization With programmatic authorization the bean rather than the container controls authorization. The javax.ejb.SessionContext object provides two methods which support programmatic authorization: getCallerPrincipal() and isCallerInRole(). The getCallerPrincipal() method returns a java.security.Principal object. This object represents the caller, or principal, invoking the EJB. We can then use the Principal.getName() method to obtain the name of the principal. 
We have done this in the addAccount() method of the BankServiceBean as follows: Principal cp = ctx.getCallerPrincipal(); System.out.println("getname:" + cp.getName()); The isCallerInRole() method checks whether the principal belongs to a given role. For example, the code fragment below checks if the principal belongs to the bankcustomer role. If the principal does not belong to the bankcustomer role, we only persist the account if the balance is less than 99. if (ctx.isCallerInRole("bankcustomer")) {     em.persist(ac); } else if (balance < 99) {            em.persist(ac);   } When using the isCallerInRole() method, we need to declare all the security role names used in the EJB code using the class level @DeclareRoles annotation: @DeclareRoles({"bankemployee", "bankcustomer"}) The code below shows the BankServiceBean EJB with all the programmatic authorization code described in this section: package ejb30.session; import javax.ejb.Stateless; import javax.persistence.EntityManager; import ejb30.entity.Account; import javax.persistence.PersistenceContext; import javax.annotation.security.RolesAllowed; import java.security.Principal; import javax.annotation.Resource; import javax.ejb.SessionContext; import javax.annotation.security.DeclareRoles; import java.util.*; @Stateless @DeclareRoles({"bankemployee", "bankcustomer"}) public class BankServiceBean implements BankService { @PersistenceContext(unitName="BankService") private EntityManager em; private Account ac; @Resource SessionContext ctx; @RolesAllowed({"bankemployee", "bankcustomer"}) public void addAccount(int accountId, double balance, String accountType) { ac = new Account(); ac.setId(accountId); ac.setBalance(balance); ac.setAccountType(accountType); Principal cp = ctx.getCallerPrincipal(); System.out.println("getname:" + cp.getName()); if (ctx.isCallerInRole("bankcustomer")) { em.persist(ac); } else if (balance < 99) { em.persist(ac); } } ..... } Where we have a choice declarative authorization is preferable to programmatic authorization. Declarative authorization avoids having to mix business code with security management code. We can change a bean's security policy by simply changing an annotation or deployment descriptor instead of modifying the logic of a business method. However, some security rules, such as the example above of only persisting an account within a balance limit, can only be handled by programmatic authorization. Declarative security is based only on the principal and the method being invoked, whereas programmatic security can take state into consideration. Because an EJB is typically invoked from the web-tier by a servlet, JSP page or JSF component, we will briefly mention Java EE web container security. The web-tier and EJB tier share the same security model. So the web-tier security model is based on the same concepts of principals, roles and realms.

Automation with Python and STAF/STAX

Packt
23 Oct 2009
13 min read
The reader should note that the solution is only intended to explain how Python and STAF may be used. No claim is made that the solution presented here is the best one in any way, just that is one more option that the reader may consider in future developments. The Problem Let's imagine that we have a computer network in which a machine periodically generates some kind of file with information that is of interest to other machines in that network. For example, let's say that this file is a new software build of a product that must transferred to a group of remote machines, in which its functionality has to be tested to make sure it can be delivered to the client. The Python-only solution Sequential A simple solution to make the software build available to all the testing machines could be to copy it to a specific directory whenever a new file is available. For additional security, let's suppose that we're required to verify that the md5 sum for both original and destination files is equal to ensure that build file was copied correctly. If it is considered that /tmp is a good destination directory, then the following script will do the job: 1 #!/usr/bin/python 2 """ 3 Copy a given file to a list of destination machines sequentially 4 """ 5 6 import os, argparse 7 import subprocess 8 import logging 9 10 def main(args): 11 logging.basicConfig(level=logging.INFO, format="%(message)s") 12 13 # Calculate md5 sum before copyin the file 14 orig_md5 = run_command("md5sum %s" % args.file).split()[0] 15 16 # Copy the file to every requested machine and verify 17 # that md5 sum of the destination file is equal 18 # to the md5 sum of the original file 19 for machine in args.machines: 20 run_command("scp %s %s:/tmp/" % (args.file, machine)) 21 dest_md5 = run_command("ssh %s md5sum /tmp/%s" 22 % (machine, os.path.basename(args.file))).split()[0] 23 assert orig_md5 == dest_md5 24 25 def run_command(command_str): 26 """ 27 Run a given command and another process and return stdout 28 """ 29 logging.info(command_str) 30 return subprocess.Popen(command_str, stdout=subprocess.PIPE, 31 shell=True).communicate()[0] 32 33 if __name__ == "__main__": 34 parser = argparse.ArgumentParser(description=__doc__) 35 parser.add_argument("file", 36 help="File to copy") 37 parser.add_argument(metavar="machine", dest="machines", nargs="+", 38 help="List of machines to which file must be copied") 39 40 args = parser.parse_args() 41 args.file = os.path.realpath(args.file) 42 main(args) Here it is assumed that ssh keys have been exchanged between origin and destination machines for automatic authentication without human intervention. The script makes use of the Popen class in the subprocess python standard library. This powerful library provides the capability to launch new operating system processes and capture not only the result code, but also the standard output and error streams. However, it should be taken into account that the Popen class cannot be used to invoke commands on a remote machine by itself. However, as it can be seen in the code, ssh and related commands may be used to launch processes on remote machines when configured properly. 
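One detail worth noting before moving on: run_command() only captures standard output, so a failed scp or ssh invocation would go unnoticed until the md5 assertion fires. Here is a small variant, added purely for illustration and not part of the original script, that also checks the exit status returned by Popen:

import logging
import subprocess

def run_command_checked(command_str):
    """Run a shell command, return its stdout, and fail loudly on a non-zero exit code."""
    logging.info(command_str)
    process = subprocess.Popen(command_str, stdout=subprocess.PIPE, shell=True)
    stdout, _ = process.communicate()
    if process.returncode != 0:
        raise RuntimeError("command failed with code %d: %s"
                           % (process.returncode, command_str))
    return stdout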
For example, if the file of interest was STAF325-src.tar.gz (STAF 3.2.5 source) and the remote machines were 192.168.1.1 and 192.168.1.2, then the file would be copied using the copy.py script in the following way: $ ./copy.py STAF325-src.tar.gz 192.168.1.{1,2}md5sum STAF325-src.tar.gzscp STAF325-src.tar.gz 192.168.1.1:/tmp/ssh 192.168.1.1 md5sum /tmp/STAF325-src.tar.gzscp STAF325-src.tar.gz 192.168.1.2:/tmp/ssh 192.168.1.2 md5sum /tmp/STAF325-src.tar.gz Parallel What would happen if the files were copied in parallel? For this example, it might not make much sense given that probably the network is at bottleneck and there isn't any increase in performance. However, in the case of the md5sum operation, it's a waste of time waiting for the operation to complete on one machine while the other is essentially idle waiting for the next command. Clearly, it would be more interesting to make both machines do the job in parallel to take advantage of CPU cycles. A parallel implementation similar to the sequential one is displayed below: 1 #!/usr/bin/python 2 """ 3 Copy a given file to a list of destination machines in parallel 4 """ 5 6 import os, argparse 7 import subprocess 8 import logging 9 import threading 10 11 def main(args): 12 logging.basicConfig(level=logging.INFO, format="%(threadName)s: %(message)s") 13 orig_md5 = run_command("md5sum %s" % args.file).split()[0] 14 15 # Create one thread for machine 16 threads = [ WorkingThread(machine, args.file, orig_md5) 17 for machine in args.machines] 18 19 # Run all threads 20 for thread in threads: 21 thread.start() 22 23 # Wait for all threads to finish 24 for thread in threads: 25 thread.join() 26 27 class WorkingThread(threading.Thread): 28 """ 29 Thread that performs the copy operation for one machine 30 """ 31 def __init__(self, machine, orig_file, orig_md5): 32 threading.Thread.__init__(self) 33 34 self.machine = machine 35 self.file = orig_file 36 self.orig_md5 = orig_md5 37 38 def run(self): 39 # Copy file to remote machine 40 run_command("scp %s %s:/tmp/" % (self.file, self.machine)) 41 42 # Calculate md5 sum of the file copied at the remote machine 43 dest_md5 = run_command("ssh %s md5sum /tmp/%s" 44 % (self.machine, os.path.basename(self.file))).split()[0] 45 assert self.orig_md5 == dest_md5 46 47 def run_command(command_str): 48 """ 49 Run a given command and another process and return stdout 50 """ 51 logging.info(command_str) 52 return subprocess.Popen(command_str, stdout=subprocess.PIPE, 53 shell=True).communicate()[0] 54 55 if __name__ == "__main__": 56 parser = argparse.ArgumentParser(description=__doc__) 57 parser.add_argument("file", 58 help="File to copy") 59 parser.add_argument(metavar="machine", dest="machines", nargs="+", 60 help="List of machines to which file must be copied") 61 62 args = parser.parse_args() 63 args.file = os.path.realpath(args.file) 64 main(args) Here the same assumptions as in the sequential case are made. In this solution the work that was done inside the for loop is now implemented in the run method of a class that is inherited from threading.Thread class, which is a class that provides an easy way to create working threads such as the ones in the example. 
Returning to the threading-based script, the output of the command, using the same arguments as in the previous example, is:

$ ./copy_parallel.py STAF325-src.tar.gz 192.168.1.{1,2}
MainThread: md5sum STAF325-src.tar.gz
Thread-1: scp STAF325-src.tar.gz 192.168.1.1:/tmp/
Thread-2: scp STAF325-src.tar.gz 192.168.1.2:/tmp/
Thread-2: ssh 192.168.1.2 md5sum /tmp/STAF325-src.tar.gz
Thread-1: ssh 192.168.1.1 md5sum /tmp/STAF325-src.tar.gz

As can be seen in the logs, the md5sum commands aren't necessarily executed in the same order in which the threads were created. This solution isn't much more complex than the sequential one, but it finishes earlier. Hence, when a CPU-intensive task must be performed on every machine, the parallel solution is more convenient, since the small increase in coding complexity pays off in execution performance.

The Python+STAF solution

Sequential

The solutions to the problem presented in the previous section are perfectly fine. However, some developers may find it cumbersome to write scripts from scratch using the Popen class and would prefer to work with a platform in which features such as launching processes on remote machines are already implemented. That's where STAF (Software Testing Automation Framework) might be helpful. STAF is a framework that provides the ability to automate jobs, especially, but not exclusively, in testing environments. STAF is implemented as a process that runs on every machine and provides services that may be used by clients to accomplish different tasks. For more information regarding STAF, please refer to the project homepage.

The Python+STAF sequential version of the program that has been used as an example throughout this article is shown below:

 1 #!/usr/bin/python
 2 """
 3 Copy a given file to a list of destination machines sequentially
 4 """
 5
 6 import os, argparse
 7 import subprocess
 8 import logging
 9 import PySTAF
10
11 def main(args):
12     logging.basicConfig(level=logging.INFO, format="%(message)s")
13     handle = PySTAF.STAFHandle(__file__)
14
15     # Calculate md5 sum before copying the file
16     orig_md5 = run_process_command(handle, "local", "md5sum %s" % args.file).split()[0]
17
18     # Copy the file to every requested machine and verify
19     # that the md5 sum of the destination file is equal
20     # to the md5 sum of the original file
21     for machine in args.machines:
22         copy_file(handle, args.file, machine)
23         dest_md5 = run_process_command(handle, machine, "md5sum /tmp/%s"
24                                        % os.path.basename(args.file)).split()[0]
25         assert orig_md5 == dest_md5
26
27     handle.unregister()
28
29 def run_process_command(handle, location, command_str):
30     """
31     Run a given command in another process and return its stdout
32     """
33     logging.info(command_str)
34
35     result = handle.submit(location, "PROCESS", "START SHELL COMMAND %s WAIT RETURNSTDOUT"
36                            % PySTAF.STAFWrapData(command_str))
37     assert result.rc == PySTAF.STAFResult.Ok
38
39     mc = PySTAF.unmarshall(result.result)
40     return mc.getRootObject()['fileList'][0]['data']
41
42 def copy_file(handle, filename, destination):
43     """
44     Copy a given file to a destination machine
45     """
46     logging.info("copying %s to %s" % (filename, destination))
47
48     result = handle.submit("local", "FS", "COPY FILE %s TODIRECTORY /tmp TOMACHINE %s"
49                            % (PySTAF.STAFWrapData(filename),
50                               PySTAF.STAFWrapData(destination)))
51     assert result.rc == PySTAF.STAFResult.Ok
52
53 if __name__ == "__main__":
54     parser = argparse.ArgumentParser(description=__doc__)
55     parser.add_argument("file",
56                         help="File to copy")
57     parser.add_argument(metavar="machine", dest="machines", nargs="+",
58                         help="List of machines to which the file must be copied")
59
60     args = parser.parse_args()
61     args.file = os.path.realpath(args.file)
62     main(args)
The code makes use of PySTAF, a Python library shipped with the STAF software, which provides the ability to interact with the framework as a client. The typical usage of the library may be summarized as follows:

Register a handle in STAF (line 13): The communication with the server process is managed using handles. A client must have a handle to be able to send requests to local and/or remote machines.
Submit requests (lines 35 and 48): Once the handle is available at the client, the client can use it to submit requests to any location and service. The two basic services that are used in this example are PROCESS, which is used to launch processes on a machine the same way ssh was used in the Python-only version of the example, and FS, which is used to copy files between different machines, as scp was used in the Python-only solution.
Check the result code (lines 37 and 51): After a request has been submitted, the result code should be checked to make sure that there wasn't any communication or syntax problem.
Unmarshall results (lines 39-40): When the standard output is captured, it must be unmarshalled before being used in Python, since responses are encoded in a language-independent format.
Unregister the handle (line 27): When STAF isn't needed anymore, it's advisable to unregister the handle to free the resources allocated to the client in the server.

Compared with the Python-only solution, the advantages of STAF aren't appreciable at first sight. The handle syntax isn't easier than creating Popen objects, and we have to deal with unmarshalling where previously we were just parsing text. However, it has to be taken into account that, as a framework, STAF has a learning curve and offers much more functionality than is shown here, which makes it worthwhile. Please bear with me until section 5, in which the STAX solution will be shown, with a completely different approach to the problem.

Using the script in this section, the output would be pretty much the same as in the previous sequential example:

$ ./staf_copy.py STAF325-src.tar.gz 192.168.1.{1,2}
md5sum STAF325-src.tar.gz
copying STAF325-src.tar.gz to 192.168.1.1
md5sum /tmp/STAF325-src.tar.gz
copying STAF325-src.tar.gz to 192.168.1.2
md5sum /tmp/STAF325-src.tar.gz

As in the previous section, the sequential solution suffers the same problems when CPU-intensive tasks are to be performed. Hence, the same comments apply.
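As a side note, both STAF examples simply assert that a request succeeded. In a longer-lived script it might be preferable to log the STAF result and raise a descriptive error instead; the helper below is only a sketch of such a check, and the name check_staf_result is an illustrative choice rather than part of the original scripts:

def check_staf_result(result, request_description):
    """Fail with a readable message instead of a bare assert."""
    if result.rc != PySTAF.STAFResult.Ok:
        logging.error("STAF request failed (%s): rc=%d, result=%s",
                      request_description, result.rc, result.result)
        raise RuntimeError("STAF request failed: %s" % request_description)

# Possible usage inside run_process_command:
#   result = handle.submit(location, "PROCESS", request)
#   check_staf_result(result, "PROCESS START on %s" % location)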
Parallel

When using STAF, the parallel solution requires the same changes that were explained before. That is, create a new class that inherits from threading.Thread and implement the working threads. The code below shows how this might be implemented:

#!/usr/bin/python
"""
Copy a given file to a list of destination machines in parallel
"""

import os, argparse
import subprocess
import logging
import threading
import PySTAF

def main(args):
    logging.basicConfig(level=logging.INFO, format="%(threadName)s %(message)s")
    handle = PySTAF.STAFHandle(__file__)
    orig_md5 = run_process_command(handle, "local", "md5sum %s" % args.file).split()[0]

    # Create one thread per machine
    threads = [WorkingThread(machine, args.file, orig_md5)
               for machine in args.machines]

    # Run all threads
    for thread in threads:
        thread.start()

    # Wait for all threads to finish
    for thread in threads:
        thread.join()

    handle.unregister()

class WorkingThread(threading.Thread):
    """
    Thread that performs the copy operation for one machine
    """
    def __init__(self, machine, orig_file, orig_md5):
        threading.Thread.__init__(self)

        self.machine = machine
        self.file = orig_file
        self.orig_md5 = orig_md5
        self.handle = PySTAF.STAFHandle("%s:%s" % (__file__, self.getName()))

    def run(self):
        # Copy file to remote machine
        copy_file(self.handle, self.file, self.machine)

        # Calculate md5 sum of the file copied to the remote machine
        dest_md5 = run_process_command(self.handle, self.machine, "md5sum /tmp/%s"
                                       % os.path.basename(self.file)).split()[0]
        assert self.orig_md5 == dest_md5
        self.handle.unregister()

def run_process_command(handle, location, command_str):
    """
    Run a given command in another process and return its stdout
    """
    logging.info(command_str)

    result = handle.submit(location, "PROCESS", "START SHELL COMMAND %s WAIT RETURNSTDOUT"
                           % PySTAF.STAFWrapData(command_str))
    assert result.rc == PySTAF.STAFResult.Ok

    mc = PySTAF.unmarshall(result.result)
    return mc.getRootObject()['fileList'][0]['data']

def copy_file(handle, filename, destination):
    """
    Copy a given file to a destination machine
    """
    logging.info("copying %s to %s" % (filename, destination))

    result = handle.submit("local", "FS", "COPY FILE %s TODIRECTORY /tmp TOMACHINE %s"
                           % (PySTAF.STAFWrapData(filename),
                              PySTAF.STAFWrapData(destination)))
    assert result.rc == PySTAF.STAFResult.Ok

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("file",
                        help="File to copy")
    parser.add_argument(metavar="machine", dest="machines", nargs="+",
                        help="List of machines to which the file must be copied")

    args = parser.parse_args()
    args.file = os.path.realpath(args.file)
    main(args)

As before, this solution is faster, since it takes advantage of having multiple CPUs working on the md5sum calculation instead of just one at a time. The output we get when invoking the script could be:

$ ./staf_copy_parallel.py STAF325-src.tar.gz 192.168.1.{1,2}
MainThread md5sum STAF325-src.tar.gz
Thread-1 copying STAF325-src.tar.gz to 192.168.1.1
Thread-2 copying STAF325-src.tar.gz to 192.168.1.2
Thread-2 md5sum /tmp/STAF325-src.tar.gz
Thread-1 md5sum /tmp/STAF325-src.tar.gz

This time it can be seen that the md5sum calculations don't necessarily start in the same order as the file copy operations. Once again, this solution is slightly more complex, but the gain in performance makes it convenient when dealing with tasks with a high computational cost.
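One practical refinement, if the list of destination machines grows large, is to cap the number of simultaneous copy-and-verify operations. The fragment below is only a sketch of that idea, reusing the WorkingThread class from the parallel STAF script above; the subclass name and the limit of four concurrent operations are arbitrary, illustrative choices:

import threading

# At most four copy/verify operations run at the same time
copy_slots = threading.Semaphore(4)

class ThrottledWorkingThread(WorkingThread):
    def run(self):
        with copy_slots:
            # Delegate to the original copy-and-verify logic
            WorkingThread.run(self)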

10 Minute Guide to the Enterprise Service Bus and the NetBeans SOA Pack

Packt
23 Oct 2009
4 min read
Introduction

When you are integrating different systems together, it can be very easy to use your vendor's APIs and program directly against them. Using that approach, developers can easily integrate applications. Supporting these applications, however, can become problematic. If we have a few systems integrated together in this way, everything is fine, but the more systems we integrate, the more integration code we have, and it rapidly becomes unfeasible to support this point-to-point integration.

To overcome this problem, the integration hub was developed. In this scenario, developers would write against the API of the integration vendor and only had to learn one API. This is a much better approach than the point-to-point integration method; however, it still has its limitations. There is still a proprietary API to learn (albeit only one this time), and if the integration hub goes down for any reason, the entire enterprise can become unavailable.

The Enterprise Service Bus (ESB) overcomes these problems by providing a scalable, standards-based integration architecture. The NetBeans SOA pack includes a copy of OpenESB, which follows the architecture promoted by the Java Business Integration specification, JSR 208.

Workings of an ESB

At the heart of the ESB is the Normalized Message Router (NMR) - a pluggable framework that allows Java Business Integration (JBI) components to be plugged into it as required. The NMR is responsible for passing messages between all of the different JBI components that are plugged into it.

The two main JBI components that are plugged into the NMR are Binding Components and Service Engines. Binding Components are responsible for handling all protocol-specific transport such as HTTP, SOAP, JMS, file system access, and so on. Service Engines, on the other hand, execute business logic as BPEL processes, SQL statements, invocations of external Java EE web services, and so on. There is a clear separation between Binding Components and Service Engines, with protocol-specific transactions being handled by the former and business logic being performed by the latter.

This architecture promotes loose coupling in that Service Engines do not communicate directly with each other. All communication between different JBI components is performed through Binding Components by use of normalized messages, as shown in the sequence chart below. In the case of OpenESB, all of these normalized messages are based upon WSDL. If, for example, a BPEL process needs to invoke a web service or send an email, it does not need to know about SOAP or SMTP; that is the responsibility of the Binding Components. For one Service Engine to invoke another Service Engine, all that is required is for a WSDL-based message to be constructed, which can then be routed via the NMR and Binding Components to the destination Service Engine.

OpenESB provides many different Binding Components and Service Engines, enabling integration with many varied systems. So, we can see that OpenESB provides us with a standards-based architecture that promotes loose coupling between components. NetBeans 6 provides tight integration with OpenESB, allowing developers to take full advantage of its facilities.

Integrating the NetBeans 6 IDE with OpenESB

Integration with NetBeans comes in two parts. First, NetBeans allows the different JBI components to be managed from within the IDE.
Binding Components and Service Engines can be installed into OpenESB from within NetBeans, and from then on the full lifecycle of the components (start, stop, restart, uninstall) can be controlled directly from within the IDE.

Secondly, and more interestingly, the NetBeans IDE provides full editing support for developing Composite Applications: applications that bring together business logic and data from different sources. One of the main features of Composite Applications is probably the BPEL editor. This allows BPEL processes to be built up graphically, allowing interaction with different data sources via different partner links, which may be web services, different BPEL processes, or SQL statements.

Once a BPEL process or composite application has been developed, the NetBeans SOA pack provides tools to allow different bindings to be added onto the application, depending on the Binding Components installed into OpenESB. So, for example, a file binding could be added to a project that polls the file system periodically, looking for input messages to start a BPEL process, the output of which could be saved into a different file or sent directly to an FTP site.

In addition to support for developing Composite Applications, the NetBeans SOA pack provides support for some features many Java developers will find useful, namely XML and WSDL editing and validation. XML and WSDL files can be edited within the IDE as either raw text or via graphical editors. If changes are made in the raw text, the graphical editors update accordingly, and vice versa.

Troubleshooting Lotus Notes/Domino 7 applications

Packt
23 Oct 2009
19 min read
Introduction

The major topics that we'll cover in this article are:

Testing your application (in other words, uncovering problems before your users do it for you).
Asking the right questions when users do discover problems.
Using logging to help troubleshoot your problems.

We'll also examine two important new Notes/Domino 7 features that can be critical for troubleshooting applications:

Domino Domain Monitoring (DDM)
Agent Profiler

For more troubleshooting issues visit: TroubleshootingWiki.org

Testing your Application

Testing an application before you roll it out to your users may sound like an obvious thing to do. However, during the life cycle of a project, testing is often not allocated adequate time or money. Proper testing should include the following:

A meaningful amount of developer testing and bug fixing: This allows you to catch most errors, which saves time and frustration for your user community.
User representative testing: A user representative, who is knowledgeable about the application and how users use it, can often provide more robust testing than the developer. This also provides early feedback on features.
Pilot testing: In this phase, the product is assumed to be complete, and a pilot group uses it in production mode. This allows for limited stress testing as well as more thorough testing of the feature set.

In addition to feature testing, you should test the performance of the application. This is the most frequently skipped type of testing, because some consider it too complex and difficult. In fact, it can be difficult to test user load, but in general, it's not difficult to test data load. So, as part of any significant project, it is a good practice to programmatically create the projected number of documents that will exist within the application one or two years after it has been fully deployed, and have a scheduled agent trigger the appropriate number of edits-per-hour during the early phases of feature testing. Although this will not give a perfect picture of performance, it will certainly help ascertain whether and why the time to create a new document is unacceptable (for example, because the @Db formulas are taking too long, or because the scheduled agent that runs every 15 minutes takes too long due to slow document searches).

Asking the Right Questions

Suppose that you've rolled out your application and people are using it. Then the support desk starts getting calls about a certain problem. Maybe your boss is getting an earful at meetings about sluggish performance or is hearing gripes about error messages whenever users try to click a button to perform some action.

In this section, we will discuss a methodology to help you troubleshoot a problem when you don't necessarily have all the information at your disposal. We will include some specific questions that can be asked verbatim for virtually any application. The first key to success in troubleshooting an application problem is to narrow down where and when it happens. Let's take these two very different problems suggested above (slow performance and error messages), and pose questions that might help unravel them:

Does the problem occur when you take a specific action? If so, what is that action? Your users might say, "It's slow whenever I open the application", or "I get an error when I click this particular button in this particular form".
Does the problem occur for everyone who does this, or just for certain people? If just certain people, what do they have in common?
This is a great way to get your users to help you help them. Let them be a part of the solution, not just "messengers of doom". For example, you might ask questions such as, "Is it slow only for people in your building or your floor? Is it slow only for people accessing the application remotely? Is it slow only for people who have your particular access (for example, SalesRep)?"

Does this problem occur all the time, at random times, or only at certain times? It's helpful to check whether or not the time of day or the day of week/month is relevant. So typical questions might be similar to the following: "Do you get this error every time you click the button or just sometimes? If just sometimes, does it give you the error during the middle of the day, but not if you click it at 7 AM when you first arrive? Do you only get the error on Mondays or some other day of the week? Do you only see the error if the document is in a certain status or has certain data in it? If it just happens for a particular document, please send me a link to that document so that I can inspect it carefully to see if there is invalid or unexpected data."

Logging

Ideally, your questions have narrowed down the type of problem it could be. So at this point, the more technical troubleshooting can start. You will likely need to gather concrete information to confirm or refine what you're hearing from the users. For example, you could put a bit of debugging code into the button that they're clicking so that it gives more informative errors, or sends you an email (or creates a log document) whenever it's clicked or whenever an error occurs. Collecting the following pieces of information might be enough to diagnose the problem very quickly:

Time/date
User name
Document UNID (if the button is pushed in a document)
Error
Status or any other likely field that might affect your code

By looking for common denominators (such as the status of the documents in question, or the access or roles of the users), you will likely be able to further narrow down the possibilities of why the problem is happening. This doesn't solve your problem of course, but it helps in advancing you a long way towards that goal.

A trickier problem to troubleshoot might be one we mentioned earlier: slow performance. Typically, after you've determined that there is some kind of performance delay, it's a good idea to first collect some server logging data. Set the following Notes.ini variables in the Server Configuration document in your Domino Directory, on the Notes.ini tab:

Log_Update=1
Log_AgentManager=1

These variables instruct the server to write output to the log.nsf database in the Miscellaneous Events view. Note that they may already be set in your environment. If not, they're fairly unobtrusive, and shouldn't trouble your administration group. Set them for a 24-hour period during a normal business week, and then examine the results to see if anything pops out as being suspicious.
For view indexing, you should look for lines like these in the Miscellaneous Events view (Log_Update=1):

07/01/2006 09:29:57 AM Updating views in appsSalesPipeline.nsf
07/01/2006 09:30:17 AM Finished updating views in appsSalesPipeline.nsf
07/01/2006 09:30:17 AM Updating views in appsTracking.nsf
07/01/2006 09:30:17 AM Finished updating views in appsTracking.nsf
07/01/2006 09:30:17 AM Updating views in appsZooSchedule.nsf
07/01/2006 09:30:18 AM Finished updating views in appsZooSchedule.nsf

And lines like these for agent execution (Log_AgentManager=1):

06/30/2006 09:43:49 PM AMgr: Start executing agent 'UpdateTickets' in 'appsSalesPipeline.nsf' by Executive '1'
06/30/2006 09:43:52 PM AMgr: Start executing agent 'ZooUpdate' in 'appsZooSchedule.nsf' by Executive '2'
06/30/2006 09:44:44 PM AMgr: Start executing agent 'DirSynch' in 'appsTracking.nsf' by Executive '1'

Let's examine these lines to see whether or not there is anything we can glean from them. Starting with the Log_Update=1 setting, we see that it gives us the start and stop times for every database that gets indexed. We also see that the database file paths appear alphabetically. This means that, if we search for the text string "updating views" and pull out all these lines covering (for instance) an hour during a busy part of the day, and copy/paste these lines into a text editor so that they're all together, then we should see complete database indexing from A to Z on your server repeating every so often.

In the log.nsf database, there may be many thousands of lines that have nothing to do with your investigation, so culling the important lines is imperative for you to be able to make any sense of what's going on in your environment. You will likely see dozens or even hundreds of databases referenced. If you have hundreds of active databases on your server, then culling all these lines might be impractical, even programmatically. Instead, you might focus on the largest group of databases. You will notice that the same databases are referenced every so often. This is the Update Cycle, or view indexing cycle. It's important to get a sense of how long this cycle takes, so make sure you don't miss any references to your group of databases.

Imagine that SalesPipeline.nsf and Tracking.nsf were the two databases that you wanted to focus on. You might cull the lines out of the log that have "updating views" and which reference these two databases, and come up with something like the following:

07/01/2006 09:29:57 AM Updating views in appsSalesPipeline.nsf
07/01/2006 09:30:17 AM Finished updating views in appsSalesPipeline.nsf
07/01/2006 09:30:17 AM Updating views in appsTracking.nsf
07/01/2006 09:30:20 AM Finished updating views in appsTracking.nsf
07/01/2006 10:15:55 AM Updating views in appsSalesPipeline.nsf
07/01/2006 10:16:33 AM Finished updating views in appsSalesPipeline.nsf
07/01/2006 10:16:33 AM Updating views in appsTracking.nsf
07/01/2006 10:16:43 AM Finished updating views in appsTracking.nsf
07/01/2006 11:22:31 AM Updating views in appsSalesPipeline.nsf
07/01/2006 11:23:33 AM Finished updating views in appsSalesPipeline.nsf
07/01/2006 11:23:33 AM Updating views in appsTracking.nsf
07/01/2006 11:23:44 AM Finished updating views in appsTracking.nsf

This gives us some very important information: the Update task (view indexing) is taking approximately an hour to cycle through the databases on the server; that's too long. The Update task is supposed to run every 15 minutes, and ideally should only run for a few minutes each time it executes.
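As an aside, this kind of culling and duration analysis lends itself to a small script. The following is only a rough Python sketch, assuming the relevant Miscellaneous Events lines have been exported to a plain-text file and that the timestamp format matches the samples above; the function name cull_update_lines is an illustrative choice:

from datetime import datetime

def cull_update_lines(log_path, databases):
    """Print how long view indexing took for the given databases."""
    fmt = "%m/%d/%Y %I:%M:%S %p"
    starts = {}
    with open(log_path) as log:
        for line in log:
            if "updating views in" not in line.lower():
                continue
            if not any(db in line for db in databases):
                continue
            timestamp = datetime.strptime(" ".join(line.split()[:3]), fmt)
            database = line.split()[-1]
            if "Finished" in line:
                started = starts.pop(database, None)
                if started is not None:
                    print("%s took %s" % (database, timestamp - started))
            else:
                starts[database] = timestamp

# Example: cull_update_lines("misc_events.txt", ["SalesPipeline.nsf", "Tracking.nsf"])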
If the cycle is an hour, then that means update is running full tilt for that hour, and as soon as it stops, it realizes that it's overdue and kicks off again. It's possible that if you examine each line in the log, you'll find that certain databases are taking the bulk of the time, in which case it might be worth examining the design of those databases. But it might be that every database seems to take a long time, which might be more indicative of a general server slowdown. In any case, we haven't solved the problem; but at least we know that the problem is probably server-wide. More complex applications, and newer applications, tend to reflect server-performance problems more readily, but that doesn't necessarily mean they carry more responsibility for the problem. In a sense, they are the "canary in the coal mine".

If you suspect the problem is confined to one database (or a few), then you can increase the logging detail by setting Log_Update=2. This will give you the start time for every view in every database that the Update task indexes. If you see particular views taking a long time, then you can examine the design of those views.

If no database(s) stand out, then you might want to see if the constant indexing occurs around the clock or just during business hours. If it's around the clock, then this might point to some large quantities of data that are changing in your databases. For example, you may be programmatically synchronizing many gigabytes of data throughout the day, not realizing the cost this brings in terms of indexing. If slow indexing only occurs during business hours, then perhaps the user/data load has not been planned out well for this server. As the community of users ramps up in the morning, the server starts falling behind and never catches up until evening. There are server statistics that can help you determine whether or not this is the case. (These server statistics go beyond the scope of this book, but you can begin your investigation by searching on the various Notes/Domino forums for "server AND performance AND statistics".)

As may be obvious at this point, troubleshooting can be quite time-consuming. The key is to make sure that you think through each step so that it either eliminates something important, or gives you a forward path. Otherwise, you can find yourself still gathering information weeks and months later, with users and management feeling very frustrated.

Before moving on from this section, let's take a quick look at agent logging. Agent Manager can run multiple agents in different databases, as determined by settings in your server document. Typically, production servers only allow two or three concurrent agents to run during business hours, and these are marked in the log as Executive '1', Executive '2', and so on. If your server is often busy with agent execution, then you can track Executive '1' and see how many different agents it runs, and for how long. If there are big gaps between when one agent starts and when the next one does (for Executive '1'), this might raise suspicion that the first agent took that whole time to execute. To verify this, turn up the logging by setting the Notes.ini variable debug_amgr=*. (This will output a fair amount of information into your log, so it's best not to leave it on for too long, but normally one day is not a problem.) Doing this will give you a very important piece of information: the number of "ticks" it took for the agent to run.
One second equals 100 ticks, so if the agent takes 246,379 ticks, this equals 2,463 seconds (about 41 minutes). As a general rule, you want scheduled agents to run in seconds, not minutes; so any agent that is taking this long will require some examination. In the next section, we will talk about some other ways you can identify problematic agents.

Domino Domain Monitoring (DDM)

Every once in a while, a killer feature is introduced: a feature so good, so important, so helpful, that after using it, we just shake our heads and wonder how we ever managed without it for so long. Domino Domain Monitor (DDM) is just such a feature. DDM is too large to be completely covered in this one section, so we will confine our overview to what it can do in terms of troubleshooting applications. For a more thorough explanation of DDM and all its features, see the book, Upgrading to Lotus Notes and Domino (www.packtpub.com/upgrading_lotus/book).

In the events4.nsf database, you will find a new group of documents you can create for tracking agent or application performance. On Domino 7 servers, a new database is created automatically with the filename ddm.nsf. This stores the DDM output you will examine. For application troubleshooting, some of the most helpful areas to track using DDM are the following:

Full-text index needs to be built. If you have agents that are creating a full-text index on the fly because the database has no full-text index built, DDM can track that potential problem for you. Especially useful is the fact that DDM compiles the frequency per database, so (for instance) you can see if it happens once per month or once per hour. Creating full-text indexes on the fly can result in a significant demand on server resources, so having this notification is very useful. We discuss an example of this later in this section.
Agent security warnings. You can manually examine the log to try to find errors about agents not being able to execute due to insufficient access. However, DDM will do this for you, making it much easier to find (and therefore fix) such problems.
Resource utilization. You can track memory, CPU, and time utilization of your agents as run by Agent Manager or by the HTTP task. This means that at any time you can open the ddm.nsf database and spot the worst offenders in these categories, over your entire server/domain. We will discuss an example of CPU usage later in this section.

The following illustration shows the new set of DDM views in the events4.nsf (Monitoring configuration) database. The following screenshot displays the By Probe Server view after we've made a few document edits.

Notice that there are many probes included out-of-the-box (identified by the property "author = Lotus Notes Template Development") but set to disabled. In this view, there are three that have been enabled (the ones with checkmarks), which were created by one of the authors of this book. If you edit the probe document highlighted above, Default Application Code/Agents Evaluated By CPU Usage (Agent Manager), the document consists of three sections. The first section is where you choose the type of probe (in this case Application Code) and the subtype (in this case Agents Evaluated By CPU Usage). The second section allows you to choose the servers to run against, and whether you want this probe to run against agents/code executed by Agent Manager or by the HTTP task (as shown in the following screenshot). This is an important distinction.
For one thing, they are different tasks, and therefore one can hit a limit while the other still has room to "breathe". But perhaps more significantly, if you choose a subtype of Agents Evaluated By Memory Usage, then the algorithms used to evaluate whether or not an agent is using too much memory are very different. Agents run by the HTTP task will be judged much more harshly than those run by the Agent Manager task. This is because with the HTTP task, it is possible to run the same agent with up to hundreds of thousands of concurrent executions. But with Agent Manager, you are effectively limited to ten concurrent instances, and none within the same database.

The third section allows you to set your threshold for when DDM should report the activity. You can select up to four levels of warning: Fatal, Failure, Warning (High), and Warning (Low). Note that you do not have the ability to change the severity labels (which appear as icons in the view). Unless you change the database design of ddm.nsf, the icons displayed in the view and documents are non-configurable. Experiment with these settings until you find the approach that is most useful for your corporation. Typically, customers start by overwhelming themselves with information, and then fine-tune the probes so that much less information is reported. In this example, only two statuses are enabled: one for six seconds, with a label of Warning (High), and one for 60 seconds, with a label of Failure.

Here is a screenshot of the DDM database: notice that there are two Application Code results, one with a status of Failure (because that agent ran for more than 60 seconds), and one with a status of Warning (High) (because that agent ran for more than six seconds but less than 60 seconds). These are the parameters set in the Probe document shown previously, which can easily be changed by editing that Probe document. If you want these labels to be different, you must enable different rows in the Probe document.

If you open one of these documents, there are three sections. The top section gives header information about this event, such as the server name, the database and agent name, and so on. The second section includes a table with a tab for the most recent infraction and a tab for previous infractions. This allows you to see how often the problem is occurring, and with what severity. The third section provides some possible solutions, and (if applicable) automation. In our example, you might want to "profile" your agent. (We will profile one of our agents in the final section of this article.)

DDM can capture full-text operations against a database that is not full-text indexed. It tracks the number of times this happens, so you can decide whether to full-text index the database, change the agent, or neither. For a more complete list of the errors and problems that DDM can help resolve, check the Domino 7 online help or the product documentation (www.lotus.com).

Agent Profiler

If any of the troubleshooting tips or techniques we've discussed in this article causes you to look at an agent and think, "I wonder what makes this agent so slow", then the Agent Profiler should be the next tool to consider. Agent Profiler is another new feature introduced in Notes/Domino 7. It gives you a breakdown of many methods/properties in your LotusScript agent, telling you how often each one was executed and how long they took to execute.
In Notes/Domino 7, the second (security) tab of the Agent properties now includes a checkbox labeled Profile this agent. You can select this option if you want an agent to be profiled. The next time the agent runs, a profile document is created in the database and filled with the information from that execution. This document is then updated every time the agent runs. You can view these results from the Agent View by highlighting your agent and selecting Agent | View Profile Results. The following is a profile for an agent that performed slow mail searches.

Although this doesn't completely measure (and certainly does not completely troubleshoot) your agents, it is an important step forward in troubleshooting code. Imagine the alternative: dozens of print statements, and then hours of collating results!

Summary

In closing, we hope that this article has opened your eyes to new possibilities in troubleshooting, both in terms of techniques and new Notes/Domino 7 features. Every environment has applications that users wish ran faster, but with a bit of care, you can troubleshoot your performance problems and find resolutions. After you have your servers running Notes/Domino 7, you can use DDM and Agent Profiler (both exceptionally easy to use) to help nail down poorly performing code in your applications. These tools really open a window on what had previously been a room full of mysterious behavior. Full-text indexing on the fly, code that uses too much memory, and long-running agents are all quickly identified by Domino Domain Monitoring (DDM). Try it!

Introduction to Re-Host based Modernization Using Tuxedo

Packt
23 Oct 2009
19 min read
Introduction

SOA enablement wraps key application interfaces in services and integrates them into the SOA. This largely leaves the existing application logic intact, minimizing changes and adding risk only to those components that need restructuring work to become SOA-ready. While the interfaces are modernized without subjecting the core application components to a lot of change, the high costs and the various legacy risks associated with the mainframe platform remain. In addition, the performance and scalability of the new interfaces need to be well specified and tested, and the additional load they place on the system should be included in any planned capacity upgrades, potentially increasing the overall costs.

Reducing or eliminating the legacy mainframe costs and risks via re-host based modernization also helps customers to fund the SOA enablement and re-architecture phases of legacy modernization, and lays the groundwork for these steps. SOA-enabling a re-hosted application is a much easier process on an open-systems-based, SOA-ready software stack, and a more efficient one as well in terms of system resource utilization and cost. Re-architecting selected components of a re-hosted application based on specific business needs is a lower risk approach than re-architecting entire applications en masse, and the risk can be further reduced by ensuring that the target re-hosting stack provides rugged and transparent integration between re-hosted services and new components.

Keeping It Real: Selective re-architecture is all about maximizing ROI by focusing re-architecture investment in the areas with the best pay-off. A change from one language or development paradigm to another shouldn't be undertaken lightly; the investment and risks need to be well understood and justified. It is the right investment for components that require frequent maintenance changes but are difficult to maintain because of poor structure and layered changes. The payback on re-architecture investment will come from reducing the cost of future maintenance. Similarly, components that need significant functional changes to meet new business requirements can benefit from a substantial productivity increase after re-architecture to a more modern development framework with richer tools to support future changes. The payback comes from greater business agility and time-to-market improvements. On the other hand, well-structured and maintainable COBOL components that do not need extensive changes to meet business needs will have very little return to show for a significant re-architecture investment. Leaving them in COBOL on a modern, extensible platform saves significant re-architecture costs that can be invested elsewhere, reduces risk, and shortens payback time. These considerations can help to optimize ROI for medium to large modernization projects, where components number in the hundreds or thousands and contain millions or tens of millions of lines of code.

Re-Hosting Based Modernization

For many organizations, mainframe modernization has become a matter of 'how', and not 'if'. Numerous enterprises and public sector organizations choose re-hosting as the first tangible step in their legacy modernization program precisely because it delivers the best ROI in the fastest possible manner, and accelerates the move to SOA enablement and selective re-architecture.
Oracle, together with our services partners, provides a comprehensive re-hosting-based modernization solution that many customers have leveraged for the successful migration of selected applications or complete mainframe environments, ranging from a few hundred MIPS to well over 10,000 MIPS. Two key pillars support successful re-hosting projects:

An optimal target environment that lowers the Total Cost of Ownership (TCO) by 50–80 percent and maintains mainframe-class Quality of Service (QoS) using an open, extensible, SOA-ready, future-proof architecture
Predictable, efficient projects delivered by our SI partners with proven methodologies and automated tools

The optimal target environment provided by Oracle is powered by a proven open systems software stack leveraging Oracle Database and Oracle Tuxedo for a rock-solid, mainframe-class transaction processing (TP) infrastructure closely matching mainframe requirements for online applications.

Mainframe-compatible Transaction Processing: Support for IBM CICS or IMS TM applications in native COBOL or C/C++ language containers with mainframe-compatible TP features.
RASP: Mainframe-class performance, reliability, and scalability provided by Oracle Real Application Clusters (RAC) and Tuxedo multi-node and multi-domain clustering for load balancing and high availability despite failure of individual nodes or network links.
Workload and System Management: End-to-end transaction and service monitoring to support 24x7 operations management, provided by Oracle's Enterprise Manager Grid Control and Tuxedo System and Application Monitor.
SOA Enablement and Integration: Extensibility with Web services using Oracle Services Architecture Leveraging Tuxedo (SALT), plus J2EE integration (using the WebLogic-Tuxedo Connector (WTC)), Enterprise Service Bus (ESB), Portal, and BPM technologies to enable easy integration of re-hosted applications into modern Service-Oriented Architectures (SOAs).
Scalable Platforms and Commodity Hardware: Scalable, Linux/UNIX-based open systems from HP, Dell, Sun, and IBM, providing performance on a par with mainframe systems for most workloads at significantly reduced TCO; reliability and workload management similar to mainframe installations, including physical and logical partitioning; and robust clustering technologies for high availability and fail-over capabilities within a data center or across the world.

The diagram below shows a conceptual mapping of the mainframe environment to a compatible open systems infrastructure.

Predictable, efficient projects delivered by leading SIs and key modernization specialists use risk-mitigation methodologies and automated tools honed over numerous projects to address a complete range of Online, Batch, and Data architectures, and the various technologies used in them. These project methodologies and the automated tools that support them encompass all phases of a migration project:

Preliminary Assessment Study
Application Asset Discovery and Analysis
Application and Data Conversion (pilot or entire application portfolio)
System and Application Integration
Test Engineering
Regression and Performance Testing
Education and Training
Operations Migration
Switch-Over

Combining a proven target architecture stack that is well-matched to the needs of mainframe applications with mature methodologies supported by automated tools has led to a large and growing number of successful re-hosting projects.
There is rising interest in leveraging the re-hosting approach to mainframe application modernization as a way to get off a mainframe fast, with minimal risk, and in a more predictable manner for large, business-critical applications that have evolved over a long time and across multiple development teams. The re-hosting-based modernization approach preserves an organization's long-term investment in critical business logic and data without risking business operations or sacrificing QoS, while enabling customers to:

Reduce or eliminate mainframe maintenance costs, and/or defer upgrade costs, saving customers 50–80 percent of their annual maintenance and operations budget
Increase productivity and flexibility in IT development and operations, protecting long-term investment through application modernization
Speed up and simplify application integration via SOA, without losing transactional integrity and the high performance expected by the users

The rest of this article explores the critical success factors and proven transformation architecture for re-hosting legacy applications and data, describes SOA integration options and considerations when SOA-enabling re-hosted applications, highlights key risk-mitigation methodologies, and provides a foundation for the financial analysis and ROI model derived from over a hundred mainframe re-hosting projects.

Critical Success Factors in Mainframe Re-Hosting

Companies considering a re-hosting-based modernization strategy that involves migrating some applications off the mainframe have to address a range of concerns, which can be summarized by the following questions:

How to preserve the business logic of these applications and their valuable data?
How to ensure that migrated applications continue to meet performance requirements?
How to maintain scalability, reliability, transactional integrity, and other QoS attributes in an open system environment?
How to migrate in phases, maintaining robust integration links between migrated and mainframe applications?
How to achieve predictable, cost-effective results and ensure a low-risk project?

Meeting these challenges requires a versatile and powerful application infrastructure: one that natively supports key mainframe languages and services, enables automated adaptation of application code, and delivers proven, mainframe-like QoS on open system platforms. For re-hosting to enable broader aspects of the modernization strategy, this infrastructure must also provide native Web services and ESB capabilities to rapidly integrate re-hosted applications as first-class services in an SOA.

Equally important are a proven risk-mitigation methodology, automated tools, and project services specifically honed to address automated conversion and adaptation of application code and data, supported by a cross-platform test engineering and execution methodology, strong system and application integration expertise, and deep experience with operations migration and switch-over.

Preserving Application Logic and Data

The re-hosting approach depends on a mainframe-compatible transaction processing and application services platform supporting common mainframe languages such as COBOL and C, which preserves the original business logic and data for the majority of mainframe applications and avoids the risks and uncertainties of a re-write.
A complete re-hosting solution provides native support for TP and Batch programs, leveraging an application server-based platform that provides container-based support for COBOL and C/C++ application services, and TP APIs similar to IBM CICS, IMS TM, or other mainframe TP monitors.

Online Transaction Processing Environment

Oracle Tuxedo is the most popular TP platform for open systems, as well as the leading re-hosting platform; it can run most mainframe COBOL and C applications unchanged in a container-based framework that combines common application server features, including health monitoring, fail-over, service virtualization, and the dynamic load balancing critical to large-scale OLTP applications, together with standard TP features, including transaction management and reliable coordination of distributed transactions (a.k.a. Two-Phase Commit, or the XA standard). It provides the highest possible performance and scalability, and has recently been benchmarked against a mainframe at over 100,000 transactions per second with sub-second response time.

Oracle Tuxedo supports common mainframe programming languages, that is, COBOL and C, and provides comprehensive TP features compatible with CICS and IMS TM, which makes it a preferred application platform choice for re-hosting CICS or IMS TM applications with minimal changes and risks. In the Tuxedo environment, COBOL or C business logic remains unchanged. The only adaptation required is automated mapping of CICS APIs (CICS EXEC calls) to equivalent Tuxedo API functions. This mapping typically leverages a pre-processor and a mapping library implemented on the Tuxedo platform, using the full range of Tuxedo APIs. The automated nature of the pre-processing and the comprehensive coverage provided by the library ensure that most CICS COBOL or C programs are easily transformed into Tuxedo services. Unlike other solutions that embed this transformation in their compiler coupled with a proprietary emulation run-time, the Tuxedo-based solution provides this mapping as a compiler-independent source module, which can be easily extended as needed. The resultant code uses the Tuxedo API at native speed, allowing it to reach tens of thousands of transactions per second while taking advantage of all Tuxedo facilities.

In a re-hosted application, CICS transactions become Tuxedo services, registered for processing by Tuxedo server processes. These services can be deployed on a single machine or across multiple machines in a Tuxedo domain (a SYSPLEX-like cluster). The services are called by front-end Java, .Net, or Tuxedo/WS clients, or by UI components (tn3270 or web-based converted 3270/BMS screens), or by other services in the case of transaction linking. Deferred transactions are handled by Tuxedo's /Q component, which provides in-memory and persistent queuing services.

The diagram below shows Oracle Tuxedo and its surrounding ecosystem of SOA, J2EE, ESB, CORBA, MQ, and Mainframe integration components.

User Interface Migration

The UI elements in these programs are typically defined using CICS Basic Mapping Support (BMS) for 3270 "green screen" terminals. While it is possible to preserve these using tn3270 emulation, many customers in re-hosting projects choose to take advantage of automated conversion of BMS macros into JSP/HTML for a Web UI. Supported by a specialized JavaScript library, these Web screens mimic the appearance and behavior of "green screens" in a web browser, including tab-based navigation and PF keys.
These UI components can connect to re-hosted CICS transactions running as Tuxedo services using Oracle Jolt (the Java client interface for Tuxedo), the WebLogic-Tuxedo Connector (WTC), or Tuxedo's Web services gateway provided by the Oracle Services Architecture Leveraging Tuxedo (SALT) product. The diagram below depicts a target re-hosting architecture for a typical mainframe OLTP application. The architecture uses Tuxedo services to run re-hosted CICS programs and a web application server to run the re-hosted BMS UI. The servlets or JSPs containing the HTML that defines the screens connect with Tuxedo services via Oracle Jolt, WTC, or SALT.

Customers using mainframe 4GLs or languages such as PL/I or Assembler frequently choose to convert these applications to COBOL or C/C++. The adaptation of CICS or IMS TM API calls is automated through a mapping layer, which minimizes overall changes for the development team and allows them to maintain the familiar applications. For more significant extensions and new capabilities, customers incrementally leverage Tuxedo's own APIs and facilities, or leverage the tightly-linked J2EE environment provided by WebLogic Server, and even transparently make Web services calls. The optimal extensibility options depend on application needs, the availability of Java or C/COBOL skills, and other factors.

Feature or Action       CICS Verb                        Tuxedo API
Communications Area     DFHCOMMAREA                      Typed Buffer
Transaction Request     LINK                             tpcall
Transaction Return      RETURN                           tpreturn
Transfer Control        XCTL                             tpforward
Allocate Storage        GETMAIN                          tpalloc
Queues                  READQ / WRITEQ TD,TS             /Q tpenqueue / tpdequeue
Begin new transaction   START TRANID                     /Q and TMQFORWARD
Abort transaction       ISSUE ABEND                      tpreturn TPFAIL
Commit or Rollback      SYNCPOINT / SYNCPOINT ROLLBACK   tpcommit / tpabort

Keeping it Real: For those familiar with CICS, this is a very short example of the CICS verbs. CICS has many functions, most of which either map natively to a similar Tuxedo API or are provided by migration specialists based on their extensive experience with such migrations.

In summary, Tuxedo provides a popular platform for deploying, executing, and managing re-hosted COBOL and C transactional applications requiring any of the following OLTP and infrastructure services:

Native, compiler-independent support for COBOL, C, or C++
Rich set of infrastructure services for managing and scaling diverse workloads
Feature-set compatibility and inter-operability with IBM CICS and IMS/TM
Two-Phase Commit (2PC) for managing transactions across multiple application domains and XA-compliant resource managers (databases, message queues)
Guaranteed inter-application messaging and transactional queuing
Transactional data access (using XA-compliant resource managers) with ACID qualities
Services virtualization and dynamic load balancing
Centralized management of multiple nodes in a domain, and across multiple domains
Communications gateways for multiple traditional and modern communication protocols
SOA enablement through native Web services and ESB integration

Workload Monitoring and Management

An important aspect of the mainframe environment is workload monitoring and management, which provides information for effective performance analysis and capabilities that enable mainframe systems to achieve better throughput and responsiveness. Oracle's Tuxedo System and Application Monitor (TSAM) provides similar capabilities too:
Define monitoring policies and patterns based on application requests, services, system servers such as gateways and bridges, and XA-defined stages of a distributed transaction
Define SLA thresholds that can trigger a variety of events within Tuxedo event services, including notifications and the instantiation of additional servers
Monitor transactions on an end-to-end basis, from a client call through all services across all domains involved in a client request
Collect service statistics for all infrastructure components such as servers and gateways
Detail time spent on IPC queues, time waiting on network links, and time spent on subordinate services

TSAM provides a built-in, central, web-based management and monitoring console, and an open framework for integration with third-party performance management tools.

Batch Jobs

Mainframe batch jobs are a response to the human 24-hour clock on which many businesses run. They include beginning-of-period or end-of-period (day, week, month, quarter) processing for batched updates, reconciliation, reporting, statement generation, and similar applications. In some industries, external events tied to a fixed schedule, such as intra-day, opening, or closing trade in a stock exchange, drive specific processing needs. Batch applications are an equally important asset, and often need to be preserved and migrated as well.

The batch environment uses Job Control Language (JCL) jobs managed and monitored by JES2 or JES3 (Job Entry System), which invoke one or more programs, access and manipulate large datasets and databases using sort and other specialized utilities, and often run under the control of a job scheduler such as CA-7/CA-11. JCL defines a series of job steps (a sequence of programs and utilities), specifies input and output files, and provides exception handling. Automated parsing and translation of JCL jobs to UNIX scripts such as Korn shell (ksh) or Perl enables the overall structure of the job to remain the same, including job steps, classes, and exception handling. Standard shell processing is supplemented with required utilities such as SyncSort, and with support for Generation Data Group (GDG) files. REXX/CLIST/PROC scripting environments on the mainframe are similarly converted to ksh or other scripting languages.

Integration with Oracle Scheduler, or with other job schedulers running on UNIX/Linux or Windows, provides a rich set of calendar and event-based scheduling capabilities as well as dependency management similar to mainframe schedulers. In some cases, reporting done via batch jobs can be replaced using standard reporting packages such as Oracle BI Publisher.

The diagram below shows a typical target re-hosting architecture for batch. It includes a scheduler to control and trigger batch jobs, a scripting framework to support individual job scripts, and an application server execution framework for the batch COBOL or C programs. Unlike other solutions that run these programs directly as OS processes without the benefit of application server middleware, Oracle recommends using container-based middleware to provide higher reliability, availability, and monitoring for the batch programs. The target batch programs invoked by the scripts can also run directly as OS processes, but if mainframe-class management and monitoring similar to the JES2 or JES3 environment is a requirement, these programs can run as services under Tuxedo, benefiting from the health monitoring, fail-over, load balancing, and other application server-like features it provides.
Files and Databases

When moving platforms (mainframe to open systems), the application and its data have to be moved together. Data schemas and data stores need to be migrated in a re-hosted mainframe modernization project just as in a re-architecture. The approach taken depends on the source data store. DB2 is the most straightforward, since DB2 and Oracle are both relational databases. In addition to migrating the data, customers sometimes choose to perform data cleansing, extend fields, merge columns, or carry out other data maintenance, leveraging automated tooling that synchronizes all data changes with changes to the application's data access code.

Mainframe DB2

DB2 is the predominant relational database on IBM mainframes. When migrating to Oracle Database, the migration approach is highly automated and resolves the discrepancies between the two RDBMSs in field formats as well as in the error codes returned to applications, so that application behavior, including any stored procedures, remains unchanged.

IMS

IMS/DB (also known as DL/1) is a popular hierarchical database for older applications. Creating an appropriate relational schema for this data requires an understanding of the application access patterns, so that the schema can be optimized for the most frequent access paths. To minimize code impact, a translation layer can be used at run time to support IMS DB-style data access from the application and map it to the appropriate SQL calls. This allows the applications to work with the segments, now translated into DB2 UDB or Oracle tables, without impacting application code and maintenance.

VSAM

VSAM files are used for keyed and sequential data access, and can be readily migrated to ISAM files, or to Oracle Database tables wherever transactional integrity (XA capability) is required. Some customers also choose to migrate VSAM files to Oracle Database to make the data accessible from other distributed applications, or to simplify the re-engineering required to extend certain data fields or merge multiple data sources.
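As an illustration of the VSAM-to-relational path, the hedged sketch below shows how a keyed VSAM read might look once the record lives in an Oracle table and the C program uses embedded SQL (Pro*C style, so it needs the precompiler). The CUSTOMER_MASTER table, its columns, and the host variables are invented for the example; real migrations typically hide this behind a generated file-access layer so that the original READ/WRITE logic is left untouched.

```c
/* Sketch: a keyed VSAM READ replaced by an embedded SQL lookup (Pro*C).
 * CUSTOMER_MASTER and its columns are hypothetical names.             */
#include <stdio.h>
#include <string.h>

EXEC SQL INCLUDE SQLCA;

EXEC SQL BEGIN DECLARE SECTION;
static char   h_cust_id[11];     /* key, formerly the VSAM record key */
static char   h_cust_name[31];
static double h_balance;
EXEC SQL END DECLARE SECTION;

int read_customer(const char *cust_id)
{
    strncpy(h_cust_id, cust_id, sizeof(h_cust_id) - 1);

    /* Formerly: READ CUSTOMER-FILE KEY IS CUST-ID ...                */
    EXEC SQL SELECT CUST_NAME, BALANCE
             INTO   :h_cust_name, :h_balance
             FROM   CUSTOMER_MASTER
             WHERE  CUST_ID = :h_cust_id;

    if (sqlca.sqlcode == 1403) {      /* ORA-01403: no data found     */
        return 1;                     /* maps to a "record not found" file status */
    } else if (sqlca.sqlcode != 0) {
        return -1;                    /* any other SQL error          */
    }
    printf("customer %s, balance %.2f\n", h_cust_name, h_balance);
    return 0;
}
```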
Meeting Performance and Other QoS Requirements

The mainframe's performance, reliability, scalability, manageability, and other QoS attributes have earned it pre-eminence for business-critical applications. How well do re-hosting solutions measure up against these characteristics? Earlier solutions based on IBM CICS emulators derived from development tools often did not measure up to the demands of mainframe workloads, since they were never intended for true production environments and had not been exposed to large-scale applications. As a result, they have only been used for re-hosting small systems, under 300 MIPS, that do not require clustering or distributed workload handling.

Oracle Tuxedo was built from the ground up to scale, originally to support high-performance telecommunications operations. It has the distinction of being the only non-mainframe TP solution recognized for its mainframe-like performance, reliability, and QoS characteristics, and most large enterprise customers requiring such capabilities in distributed systems have traditionally relied on it. Consistently rated by IDC and Gartner as the market leader, and predominant in non-mainframe OLTP applications, it has also become the preferred COBOL/C application platform and transaction engine for re-hosted mainframe applications requiring high performance and/or mission-critical availability and reliability.

The broad recognition of Tuxedo as the only mainframe-class application platform and transaction engine for distributed systems rests on mainframe-class performance, scalability, reliability, availability, and other QoS attributes proven in multiple customer deployments. The following capabilities, grouped into four areas, highlight some of these strengths:

Reliability
- Guaranteed messaging and transactional integrity
- Hardened code from 25 years of use in the world's largest transaction applications
- Transaction integrity across systems and domains through two-phase commit (XA) for all resources such as databases, queues, and so on
- Proven in mainframe-to-mainframe transactions and messaging

Availability
- No single point of failure; 99.999% uptime with N+1/N+2 clusters
- Application services upgradeable in operation
- Self-monitoring, automated fail-over, and data-driven routing for very high availability
- Centralized monitoring and management with clustered domains; automated, lights-out operations

Workload Management
- Resource management and prioritization across Tuxedo services
- Dynamic load balancing across domains based on load conditions
- Data-driven routing enables horizontally distributed database grids and differentiated QoS
- End-to-end monitoring of Tuxedo system and application services enables SLA enforcement
- Virtualization support enables spawning of Tuxedo servers on demand

Performance and Scalability
- Parallel processing to maximize resource utilization, with low-latency code paths that provide sub-second response at any load
- Horizontal and vertical scaling of system resources yields linear performance increases
- Request multiplexing (synchronous and asynchronous) maximizes CPU utilization
- Proven in credit card authorizations at over 13.5K tps and in telco billing at over 56K tps; middleware of choice in HP, Fujitsu, Sun, IBM, and NEC TPC-C benchmarks

Need for Java Business Integration and Service Engines in NetBeans

Packt
23 Oct 2009
6 min read
In this article, we will discuss the following topics:
- The need for Java Business Integration (JBI)
- The Enterprise Service Bus
- The Normalized Message Router
- Service Engines in NetBeans

Need for Java Business Integration (JBI)

To gain a good understanding of Service Engines (a specific type of JBI component), we first need to understand the reason for Java Business Integration. In the business world, not all systems talk the same language. They use different protocols and different forms of communication, and legacy systems in particular can use proprietary protocols for external communication. The advent and acceptance of XML have been greatly beneficial in allowing systems to be integrated, but XML itself is not the complete solution.

When many systems were first developed, they were never envisioned to communicate with a wide range of other systems; they were built with closed interfaces using closed protocols. This is fine for the system developer, but it makes integration between enterprise applications very difficult. To allow enterprise systems to communicate with each other, system integrators would use vendor-supplied APIs and data formats, or agree on common exchange mechanisms between their systems. This works for small, short-term integrations, but quickly becomes unproductive as the number of enterprise applications to integrate grows.

The following figure shows the problems with traditional integration. As we can see in the figure, each third-party system that we want to integrate with uses a different protocol, so as system integrators we potentially have to learn new technologies and new APIs for every system involved. With only two or three systems this is not much of a problem, but the more systems we add, the more proprietary code we have to learn, and integration quickly becomes a large problem.

To overcome these problems, the Enterprise Application Integration (EAI) server was introduced. In this approach, an integration server acts as a central hub. The EAI server traditionally has proprietary links to the third-party systems, so the application integrator only has to learn one API (the EAI server vendor's). This architecture still has several drawbacks, however: the central hub can quickly become a bottleneck, and because of the hub-and-spoke design, any problem at the hub is rapidly manifested at all of the clients.

Enterprise Service Bus

To help solve this problem, leading companies in the integration community (led by Sun Microsystems) proposed the Java Business Integration specification request, JSR 208 (full details of the JSR can be found at http://jcp.org/en/jsr/detail?id=208). JSR 208 proposes a standard framework for business integration, built on a standard set of service provider interfaces (SPIs), to help alleviate the problems experienced with Enterprise Application Integration. The framework allows pluggable components to be added into a standard architecture and gives these components a common, WSDL-based mechanism for communicating with each other. The pluggable nature of the framework described by JSR 208 is depicted in the following figure.
It shows us the concept of an Enterprise Service Bus and introduces the Service Engine (SE) component. JSR 208 describes a service engine as a component that provides business logic and transformation services to other components, and that consumes such services in turn. SEs can integrate Java-based applications (and other resources), or applications with available Java APIs.

A Service Engine is a component that provides (and consumes) business logic and transformation services to other components.

There are various Service Engines available, such as the BPEL service engine for orchestrating business processes, or the Java EE service engine for consuming Java EE Web Services.

The Normalized Message Router

As we can see from the previous figure, SEs don't communicate directly with each other or with clients; instead, they communicate via the NMR. This is one of the key concepts of JBI, in that it promotes loose coupling of services. So, what is the NMR and what is its purpose? The NMR is responsible for taking messages from clients and routing them to the appropriate Service Engines for processing. (Strictly speaking, another standard JBI component, the Binding Component, is responsible for receiving client messages. This further enhances the support for loose coupling within JBI, as Service Engines are decoupled from their transport infrastructure.) The NMR passes normalized, WSDL-based messages between JBI components. Messages typically consist of a payload and a message header that contains any other data the Service Engine requires to understand and process the message (for example, security information). Again, this provides a loosely coupled model in which Service Engines have no prior knowledge of other Service Engines, which keeps the JBI architecture flexible and allows different vendors to develop standards-based components.

The Normalized Message Router is the enabling technology that allows messages to be passed between loosely coupled services such as Service Engines.

The figure below gives an overview of the message routing between a client application and two service engines, in this case the EE and SQL service engines. A request is made from the client to the JBI container and is passed via the NMR to the EE Service Engine. The EE Service Engine then makes a request to the SQL Service Engine via the NMR, and the SQL Service Engine returns a message to the EE Service Engine, again via the NMR. Finally, the message is routed back to the client through the NMR and the JBI framework. The important concept here is that the NMR is a message routing hub not only between clients and service engines, but also for communication between the service engines themselves. The entire architecture we have discussed is typically referred to as an Enterprise Service Bus.

An Enterprise Service Bus (ESB) is a standards-based middleware architecture that allows pluggable components to communicate with each other via a messaging subsystem.
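For readers who prefer to see the moving parts in code, the following is a rough sketch of how a component might push a request onto the NMR using the JSR 208 javax.jbi interfaces. The service QName, the XML payload, and the assumption that a suitable endpoint is already activated are purely illustrative; real Service Engines are packaged and deployed into the JBI container, which hands them their ComponentContext through the component lifecycle, rather than being hand-wired like this.

```java
// Sketch: sending an InOut exchange through the Normalized Message Router.
// The "QuoteService" QName and the payload are hypothetical.
import java.io.StringReader;
import javax.jbi.component.ComponentContext;
import javax.jbi.messaging.DeliveryChannel;
import javax.jbi.messaging.ExchangeStatus;
import javax.jbi.messaging.InOut;
import javax.jbi.messaging.MessageExchangeFactory;
import javax.jbi.messaging.NormalizedMessage;
import javax.jbi.servicedesc.ServiceEndpoint;
import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;

public class OrderClient {

    public Source requestQuote(ComponentContext context) throws Exception {
        // The delivery channel is the component's doorway to the NMR.
        DeliveryChannel channel = context.getDeliveryChannel();
        MessageExchangeFactory factory = channel.createExchangeFactory();

        // Ask the container for an endpoint offering the (hypothetical) service.
        QName service = new QName("http://example.org/orders", "QuoteService");
        ServiceEndpoint[] endpoints = context.getEndpointsForService(service);

        // Build a normalized (WSDL-based) request/response exchange.
        InOut exchange = factory.createInOutExchange();
        exchange.setEndpoint(endpoints[0]);
        NormalizedMessage in = exchange.createMessage();
        in.setContent(new StreamSource(new StringReader(
                "<quoteRequest><item>widget</item></quoteRequest>")));
        exchange.setInMessage(in);

        // Send synchronously; the NMR routes the message to the target engine.
        channel.sendSync(exchange);
        Source reply = exchange.getOutMessage().getContent();

        // Tell the NMR we are finished with this exchange.
        exchange.setStatus(ExchangeStatus.DONE);
        channel.send(exchange);
        return reply;
    }
}
```

In NetBeans you rarely write this plumbing yourself; the BPEL and Java EE Service Engines exchange messages in essentially this way under the covers.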

Business Process Modeling

Packt
23 Oct 2009
13 min read
Modeling Business Processes The transparency of the process flow is crucial, as this gives the process owners, process analysts, and all others involved an insight into what is going on. An understanding of the as-is process flow also ensures that we can judge the efficiency and the quality of the process. The main objective of process modeling is the definition of the as-is process flow. Process modeling needs to answer the following questions: What is the outcome of the business process? What activities are performed within the business process? What is the order of activities? Who performs the activities? Which business documents are exchanged within the process? How foolproof is the process, and how can it be extended in the future? After answering these and some other questions, we get a good insight into how the process works. We can also identify structural, organizational, and technological weak points and even bottlenecks, and identify potential improvements to the process. We will model business process to satisfy the following objectives: To specify the exact result of the business process, and to understand the business value of this result. To understand the activities of the business process. Knowing the exact tasks and activities that have to be performed is crucial to understanding the details of the process. To understand the order of activities. Activities can be performed in sequence or in parallel, which can help improve the overall time required to fulfill a business process. Activities can be short-running or long-running. To understand the responsibilities, to identify (and later supervise) who is responsible for which activities and tasks. To understand the utilization of resources consumed in the business process. Knowing who uses which resources can help improve the utilization of resources as resource requirements can be planned for and optimized. To understand the relationship between people involved in the processes, and their communication. Knowing exactly who communicates with whom is important and can help to organize and optimize communications. To understand the document flow. Business processes produce and consume documents (regardless of whether these are paper or electronic documents). Understanding where the documents are going, and where they are coming from is important. A good overview of the documents also gives us the opportunity to identify whether all of the documents are really necessary. To identify potential bottlenecks and points of improvements, which can be used later in the process optimization phase. To introduce quality standards such as ISO 9001 more successfully, and to better pass certification. To improve the understandability of quality regulations that can be supplemented with process diagrams. To use business process models as work guidelines for new employees who can introduce themselves to the business processes faster and more efficiently. To understand business processes, which will enable us to understand and describe the company as a whole. A good understanding of business processes is very important for developing IT support. Applications that provide end-to-end support for business processes, can be developed efficiently only if we understand the business processes in details. Modeling Method and Notation Efficient process modeling requires a modeling method that provides a structured and controlled approach to process modeling. Several modeling methods have been developed over the years. 
Examples include IDS Sheer's the ARIS methodology, CSC's Catalyst, Business Genetics, SCOR and the extensions PCOR and VCOR, POEM, and so on. The ARIS methodology has been the most popular methodology, and has been adopted by many software vendors. In the next section, we will describe the basics of the ARIS methodology, which has lately been adapted to be conformant with SOA. ARIS ARIS is both a BPM methodology, and an architectural framework for designing enterprise architectures. Enterprise architecture combines business models (process models, organizational models, and so on) with IT models (IT architecture, data model, and so on). ARIS stands for Architecture of Integrated Information Systems and comprises of two things, the methodology and framework, and the software that supports both. Here, we will give a brief introduction to ARIS methodology and framework, which dates back to 1992. The objective of ARIS is to narrow the gap between business requirements and IT. The ARIS framework is not only about process models (describing business processes), although process models are one of the most important things of ARIS. As enterprise architecture is complex, ARIS defines several views that focus on specific aspects such as business, technology, information, and so on, to reduce the complexity. The ARIS framework describes the following: Business processes Products and services related to the processes The structure of the organization Business objectives and strategies Information flows IT architecture and applications The data model Resources (people and hardware resources) Costs Skills and knowledge These views are gathered under the concept of ARIS House, which provides a structured view on all information on business processes. ARIS House offers five views: The process view (also called the control view) is the central view that shows the behavior of the processes, how the processes relate to the products and services, organization, functions, and data. The process view includes the process models in the selected notation, and other diagrams such as information flow, material flow, value chains, communication diagrams, and so on. The product and service view shows the products and services, their structures, relations, and product/service trees. The organizational view shows the organizational structure of the company, including departments, roles, and employees. It shows these in hierarchical organizational charts. The organization view also shows technical resources and communication networks. The function view defines process tasks and describes business objectives, function hierarchies, and application software. The data view shows business data and information. This view includes data models, information maps, database models, and knowledge structures. The ARIS House is illustrated in the following figure: In ARIS House, the process view is the central view of the dynamic behavior of the business processes and brings together the other four static views, the organizational view, data view, function view and product/service view. In this book, we will focus primarily on the process view. Each ARIS view is divided further into phases. The translation of business requirements into IT applications requires that we follow certain phases. 
Globally, three general phases are likely to be used: Requirements phase Design specification phase Implementation phase ARIS is particularly strong in the requirements phase, while other phases may differ depending on the implementation method and the architecture we use. We will talk about these later in this article. Let us now look at the other important aspect, the business process modeling notations. Modeling Notation Process modeling also requires a notation In the past, several notations were used to model processes. Flow diagrams and block diagrams were representatives of the first-generation notations. Then, more sophisticated notations were defined, such as EPC (Event Process Chain) and eEPC (Extended Event Process Chain). UML activity diagrams, XPDL, and IDEF 3 were also used, in addition to some other less-known notations. A few years ago a new notation, called Business Process Modeling Notation (BPMN) was developed. BPMN was developed particularly for modeling business processes in accordance with SOA. In this article, we will use BPMN for modeling processes. BPMN BPMN is the most comprehensive notation for process modeling so far. It has been developed under the hood of OMG (Object Management Group). Let us look into the brief introduction of the most important BPMN elements so that we can read the diagrams presented later in this article. The most important goals while designing BPMN have been: To develop a notation, which will be understandable at all levels: In business process modeling different people are involved, from business users, business analysts, and process owners, to the technical architects and developers. The management reviews business processes at periodic intervals. Therefore, the goal of BPMN has been to provide a graphical notation the is simple to understand, yet powerful enough to model business processes at the required level of detail. To enable automatic transformation into executable code, that is, BPEL, and vice-versa: The gap between the business process models and the information technology (application software) has been quite large in existing technologies. There is no clear definition on how one relates to the other. Therefore, BPMN has been designed specifically to provide such transformations. To model the diagrams, BPMN defines four categories of elements: Flow objects, which are activities, events, and gateways. Activities can be tasks or sub-processes. Events can be triggers or results. Three types of events are supported: start, intermediate, and end. Gateways control the divergence of sequential flows into concurrent flows, and their convergence back to sequential flow. Connecting objects are used to connect flow objects together. Connectors are sequence flows, message flows, and associations. Swim lanes are used to organize activities into visual categories in order to illustrate different responsibilities or functional capabilities. Pools and lanes can be used for swim lanes. Artifacts are used to add specific context to the business processes that are being modeled. Data objects are used to show how data is produced or required by the process. Groups are used to group together similar activities or other elements. Annotations are used to add text information to the diagram. We can also define custom artifacts. The following diagrams show the various notations used in BPMN: Activities are the basic elements of BPMN and are represented by rectangles with rounded corners. 
A plus sign denotes that the activity can be further decomposed: Decisions are shown as diamonds. A plus sign inside the diamond denotes a logical AND, while an x denotes a logical OR: Events are shown as double circles: Roles are shown as pools and swim-lanes within pools: A Document is shown as follows: The order of activities is indicated by an arrow: The flow of a document or information is shown with a dashed line: BPMN can be used to model parts of processes or whole processes. Processes can be modeled at different levels of fidelity. BPMN is equally suitable for internal (private) business processes, and for public (collaborative) business-to-business processes. Internal business processes focus on the point of view of a single company, and define activities that are internal to the company. Such processes might also define interactions with external partners. Public collaborative processes show the interaction between all involved businesses and organizations. Such processes models should be modeled from the general point of view, and should show interactions between the participants. Process Design The main activity in process design is the recording of the actual processes. The objective is to develop the as-is process model. To develop the as-is model, it is necessary to gather all knowledge about the process. This knowledge often exists only in the heads of the employees, who are involved in the process. Therefore, it is necessary to perform detailed interviews with all involved people. Often, process supervisors might think that they know exactly how the process is performed. However, after talking with those employees who really carry out the work, they see that the actual situation differs considerably. It is very important to gather all this information about the process, otherwise it will not be possible to develop a sound process model, that reflects the as-is state of the process. The first question related to the as-is model is the business result that the process generates. Understanding the business result is crucial, as sometimes it may not be clearly articulated. After the business result is identified, we should understand the process flow. The process flow consists of activities (or tasks) that are performed in a certain order. The process flow is modeled at various levels of abstraction. At the highest level of abstraction, the process flow shows only the most important activities (usually up to ten). Each of the top-level activities are then decomposed into detailed flows. The process complexity, and the required level of detail, are the criteria that instruct us how deep we should decompose. To understand the process behavior completely, it makes sense to decompose until atomic activities (that is, activities that cannot be further decomposed) are reached. When developing the as-is process model, one of the most important things to consider is the level of detail. In order to provide end-to-end support for business processes using SOA, detailed process modeling should be done. The difficulties often hide in the details! In the process design, we should understand the detailed structure of the business process. Therefore, we should identify at least the following: Process activities at various levels of detail Roles responsible for carrying out each process activity Events that trigger the process execution and events that interrupt the process flow Documents exchanged within the process. 
This includes input documents and output documents Business rules that are part of the process We should design the usual (also called optimal) process flow and identify possible exception scenarios. Exceptions interrupt the usual process flow. Therefore, we need to specify how the exceptions will be handled. The usual approach to the process design includes the following steps: Identifying the roles Identifying the activities Connecting activities to roles Defining the order of activities Adding events Adding documents We should also understand the efficiency of the business process. This includes resource utilization, the time taken by involved employees, possible bottlenecks, and inefficiencies. This is the reason why we should also identify metrics that are used to measure the efficiency of the process. While some of these metrics may be KPIs, other metrics relevant to the process should also be identified. We should identify if the process is compliant with standards or reference processes. In some industry domains, reference processes have been defined. An example is the telecommunications industry where the TMF (Telecom Management Forum) has defined NGOSS. Part of NGOSS is eTom (Enhanced Telecom Operations Map), which specifies compliant business processes for telecom companies. Other industries have also started to develop similar reference processes. We should also identify the business goals to which the process contributes to. Business goals are the same as the process results. A business process should not only have at least one result, but should also contribute to at least one (preferably more than one) business goal. Here, we can look into the company strategy to identify the business goals. We should also identify the events that can interrupt the process flow. Each process can be interrupted, and we should understand how this happens. If a process is interrupted, we might need to compensate those activities of the process that have already been successfully completed. Therefore, we should also specify the compensation logic related to different interruption events. Finally, we should also understand the current software support for the business process. This is important because existing software may hide the details of process behavior. This information can also be re-used for end-to-end process support. Once we have identified all of these artifacts, we will have gathered a good understanding of the process. Therefore, let us now look at the results of the process modeling.

Application Development in Visual C++ - The Tetris Application

Packt
23 Oct 2009
19 min read
This application supports the single document interface, which implies that we have one document class object and one view class object. The other applications support the multiple document interface, they have one document class object and zero or more view class objects. The following screenshot depicts a classic example of the Tetris Application: We start by generating the application's skeleton code with The Application Wizard. The process is similar to the Ring application code. There is a small class Square holding the position of one square and a class ColorGrid managing the game grid. The document class manages the data of the game and handles the active (falling down) figure and the next (shown to the right of the game grid) figure. The view class accepts input from the keyboard and draws the figures and the game grid. The Figure class manages a single figure. It is responsible for movements and rotations. There are seven kinds of figures. The Figure Info files store information pertaining to their colors and shapes. The Tetris Files We start by creating a MFC application with the name Tetris and follow the steps of the Ring application. The classes CTetrisApp, CMainFrame, CTetrisDoc, CTetrisView, and CAboutDlg are then created and added to the project. There are only two differences. We need to state that we are dealing with a "Single Document Application Type", that the file extension is "Trs" and that the file type long name is "A Game of Tetris". Otherwise, we just accept the default settings. Note that in this application we accept the CView base class instead of the CScrollView like we did in the Ring application.     We add the marked lines below. In all other respects, we leave the file unmodified. We will not need to modify the files Tetris.h, MainFrm.h, MainFrm.cpp, StdAfx.h, StdAfx.cpp, Resource.h, and Tetris.rc. #include"stdafx.h"#include "Square.h"#include"Figure.h"#include "ColorGrid.h"#include"Tetris.h"#include "MainFrm.h"#include"TetrisDoc.h"#include "TetrisView.h"//... The Color Grid Class The ColorGrid handles the background game grid of twenty rows and twenty columns. Each square can have a color. At the beginning, every square is initialized to the default color white. The Index method is overloaded with a constant version that returns the color of the given square, and a non-constant version that returns a reference to the color. The latter version makes it possible to change the color of a square. ColorGrid.h classSquare{public:Square();Square(int iRow, int iCol);int Row() const {return m_iRow;}int Col() const {return m_iCol;}private:int m_iRow, m_iCol;}; There are two Index methods, the second one is intended to be called on a constant object. Both methods check that the given row and position have valid values. The checks are, however, for debugging purposes only. The methods are always called with valid values. Do not forget to include the file StdAfx.h. ColorGrid.cpp const int ROWS = 20;const int COLS = 10;classColorGrid{public:ColorGrid();void Clear();COLORREF&Index(int iRow, int iCol);const COLORREF Index(int iRow, int iCol)const;void Serialize(CArchive&archive);private:COLORREF m_buffer[ROWS * COLS];}; The Document Class CTetrisDoc is the document class of this application. When created, it overrides OnNewDocument and Serialize from its base class CDocument. We add to the CTetrisDoc class a number of fields and methods. The field m_activeFigure is active figure, that is the one falling down during the game. 
The field m_nextFigure is the next figure, that is the one showed in the right part of the game view. They both are copies of the objects in the m_figureArray, which is an array figure object. There is one figure object of each kind (one figure of each color). The integer list m_scoreList holds the ten top list of the game. It is loaded from the file ScoreList.txt by the constructor and saved by the destructor. The integer field m_iScore holds the score of the current game. GetScore, GetScoreList, GetActiveFigure, GetNextFigure, and GetGrid are called by the view class in order to draw the game grid. They simply return the values of the corresponding fields. The field m_colorGrid is an object of the class ColorGrid, which we defined in the previous section. It is actually just a matrix holding the colors of the squares of the game grid. Each square is intialized to the color white and a square is considered to be empty as long as it is white. When the application starts, the constructor calls the C standard library function srand. The name is an abbreviation for sowing a random seed. By calling srand with an integer seed, it will generate a series of random number. In order to find a new seed every time the application starts, the C standard library function time is called, which returns the number of seconds elapsed since January 1, 1970. In order to obtain the actual random number, we call rand that returns a number in the interval from zero to the predefined constant RAND_MAX. The prototypes for these functions are defined in time.h (time) and stdlib.h (rand and srand), respectively. #include"StdAfx.h"COLORREF& ColorGrid::Index(int iRow, int iCol){check((iRow >= 0) && (iRow < ROWS));check((iCol >= 0) && (iCol < COLS));return m_buffer[iRow * COLS + iCol];}const COLORREF ColorGrid::Index(int iRow, int iCol)const{check((iRow >= 0) && (iRow < ROWS));check((iCol >= 0) && (iCol < COLS));return m_buffer[iRow * COLS + iCol];} When the user presses the space key and the active figure falls down or when a row is filled and is flashed, we have to slow down the process in order for the user to apprehand the event. There is a Win32 API function Sleep that pauses the application for the given amount of milliseconds. #include <time.h>#include <stdlib.h>time_ttime(time_t *pTimer);void srand(unsigned int uSeed);intrand(); The user can control the horizontal movement and rotation of the falling figures by pressing the arrow keys. Left and right arrow keys move the figure to the left or right. The up and down arrow key rotates the figure clockwise or counter clockwise, respectively. Every time the user presses one of those keys, a message is sent to the view class object and caught by the method OnKeyDown, which in turn calls one of the methods LeftArrowKey, RightArrowKey, UpArrowKey, DownArrowKey to deal with the message. They all work in a similar fashion. They try to execute the movement or rotation in question. If it works, both the old and new area of the figure is repainted by making calls to UpdateAllViews. The view class also handles a timer that sends a message every second the view is in focus. The message is caught by the view class method OnTimer that in turn calls Timer. It tries to move the active figure one step downwards. If that is possible, the area of the figure is repainted in the same way as in the methods above. However, if it is not possible, the squares of the figure are added to the game grid. 
The active figure is assigned to the next figure, and the next figure is assigned a copy of a randomly selected figure in m_figureArray. We also check whether any row has been filled. In that case, it will be removed and we will check to see if the game is over. The user can speed up the game by pressing the space key. The message is caught and sent to SpaceKey. It simply calls OnTimer as many times as possible at intervals of twenty milliseconds in order to make the movement visible to the user. When a figure has reached its end position and any full rows have been removed, the figure must be valid. That is, its squares are not allowed to occupy any already colored position. If it does, the game is over and GameOver is called. It starts by making the game grid gray and asks the users whether they want to play another game. If they do, the game grid is cleared and set back to colored mode and a new game starts. If they do not, the application exits. NewGame informs the players whether they made to the top ten list and inquires about another game by displaying a message box. AddToScore examines whether the player has made to the ten top list. If so, the score is added to the list and the ranking is returned, if not, zero is returned. DeleteFullRows traverses the game grid from top to bottom flashing and removing every full row. IsRowFull traverses the given row and returns true if no square has the default color (white). FlashRow flashes the row by showing it three times in grayscale and color at intervals of twenty milliseconds. DeleteRow removes the row by moving all rows above one step downwards and inserting an empty row (all white squares) at top. The next figure and the current high score are painted at specific positions on the client area, the rectangle constants NEXT_AREA and SCORE_AREA keep track of those positions. TetrisDoc.h void Sleep(int iMilliSeconds); The field m_figureArray holds seven figure objects, one of each color. When we need a new figure, we just randomly copy one of them. TetrisDoc.cpp typedef CList<int>IntList;const int FIGURE_ARRAY_SIZE = 7;class CTetrisDoc :publicCDocument{protected:CTetrisDoc();public:virtual ~CTetrisDoc();void SaveScoreList();protected:DECLARE_MESSAGE_MAP()DECLARE_DYNCREATE(CTetrisDoc)public:virtual void Serialize(CArchive& archive);int GetScore() const {return m_iScore;}const IntList* GetScoreList() {return &m_scoreList;}const ColorGrid* GetGrid() {return &m_colorGrid;}const Figure& GetActiveFigure() const{return m_activeFigure;}const Figure& GetNextFigure() const {return m_nextFigure;}public:void LeftArrowKey();void RightArrowKey();void UpArrowKey();void DownArrowKey();BOOL Timer();void SpaceKey();private:void GameOver();BOOL NewGame();int AddScoreToList();void DeleteFullRows();BOOL IsRowFull(int iRow);void FlashRow(int iFlashRow);void DeleteRow(int iDeleteRow);private:ColorGrid m_colorGrid;Figure m_activeFigure, m_nextFigure;int m_iScore;IntList m_scoreList;const CRect NEXT_AREA, SCORE_AREA;static Figure m_figureArray[FIGURE_ARRAY_SIZE];}; When the user presses the left arrow key, the view class object catches the message and calls LeftArrowKey in the document class object. We try to move the active figure one step to the left. It is not for sure that we succeed. The figure may already be located at the left part of the game grid. However, if the movement succeeds, the figure's position is repainted and true is returned. In that case, we repaint the figure's old and new graphic areas in order to repaint the figure. 
Finally, we set the modified flag since the figure has been moved. The method RightArrowKey works in a similar way. Figure redFigure(NORTH, RED, RedInfo);Figure brownFigure(EAST, BROWN, BrownInfo);Figure turquoiseFigure(EAST, TURQUOISE, TurquoiseInfo);Figure greenFigure(EAST, GREEN, GreenInfo);Figure blueFigure(SOUTH, BLUE, BlueInfo);Figure purpleFigure(SOUTH, PURPLE, PurpleInfo);Figure yellowFigure(SOUTH, YELLOW, YellowInfo);Figure CTetrisDoc::m_figureArray[] = {redFigure, brownFigure, turquoiseFigure, greenFigure, yellowFigure, blueFigure, purpleFigure}; Timer is called every time the active figure is to moved one step downwards. That is,each second when the application has focus. If the downwards movement succeeds, then the figure is repainted in a way similar to LeftArrowKey above. However, if the movement does not succeed, the movement of the active figure has come to an end. We call AddToGrid to color the squares of the figure. Then we copy the next figure to the active figure and randomly copy a new next figure. The next figure is the one shown to the right of the game grid. However, the case may occur that the game grid is full. That is the case if the new active figure is not valid, that is, the squares occupied by the figure are not free. If so, the game is over, and the user is asked whether he wants a new game. void CTetrisDoc::LeftArrowKey(){CRectrcOldArea = m_activeFigure.GetArea();if (m_activeFigure.MoveLeft()){CRectrcNewArea = m_activeFigure.GetArea();UpdateAllViews(NULL, COLOR, (CObject*) &rcOldArea);UpdateAllViews(NULL, COLOR, (CObject*) &rcNewArea);SetModifiedFlag();}} If the user presses the space key, the active figure falling will fall faster. The Timer method is called every 20 milliseconds.   BOOLCTetrisDoc::Timer(){SetModifiedFlag();CRectrcOldArea = m_activeFigure.GetArea();if (m_activeFigure.MoveDown()){CRectrcNewArea = m_activeFigure.GetArea();UpdateAllViews(NULL, COLOR, (CObject*) &rcOldArea);UpdateAllViews(NULL, COLOR, (CObject*) &rcNewArea);returnTRUE;}else{m_activeFigure.AddToGrid();m_activeFigure = m_nextFigure;CRect rcActiveArea = m_activeFigure.GetArea();UpdateAllViews(NULL, COLOR, (CObject*) &rcActiveArea);m_nextFigure = m_figureArray[rand() % FIGURE_ARRAY_SIZE];UpdateAllViews(NULL, COLOR, (CObject*) &NEXT_AREA);DeleteFullRows();if (!m_activeFigure.IsFigureValid()){GameOver();}returnFALSE;}} When the game is over, the users are asked whether they want a new game. If so, we clear the grid, randomly select the the next active and next figure, and repaint the whole client area. void CTetrisDoc::SpaceKey(){while(Timer()){Sleep(20);}} Each time a figure is moved, one or more rows may be filled. We start by checking the top row and then go through the rows downwards. For each full row, we first flash it and then remove it. voidCTetrisDoc::GameOver(){UpdateAllViews(NULL, GRAY);if (NewGame()){m_colorGrid.Clear();m_activeFigure = m_figureArray[rand() %FIGURE_ARRAY_SIZE];m_nextFigure = m_figureArray[rand() % FIGURE_ARRAY_SIZE];UpdateAllViews(NULL, COLOR);else{SaveScoreList();exit(0);}} When a row is completely filled, it will flash before it is removed. The flash effect is executed by redrawing the row in color and in grayscale three times with an interval of 50 milliseconds. void CTetrisDoc::DeleteFullRows(){int iRow = ROWS - 1;while (iRow >= 0){if(IsRowFull(iRow)){FlashRow(iRow);DeleteRow(iRow);++m_iScore;UpdateAllViews(NULL, COLOR, (CObject*) &SCORE_AREA);}else{--iRow;}}} When a row is removed, we do not really remove it. If we did, the game grid would shrink. 
Instead, we copy the squares above it and clear the top row. voidCTetrisDoc::FlashRow(int iRow){for (int iCount = 0; iCount < 3; ++iCount){CRect rcRowArea(0, iRow, COLS, iRow + 1);UpdateAllViews(NULL, GRAY, (CObject*) &rcRowArea);Sleep(50);CRect rcRowArea2(0, iRow, COLS, iRow + 1);UpdateAllViews(NULL, COLOR, (CObject*) &rcRowArea2);Sleep(50);}} The View Class CTetrisView is the view class of the application. It receives system messages and (completely or partly) redraws the client area. The field m_iColorStatus holds the painting status of the view. Its status can be either color or grayscale. The color status is the normal mode, m_iColorStatus is initialized to color in the constructor. The grayscale is used to flash rows and to set the game grid in grayscale while asking the user for another game. OnCreate is called after the view has been created but before it is shown. The field m_pTetrisDoc is set to point at the document class object. It is also confirmed to be valid. OnSize is called each time the size of the view is changed. It sets the global variables g_iRowHeight and g_iColWidth (defi ned in Figure.h), which are used by method of the Figure and ColorGrid classes to paint the squares of the figures and the grid. OnSetFocus and OnKillFocus are called when the view receives and loses the input focus. Its task is to handle the timer. The idea is that the timer shall continue to send timer messages every second as long as the view has the input focus. Therefore, OnSetFocus sets the timer and OnKillFocus kills it. This arrangement implies that OnTimer is called each second the view has input focus. In Windows, the timer cannot be turned off temporarily; instead, we have to set and kill it. The base class of the view, CWnd, has two methods: SetTimer that initializes a timer and KillTimer that stops the timer. The first parameter is a unique identifier to distinguish this particular timer from any other one. The second parameter gives the time interval of the timer, in milliseconds. When we send a null pointer as the third parameter, the timer message will be sent to the view and caught by OnTimer. KillTimer simply takes the identity of the timer to finish. void CTetrisDoc::DeleteRow(int iMarkedRow){for (int iRow = iMarkedRow; iRow > 0; --iRow){for (int iCol = 0; iCol < COLS; ++iCol){m_colorGrid.Index(iRow, iCol) = m_colorGrid.Index(iRow - 1, iCol);}}for (int iCol = 0; iCol < COLS; ++iCol){m_colorGrid.Index(0, iCol) = WHITE;}CRect rcArea(0, 0, COLS, iMarkedRow + 1);UpdateAllViews(NULL, COLOR, (CObject*) &rcArea);} OnKeyDown is called every time the user presses a key on the keyboard. It analizes the pressed key and calls suitable methods in the document class if the left, right, up, or down arrow key or the space key is pressed. When a method of the document class calls UpdateAllViews, OnUpdate of the view class object connected to the document object is called. As this is a single view application, the application has only one view object on which OnUpdate is called. UpdateAllViews takes two extra parameters, hints, which are sent to OnUpdate. The first hint tells us whether the next repainting shall be done in color or in grayscale, the second hint is a pointer to a rectangle holding the area that is to be repainted. If the pointer is not null, we calculate the area and repaint it. If it is null, the whole client area is repainted. OnUpdate is also called by OnInitialUpdate of the base class CView with both hints set to zero. 
That is not a problem because the COLOR constant is set to zero. The effect of this call is that the whole view is painted in color. OnUpdate calls UpdateWindow in CView that in turn calls OnPaint and OnDraw with a device context. OnPaint is also called by the system when the view (partly or completely) needs to be repainted. OnDraw loads the device context with a black pen and then draws the grid, the score list, and´the active and next figures. TetrisView.h UINT_PTR SetTimer(UINT_PTR iIDEvent, UINT iElapse, void (CALLBACK* lpfnTimer)(HWND, UINT, UINT_PTR, DWORD));BOOL KillTimer(UINT_PTR nIDEvent); TetrisView.cpp This application catches the messsages WM_CREATE, WM_SIZE, WM_SETFOCUS, WM_KILLFOCUS, WM_TIMER, and WM_KEYDOWN. const intTIMER_ID = 0;enum {COLOR = 0, GRAY = 1};class CTetrisDoc;COLORREF GrayScale(COLORREF rfColor);class CTetrisView : public CView{protected: CTetrisView();DECLARE_DYNCREATE(CTetrisView)DECLARE_MESSAGE_MAP()public: afx_msgint OnCreate(LPCREATESTRUCT lpCreateStruct);afx_msg void OnSize(UINT nType, int iClientWidth, int iClientHeight);afx_msg void OnSetFocus(CWnd* pOldWnd);afx_msg void OnKillFocus(CWnd* pNewWnd);afx_msg void OnKeyDown(UINT nChar, UINT nRepCnt, UINT nFlags);afx_msg void OnTimer(UINT nIDEvent);void OnUpdate(CView* /* pSender */, LPARAM lHint, CObject* pHint);void OnDraw(CDC* pDC); private: void DrawGrid(CDC* pDC);void DrawScoreAndScoreList(CDC* pDC);void DrawActiveAndNextFigure(CDC* pDC);private: CTetrisDoc* m_pTetrisDoc;int m_iColorStatus;}; When the view object is created, is connected to the document object by the pointer m_pTetrisDoc. BEGIN_MESSAGE_MAP(CTetrisView, CView)ON_WM_CREATE()ON_WM_SIZE()ON_WM_SETFOCUS()ON_WM_KILLFOCUS()ON_WM_TIMER()ON_WM_KEYDOWN()END_MESSAGE_MAP() The game grid is dimensioned by the constants ROWS and COLS. Each time the user changes the size of the application window, the global variables g_iRowHeight and g_iColWidth, which are defined in Figure.h, store the height and width of one square in pixels. int CTetrisView::OnCreate(LPCREATESTRUCT lpCreateStruct){// We check that the view has been correctly created.if (CView::OnCreate(lpCreateStruct) == -1){return -1;}m_pTetrisDoc = (CTetrisDoc*) m_pDocument;check(m_pTetrisDoc != NULL);ASSERT_VALID(m_pTetrisDoc);return 0;} OnUpdate is called by the system when the window needs to be (partly or completely) repainted. In that case, the parameter pHint is zero and the whole client area is repainted. However, this method is also indirectly called when the document class calls UpdateAllView. In that case, lHint has the value color or gray, depending on whether the client area shall be repainted in color or in a grayscale. If pHint is non-zero, it stores the coordinates of the area to be repainted. The coordinates are given in grid coordinates that have to be translated into pixel coordinates before the area is invalidated. The method first calls Invalidate or InvalidateRect to define the area to be repainted, then the call to UpdateWindow does the actual repainting by calling OnPaint in CView, which in turn calls OnDraw below. void CTetrisView::OnSize(UINT /* uType */,int iClientWidth, int iClientHeight){g_iRowHeight = iClientHeight / ROWS;g_iColWidth = (iClientWidth / 2) / COLS;} OnDraw is called when the client area needs to be repainted, by the system or by UpdateWindow in OnUpdate. It draws a vertical line in the middle of the client area, and then draws the game grid, the high score list, and the current figures. 
voidCTetrisView::OnUpdate(CView* /* pSender */, LPARAM lHint, CObject*pHint){m_iColorStatus = (int) lHint;if (pHint != NULL){CRect rcArea = *(CRect*) pHint;rcArea.left *= g_iColWidth;rcArea.right *= g_iColWidth;rcArea.top *= g_iRowHeight;rcArea.bottom *= g_iRowHeight;InvalidateRect(&rcArea);}else{Invalidate();}UpdateWindow();} DrawGrid traverses through the game grid and paints each non-white square. If a square is not occupied, it has the color white and it not painted. The field m_iColorStatus decides whether the game grid shall be painted in color or in grayscale. void CTetrisView::OnDraw(CDC* pDC){CPen pen(PS_SOLID, 0, BLACK);CPen* pOldPen = pDC->SelectObject(&pen);pDC->MoveTo(COLS * g_iColWidth, 0);pDC->LineTo(COLS * g_iColWidth, ROWS * g_iRowHeight);DrawGrid(pDC);DrawScoreAndScoreList(pDC);DrawActiveAndNextFigure(pDC);pDC->SelectObject(&pOldPen);} GrayScale returns the grayscale of the given color, which is obtained by mixing the average of the red, blue, and green component of the color. voidCTetrisView::DrawGrid(CDC* pDC){const ColorGrid* pGrid = m_pTetrisDoc->GetGrid();for (int iRow = 0; iRow < ROWS; ++iRow){for (int iCol = 0; iCol < COLS; ++iCol){ COLORREF rfColor = pGrid->Index(iRow, iCol);if (rfColor != WHITE){CBrushbrush((m_iColorStatus == COLOR) ? rfColor:GrayScale(rfColor));CBrush* pOldBrush = pDC->SelectObject(&brush);DrawSquare(iRow, iCol, pDC);pDC->SelectObject(pOldBrush);}}}} The active figure (m_activeFigure) is the figure falling down on the game grid.The next figure (m_nextFigure) is the figure announced at the right side of the client area. In order for it to be painted at the right-hand side, we alter the origin to the middle of the client area, and one row under the upper border by calling SetWindowOrg.
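The excerpt above describes the view's focus, timer, and keyboard handlers but does not list their bodies. The short C++ sketch below shows one way they might look, using the SetTimer/KillTimer calls and the document methods discussed earlier; the one-second interval and the exact key dispatch are assumptions consistent with the text, not the book's verbatim listing, and the sketch relies on the class declarations shown above.

```cpp
// Sketch of the focus, timer, and keyboard handlers described in the text.
void CTetrisView::OnSetFocus(CWnd* /* pOldWnd */)
{
    // Start sending WM_TIMER messages every second while we have focus.
    SetTimer(TIMER_ID, 1000, NULL);
}

void CTetrisView::OnKillFocus(CWnd* /* pNewWnd */)
{
    // Stop the timer when the view loses the input focus.
    KillTimer(TIMER_ID);
}

void CTetrisView::OnTimer(UINT /* nIDEvent */)
{
    // Try to move the active figure one step down once a second.
    m_pTetrisDoc->Timer();
}

void CTetrisView::OnKeyDown(UINT nChar, UINT /* nRepCnt */, UINT /* nFlags */)
{
    // Delegate each recognized key to the document, as described above.
    switch (nChar)
    {
        case VK_LEFT:  m_pTetrisDoc->LeftArrowKey();  break;
        case VK_RIGHT: m_pTetrisDoc->RightArrowKey(); break;
        case VK_UP:    m_pTetrisDoc->UpArrowKey();    break;
        case VK_DOWN:  m_pTetrisDoc->DownArrowKey();  break;
        case VK_SPACE: m_pTetrisDoc->SpaceKey();      break;
    }
}
```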

Prototyping JavaScript

Packt
23 Oct 2009
7 min read
In this article by Stoyan Stefanov, you'll learn about the prototype property of the function objects. Understanding how the prototype works is an important part of learning the JavaScript language. After all, JavaScript is classified as having a prototype-based object model. There's nothing particularly difficult about the prototype, but it is a new concept and as such may sometimes take some time to sink in. It's one of these things in JavaScript (closures are another) which, once you "get" them, they seem so obvious and make perfect sense. As with the rest of the article, you're strongly encouraged to type in and play around with the examples; this makes it much easier to learn and remember the concepts. The following topics are discussed in this article: Every function has a prototype property and it contains an object Adding properties to the prototype object Using the properties added to the prototype The difference between own properties and properties of the prototype __proto__, the secret link every object keeps to its prototype Methods such as isPrototypeOf(), hasOwnProperty(), and propertyIsEnumerable() The prototype Property The functions in JavaScript are objects and they contain methods and properties. Some of the common methods are apply() and call() and some of the common properties are length and constructor. Another property of the function objects is prototype. If you define a simple function foo() you can access its properties as you would do with any other object: >>>function foo(a, b){return a * b;}>>>foo.length 2 >>>foo.constructor Function() prototype is a property that gets created as soon as you define the function. Its initial value is an empty object. >>>typeof foo.prototype "object" It's as if you added this property yourself like this: >>>foo.prototype = {} You can augment this empty object with properties and methods. They won't have any effect of the foo() function itself; they'll only be used when you use foo()as a constructor. Adding Methods and Properties Using the Prototype Constructor functions can be used to create (construct) new objects. The main idea is that inside a function invoked with new you have access to the value this, which contains the object to be returned by the constructor. Augmenting (adding methods and properties to) this object is the way to add functionality to the object being created. Let's take a look at the constructor function Gadget() which uses this to add two properties and one method to the objects it creates. function Gadget(name, color) {   this.name = name;   this.color = color;   this.whatAreYou = function(){    return 'I am a ' + this.color + ' ' + this.name;   }} Adding methods and properties to the prototype property of the constructor function is another way to add functionality to the objects this constructor produces. Let's add two more properties, price and rating, and a getInfo() method. 
Since prototype contains an object, you can just keep adding to it like this: Gadget.prototype.price = 100;Gadget.prototype.rating = 3;Gadget.prototype.getInfo = function() {   return 'Rating: ' + this.rating + ', price: ' + this.price;}; Instead of adding to the prototype object, another way to achieve the above result is to overwrite the prototype completely, replacing it with an object of your choice: Gadget.prototype = {   price: 100,   rating: 3,   getInfo: function() {    return 'Rating: ' + this.rating + ', price: ' + this.price;   }}; Using the Prototype's Methods and Properties All the methods and properties you have added to the prototype are directly available as soon as you create a new object using the constructor. If you create a newtoy object using the Gadget() constructor, you can access all the methods and properties already defined. >>> var newtoy = new Gadget('webcam', 'black');>>> newtoy.name; "webcam" >>> newtoy.color; "black" >>> newtoy.whatAreYou(); "I am a black webcam" >>> newtoy.price; 100 >>> newtoy.rating; 3 >>> newtoy.getInfo(); "Rating: 3, price: 100" It's important to note that the prototype is "live". Objects are passed by reference in JavaScript, and therefore the prototype is not copied with every new object instance. What does this mean in practice? It means that you can modify the prototype at any time and all objects (even those created before the modification) will inherit the changes. Let's continue the example, adding a new method to the prototype: Gadget.prototype.get = function(what) {   return this[what];}; Even though newtoy was created before the get() method was defined, newtoy will still have access to the new method: >>> newtoy.get('price'); 100 >>> newtoy.get('color'); "black" Own Properties versus prototype Properties In the example above getInfo() used this internally to address the object. It could've also used Gadget.prototype to achieve the same result: Gadget.prototype.getInfo = function() {   return 'Rating: ' + Gadget.prototype.rating + ', price: ' + Gadget.prototype.price;}; What's is the difference? To answer this question, let's examine how the prototype works in more detail. Let's again take our newtoy object: >>> var newtoy = new Gadget('webcam', 'black'); When you try to access a property of newtoy, say newtoy.name the JavaScript engine will look through all of the properties of the object searching for one called name and, if it finds it, will return its value. >>> newtoy.name "webcam" What if you try to access the rating property? The JavaScript engine will examine all of the properties of newtoy and will not find the one called rating. Then the script engine will identify the prototype of the constructor function used to create this object (same as if you do newtoy.constructor.prototype). If the property is found in the prototype, this property is used. >>> newtoy.rating 3 This would be the same as if you accessed the prototype directly. Every object has a constructor property, which is a reference to the function that created the object, so in our case: >>> newtoy.constructor Gadget(name, color) >>> newtoy.constructor.prototype.rating 3 Now let's take this lookup one step further. Every object has a constructor. The prototype is an object, so it must have a constructor too. Which in turn has a prototype. 
In other words, you can do:

>>> newtoy.constructor.prototype.constructor
Gadget(name, color)
>>> newtoy.constructor.prototype.constructor.prototype
Object price=100 rating=3

This might go on for a while, depending on how long the prototype chain is, but you eventually end up with the built-in Object() object, which is the highest-level parent. In practice, this means that if you try newtoy.toString() and newtoy doesn't have an own toString() method and its prototype doesn't either, in the end you'll get the Object's toString():

>>> newtoy.toString()
"[object Object]"

Overwriting Prototype's Property with Own Property

As the above discussion demonstrates, if one of your objects doesn't have a certain property of its own, it can use one (if one exists) somewhere up the prototype chain. What if the object does have its own property and the prototype also has one with the same name? The own property takes precedence over the prototype's.

Let's have a scenario where a property name exists both as an own property and as a property of the prototype object:

function Gadget(name) {
   this.name = name;
}
Gadget.prototype.name = 'foo';

Creating a new object and accessing its name property gives you the object's own name property.

>>> var toy = new Gadget('camera');
>>> toy.name;
"camera"

If you delete this property, the prototype's property with the same name "shines through":

>>> delete toy.name;
true
>>> toy.name;
"foo"

Of course, you can always re-create the object's own property:

>>> toy.name = 'camera';
>>> toy.name;
"camera"
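The article's topic list also mentions isPrototypeOf(), hasOwnProperty(), and propertyIsEnumerable(), which are not covered in this excerpt. The short console session below is a sketch (not part of the original listing) of how these standard methods relate to the own-versus-prototype discussion above; it assumes the example is in its final state, with Gadget.prototype.name set to 'foo' and toy.name re-created as 'camera'.

>>> toy.hasOwnProperty('name');              // 'name' set in the constructor is an own property
true
>>> Gadget.prototype.hasOwnProperty('name'); // the prototype has its own 'foo' value too
true
>>> Gadget.prototype.isPrototypeOf(toy);     // toy's prototype chain starts at Gadget.prototype...
true
>>> Object.prototype.isPrototypeOf(toy);     // ...and ends at the built-in Object.prototype
true
>>> toy.propertyIsEnumerable('name');        // true only for own, enumerable properties
true

Because hasOwnProperty() ignores the prototype chain, it is a handy way to tell which of the two same-named properties you are actually reading at any point in the example.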

JBoss AS plug-in and the Eclipse Web Tools Platform

Packt
23 Oct 2009
4 min read
In this article, we recommend that you use JBoss AS (version 4.2), which is a free J2EE Application Server that can be downloaded from http://www.jboss.org/jbossas/downloads/ (complete documentation can be downloaded from http://www.jboss.org/jbossas/docs/).

JBoss AS plug-in and the Eclipse WTP

The JBoss AS plug-in can be seen as an elegant way of connecting a J2EE Application Server to the Eclipse IDE. It's important to know that the JBoss AS plug-in does this by using WTP support, a project included by default in the Eclipse IDE. WTP is a major project that extends the Eclipse platform with strong support for Web and J2EE applications. In this case, WTP handles important operations, such as starting the server in run/debug mode, stopping the server, and delegating WTP projects to their runtimes. For now, keep in mind that Eclipse supports a set of WTP servers and for every WTP server you may have one WTP runtime. Now, we will see how to install and configure the JBoss 4.2.2 runtime and server.

Adding a WTP runtime in Eclipse

In the case of JBoss Tools, the main purpose of a Server Runtime is to point to a server installation somewhere on your machine. Using runtimes, we can work with different configurations of the same server installed in different physical locations. Now, we will create a JBoss AS Runtime (you can extrapolate the steps shown below for any supported server):

1. From the Window menu, select Preferences.
2. In the Preferences window, expand the Server node and select the Runtime Environments child node. On the right side of the window, you can see a list of currently installed runtimes, as shown in the following screenshot, where an Apache Tomcat runtime is reported (this is just an example; the Apache Tomcat runtime is not a default one).
3. To install a new runtime, click the Add button in the top-right corner. This will bring up the New Server Runtime Environment window, as you can see in the following screenshot. Because we want to add a JBoss 4.2.2 runtime, we will select the JBoss 4.2 Runtime option (for other adapters, proceed accordingly). After that, click Next to set the runtime parameters.

In the runtimes list, we have runtimes provided by WTP and runtimes provided by JBoss Tools (see the section marked in red on the previous screenshot). Because this article is about JBoss Tools, we will further discuss only the runtimes from this category. Here, we have five types of runtimes, of which the JBoss Deploy-Only Runtime type is for developers who start/stop/debug applications outside Eclipse.

In this step, you will configure the JBoss runtime by indicating the runtime's name (in the Name field), the runtime's home directory (in the Home Directory field), the Java Runtime Environment associated with this runtime (in the JRE field), and the configuration type (in the Configuration field). In the following screenshot, we have done all these settings for our JBoss 4.2 Runtime. The official documentation of JBoss AS 4.2.2 recommends using JDK version 5. If you don't have this version in the JRE list, you can add it like this: Display the Preferences window by clicking the JRE button. In this window, click the Add button to display the Add JRE window. Continue by selecting the Standard VM option and click on the Next button. On the next page, use the Browse button to navigate to the JRE 5 home directory.
Click on the Finish button and you should see a new entry in the Installed JREs field of the Preferences window (as shown in the following screenshot). Just check the checkbox of this new entry and click OK. Now, JRE 5 should be available in the JRE list of the New Server Runtime Environment window. After this, just click on the Finish button and the new runtime will be added, as shown in the following screenshot: From this window, you can also edit or remove a runtime by using the Edit and Remove buttons. These are automatically activated when you select a runtime from the list. As a final step, it is recommended to restart the Eclipse IDE.

Python Data Persistence using MySQL

Packt
23 Oct 2009
8 min read
To keep things simple though, the article doesn't discuss how to implement database-backed web pages with Python, concentrating only on how to connect Python with MySQL.

Sample Application

The best way to learn new programming techniques is to write an application that exercises them. This article will walk you through the process of building a simple Python application that interacts with a MySQL database. In a nutshell, the application picks up some live data from a web site and then persists it to an underlying MySQL database. For the sake of simplicity, it doesn't deal with a large dataset. Rather, it picks up a small subset of data, storing it as a few rows in the underlying database. In particular, the application gets the latest post from the Packt Book Feed page available at http://feeds.feedburner.com/packtpub/sDsa?format=xml. Then, it analyzes the post's title, finding appropriate tags for the article associated with the post, and finally inserts information about the post into the underlying posts and posttags database tables. As you might guess, a single post may be associated with more than one tag, meaning a record in the posts table may be related to several records in the posttags table. Diagrammatically, the sample application components and their interactions might look like this:

Note the use of appsample.py. This script file will contain all the application code written in Python. In particular, it will contain the list of tags, as well as several Python functions packaging the application logic.

Software Components

To build the sample discussed in the article, you're going to need the following software components installed on your computer:

- Python 2.5.x
- MySQLdb 1.2.x
- MySQL 5.1

All these software components can be downloaded and used for free. Although you may already have these pieces of software installed on your computer, here's a brief overview of where you can obtain them.

You can download an appropriate Python release from the Downloads page at Python's web site at http://python.org/download/. You may be tempted to download the most recent release. Before you choose the release, however, it is recommended that you visit the Python for MySQL page at http://sourceforge.net/projects/mysql-python/ to check which Python releases are supported by the current MySQLdb module that will be used to connect your Python installation with MySQL.

MySQLdb is the Python DB API-2.0 interface for MySQL. You can pick up the latest MySQLdb package (version 1.2.2 at the time of writing) from sourceforge.net's Python for MySQL page at http://sourceforge.net/projects/mysql-python/. Before you can install it, though, make sure you have Python installed on your system.

You can obtain the MySQL 5.1 distribution from the mysql.com web site at http://dev.mysql.com/downloads/mysql/5.1.html, picking the package designed for your operating system.

Setting up the Database

Assuming you have all the software components that were outlined in the preceding section installed on your system, you can now start building the sample application. The first step is to create the posts and posttags tables in your underlying MySQL database. As mentioned earlier, a single post may be associated with more than one tag. What this means in practice is that the posts and posttags tables should have a foreign key relationship.
In particular, you might create these tables as follows:

CREATE TABLE posts (
   title VARCHAR(256) PRIMARY KEY,
   guid VARCHAR(1000),
   pubDate VARCHAR(50)
) ENGINE = InnoDB;

CREATE TABLE posttags (
   title VARCHAR(256),
   tag VARCHAR(20),
   PRIMARY KEY(title, tag),
   FOREIGN KEY(title) REFERENCES posts(title)
) ENGINE = InnoDB;

As you might guess, you don't need to populate the above tables with data now. This will be done automatically later, when you launch the sample.

Developing the Script

Now that you have the underlying database ready, you can move on and develop the Python code to complete the sample. In particular, you're going to need to write the following components in Python:

- tags: a nested list of tags that will be used to describe the posts obtained from the Packt Book Feed page.
- obtainPost: a function that will be used to obtain the information about the latest post from the Packt Book Feed page.
- determineTags: a function that will determine appropriate tags to be applied to the latest post obtained from the Packt Book Feed page.
- insertPost: a function that will insert the information about the obtained post into the underlying database tables: posts and posttags.
- execPr: a function that will make calls to the other functions described above. You will call this function to launch the application.

All the above components will reside in a single file, say appsample.py, which you can create in your favorite text editor, such as vi or Notepad. First, add the following import declarations to appsample.py:

import MySQLdb
import urllib2
import xml.dom.minidom

As you might guess, the first module is required to connect Python with MySQL, providing the Python DB API-2.0 interface for MySQL. The other two are needed to obtain and then parse the Packt Book Feed page's data. You will see them in action in the obtainPost function in a moment. But first let's create a nested list of tags that will be used by the determineTags function, which determines the tags appropriate for the post being analyzed. To save space here, the following list contains just a few tags. You may, and should, add more tags to this list, of course.

tags=["Python","Java","Drupal","MySQL","Oracle","Open Source"]

The next step is to add the obtainPost function, responsible for getting the data from the Packt Book Feed page and generating the post dictionary that will be used in further processing:

def obtainPost():
    addr = "http://feeds.feedburner.com/packtpub/sDsa?format=xml"
    xmldoc = xml.dom.minidom.parseString(urllib2.urlopen(addr).read())
    item = xmldoc.getElementsByTagName("item")[0]
    title = item.getElementsByTagName("title")[0].firstChild.data
    guid = item.getElementsByTagName("guid")[0].firstChild.data
    pubDate = item.getElementsByTagName("pubDate")[0].firstChild.data
    post = {"title": title, "guid": guid, "pubDate": pubDate}
    return post

Now that you have obtained all the required information about the latest post on the Packt Book Feed page, you can analyze the post's title to determine appropriate tags. For that, add the determineTags function to appsample.py:

def determineTags(title, tagslist):
    curtags = []
    for curtag in tagslist:
        if title.find(curtag) > -1:
            curtags.append(curtag)
    return curtags

By now, you have both the post and tags to be persisted to the database.
So, add the insertPost function that will handle this task (don't forget to change the parameters passed to the MySQLdb.connect function to your actual ones):

def insertPost(title, guid, pubDate, curtags):
    db = MySQLdb.connect(host="localhost", user="usrsample", passwd="pswd", db="dbsample")
    c = db.cursor()
    c.execute("""INSERT INTO posts (title, guid, pubDate) VALUES(%s, %s, %s)""", (title, guid, pubDate))
    db.commit()
    for tag in curtags:
        c.execute("""INSERT INTO posttags (title, tag) VALUES(%s, %s)""", (title, tag))
        db.commit()
    db.close()

All that is left to do is add the execPr function that brings all the pieces together, calling the above functions in the proper order:

def execPr():
    p = obtainPost()
    t = determineTags(p["title"], tags)
    insertPost(p["title"], p["guid"], p["pubDate"], t)

Now let's test the code we just wrote. The simplest way to do this is through Python's interactive command line. To start an interactive Python session, you can type python at your system shell prompt. It's important to realize that since the sample discussed here is going to obtain some data from the web, you must connect to the Internet before you launch the application. Once you're connected, you can launch the execPr function in your Python session, as follows:

>>> import appsample
>>> appsample.execPr()

If everything is okay, you should see no messages. To make sure that everything really went as planned, you can check the posts and posttags tables. To do this, you might connect to the database with the MySQL command-line tool and then issue the following SQL commands:

SELECT * FROM posts;

The above should generate output that might look like this:

| title                                    | guid                                                      | pubDate                         |
| Open Source CMS Award Voting Now Closed  | http://www.packtpub.com/article/2008-award-voting-closed  | Tue, 21 Oct 2008 09:29:54 +0100 |

Then, you might want to check out the posttags table:

SELECT * FROM posttags;

This might generate the following output:

| title                                    | tag         |
| Open Source CMS Award Voting Now Closed  | Open Source |

Please note that you may see different results, since you are working with live data. Another thing to note here is that if you want to re-run the sample, you first need to empty the posts and posttags tables. Otherwise, you will run into problems with the primary key constraints. However, that won't be a problem at all if you re-run the sample in a few days, when a new post or posts appear on the Packt Book Feed page.

Conclusion

In this article you looked at a simple Python application persisting data to an underlying MySQL database. Although, for the sake of simplicity, the sample discussed here doesn't offer a web interface, it illustrates how you can obtain data from the Internet, then utilize it within your application, and finally store that data in the database.

Chatroom Application using DWR Java Framework

Packt
23 Oct 2009
19 min read
Starting the Project and Configuration

We start by creating a new project for our chat room, with the project name DWRChatRoom. We also need to add the dwr.jar file to the lib directory and enable DWR in the web.xml file. The following is the source code of the dwr.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dwr PUBLIC "-//GetAhead Limited//DTD Direct Web Remoting 2.0//EN" "http://getahead.org/dwr/dwr20.dtd">
<dwr>
  <allow>
    <create creator="new" javascript="Login">
      <param name="class" value="chatroom.Login" />
    </create>
    <create creator="new" javascript="ChatRoomDatabase">
      <param name="class" value="chatroom.ChatRoomDatabase" />
    </create>
  </allow>
</dwr>

The source code for web.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         id="WebApp_ID" version="2.5">
  <display-name>DWRChatRoom</display-name>
  <servlet>
    <display-name>DWR Servlet</display-name>
    <servlet-name>dwr-invoker</servlet-name>
    <servlet-class>
      org.directwebremoting.servlet.DwrServlet
    </servlet-class>
    <init-param>
      <param-name>debug</param-name>
      <param-value>true</param-value>
    </init-param>
    <init-param>
      <param-name>activeReverseAjaxEnabled</param-name>
      <param-value>true</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>dwr-invoker</servlet-name>
    <url-pattern>/dwr/*</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>default.html</welcome-file>
    <welcome-file>default.htm</welcome-file>
    <welcome-file>default.jsp</welcome-file>
  </welcome-file-list>
</web-app>

Developing the User Interface

The next step is to create the files for presentation: the style sheet and the HTML/JSP files. The style sheet, loginFailed.html, and index.jsp files are required for the application. The source code of the style sheet is as follows:

body{
  margin:0;
  padding:0;
  line-height: 1.5em;
}
b{font-size: 110%;}
em{color: red;}
#topsection{
  background: #EAEAEA;
  height: 90px; /*Height of top section*/
}
#topsection h1{
  margin: 0;
  padding-top: 15px;
}
#contentwrapper{
  float: left;
  width: 100%;
}
#contentcolumn{
  margin-left: 200px; /*Set left margin to LeftColumnWidth*/
}
#leftcolumn{
  float: left;
  width: 200px; /*Width of left column*/
  margin-left: -100%;
  background: #C8FC98;
}
#footer{
  clear: left;
  width: 100%;
  background: black;
  color: #FFF;
  text-align: center;
  padding: 4px 0;
}
#footer a{
  color: #FFFF80;
}
.innertube{
  margin: 10px; /*Margins for inner DIV inside each column (to provide padding)*/
  margin-top: 0;
}

Our first page is the login page. It is located in the WebContent directory and it is named index.jsp.
The source code for the page is given as follows: <%@ page language="java" contentType="text/html; charset=ISO-8859-1"    pageEncoding="ISO-8859-1"%><!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"><html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>Book Authoring</title><script type='text/javascript' src='/DWRChatroom/dwr/interface/Login.js'></script><script type='text/javascript' src='/DWRChatroom/dwr/engine.js'></script><script type='text/javascript' src='/DWRChatroom/dwr/util.js'></script>  <script type="text/javascript">function login(){  var userNameInput=dwr.util.byId('userName');  var userName=userNameInput.value;  Login.doLogin(userName,loginResult);}function loginResult(newPage){  window.location.href=newPage;}</script></head><body><h1>Book Authoring Sample</h1><table cellpadding="0" cellspacing="0"><tr><td>User name:</td><td><input id="userName" type="text" size="30"></td></tr><tr><td>&nbsp;</td><td><input type="button" value="Login" onclick="login();return false;"></td></tr></table></body></html> The login screen uses the DWR functionality to process the user login (the Java classes are presented after the web pages). The loginResults function opens either the failure page or the main page based on the result of the login operation. If the login was unsuccessful, a very simple loginFailed.html page is shown to the user, the source code for which is as follows: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"><html><head><meta http-equiv="Content-Type" content="text/html;                                         charset=ISO-8859-1"><title>Login failed</title></head><body><h2>Login failed.</h2></body></html> The main page, mainpage.jsp, includes all the client-side logic of our ChatRoom application. 
The source code for the page is as follows: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN""http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html lang="en" xml_lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /><title>Chatroom</title><link href="styles.css" rel="stylesheet" type="text/css" /><%   if (session.getAttribute("username") == null         || session.getAttribute("username").equals("")) {      //if not logged in and trying to access this page      //do nothing, browser shows empty page      return;   }%><script type='text/javascript' src='/DWRChatRoom/dwr/interface/Login.js'></script><script type='text/javascript' src='/DWRChatRoom/dwr/interface/ChatRoomDatabase.js'></script><script type='text/javascript' src='/DWRChatRoom/dwr/engine.js'></script><script type='text/javascript' src='/DWRChatRoom/dwr/util.js'></script>  <script type="text/javascript">dwr.engine.setActiveReverseAjax(true);function logout(){  Login.doLogout(showLoginScreen);}function showLoginScreen(){  window.location.href='index.jsp';}function showUsersOnline(){  var cellFuncs = [          function(user) {             return '<i>'+user+'</i>';          }          ];    Login.getUsersOnline({    callback:function(users)     {      dwr.util.removeAllRows('usersOnline');      dwr.util.addRows( "usersOnline",users, cellFuncs,                                    { escapeHtml:false });    }    });}function getPreviousMessages(){    ChatRoomDatabase.getChatContent({    callback:function(messages)     {      var chatArea=dwr.util.byId('chatArea');      var html="";      for(index in messages)      {         var msg=messages[index];         html+=msg;      }      chatArea.innerHTML=html;      var chatAreaHeight = chatArea.scrollHeight;      chatArea.scrollTop = chatAreaHeight;    }    });}function newMessage(message){  var chatArea=dwr.util.byId('chatArea');  var oldMessages=chatArea.innerHTML;  chatArea.innerHTML=oldMessages+message;    var chatAreaHeight = chatArea.scrollHeight;  chatArea.scrollTop = chatAreaHeight;}function sendMessageIfEnter(event){  if(event.keyCode == 13)  {    sendMessage();  }}function sendMessage(){    var message=dwr.util.byId('messageText');    var messageText=message.value;    ChatRoomDatabase.postMessage(messageText);    message.value='';}</script></head><body onload="showUsersOnline();"><div id="maincontainer"><div id="topsection"><div class="innertube"><h1>Chatroom</h1><h4>Welcome <i><%=(String) session.getAttribute("username")%></i></h4></div></div><div id="contentwrapper"><div id="contentcolumn"><div id="chatArea" style="width: 600px; height: 300px; overflow: auto"></div><div id="inputArea"><h4>Send message</h4><input id="messageText" type="text" size="50"  onkeyup="sendMessageIfEnter(event);"><input type="button" value="Send msg"                                                      onclick="sendMessage();"></div></div></div><div id="leftcolumn"><div class="innertube"><table cellpadding="0" cellspacing="0">  <thead>    <tr>      <td><b>Users online</b></td>    </tr>  </thead>  <tbody id="usersOnline">  </tbody></table><input id="logoutButton" type="button" value="Logout"  onclick="logout();return false;"></div></div><div id="footer">Stylesheet by <a  href="http://www.dynamicdrive.com/style/">Dynamic Drive CSSLibrary</a></div></div><script type="text/javascript">getPreviousMessages();</script></body></html> The first chat-room-specific JavaScript function is getPreviousMessages(). 
This function is called at the end of mainpage.jsp, and it retrieves previous chat messages for this chat room. The newMessage() function is called by the server-side Java code when a new message is posted to the chat room. The function also scrolls the chat area automatically to show the latest message. The sendMessageIfEnter() and sendMessage() functions are used to send user messages to the server. There is the input field for the message text in the HTML code, and the sendMessageIfEnter() function listens to onkeyup events in the input field. If the user presses enter, the sendMessage() function is called to send the message to the server. The HTML code includes the chat area of specified size and with automatic scrolling. Developing the Java Code There are several Java classes in the application. The Login class handles the user login and logout and also keeps track of the logged-in users. The source code of the Login class is as follows: package chatroom;import java.util.Collection;import java.util.List;import javax.servlet.ServletContext;import javax.servlet.http.HttpServletRequest;import javax.servlet.http.HttpSession;import org.directwebremoting.ScriptSession;import org.directwebremoting.ServerContext;import org.directwebremoting.ServerContextFactory;import org.directwebremoting.WebContext;import org.directwebremoting.WebContextFactory;import org.directwebremoting.proxy.ScriptProxy;public class Login {   public Login() {   }      public String doLogin(String userName) {      UserDatabase userDb=UserDatabase.getInstance();      if(!userDb.isUserLogged(userName)) {         userDb.login(userName);         WebContext webContext= WebContextFactory.get();         HttpServletRequest request = webContext.getHttpServletRequest();         HttpSession session=request.getSession();         session.setAttribute("username", userName);         String scriptId = webContext.getScriptSession().getId();         session.setAttribute("scriptSessionId", scriptId);         updateUsersOnline();         return "mainpage.jsp";      }      else {         return "loginFailed.html";      }   }      public void doLogout() {      try {         WebContext ctx = WebContextFactory.get();         HttpServletRequest request = ctx.getHttpServletRequest();         HttpSession session = request.getSession();         Util util = new Util();         String userName = util.getCurrentUserName(session);         UserDatabase.getInstance().logout(userName);         session.removeAttribute("username");         session.removeAttribute("scriptSessionId");         session.invalidate();      } catch (Exception e) {         System.out.println(e.toString());      }      updateUsersOnline();   }      private void updateUsersOnline() {      WebContext webContext= WebContextFactory.get();      ServletContext servletContext = webContext.getServletContext();      ServerContext serverContext = ServerContextFactory.get(servletContext);      webContext.getScriptSessionsByPage("");      String contextPath = servletContext.getContextPath();      if (contextPath != null) {         Collection<ScriptSession> sessions =                            serverContext.getScriptSessionsByPage                                            (contextPath + "/mainpage.jsp");         ScriptProxy proxy = new ScriptProxy(sessions);         proxy.addFunctionCall("showUsersOnline");      }   }      public List<String> getUsersOnline() {      UserDatabase userDb=UserDatabase.getInstance();      return userDb.getLoggedInUsers();   }} The following is the source code of 
the UserDatabase class package chatroom;import java.util.List;import java.util.Vector;//this class holds currently logged in users//there is no persistencepublic class UserDatabase {      private static UserDatabase userDatabase=new UserDatabase();      private List<String> loggedInUsers=new Vector<String>();      private UserDatabase() {}      public static UserDatabase getInstance() {      return userDatabase;   }      public List<String> getLoggedInUsers() {      return loggedInUsers;   }      public boolean isUserLogged(String userName) {      return loggedInUsers.contains(userName);    }      public void login(String userName) {      loggedInUsers.add(userName);   }      public void logout(String userName) {      loggedInUsers.remove(userName);   }} The Util class is used by the Login class, and it provides helper methods for the sample application. The source code for the Util class is as follows: package chatroom;import java.util.Hashtable;import java.util.Map;import javax.servlet.http.HttpServletRequest;import javax.servlet.http.HttpSession;import org.directwebremoting.WebContext;import org.directwebremoting.WebContextFactory;public class Util {      public Util() {         }      public String getCurrentUserName() {      //get user name from session      WebContext ctx = WebContextFactory.get();      HttpServletRequest request = ctx.getHttpServletRequest();      HttpSession session=request.getSession();      return getCurrentUserName(session);   }   public String getCurrentUserName(HttpSession session) {      String userName=(String)session.getAttribute("username");      return userName;   }} The logic for the server-side chat room functionality is in the ChatRoomDatabase class. The source code for the ChatRoomDatabase is as follows: package chatroom;import java.util.Collection;import java.util.Date;import java.util.List;import java.util.Vector;import javax.servlet.ServletContext;import org.directwebremoting.ScriptSession;import org.directwebremoting.ServerContext;import org.directwebremoting.ServerContextFactory;import org.directwebremoting.WebContext;import org.directwebremoting.WebContextFactory;import org.directwebremoting.proxy.ScriptProxy;public class ChatRoomDatabase {   private static List<String> chatContent = new Vector<String>();   public ChatRoomDatabase() {   }   public void postMessage(String message) {      String user = (new Util()).getCurrentUserName();      if (user != null) {         Date time = new Date();         StringBuffer sb = new StringBuffer();         sb.append(time.toString());         sb.append(" <b><i>");         sb.append(user);         sb.append("</i></b>:  ");         sb.append(message);         sb.append("<br/>");         String newMessage=sb.toString();         chatContent.add(newMessage);         postNewMessage(newMessage);      }   }   public List<String> getChatContent() {      return chatContent;   }   private ScriptProxy getScriptProxyForSessions() {      WebContext webContext = WebContextFactory.get();      ServletContext servletContext = webContext.getServletContext();      ServerContext serverContext = ServerContextFactory.get(servletContext);      webContext.getScriptSessionsByPage("");      String contextPath = servletContext.getContextPath();      if (contextPath != null) {         Collection<ScriptSession> sessions = serverContext               .getScriptSessionsByPage(contextPath + "/mainpage.jsp");         ScriptProxy proxy = new ScriptProxy(sessions);         return proxy;      }      return null;   }   public void 
postNewMessage(String newMessage) {      ScriptProxy proxy = getScriptProxyForSessions();      if (proxy != null) {         proxy.addFunctionCall("newMessage",newMessage);      }   }} The Chatroom code is surprisingly simple. The chat content is stored in a Vector of Strings. The getChatContent()method just returns the chat content Vector to the browser. The postMessage()method is called when the user sends a new chat message. The method verifies whether the user is logged in, and adds the current time and username to the chat message and then appends the message to the chat content. The method also calls the postNewMessage() method that is used to show new chat content to all logged-in users. Note that the postMessage() method does not return any value. We let DWR and reverse AJAX functionality show the chat message to all users, including the user who sent the message. The getScriptProxyForSessions() and postNewMessage() methods use reverse AJAX to update the chat areas of all logged-in users with the new message. And that is it! The chat room sample is very straightforward and basic functionality is already in place, and the application is ready for further development. Testing the Chat We test the chat room application with three users: Smith, Brown, and Jones. We have given some screenshots of a typical scenario in a chat room here. Both Smith and Brown log into the system and exchange some messages. Both users see empty chat rooms when they log in and start chatting. The empty area that is above the send message input field is reserved for chat content. Smith and Brown exchange some messages as is seen in the following screenshot: The third user, Jones, joins the chat and sees all the previous messages in the chat room. Jones then exchanges some messages with Smith and Brown. Smith and Brown log out from the system leaving Jones alone in the chat room (until she also logs out). This is visible in the following screenshot: Summary This sample application showed how to use DWR in a chat room application. This application makes it clear that DWR makes development of these kind of collaborative applications very easy. DWR itself does not even play a big part in the applications. DWR is just a transparent feature of the application. So developers can concentrate on the actual project and aspects such as persistence of data and a neat user interface, instead of the low-level details of AJAX.    

JBoss AS Perspective

Packt
23 Oct 2009
4 min read
As you know, Eclipse offers an ingenious system of perspectives that helps us to switch between different technologies and to keep the main screen as clean as possible. Every perspective is made up of a set of components that can be added/removed by the user. These components are known as views. The JBoss AS Perspective has a set of specific views, as follows:

- JBoss Server View
- Project Archives View
- Console View
- Properties View

For launching the JBoss AS Perspective (or any other perspective), follow these two simple steps:

1. From the Window menu, select Open Perspective | Other.
2. In the Open Perspective window, select the JBoss AS option and click on the OK button (as shown in the following screenshot).

If everything works fine, you should see the JBoss AS perspective as shown in the following screenshot:

If any of these views is not available by default in your JBoss AS perspective, then you can add it manually by selecting the Show View | Other option from the Window menu. In the Show View window (shown in the following screenshot), you just select the desired view and click on the OK button.

JBoss Server View

This view contains a simple toolbar, known as the JBoss Server View Toolbar, and two panels that separate the list of servers (top part) from the list of additional information about the selected server (bottom part). Note that the amount of additional information is directly related to the server type.

Top part of JBoss Server View

In the top part of the JBoss Server View, we can see a list of our servers, their states, and whether they are running or stopped.

Starting the JBoss AS

The simplest ways to start our JBoss AS server are:

- Select the JBoss 4.2 Server from the server list and click the Start the server button on the JBoss Server View Toolbar (as shown in the following screenshot).
- Select the JBoss 4.2 Server from the server list and right-click on it. From the context menu, select the Start option (as shown in the following screenshot).

In both cases, the detailed progress of the startup process will be displayed in the Console View, as you can see in the following screenshot.

Stopping the JBoss AS

The simplest ways to stop the JBoss AS server are:

- Select the JBoss 4.2 Server from the server list and click the Stop the server button on the JBoss Server View Toolbar.
- Select the JBoss 4.2 Server from the server list and right-click on it. From the context menu, select the Stop option.

In both cases, the detailed progress of the stopping process will be displayed in the Console View, as you can see in the following screenshot.

Additional operations on JBoss AS

Besides the Start and Stop operations, the JBoss Server View allows us to:

- Add a new server (the New Server option from the contextual menu)
- Remove an existing server (the Delete option from the contextual menu)
- Start the server in debug mode (first button on the JBoss Server View Toolbar)
- Start the server in profiling mode (third button on the JBoss Server View Toolbar)
- Publish to the server or sync the publish information between the server and the workspace (the Publish option from the contextual menu or the last button on the JBoss Server View Toolbar)
- Discard all publish state and republish from scratch (the Clean option from the contextual menu)
- Twiddle the server (the Twiddle Server option from the contextual menu)
- Edit the launch configuration (the Edit Launch Configuration option from the contextual menu, as shown in the following screenshot)
- Add/remove projects (the Add and Remove Projects option from the contextual menu)
- Double-click the server name and modify parts of that server in the Server Editor; if you have a username and a password to start the server, then you can specify those credentials here (as shown in the following screenshot).

Twiddle is a JMX library that comes with JBoss, and it is used to access (any) variables that are exposed via the JBoss JMX interfaces.

Server publish status

A server may have one of the following statuses:

- Synchronized: Allows you to see if changes are in sync (as shown in the following screenshot)
- Publishing: Allows you to see if changes are being updated
- Republish: Allows you to see if changes are waiting