How-To Tutorials - Application Development

Process Driven SOA Development

Packt
13 Sep 2010
9 min read
(For more resources on Oracle, see here.)

Business Process Management and SOA

One of the major benefits of a Service-Oriented Architecture is its ability to align IT with business processes. Business processes are important because they define the way business activities are performed. Business processes change as the company evolves and improves its operations. They also change in order to make the company more competitive.

Today, IT is an essential part of business operations. Companies are simply unable to do business without IT support. However, this places a high level of responsibility on IT. An important part of this responsibility is the ability of IT to react to changes in a quick and efficient manner. Ideally, IT should respond instantly to business process changes. In most cases, however, IT is not flexible enough to adapt the application architecture to changes in business processes quickly. Software developers require time to modify application behavior. In the meantime, the company is stuck with old processes. In a highly competitive marketplace such delays are dangerous, and the threat is exacerbated by a reliance on traditional software development to make quick changes within an increasingly complex IT architecture.

The major problem with traditional approaches to software development is the huge semantic gap between IT and the process models. The traditional approach to software development has been focused on functionalities rather than on end-to-end support for business processes. It usually requires the definition of use cases, sequence diagrams, class diagrams, and other artifacts, which bring us to the actual code in a programming language such as Java, C#, C++, and so on.

SOA reduces the semantic gap by introducing a development model that aligns the IT development cycle with the business process lifecycle. In SOA, business processes can be executed directly and integrated with existing applications through services. To understand this better, let's look at the four phases of the SOA lifecycle:

Process modeling: This is the phase in which process analysts work with process owners to analyze the business process and define the process model. They define the activity flow, information flow, roles, and business documents. They also define business policies and constraints, business rules, and performance measures. Performance measures are often called Key Performance Indicators (KPIs). Examples of KPIs include activity turnaround time, activity cost, and so on. Usually Business Process Modeling Notation (BPMN) is used in this phase.

Process implementation: This is the phase in which developers work with process analysts to implement the business process, with the objective of providing end-to-end support for the process. In an SOA approach, the process implementation phase includes process implementation with the Business Process Execution Language (BPEL), process decomposition to the services, implementation or reuse of services, and integration.

Process execution and control: This is the actual execution phase, in which the process participants execute the various activities of the process. In end-to-end support for business processes, it is very important that IT drives the process and directs process participants to execute activities, and not vice versa, where the actual process drivers are employees. In SOA, processes execute on a process server.
Process control is an important part of this phase, during which process supervisors or process managers verify whether the process is executing optimally. If delays occur, exceptions arise, resources are unavailable, or other problems develop, process supervisors or managers can take corrective actions.

Process monitoring and optimization: This is the phase in which process owners monitor the KPIs of the process using Business Activity Monitoring (BAM). Process analysts, process owners, process supervisors, and key users examine the process and analyze the KPIs while taking into account changing business conditions. They examine business issues and make optimizations to the business process.

The following figure shows how a process enters this cycle and goes through the various stages.

Once optimizations have been identified and selected, the process returns to the modeling phase, where the optimizations are applied. Then the process is re-implemented and the whole lifecycle is repeated. This is referred to as an iterative-incremental lifecycle, because the process is improved at each stage.

Organizational aspects of SOA development

SOA development, as described in the previous section, differs considerably from traditional development. SOA development is process-centric and keeps the modeler and the developer focused on the business process and on end-to-end support for the process, thereby efficiently reducing the gap between business and IT.

The success of the SOA development cycle relies on correct process modeling. Only when processes are modeled in detail can we develop end-to-end support that will work. Exceptional process flows also have to be considered. This can be a difficult task, one that is beyond the scope of the IT department (particularly when viewed from the traditional perspective). To make process-centric SOA projects successful, some organizational changes are required. Business users with a good understanding of the process must be motivated to participate actively in the process modeling. Their active participation must not be taken for granted, lest they find other work "more useful," particularly if they do not see the added value of process modeling. Therefore, a concise explanation as to why process modeling makes sense can be a very valuable time investment.

A good strategy is to gain top management support. It makes enormous sense to explain two key factors to top management: first, why a process-centric approach and end-to-end support for processes make sense, and second, why the IT department cannot successfully complete the task without the participation of business users. Usually top management will understand the situation rather quickly and will instruct business users to participate.

Obviously, the proposed process-centric development approach must become an ongoing activity. This will require the formalization of certain organizational structures. Otherwise, it will be necessary to seek approval for each and every project. We have already seen that the proposed approach outgrows the organizational limits of the IT department. Many organizations establish a BPM/SOA Competency Center, which includes business users and all the other profiles required for SOA development. This also includes the process analyst, process implementation, service development, and presentation layer groups, as well as SOA governance. Perhaps the greatest responsibility of SOA development is to orchestrate the aforementioned groups so that they work towards a common goal.
This is the responsibility of the project manager, who must work in close connection with the governance group. Only in this way can SOA development be successful, both in the short term (developing end-to-end applications for business processes) and in the long term (developing a flexible, agile IT architecture that is aligned with business needs).

Technology aspects of SOA development

SOA introduces technologies and languages that enable the SOA development approach. Particularly important are BPMN, which is used for business process modeling, and BPEL, which is used for business process execution.

BPMN is the key technology for process modeling. The process analyst group must have in-depth knowledge of BPMN and process modeling concepts. When modeling processes for SOA, they must be modeled in detail. Using SOA, we model business processes with the objective of implementing them in BPEL and executing them on the process server. Process models can be made executable only if all the relevant information needed for the actual execution is captured. We must identify individual activities that are atomic from the perspective of the execution. We must model exceptional scenarios too. Exceptional scenarios define how the process behaves when something goes wrong, and in the real world, business processes can and do go wrong. We must model how to react to exceptional situations and how to recover appropriately.

Next, we automate the process. This requires mapping the BPMN process model into an executable representation in BPEL. This is the responsibility of the process implementation group. BPMN can be converted to BPEL almost automatically, and vice versa, which guarantees that the process map is always in sync with the executable code. However, the executable BPEL process also has to be connected with the business services. Each process activity is connected with the corresponding business service. Business services are responsible for fulfilling the individual process activities.

SOA development is most efficient if you have a portfolio of business services that can be reused, and which includes lower-level and intermediate technical services. Business services can be developed from scratch, exposed from existing systems, or outsourced. This task is the responsibility of the service development group. In theory, it makes sense for the service development group to first develop all business services, and only then would the process implementation group start to compose those services into processes. However, in the real world this is often not the case, because you will probably not have the luxury of time to develop the services first and only then start the processes. And even if you do have enough time, it would be difficult to know which business services will be required by the processes. Therefore, both groups usually work in parallel, which is a great challenge. It requires interaction between them and strict, concise supervision by the SOA governance group and the project manager; otherwise, the results of the two groups (the process implementation group and the service development group) will be incompatible.

Once you have successfully implemented the process, it can be deployed on the process server. In addition to executing processes, a process server provides other valuable information, including a process audit trail, a list of successfully completed processes, and a list of terminated or failed processes.
This information is helpful in controlling the process execution and in taking any necessary corrective measures. The services and processes communicate using the Enterprise Service Bus (ESB). The services and processes are registered in the UDDI-compliant service registry. Another part of the architecture is the rule engine, which serves as a central place for business rules. For processes with human tasks, user interaction is obviously important, and is connected to identity management. The SOA platform also provides BAM. BAM helps to measure the key performance indicators of the process, and provides valuable data that can be used to optimize processes. The ultimate goal of each BAM user is to optimize process execution, to improve process efficiency, and to sense and react to important events. BAM ensures that we start optimizing processes where it makes most sense. Traditionally, process optimization has been based on simulation results, or even worse, by guessing where bottlenecks might be. BAM, on the other hand, gives more reliable and accurate data, which leads to better decisions about where to start with optimizations. The following figure illustrates the SOA layers:

Python Multimedia: Fun with Animations using Pyglet

Packt
31 Aug 2010
8 min read
(For more resources on Python, see here.)

So let's get on with it.

Installation prerequisites

We will cover the prerequisites for the installation of Pyglet in this section.

Pyglet

Pyglet provides an API for multimedia application development using Python. It is an OpenGL-based library that works on multiple platforms. It is primarily used for developing gaming applications and other graphically-rich applications. Pyglet can be downloaded from http://www.pyglet.org/download.html. Install Pyglet version 1.1.4 or later. The Pyglet installation is pretty straightforward.

Windows platform

For Windows users, the Pyglet installation is straightforward: use the binary distribution Pyglet 1.1.4.msi or later. You should have Python 2.6 installed. For Python 2.4, there are some more dependencies. We won't discuss them in this article, because we are using Python 2.6 to build multimedia applications. If you install Pyglet from the source, see the instructions under the next sub-section, Other platforms.

Other platforms

The Pyglet website provides a binary distribution file for Mac OS X. Download and install pyglet-1.1.4.dmg or later. On Linux, install Pyglet 1.1.4 or later if it is available in the package repository of your operating system. Otherwise, it can be installed from the source tarball as follows:

Download and extract the tarball pyglet-1.1.4.tar.gz or a later version. Make sure that python is a recognizable command in the shell. Otherwise, add the directory of the correct Python executable to the PATH environment variable. In a shell window, change to the extracted directory and then run the following command:

python setup.py install

Review the remaining installation instructions in the readme/install files within the Pyglet source tarball. If you have the setuptools package (http://pypi.python.org/pypi/setuptools), the Pyglet installation should be very easy. However, for this, you will need a runtime egg of Pyglet. But the egg file for Pyglet is not available at http://pypi.python.org. If you get hold of a Pyglet egg file, it can be installed by running the following command on Linux or Mac OS X. You will need administrator access to install the package:

$sudo easy_install -U pyglet

Summary of installation prerequisites

Package: Python
Download location: http://python.org/download/releases/
Version: 2.6.4 (or any 2.6.x)
Windows platform: Install using the binary distribution.
Linux/Unix/OS X platforms: Install from binary; also install additional developer packages (for example, packages with python-devel in the name in an rpm-based Linux distribution), or build and install from the source tarball.

Package: Pyglet
Download location: http://www.pyglet.org/download.html
Version: 1.1.4 or later
Windows platform: Install using the binary distribution (the .msi file).
Linux/Unix/OS X platforms: Mac: install using the disk image file (.dmg file). Linux: build and install using the source tarball.

Testing the installation

Before proceeding further, ensure that Pyglet is installed properly. To test this, just start Python from the command line and type the following:

>>>import pyglet

If this import is successful, we are all set to go!

A primer on Pyglet

Pyglet provides an API for multimedia application development using Python. It is an OpenGL-based library that works on multiple platforms. It is primarily used for developing gaming and other graphically-rich applications. We will cover some important aspects of the Pyglet framework.

Important components

We will briefly discuss some of the important modules and packages of Pyglet that we will use.
Note that this is just a tiny chunk of the Pyglet framework. Please review the Pyglet documentation to learn more about its capabilities, as this is beyond the scope of this article.

Window

The pyglet.window.Window module provides the user interface. It is used to create a window with an OpenGL context. The Window class has API methods to handle various events such as mouse and keyboard events. The window can be viewed in normal or full screen mode. Here is a simple example of creating a Window instance. You can define a size by specifying width and height arguments in the constructor.

win = pyglet.window.Window()

The background color for the window can be set using the OpenGL call glClearColor, as follows:

pyglet.gl.glClearColor(1, 1, 1, 1)

This sets a white background color. The first three arguments are the red, green, and blue color values, whereas the last value represents the alpha. The following code will set up a gray background color.

pyglet.gl.glClearColor(0.5, 0.5, 0.5, 1)

The following illustration shows a screenshot of an empty window with a gray background color.

Image

The pyglet.image module enables the drawing of images on the screen. The following code snippet shows a way to create an image and display it at a specified position within the Pyglet window.

img = pyglet.image.load('my_image.bmp')
x, y, z = 0, 0, 0
img.blit(x, y, z)

A later section will cover some important operations supported by the pyglet.image module.

Sprite

This is another important module. It is used to display an image or an animation frame within a Pyglet window, as discussed earlier. It wraps an image instance and allows us to position the image anywhere within the Pyglet window. A sprite can also be rotated and scaled. It is possible to create multiple sprites of the same image and place them at different locations and with different orientations inside the window.

Animation

The Animation module is part of the pyglet.image package. As the name indicates, pyglet.image.Animation is used to create an animation from one or more image frames. There are different ways to create an animation. For example, it can be created from a sequence of images or using AnimationFrame objects. An animation sprite can be created and displayed within the Pyglet window.

AnimationFrame

This creates a single frame of an animation from a given image. An animation can be created from such AnimationFrame objects. The following line of code shows an example.

animation = pyglet.image.Animation(anim_frames)

anim_frames is a list containing instances of AnimationFrame.

Clock

Among many other things, this module is used for scheduling functions to be called at a specified time. For example, the following code calls a method moveObjects ten times every second.

pyglet.clock.schedule_interval(moveObjects, 1.0/10)

Displaying an image

In the Image sub-section, we learned how to load an image using image.blit. However, image blitting is a less efficient way of drawing images. There is a better and preferred way to display an image: creating an instance of Sprite. Multiple Sprite objects can be created for drawing the same image. For example, the same image might need to be displayed at various locations within the window. Each of these images should be represented by a separate Sprite instance. The following simple program just loads an image and displays the Sprite instance representing this image on the screen.
1 import pyglet
2
3 car_img = pyglet.image.load('images/car.png')
4 carSprite = pyglet.sprite.Sprite(car_img)
5 window = pyglet.window.Window()
6 pyglet.gl.glClearColor(1, 1, 1, 1)
7
8 @window.event
9 def on_draw():
10     window.clear()
11     carSprite.draw()
12
13 pyglet.app.run()

On line 3, the image is opened using the pyglet.image.load call. A Sprite instance corresponding to this image is created on line 4. The code on line 6 sets a white background for the window. The on_draw is an API method that is called when the window needs to be redrawn. Here, the image sprite is drawn on the screen. The next illustration shows a loaded image within a Pyglet window.

In various examples in this article, the file path strings are hardcoded. We have used forward slashes for the file path. Although this works on the Windows platform, the convention there is to use backslashes. For example, images/car.png would be written as images\car.png. Additionally, you can also specify a complete path to the file by using the os.path.join method in Python. Regardless of which slashes you use, os.path.normpath will make sure it modifies the slashes to fit the ones used for the platform. The use of os.path.normpath is illustrated in the following snippet:

import os
original_path = 'C:/images/car.png'
new_path = os.path.normpath(original_path)

The preceding image illustrates a Pyglet window showing a still image.

Mouse and keyboard controls

The Window module of Pyglet implements some API methods that enable user input to a playing animation. API methods such as on_mouse_press and on_key_press are used to capture mouse and keyboard events during the animation. These methods can be overridden to perform a specific operation.

Adding sound effects

The media module of Pyglet supports audio and video playback. The following code loads a media file and plays it during the animation.

1 background_sound = pyglet.media.load(
2     'C:/AudioFiles/background.mp3',
3     streaming=False)
4 background_sound.play()

The second, optional argument provided on line 3 makes the media file decode completely in memory at the time the media is loaded. This is important if the media needs to be played several times during the animation. The API method play() starts streaming the specified media file.
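To round off the primer, here is a short, hedged sketch that ties together the Window, AnimationFrame, Animation, and Sprite pieces described above. The frame file names under images/ are hypothetical placeholders for illustration; substitute your own image files.

import pyglet

window = pyglet.window.Window()
# White background, as in the earlier glClearColor example.
pyglet.gl.glClearColor(1, 1, 1, 1)

# Build one AnimationFrame per image; each frame is shown for 0.2 seconds.
# The file names below are placeholders.
anim_frames = []
for name in ('images/frame1.png', 'images/frame2.png', 'images/frame3.png'):
    img = pyglet.image.load(name)
    anim_frames.append(pyglet.image.AnimationFrame(img, 0.2))

animation = pyglet.image.Animation(anim_frames)
# A Sprite can wrap an animation as well as a still image.
sprite = pyglet.sprite.Sprite(animation)

@window.event
def on_draw():
    window.clear()
    sprite.draw()

pyglet.app.run()

Running this should open a window that cycles through the three frames; the next article builds a complete animation along these lines.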

Python Multimedia: Animation Examples using Pyglet

Packt
31 Aug 2010
7 min read
(For more resources on Python, see here.)

Single image animation

Imagine that you are creating a cartoon movie where you want to animate the motion of an arrow or a bullet hitting a target. In such cases, typically it is just a single image. The desired animation effect is accomplished by performing appropriate translation or rotation of the image.

Time for action – bouncing ball animation

Let's create a simple animation of a 'bouncing ball'. We will use a single image file, ball.png, which can be downloaded from the Packt website. The dimensions of this image in pixels are 200x200, created on a transparent background. The following screenshot shows this image opened in the GIMP image editor. The three dots on the ball identify its side. We will see why this is needed. Imagine this as a ball used in a bowling game.

The image of a ball opened in GIMP appears as shown in the preceding image. The ball size in pixels is 200x200.

Download the files SingleImageAnimation.py and ball.png from the Packt website. Place the ball.png file in a sub-directory 'images' within the directory in which SingleImageAnimation.py is saved. The following code snippet shows the overall structure of the code.

1 import pyglet
2 import time
3
4 class SingleImageAnimation(pyglet.window.Window):
5     def __init__(self, width=600, height=600):
6         pass
7     def createDrawableObjects(self):
8         pass
9     def adjustWindowSize(self):
10         pass
11     def moveObjects(self, t):
12         pass
13     def on_draw(self):
14         pass
15 win = SingleImageAnimation()
16 # Set window background color to gray.
17 pyglet.gl.glClearColor(0.5, 0.5, 0.5, 1)
18
19 pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)
20
21 pyglet.app.run()

Although it is not required, we will encapsulate event handling and other functionality within a class SingleImageAnimation. The program to be developed is short, but in general, this is good coding practice. It will also be good for any future extension to the code. An instance of SingleImageAnimation is created on line 15. This class is inherited from pyglet.window.Window. It encapsulates the functionality we need here. The API method on_draw is overridden by the class. on_draw is called when the window needs to be redrawn. Note that we no longer need a decorator statement such as @win.event above the on_draw method because the window API method is simply overridden by this inherited class.

The constructor of the class SingleImageAnimation is as follows:

1 def __init__(self, width=None, height=None):
2     pyglet.window.Window.__init__(self,
3         width=width,
4         height=height,
5         resizable = True)
6     self.drawableObjects = []
7     self.rising = False
8     self.ballSprite = None
9     self.createDrawableObjects()
10     self.adjustWindowSize()

As mentioned earlier, the class SingleImageAnimation inherits pyglet.window.Window. However, its constructor doesn't take all the arguments supported by its superclass. This is because we don't need to change most of the default argument values. If you want to extend this application further and need these arguments, you can do so by adding them as __init__ arguments. The constructor initializes some instance variables and then calls methods to create the animation sprite and resize the window respectively. The method createDrawableObjects creates a sprite instance using the ball.png image.
1 def createDrawableObjects(self):
2     """
3     Create sprite objects that will be drawn within the
4     window.
5     """
6     ball_img = pyglet.image.load('images/ball.png')
7     ball_img.anchor_x = ball_img.width / 2
8     ball_img.anchor_y = ball_img.height / 2
9
10     self.ballSprite = pyglet.sprite.Sprite(ball_img)
11     self.ballSprite.position = (
12         self.ballSprite.width + 100,
13         self.ballSprite.height*2 - 50)
14     self.drawableObjects.append(self.ballSprite)

The anchor_x and anchor_y properties of the image instance are set such that the image has an anchor exactly at its center. This will be useful while rotating the image later. On line 10, the sprite instance self.ballSprite is created. Later, we will be setting the width and height of the Pyglet window as thrice the sprite width and thrice the sprite height. The position of the image within the window is set on line 11. The initial position is chosen as shown in the next screenshot. In this case, there is only one Sprite instance. However, to make the program more general, a list of drawable objects called self.drawableObjects is maintained.

To continue the discussion from the previous step, we will now review the on_draw method.

def on_draw(self):
    self.clear()
    for d in self.drawableObjects:
        d.draw()

As mentioned previously, on_draw is an API method of the class pyglet.window.Window that is called when the window needs to be redrawn. This method is overridden here. The self.clear() call clears the previously drawn contents within the window. Then, all the Sprite objects in the list self.drawableObjects are drawn in the for loop.

The preceding image illustrates the initial ball position in the animation.

The method adjustWindowSize sets the width and height parameters of the Pyglet window. The code is self-explanatory:

def adjustWindowSize(self):
    w = self.ballSprite.width * 3
    h = self.ballSprite.height * 3
    self.width = w
    self.height = h

So far, we have set up everything for the animation to play. Now comes the fun part. We will change the position of the sprite representing the image to achieve the animation effect. During the animation, the image will also be rotated, to give it the natural feel of a bouncing ball.

1 def moveObjects(self, t):
2     if self.ballSprite.y - 100 < 0:
3         self.rising = True
4     elif self.ballSprite.y > self.ballSprite.height*2 - 50:
5         self.rising = False
6
7     if not self.rising:
8         self.ballSprite.y -= 5
9         self.ballSprite.rotation -= 6
10     else:
11         self.ballSprite.y += 5
12         self.ballSprite.rotation += 5

This method is scheduled to be called 20 times per second using the following code in the program.

pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)

To start with, the ball is placed near the top. The animation should be such that it gradually falls down, hits the bottom, and bounces back. After this, it continues its upward journey to hit a boundary somewhere near the top, and again it begins its downward journey. The code block from lines 2 to 5 checks the current y position of self.ballSprite. If it has hit the upward limit, the flag self.rising is set to False. Likewise, when the lower limit is hit, the flag is set to True. The flag is then used by the next code block to increment or decrement the y position of self.ballSprite. Lines 9 and 12 rotate the Sprite instance. The current rotation angle is incremented or decremented by the given value. This is the reason why we set the image anchors, anchor_x and anchor_y, at the center of the image. The Sprite object honors these image anchors.
If the anchors are not set this way, the ball will be seen wobbling in the resultant animation. Once all the pieces are in place, run the program from the command line as:

$python SingleImageAnimation.py

This will pop up a window that will play the bouncing ball animation. The next illustration shows some intermediate frames from the animation while the ball is falling down.

What just happened?

We learned how to create an animation using just a single image. The image of a ball was represented by a sprite instance. This sprite was then translated and rotated on the screen to accomplish a bouncing ball animation. The whole functionality, including the event handling, was encapsulated in the class SingleImageAnimation.
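For convenience, the fragments above can be assembled into a single, self-contained version of the program. This is only a sketch reconstructed from the listings in this article (it assumes ball.png sits in an images sub-directory, as described earlier), not necessarily the exact file distributed on the Packt website.

import pyglet

class SingleImageAnimation(pyglet.window.Window):
    def __init__(self, width=600, height=600):
        pyglet.window.Window.__init__(self, width=width,
                                      height=height, resizable=True)
        self.drawableObjects = []
        self.rising = False
        self.ballSprite = None
        self.createDrawableObjects()
        self.adjustWindowSize()

    def createDrawableObjects(self):
        # Anchor the image at its center so rotation looks natural.
        ball_img = pyglet.image.load('images/ball.png')
        ball_img.anchor_x = ball_img.width / 2
        ball_img.anchor_y = ball_img.height / 2
        self.ballSprite = pyglet.sprite.Sprite(ball_img)
        self.ballSprite.position = (self.ballSprite.width + 100,
                                    self.ballSprite.height*2 - 50)
        self.drawableObjects.append(self.ballSprite)

    def adjustWindowSize(self):
        self.width = self.ballSprite.width * 3
        self.height = self.ballSprite.height * 3

    def moveObjects(self, t):
        # Reverse direction at the bottom and top limits.
        if self.ballSprite.y - 100 < 0:
            self.rising = True
        elif self.ballSprite.y > self.ballSprite.height*2 - 50:
            self.rising = False
        if not self.rising:
            self.ballSprite.y -= 5
            self.ballSprite.rotation -= 6
        else:
            self.ballSprite.y += 5
            self.ballSprite.rotation += 5

    def on_draw(self):
        self.clear()
        for d in self.drawableObjects:
            d.draw()

win = SingleImageAnimation()
pyglet.gl.glClearColor(0.5, 0.5, 0.5, 1)  # gray background
pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)
pyglet.app.run()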

Python Multimedia: Working with Audios

Packt
30 Aug 2010
14 min read
(For more resources on Python, see here.)

So let's get on with it!

Installation prerequisites

Since we are going to use an external multimedia framework, it is necessary to install the packages mentioned in this section.

GStreamer

GStreamer is a popular open source multimedia framework that supports audio/video manipulation of a wide range of multimedia formats. It is written in the C programming language and provides bindings for other programming languages, including Python. Several open source projects use the GStreamer framework to develop their own multimedia applications. Throughout this article, we will make use of the GStreamer framework for audio handling. In order to get this working with Python, we need to install both GStreamer and the Python bindings for GStreamer.

Windows platform

The binary distribution of GStreamer is not provided on the project website http://www.gstreamer.net/. Installing it from the source may require considerable effort on the part of Windows users. Fortunately, the GStreamer WinBuilds project provides pre-compiled binary distributions. Here is the URL to the project website: http://www.gstreamer-winbuild.ylatuya.es

The binary distributions for GStreamer as well as its Python bindings (Python 2.6) are available in the Download area of the website: http://www.gstreamer-winbuild.ylatuya.es/doku.php?id=download

You need to install two packages: first GStreamer, and then the Python bindings to GStreamer. Download and install the GPL distribution of GStreamer available on the GStreamer WinBuilds project website. The name of the GStreamer executable is GStreamerWinBuild-0.10.5.1.exe. The version should be 0.10.5 or higher. By default, this installation will create a folder C:\gstreamer on your machine. The bin directory within this folder contains the runtime libraries needed while using GStreamer.

Next, install the Python bindings for GStreamer. The binary distribution is available on the same website. Use the executable Pygst-0.10.15.1-Python2.6.exe pertaining to Python 2.6. The version should be 0.10.15 or higher.

GStreamer WinBuilds appears to be an independent project. It is based on the OSSBuild developing suite. Visit http://code.google.com/p/ossbuild/ for more information. It could happen that the GStreamer binary built with Python 2.6 is no longer available on the mentioned website at the time you are reading this book. Therefore, it is advised that you contact the developer community of OSSBuild. Perhaps they might help you out!

Alternatively, you can build GStreamer from source on the Windows platform, using a Linux-like environment for Windows, such as Cygwin (http://www.cygwin.com/). Under this environment, you can first install dependent software packages such as Python 2.6, the gcc compiler, and others. Download the gst-python-0.10.17.2.tar.gz package from the GStreamer website http://www.gstreamer.net/. Then extract this package and install it from sources using the Cygwin environment. The INSTALL file within this package will have installation instructions.

Other platforms

Many of the Linux distributions provide a GStreamer package. You can search for the appropriate gst-python distribution (for Python 2.6) in the package repository. If such a package is not available, install gst-python from source as discussed earlier in the Windows platform section. If you are a Mac OS X user, visit http://py26-gst-python.darwinports.com/.
It has detailed instructions on how to download and install the package Py26-gst-python version 0.10.17 (or higher). Mac OS X 10.5.x (Leopard) comes with the Python 2.5 distribution. If you are using packages built against this default version of Python, GStreamer Python bindings for Python 2.5 are available on the darwinports website: http://gst-python.darwinports.com/

PyGObject

There is a free multiplatform software utility library called 'GLib'. It provides data structures such as hash maps, linked lists, and so on. It also supports the creation of threads. The 'object system' of GLib is called GObject. Here, we need to install the Python bindings for GObject. The Python bindings are available on the PyGTK website at: http://www.pygtk.org/downloads.html.

Windows platform

The binary installer is available on the PyGTK website. The complete URL is: http://ftp.acc.umu.se/pub/GNOME/binaries/win32/pygobject/2.20/?. Download and install version 2.20 for Python 2.6.

Other platforms

For Linux, the source tarball is available on the PyGTK website. There could even be a binary distribution in the package repository of your Linux operating system. The direct link to version 2.21 of PyGObject (source tarball) is: http://ftp.gnome.org/pub/GNOME/sources/pygobject/2.21/

If you are a Mac user and you have Python 2.6 installed, a distribution of PyGObject is available at http://py26-gobject.darwinports.com/. Install version 2.14 or later.

Summary of installation prerequisites

The following table summarizes the packages needed for this article.

Package: GStreamer
Download location: http://www.gstreamer.net/
Version: 0.10.5 or later
Windows platform: Install using the binary distribution available on the GStreamer WinBuild website: http://www.gstreamer-winbuild.ylatuya.es/doku.php?id=download. Use GStreamerWinBuild-0.10.5.1.exe (or a later version if available).
Linux/Unix/OS X platforms: Linux: use the GStreamer distribution in the package repository. Mac OS X: download and install by following the instructions on the website http://gstreamer.darwinports.com/.

Package: Python bindings for GStreamer
Download location: http://www.gstreamer.net/
Version: 0.10.15 or later for Python 2.6
Windows platform: Use the binary provided by the GStreamer WinBuild project. See http://www.gstreamer-winbuild.ylatuya.es for details pertaining to Python 2.6.
Linux/Unix/OS X platforms: Linux: use the gst-python distribution in the package repository. Mac OS X: use this package (if you are using Python 2.6): http://py26-gst-python.darwinports.com/. Linux/Mac: build and install from the source tarball.

Package: Python bindings for GObject "PyGObject"
Download location: Source distribution: http://www.pygtk.org/downloads.html
Version: 2.14 or later for Python 2.6
Windows platform: Use the binary package pygobject-2.20.0.win32-py2.6.exe.
Linux/Unix/OS X platforms: Linux: install from source if pygobject is not available in the package repository. Mac: use the package on darwinports (if you are using Python 2.6); see http://py26-gobject.darwinports.com/ for details.

Testing the installation

Ensure that GStreamer and its Python bindings are properly installed. It is simple to test this. Just start Python from the command line and type the following:

>>>import pygst

If there is no error, it means the Python bindings are installed properly. Next, type the following:

>>>pygst.require("0.10")
>>>import gst

If this import is successful, we are all set to use GStreamer for processing audios and videos! If import gst fails, it will probably complain that it is unable to load some required DLL/shared object.
In this case, check your environment variables and make sure that the PATH variable has the correct path to the gstreamer/bin directory. The following lines of code in a Python interpreter show the typical location of the pygst and gst modules on the Windows platform.

>>> import pygst
>>> pygst
<module 'pygst' from 'C:\Python26\lib\site-packages\pygst.pyc'>
>>> pygst.require('0.10')
>>> import gst
>>> gst
<module 'gst' from 'C:\Python26\lib\site-packages\gst-0.10\gst\__init__.pyc'>

Next, test if PyGObject is successfully installed. Start the Python interpreter and try importing the gobject module.

>>>import gobject

If this works, we are all set to proceed!

A primer on GStreamer

In this article, we will be using the GStreamer multimedia framework extensively. Before we move on to the topics that teach us various audio processing techniques, a primer on GStreamer is necessary. So what is GStreamer? It is a framework on top of which one can develop multimedia applications. The rich set of libraries it provides makes it easier to develop applications with complex audio/video processing capabilities. Fundamental components of GStreamer are briefly explained in the coming sub-sections. Comprehensive documentation is available on the GStreamer project website. The GStreamer Application Development Manual is a very good starting point. In this section, we will briefly cover some of the important aspects of GStreamer. For further reading, you are recommended to visit the GStreamer project website: http://www.gstreamer.net/documentation/

gst-inspect and gst-launch

We will start by learning the two important GStreamer commands. GStreamer can be run from the command line, by calling gst-launch-0.10.exe (on Windows) or gst-launch-0.10 (on other platforms). The following command shows a typical execution of GStreamer on Linux. We will see what a pipeline means in the next sub-section.

$gst-launch-0.10 pipeline_description

GStreamer has a plugin architecture. It supports a huge number of plugins. To see more details about any plugin in your GStreamer installation, use the command gst-inspect-0.10 (gst-inspect-0.10.exe on Windows). We will use this command quite often. Use of this command is illustrated here.

$gst-inspect-0.10 decodebin

Here, decodebin is a plugin. Upon execution of the preceding command, it prints detailed information about the plugin decodebin.

Elements and pipeline

In GStreamer, the data flows in a pipeline. Various elements are connected together forming a pipeline, such that the output of the previous element is the input to the next one. A pipeline can be logically represented as follows:

Element1 ! Element2 ! Element3 ! Element4 ! Element5

Here, Element1 through Element5 are the element objects chained together by the symbol !. Each of the elements performs a specific task. One of the element objects performs the task of reading input data such as an audio or a video. Another element decodes the file read by the first element, whereas another element performs the job of converting this data into some other format and saving the output. As stated earlier, linking these element objects in a proper manner creates a pipeline. The concept of a pipeline is similar to the one used in Unix. Following is a Unix example of a pipeline. Here, the vertical separator | defines the pipe.

$ls -la | more

Here, ls -la lists all the files in a directory. However, sometimes this list is too long to be displayed in the shell window. So, adding | more allows a user to navigate the data.
Now let's see a realistic example of running GStreamer from the command prompt.

$ gst-launch-0.10 -v filesrc location=path/to/file.ogg ! decodebin ! audioconvert ! fakesink

For a Windows user, the gst command name would be gst-launch-0.10.exe. The pipeline is constructed by specifying different elements. The ! symbol links the adjacent elements, thereby forming the whole pipeline for the data to flow. For the Python bindings of GStreamer, the abstract base class for pipeline elements is gst.Element, whereas the gst.Pipeline class can be used to create a pipeline instance. In a pipeline, the data is sent to a separate thread where it is processed until it reaches the end or a termination signal is sent.

Plugins

GStreamer is a plugin-based framework. There are several plugins available. A plugin is used to encapsulate the functionality of one or more GStreamer elements. Thus we can have a plugin where multiple elements work together to create the desired output. The plugin itself can then be used as an abstract element in the GStreamer pipeline. An example is decodebin. We will learn about it in the upcoming sections. A comprehensive list of available plugins is available at the GStreamer website http://gstreamer.freedesktop.org. In almost all applications to be developed, the decodebin plugin will be used. For audio processing, the functionality provided by plugins such as gnonlin, audioecho, monoscope, interleave, and so on will be used.

Bins

In GStreamer, a bin is a container that manages the element objects added to it. A bin instance can be created using the gst.Bin class. It is inherited from gst.Element and can act as an abstract element representing a bunch of elements within it. The GStreamer plugin decodebin is a good example of a bin. The decodebin contains decoder elements. It auto-plugs the decoders to create the decoding pipeline.

Pads

Each element has some sort of connection points to handle data input and output. GStreamer refers to them as pads. Thus an element object can have one or more "receiver pads", termed sink pads, that accept data from the previous element in the pipeline. Similarly, there are 'source pads' that take the data out of the element as an input to the next element (if any) in the pipeline. The following is a very simple example that shows how source and sink pads are specified.

>gst-launch-0.10.exe fakesrc num-buffers=1 ! fakesink

The fakesrc is the first element in the pipeline. Therefore, it only has a source pad. It transmits the data to the next linked element, that is, fakesink, which only has a sink pad to accept the data. Note that, in this case, since these are fakesrc and fakesink, just empty buffers are exchanged. A pad is defined by the class gst.Pad. A pad can be attached to an element object using the gst.Element.add_pad() method. The following is a diagrammatic representation of a GStreamer element with a pad. It illustrates two GStreamer elements within a pipeline, having a single source and sink pad.

Now that we know how the pads operate, let's discuss some special types of pads. In the example, we assumed that the pads for the element are always 'out there'. However, there are some situations where the element doesn't have the pads available all the time. Such elements request the pads they need at runtime. Such a pad is called a dynamic pad. Another type of pad is called a ghost pad. These types are discussed in this section.

Dynamic pads

Some objects, such as decodebin, do not have pads defined when they are created.
Such elements determine the type of pad to be used at runtime. For example, depending on the media file input being processed, the decodebin will create a pad. This is often referred to as a dynamic pad, or sometimes the available pad, as it is not always available in elements such as decodebin.

Ghost pads

As stated in the Bins section, a bin object can act as an abstract element. How is that achieved? For that, the bin uses 'ghost pads' or 'pseudo link pads'. The ghost pads of a bin are used to connect an appropriate element inside it. A ghost pad can be created using the gst.GhostPad class.

Caps

The element objects send and receive data by using pads. The type of media data that the element objects will handle is determined by the caps (short for capabilities). It is a structure that describes the media formats supported by the element. The caps are defined by the class gst.Caps.

Bus

A bus refers to the object that delivers the messages generated by GStreamer. A message is a gst.Message object that informs the application about an event within the pipeline. A message is put on the bus using the gst.Bus.gst_bus_post() method. The following code shows an example usage of the bus.

1 bus = pipeline.get_bus()
2 bus.add_signal_watch()
3 bus.connect("message", message_handler)

The first line in the code obtains a gst.Bus instance. Here, pipeline is an instance of gst.Pipeline. On the next line, we add a signal watch so that the bus gives out all the messages posted on it. Line 3 connects the signal with a Python method. In this example, message is the signal string and the method it calls is message_handler.

Playbin/Playbin2

Playbin is a GStreamer plugin that provides a high-level audio/video player. It can handle a number of things, such as automatic detection of the input media file format, auto-determination of decoders, audio visualization, volume control, and so on. The following line of code creates a playbin element.

playbin = gst.element_factory_make("playbin")

It defines a property called uri. The URI (Uniform Resource Identifier) should be an absolute path to a file on your computer or on the Web. According to the GStreamer documentation, Playbin2 is just the latest unstable version, but once stable, it will replace Playbin. A Playbin2 instance can be created the same way as a Playbin instance, and it can be inspected as follows.

gst-inspect-0.10 playbin2

With this basic understanding, let us learn about various audio processing techniques using GStreamer and Python.
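As a closing illustration, the playbin element and the bus code shown above can be combined into a small, hedged sketch of an audio player. The file URI below is a placeholder, and using a gobject main loop (from the PyGObject package installed earlier) to keep the application running is an assumption about the surrounding application, not the only possible design.

import gobject
gobject.threads_init()
import pygst
pygst.require("0.10")
import gst

def message_handler(bus, message):
    # Stop playback on end-of-stream or on error.
    if message.type == gst.MESSAGE_EOS:
        player.set_state(gst.STATE_NULL)
        loop.quit()
    elif message.type == gst.MESSAGE_ERROR:
        print message.parse_error()
        player.set_state(gst.STATE_NULL)
        loop.quit()

# playbin auto-detects the media format and builds the decoding pipeline.
player = gst.element_factory_make("playbin")
player.set_property("uri", "file:///path/to/audio.mp3")  # placeholder URI

bus = player.get_bus()
bus.add_signal_watch()
bus.connect("message", message_handler)

player.set_state(gst.STATE_PLAYING)
loop = gobject.MainLoop()
loop.run()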

Microsoft LightSwitch Application using SQL Azure Database

Packt
30 Aug 2010
3 min read
(For more resources on Microsoft, see here.)

Your computer has to satisfy the system requirements that you can look up at the product site (while downloading), and you should have an account on Microsoft Windows Azure Services. Although this article retrieves data from SQL Azure, you can retrieve data from a local server or other data sources as well. However, it is presently limited to SQL Server databases. The article content was developed using Microsoft LightSwitch Beta 1 and a SQL Azure database, on an Acer 4810TZ-4011 notebook with the Windows 7 Ultimate OS.

Installing Microsoft LightSwitch

The LightSwitch beta is now available at the following site; the file name is vs_vslsweb.exe: http://www.microsoft.com/visualstudio/en-us/lightswitch

When you download and install the program, you may run into the problem of some requirement not being present. While installing the program for this article there was an initial problem. Microsoft LightSwitch requires Microsoft SQL Server Compact 3.5 SP2. Although this was already present on the computer, it was not recognized. In addition to SP2, Microsoft SQL Server Compact 3.5 SP1 and SQL Server Compact 4.0 were also present. After removing Microsoft SQL Server Compact 3.5 SP1 and SP2 and then installing SQL Server Compact 3.5 SP2 again, the program installed without further problems. Please review this link (http://hodentek.blogspot.com/2010/08/are-you-ready-to-see-light-with.html) for more detailed information. The next image shows the Compact products presently installed on this machine.

Creating a LightSwitch Program

After installation you may not find a shortcut that displays an icon for Microsoft LightSwitch, but you may find a Visual Studio 2010 shortcut as shown. Visual Studio 2010 Express is a different product, which is free to install; you cannot create a LightSwitch application with Visual Studio 2010 Express. Click on Microsoft Visual Studio 2010, shown highlighted. This opens the program with a splash screen. After a while, the user interface displays the Start Page as shown. You can have more than one instance open at a time. The Recent Projects area is a catalog of all projects in the Visual Studio 2010 default project directory.

Just as you cannot develop a LightSwitch application with VS 2010 Express, you cannot open a project developed in VS 2010 Express with the LightSwitch interface, as you will encounter the message shown. This means that LightSwitch projects are isolated in the development environment, although the same shell program is used.

When you click File | New Project you will see the New Project window displayed as shown here. Make sure you set the target to .NET Framework 4.0, otherwise you may not see any projects. It is strictly .NET Framework 4.0 for now. Also, trying to create File | New web site will not show any templates, no matter which .NET Framework you have chosen. In order to see a Team Project you must have a Team Foundation Server present. In what follows, we will be creating a LightSwitch application (the default name is Application1 for both C# and VB). From what is displayed you will see more Silverlight project templates than LightSwitch project templates. In fact, you have just one template, either in C# or in VB. Highlight LightSwitch Application (VB) and change the default name from Application1 to something different; herein it is named SwitchOn, as shown. If you were to look at the project properties in the property window, you will see that the filename of the project is SwitchOn.lsproj.
This file type is used exclusively by LightSwitch. The folder structure of the project is deceptively simple, consisting of Data Sources and Screens.

Playback Audio with Video and Create a Media Playback Component Using JavaFX

Packt
26 Aug 2010
5 min read
(For more resources on Java, see here.)

Playing audio with MediaPlayer

Playing audio is an important aspect of any rich client platform. One of the celebrated features of JavaFX is its ability to easily play back audio content. This recipe shows you how to create code that plays back audio resources using the MediaPlayer class.

Getting ready

This recipe uses classes from the Media API located in the javafx.scene.media package. As you will see in our example, using this API you are able to load, configure, and play back audio using the classes Media and MediaPlayer. For this recipe, we will build a simple audio player to illustrate the concepts presented here. Instead of using standard GUI controls, we will use button icons loaded as images. If you are not familiar with the concept of loading images, review the recipe Loading and displaying images with ImageView in the previous article. In this example we will use a JavaFX podcast from the Oracle Technology Network TechCast series where Nandini Ramani discusses JavaFX. The stream can be found at http://streaming.oracle.com/ebn/podcasts/media/8576726_Nandini_Ramani_030210.mp3.

How to do it...

The code given next has been shortened to illustrate the essential portions involved in loading and playing an audio stream. You can get the full listing of the code in this recipe from ch05/source-code/src/media/AudioPlayerDemo.fx.

def w = 400;
def h = 200;
var scene:Scene;
def mediaSource = "http://streaming.oracle.com/ebn/podcasts/media/8576726_Nandini_Ramani_030210.mp3";
def player = MediaPlayer {
    media: Media { source: mediaSource }
}
def controls = Group {
    layoutX: (w - 110) / 2
    layoutY: (h - 50) / 2
    effect: Reflection {
        fraction: 0.4
        bottomOpacity: 0.1
        topOffset: 3
    }
    content: [
        HBox { spacing: 10
            content: [
                ImageView { id: "playCtrl"
                    image: Image { url: "{__DIR__}play-large.png" }
                    onMouseClicked: function(e:MouseEvent) {
                        def playCtrl = e.source as ImageView;
                        if (not(player.status == player.PLAYING)) {
                            playCtrl.image = Image { url: "{__DIR__}pause-large.png" }
                            player.play();
                        } else if (player.status == player.PLAYING) {
                            playCtrl.image = Image { url: "{__DIR__}play-large.png" }
                            player.pause();
                        }
                    }
                }
                ImageView { id: "stopCtrl"
                    image: Image { url: "{__DIR__}stop-large.png" }
                    onMouseClicked: function(e) {
                        def playCtrl = e.source as ImageView;
                        if (player.status == player.PLAYING) {
                            playCtrl.image = Image { url: "{__DIR__}play-large.png" }
                            player.stop();
                        }
                    }
                }
            ]
        }
    ]
}

When the variable controls is added to a scene object and the application is executed, it produces the screen shown in the following screenshot:

How it works...

The Media API is comprised of several components which, when put together, provide the mechanism to stream and play back the audio source. Playing back audio requires two classes: Media and MediaPlayer. Let's take a look at how these classes are used to play back audio in the previous example.

The MediaPlayer: the first significant item in the code is the declaration and initialization of a MediaPlayer instance assigned to the variable player. To load the audio file, we assign an instance of Media to player.media. The Media class is used to specify the location of the audio. In our example, it is a URL that points to an MP3 file.

The controls: the play, pause, and stop buttons are grouped in the Group object called controls. They are made of three separate image files: play-large.png, pause-large.png, and stop-large.png, loaded by two instances of the ImageView class.
The ImageView objects serve to display the control icons and to control the playback of the audio. When the application starts, the playCtrl ImageView displays the image play-large.png. When the user clicks on the image, it invokes its action-handler function, which first detects the status of the MediaPlayer instance. If it is not playing, it starts playback of the audio source by calling player.play() and replaces play-large.png with the image pause-large.png. If, however, audio is currently playing, then the audio is paused and the image is replaced back with play-large.png. The other ImageView instance loads the stop-large.png icon. When the user clicks on it, its action-handler first stops the audio playback by calling player.stop(). Then it toggles the image for the "play" button back to the icon play-large.png.

As mentioned in the introduction, JavaFX will play the MP3 file format on any platform where JavaFX media playback is supported. Anything other than MP3 must be supported natively by the OS's media engine where the file is played back. For instance, on my Mac OS, I can play MPEG-4, because it is a playback format supported by the OS's QuickTime engine.

There's more...

The Media class models the audio stream. It exposes properties to configure the location, resolves the dimensions of the medium (if available; in the case of audio, that information is not available), and provides tracks and metadata about the resource to be played. The MediaPlayer class itself is a controller class responsible for controlling playback of the medium by offering control functions such as play(), pause(), and stop(). It also exposes valuable playback data including the current position, volume level, and status. We will use these additional functions and properties to extend our playback capabilities in the recipe Controlling media playback in this article.

See also

Accessing media assets

Loading and displaying images with ImageView

Manipulating Images with JavaFX

Packt
25 Aug 2010
4 min read
(For more resources on Java, see here.)

One of the most celebrated features of JavaFX is its inherent support for media playback. As of version 1.2, JavaFX has the ability to seamlessly load images in different formats, play audio, and play video in several formats using its built-in components. To achieve platform independence and performance, the support for media playback in JavaFX is implemented as a two-tiered strategy:

Platform-independent APIs: the JavaFX SDK comes with a media API designed to provide a uniform set of interfaces to media functionalities. Part of the platform-independence offering is a portable codec (On2's VP6), which will play on all platforms where JavaFX media playback is supported.

Platform-dependent implementations: to boost media playback performance, JavaFX also has the ability to use the native media engine supported by the underlying OS. For instance, playback on the Windows platform may be rendered by the Windows DirectShow media engine (see next recipe).

This two-part article shows you how to use the supported media rendering components, including ImageView, MediaPlayer, and MediaView. These components provide high-level APIs that let developers create applications with engaging and interactive media content.

Accessing media assets

You may have seen the use of the variable __DIR__ when accessing local resources, but may not fully know about its purpose and how it works. So, what does that special variable store? In this recipe, we will explore how to use the __DIR__ special variable and other means of loading resources locally or remotely.

Getting ready

The concepts presented in this recipe are used widely throughout the JavaFX application framework when pointing to resources. In general, classes that point to a local or remote resource use a string representation of a URL where the resource is stored. This is especially true for the ImageView and MediaPlayer classes discussed in this article.

How to do it...

This recipe shows you three ways of creating a URL to point to a local or remote resource used by a JavaFX application. The full listing of the code presented here can be found in ch05/source-code/src/UrlAccess.fx.

Using the __DIR__ pseudo-variable to access assets as packaged resources:

var resImage = "{__DIR__}image.png";

Using a direct reference to a local file:

var localImage = "file:/users/home/vladimir/javafx/ch005/source-code/src/image.png";

Using a URL to access a remote file:

var remoteImage = "http://www.flickr.com/3201/2905493571_a6db13ce1b_d.jpg";

How it works...

Loading media assets in JavaFX requires the use of a well-formatted URL that points to the location of the resources. For instance, both the Image and the Media classes (covered later in this article series) require a URL string to locate and load the resource to be rendered. The URL must be an absolute path that specifies the fully-realized scheme, device, and resource location. The previous code snippets show the following three ways of accessing resources in JavaFX:

__DIR__ pseudo-variable: often, you will see JavaFX's pseudo-variable __DIR__ used when specifying the location of a resource. It is a special variable that stores the String value of the directory where the executing class that referenced __DIR__ is located. This is valuable, especially when the resource is embedded in the application's JAR file. At runtime, __DIR__ stores the location of the resource in the JAR file, making it accessible for reading as a stream.
In the previous code, for example, the expression {__DIR__}image.png expands to jar:file:/users/home/vladimir/javafx/ch005/source-code/dist/source-code.jar!/image.png.

Direct reference to local resources—when the application is deployed as a desktop application, you can specify the location of your resources using URLs that provide the absolute path to where the resources are located. In our code, we use file:/users/home/vladimir/javafx/ch005/source-code/src/image.png as the absolute, fully qualified path to the image file image.png.

Direct reference to remote resources—finally, when loading media assets, you are able to specify the path of a fully-qualified URL to a remote resource using HTTP. As long as there are no subsequent permissions required, classes such as Image and Media are able to pull down the resource with no problem. For our code, we use a URL to a Flickr image, http://www.flickr.com/3201/2905493571_a6db13ce1b_d.jpg.

There's more...

Besides __DIR__, JavaFX provides the __FILE__ pseudo-variable as well. As you may well guess, __FILE__ resolves to the fully qualified path of the JavaFX script file that contains the __FILE__ reference. At runtime, this points to the compiled script class in which the reference appears.
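To tie the recipe together, here is a minimal sketch that loads a packaged image using the __DIR__ style of URL and displays it in a stage. It assumes an image.png packaged alongside the compiled script; swap in the localImage or remoteImage URL from above as needed:

import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;

// Load the packaged resource relative to this script's location
var resImage = Image {
    url: "{__DIR__}image.png"
};

Stage {
    title: "URL Access"
    width: 320
    height: 240
    scene: Scene {
        content: [
            ImageView { image: resImage }
        ]
    }
}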

Troux Enterprise Architecture: Managing the EA function

Packt
25 Aug 2010
9 min read
(For more resources on Troux, see here.)

Targeted charter

Organizations need a mission statement and charter. What should the mission and charter be for EA? The answer to this question depends on how the CIO views the function and where the function resides on the maturity model. The CIO could believe that EA should be focused on setting standards and identifying cost reduction opportunities. Conversely, the CIO could believe the function should focus on evaluation of emerging technologies and innovation. These two extremes are polar opposites. Each would require a different staffing model and different success criteria.

The leader of EA must understand how the CIO views the function, as well as what the culture of the business will accept. Are IT and the business familiar with top-down direction, or does the company normally follow a consensus style of management? Is there a market leadership mentality, or is the company a fast follower regarding technical innovation? To run a successful EA operation, the head of Enterprise Architecture needs to understand these parameters and factor them into the overall direction of the department. The following diagram illustrates finding the correct position between the two extremes of being focused on standards or innovation:

Using standards to enforce policies on a culture that normally works through consensus will not work very well. Also, why focus resources on developing a business strategy or evaluating emerging technology if the company is totally focused on the next quarter's financial results? Sometimes, with the appropriate support from the CIO and other upper management, EA can become the change agent to encourage long-term planning. If a company has been too focused on tactics, EA can be the only department in IT that has the time and resources available to evaluate emerging solutions. The leader of the architecture function must understand the overall context in which the department resides. This understanding will help to develop the best structure for the department and hire people with the correct skill set.

Let us look at the organization structure of the EA function. How large should the department be, where should the department report, and what does the organization structure look like? In most cases, there are also other areas within IT that perform what might be considered EA department responsibilities. How should the structure account for "domain architects" or "application architects" who do not report to the head of Enterprise Architecture? As usual, the answer to these questions is "it depends". The architecture department can be sized appropriately with an understanding of the overall role Enterprise Architecture plays within the broader scope of IT. If EA also runs the project management office (PMO) for IT, then the department is likely to be as large as fifty or more resources. In the case where the PMO resides outside of architecture, the architecture staffing level is normally between fifteen and thirty people. To be effective in a large enterprise (five hundred or more applications development personnel), the EA department should be no smaller than about fifteen people. The following diagram provides a sample organization chart that assumes a balance is required between being focused on technical governance and IT strategy:

The sample organization chart shows the balance between resources applied to tactical work and strategic work. The left side of the chart shows the teams focused on governance.
Responsibilities include managing the ARB and maintaining standards and the architecture website. An architecture website is critical to maintaining awareness of the standards and best practices developed by the EA department.

The sample organizational model assumes that a team of Solution Architects is centralized. These are experienced resources who help project teams with major initiatives that span the enterprise. These resources act like internal consultants and, therefore, must possess a broad spectrum of skills. Depending on the overall philosophy of the CIO, the Domain Architects may also be centralized. These are people with a high degree of experience within specific major technical domains. The domains match to the overall architectural framework of the enterprise and include platforms, software (including middleware), network, data, and security. These resources could also be decentralized into various applications development or engineering groups within IT. If Domain Architects are decentralized, at least two resources are needed within EA to ensure that each area is coordinated with the others across technical disciplines.

If EA is responsible for evaluation of emerging technologies, then a team is needed to focus on execution of proof-of-architecture projects and productivity tool evaluations. A service can be created to manage various contracts and relationships with outside consulting agencies. These are typically companies focused on providing research, tracking IT advancements, and, in some cases, monitoring technology evolution within the company's industry.

There are leaders (management) in each functional area within the architecture organization. As the resources under each area are limited, a good practice is to assume the leadership positions are also working positions. Depending on the overall culture of the company, the leadership positions could be Director- or Manager-level positions. In either case, these leaders must work with senior leaders across IT, the business, and outside vendors. For this reason, to be effective, they must be people with senior titles granted the authority to make important recommendations and decisions on a daily basis.

In most companies, there is considerable debate about whether standards are set by the respective domain areas or by the EA department. The leader of EA, working with the CIO or CTO, must be flexible and able to adapt to the culture. If there is a need to centralize, then the architecture team must take steps to ensure there is buy-in for standards and ensure that governance processes are followed. This is done by building partnerships with the business and IT areas that control the allocation of funds to important projects. If the culture believes in decentralized standards management, then the head of architecture must ensure that there is one, and only one, official place where standards are documented and managed. The ARB, in this case, becomes the place where various opinions and viewpoints are worked out. However, it must be clear that the ARB is a function of Enterprise Architecture, and those that do not follow the collaborative review processes will not be able to move forward without obtaining a management consensus.

Staffing the function

Staffing the EA function is a challenge. To be effective, the group must have people who are respected for their technical knowledge and are able to communicate well using consensus and collaboration techniques.
Finding people with the right combination of skills is difficult. Enterprise Architects may require higher salaries as compared to other staff within IT. Winning the battle with the human resources department about salaries and reporting levels within the corporate hierarchy is possible through the use of industry benchmarks. Requesting that jobs be evaluated against similar roles in the same industry will help make the point about what type of people are needed within the architecture department.

People working in the EA department are different, and here's why. In baseball, professional scouts rate prospects according to a scale on five different dimensions. Players that score high on all five are called "five tool players." These include hitting, hitting for power, running speed, throwing strength, and fielding. In evaluating resources for EA, there are also five major dimensions to consider: program management, software architecture, data architecture, network architecture, and platform architecture. As the following figure shows, an experience scale can be established for each dimension, yielding a complete picture of a candidate. People with the highest level of attainment across all five dimensions would be "five tool players".

To be the most flexible in meeting the needs of the business and IT, the head of EA should strive for a good mix of resources covering the five dimensions. Resources who have achieved level 4 or level 5 across all of these would be the best candidates for the Solution Architect positions. These resources can do almost anything technical and are valuable across a wide array of enterprise-wide projects and initiatives. Resources who have mastered a particular dimension, such as data architecture or network architecture, are the best candidates for the Domain Architect positions. Software architecture is a broad dimension that includes software design, industry best practices, and middleware. Included within this area would be resources skilled in application development using various programming languages and design styles like object-oriented programming and SOA.

As already seen, the Business Architect role spans all IT domains. The best candidates for Business Architecture need not be proficient in the five disciplines of IT architecture, but they will do a better job if they have a good awareness of what IT Architects do. Business Architects may be centralized and report into the EA function, or they may be decentralized across IT or even reside within business units. They are typically people with deep knowledge of business functions, business processes, and applications. Business Architects must be good communicators and have strong analytical abilities. They should be able to work without a great deal of supervision, be good at planning work, and be trusted to deliver results on schedule.

Following are some job descriptions for these resources. They are provided as samples because each company will have its own unique set.

Vice President/Director of Enterprise Architecture

The Vice President/Director of Enterprise Architecture would normally have more than 10 or 15 years of experience, depending on the circumstances of the organization. He or she would have experience with, and probably has mastered, all five of the key architecture skill set dimensions. The best resource is one with superior communication skills who is able to effect change across large and diverse organizations.
The resource will also have experience within the industry in which the company competes. Leadership qualities are the most important aspect of this role, but having a technical background is also important. This person must be able to translate complex ideas, technology, and programs into language upper management can relate to. This person is a key influencer on technical decisions that affect the business on a long-term basis.

A Python Multimedia Application: Thumbnail Maker

Packt
12 Aug 2010
7 min read
(For more resources on Python, see here.)

Project: Thumbnail Maker

Let's take up a project now. We will apply some of the operations we learned in the previous article to create a simple Thumbnail Maker utility. This application will accept an image as an input and will create a resized version of that image. Although we are calling it a thumbnail maker, it is a multi-purpose utility that implements some basic image-processing functionality. Before proceeding further, make sure that you have installed all the packages discussed at the beginning of the previous article. The screenshot of the Thumbnail Maker dialog is shown in the following illustration.

The Thumbnail Maker GUI has two components:

The left panel is a 'control area', where you can specify certain image parameters along with options for input and output paths.

A graphics area on the right-hand side where you can view the generated image.

In short, this is how it works: The application takes an image file as an input. It accepts user input for image parameters such as the dimensions in pixels, the filter for re-sampling, and the rotation angle in degrees. When the user clicks the OK button in the dialog, the image is processed and saved at a location indicated by the user in the specified output image format.

Time for action – play with Thumbnail Maker application

First, we will run the Thumbnail Maker application as an end user. This warm-up exercise intends to give us a good understanding of how the application works. This, in turn, will help us develop/learn the involved code quickly. So get ready for action!

Download the files ThumbnailMaker.py, ThumbnailMakerDialog.py, and Ui_ThumbnailMakerDialog.py from the Packt website. Place these files in some directory.

From the command prompt, change to this directory location and type the following command:

python ThumbnailMakerDialog.py

The Thumbnail Maker dialog that pops up was shown in the earlier screenshot. Next, we will specify the input-output paths and various image parameters. You can open any image file of your choice. Here, the flower image shown in some previous sections will be used as an input image. To specify an input image, click on the small button with three dots (...). It will open a file dialog. The following illustration shows the dialog with all the parameters specified.

If the "Maintain Aspect Ratio" checkbox is checked, internally it will scale the image dimensions so that the aspect ratio of the output image remains the same. When the OK button is clicked, the resultant image is saved at the location specified by the Output Location field, and the saved image is displayed in the right-hand panel of the dialog. The following screenshot shows the dialog after clicking the OK button.

You can now try modifying different parameters, such as the output image format or rotation angle, and save the resulting image. See what happens when the Maintain Aspect Ratio checkbox is unchecked. The aspect ratio of the resulting image will not be preserved, and the image may appear distorted if the width and height dimensions are not properly specified. Experiment with different re-sampling filters; you can notice the difference between the quality of the resultant image and the earlier image.

There are certain limitations to this basic utility. It is required to specify reasonable values for all the parameter fields in the dialog. The program will print an error if any of the parameters is not specified.

What just happened?
We got ourselves familiar with the user interface of the thumbnail maker dialog and saw how it works for processing an image with different dimensions and quality. This knowledge will make it easier to understand the Thumbnail Maker code.

Generating the UI code

The Thumbnail Maker GUI is written using PyQt4 (Python bindings for the Qt4 GUI framework). A detailed discussion on how the GUI is generated and how the GUI elements are connected to the main functions is beyond the scope of this article. However, we will cover certain main aspects of this GUI to get you going. The GUI-related code in this application can simply be used 'as-is', and if this is something that interests you, go ahead and experiment with it further! In this section, we will briefly discuss how the UI code is generated using PyQt4.

Time for action – generating the UI code

PyQt4 comes with an application called Qt Designer. It is a GUI designer for Qt-based applications and provides a quick way to develop a graphical user interface containing some basic widgets. With this, let's see how the Thumbnail Maker dialog looks in Qt Designer and then run a command to generate Python source code from the .ui file.

Download the thumbnailMaker.ui file from the Packt website.

Start the Qt Designer application that comes with the PyQt4 installation.

Open the file thumbnailMaker.ui in Qt Designer. Notice the red-colored borders around the UI elements in the dialog. These borders indicate a 'layout' in which the widgets are arranged. Without a layout in place, the UI elements may appear distorted when you run the application and, for instance, resize the dialog. Three types of QLayouts are used, namely Horizontal, Vertical, and Grid layouts.

You can add new UI elements, such as a QCheckBox or a QLabel, by dragging and dropping them from the 'Widget Box' of Qt Designer. It is located in the left panel by default.

Click on the field next to the label "Input file". In the right-hand panel of Qt Designer, there is a Property Editor that displays the properties of the selected widget (in this case, it's a QLineEdit). This is shown in the following illustration. The Property Editor allows us to assign values to various attributes such as the objectName, width, and height of the widget, and so on.

Qt Designer saves the file with the extension .ui. To convert this into Python source code, PyQt4 provides a conversion utility called pyuic4. On Windows XP, for a standard Python installation, it is present at the following location—C:\Python26\Lib\site-packages\PyQt4\pyuic4.bat. Add this path to your environment variable, or alternatively specify the whole path each time you want to convert a .ui file to a Python source file. The conversion utility can be run from the command prompt as:

pyuic4 thumbnailMaker.ui -o Ui_ThumbnailMakerDialog.py

This script will generate Ui_ThumbnailMakerDialog.py with all the GUI elements defined. You can further review this file to understand how the UI elements are defined.

What just happened?

We learned how to autogenerate the Python source code defining the UI elements of the Thumbnail Maker dialog from a Qt Designer file.

Have a go hero – tweak the UI of the Thumbnail Maker dialog

Modify the thumbnailMaker.ui file in Qt Designer and implement the following list of things in the Thumbnail Maker dialog:

Change the color of all the line edits in the left panel to pale yellow.
Tweak the default file extension displayed in the Output file Format combobox so that the first option is .png instead of .jpeg. Double-click on this combobox to edit it.

Add a new option, .tiff, to the output format combobox.

Align the OK and Cancel buttons to the right corner. You will need to break layouts, move the spacer around, and recreate the layouts.

Set the range of the rotation angle to 0 to 360 degrees instead of the current -180 to +180 degrees.

After this, create Ui_ThumbnailMakerDialog.py by running the pyuic4 script and then run the Thumbnail Maker application.
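Under the hood, the processing this dialog performs boils down to a few PIL calls. The following is a minimal, stand-alone sketch of that core logic; the file names and parameter values here are illustrative and are not taken from the actual ThumbnailMaker.py source:

import Image  # PIL 1.1.6 exposes its Image module at the top level

# Illustrative values mirroring the dialog's fields (not from ThumbnailMaker.py)
input_path = "flower.jpg"
output_path = "flower_thumb.png"
width, height = 200, 150
rotation = 45
maintain_aspect = True  # the "Maintain Aspect Ratio" checkbox

img = Image.open(input_path)

if maintain_aspect:
    # Scale the height from the requested width to keep the aspect ratio
    w, h = img.size
    height = int(width * h / float(w))

img = img.resize((width, height), Image.ANTIALIAS)  # re-sampling filter
img = img.rotate(rotation, expand=True)             # rotation angle in degrees
img.save(output_path)                               # format inferred from extension

Swapping Image.ANTIALIAS for Image.NEAREST, Image.BILINEAR, or Image.BICUBIC reproduces the quality differences you observed when experimenting with the re-sampling filters.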

Managing the IT Portfolio using Troux Enterprise Architecture

Packt
12 Aug 2010
16 min read
(For more resources on Troux, see here.)

Almost every company today is totally dependent on IT for day-to-day operations. Large companies literally spend billions on IT-related personnel, software, equipment, and facilities. However, do business leaders really know what they get in return for these investments? Upper management knows that a successful business model depends on information technology. Whether the company is focused on delivery of services or development of products, management depends on its IT team to deliver solutions that meet or exceed customer expectations. However, even though companies continue to invest heavily in various technologies, for most companies, knowing the return on investment in technology is difficult or impossible. When upper management asks where the revenues are for the huge investments in software, servers, networks, and databases, few IT professionals are able to answer. There are questions that are almost impossible to answer without guessing, such as:

Which IT projects in the portfolio of projects will actually generate revenue?

What are we getting for spending millions on vendor software?

When will our data center run out of capacity?

This article will explore how IT professionals can be prepared when management asks the difficult questions. By being prepared, IT professionals can turn conversations with management about IT costs into discussions about the value IT provides. Using consolidated information about the majority of the IT portfolio, IT professionals can work with business leaders to select revenue-generating projects, decrease IT expenses, and develop realistic IT plans. The following sections will describe what IT professionals can do to be ready with accurate information in response to the most challenging questions business leaders might ask.

Management repositories

IT has done a fine job of delivering solutions for years. However, pressure to deliver business projects quickly has created a mentality in most IT organizations of "just put it in and we will go back and do the clean-up later." This has led to a layering effect where older "legacy" technology remains in place while new technology is adopted. With this complex mix of legacy solutions and emerging technology, business leaders have a hard time understanding how everything fits together and what value is provided from IT investments. Gone are the days when the Chief Information Officer (CIO) could say "just trust me" when business people asked questions about IT spending. In addition, new requirements for corporate compliance combined with the expanding use of web-based solutions make managing technology more difficult than ever. With the advent of Software-as-a-Service (SaaS) or cloud computing, the technical footprint, or ecosystem, of IT has extended beyond the enterprise itself. Virtualization of platforms and service-orientation add to the mind-numbing mix of technologies available to IT.

However, there are many systems available to help companies manage their technological portfolio. Unfortunately, multiple teams within the business and within IT see the problem of managing the IT portfolio differently. In many companies, there is no centralized effort to gather and store IT portfolio information. Teams with a need for IT asset information tend to purchase or build a repository specific to their area of responsibility.
Some examples of these include:

Business goals repository
Change management database
Configuration management database
Business process management database
Fixed assets database
Metadata repository
Project portfolio management database
Service catalog
Service registry

While each of these repositories provides valuable information about IT portfolios, they are each optimized to meet a specific set of requirements. The following table shows the main types of information stored in each of these repositories, along with a brief statement about its functional purpose:

Repository | Main content | Main purpose
Business goals | Goal statements and assignments | Documents business goals and who is responsible
Change management database | Change request tickets, application owners | Captures change requests and who can authorize change
Configuration management database | Identifies actual hardware and software in use across the enterprise | Supports Information Technology Infrastructure Library (ITIL) processes
Business process management database | Business processes, information flows, and process owners | Used to develop applications and document business processes
Fixed assets database | Asset identifiers for hardware and software, asset life, purchase cost, and depreciation amounts | Documents cost and depreciable life of IT assets
Metadata repository | Data about the company databases and files | Documents the names, definitions, data types, and locations of the company data
Project portfolio management database | Project names, classifications, assignments, business value and scope | Used to manage IT workload and assess value of IT projects to the business
Service catalog | Defines hardware and compatible software available for project use | Used to manage hardware and software implementations assigned to the IT department
Service registry | Names and details of reusable software services | Used to manage, control, and report on reusable software

It is easy to see that while each of these repositories serves a specific purpose, none supports an overarching view across the others. For example, one might ask: How many SQL Server databases do we have installed, and what hardware do they run on? To answer this question, IT managers would have to extract data from the metadata repository and combine it with data from the Configuration Management Database (CMDB). The question could be extended: How much will it cost in early expense write-offs if we retire the SQL Server DB servers into a new virtual grid of servers? To answer this question, IT managers need to determine not only how many servers host SQL Server, but how old they are, what they cost at purchase time, and how much depreciation is left on them. Now the query must span at least three systems (CMDB, fixed assets, and metadata repository); a hypothetical sketch of such a cross-repository query appears at the end of this section. The accuracy of the answer will also depend on the relative validity of the data in each repository. There could be overlapping data in some, and outright errors in others.

Changing the conversation

When upper management asks difficult questions, they are usually interested in cost, risk management, or IT agility. Not knowing a great deal about IT, they are curious about why they need to spend millions on technology and what they get for their investments. The conversation ends up being primarily about cost and how to reduce expenses. This is not a good position to be in if you are running a support function like Enterprise Architecture. How can you explain IT investments in a way that management can understand?
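Returning to the cross-repository question above, here is the promised sketch in SQL. It is purely hypothetical: the table and column names (cmdb_servers, metadata_databases, fixed_assets, and so on) are invented for illustration and do not come from any of the products discussed here:

-- Hypothetical schemas:
--   cmdb_servers(server_id, hostname, purchase_date)
--   metadata_databases(db_id, engine, server_id)
--   fixed_assets(asset_id, server_id, purchase_cost, remaining_depreciation)
SELECT s.hostname,
       s.purchase_date,
       a.purchase_cost,
       a.remaining_depreciation
FROM   cmdb_servers s
       JOIN metadata_databases d ON d.server_id = s.server_id
       JOIN fixed_assets a      ON a.server_id = s.server_id
WHERE  d.engine = 'SQL Server';

The point is not the query itself but that its three tables live in three separately owned systems, so answering the question by hand means exporting, matching, and reconciling data across all of them.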
If you are not prepared with facts, management has no choice but to assume that costs are out of control and can be reduced, usually by dramatic amounts. As a good corporate citizen, it is your job to help reduce costs. Like everyone in management, getting the most out of the company's assets is your responsibility. However, as we in IT know, it's just as important to be ready for changes in technology and to be on top of technology trends. As technology leaders, it is our job to help the company stay current through investments that may pay off in the future rather than show an immediate return. The following diagram shows various management functions and technologies that are used to manage the business of IT:

These tools and processes span two dimensions: from systems that run the business to systems that change the business, and from operational information to strategic information. Various technologies that support data about IT assets are shown. These include:

Business process analytics and management information
Service-oriented architecture governance
Asset-liability management
Information technology systems management
Financial management information
Project portfolio and management information

The key to changing the conversation about IT is having the ability to bring the information of these disciplines into a single view. The single view provides the ability to actually discuss IT in a strategic way. Gathering data and reporting on the actual metrics of IT, in a way business leaders can understand, supports strategic planning. The strategic planning process, combined with fact-based metrics, establishes credibility with upper management and promotes improved decision making on a daily basis.

Troux Technologies

Solving the IT-business communication problem has been difficult until recently. Troux Technologies (www.troux.com) developed a new open-architected repository and software solution, called the Troux Transformation Platform, to help IT manage the vast array of technology deployed within the company. Troux customers use the suite of applications and advanced integration platform within the product architecture to deliver bottom-line results. By locating where IT expenses are redundant or out-of-step with business strategy, Troux customers experience significant cost savings. When used properly, the platform also supports improved IT efficiency, quicker response to business requirements, and IT risk reduction.

In today's globally-connected markets, where shocks and innovations happen at an unprecedented rate, antiquated approaches to Strategic IT Planning and Enterprise Architecture have become a major obstruction. The inability of IT to plan effectively has driven business leaders to seek solutions available outside the enterprise. Using SaaS or Application Service Providers (ASPs) to meet urgent business objectives can be an effective means to meet short-term goals. However, to be complete, even these solutions usually require integration with internal systems. IT finds itself dealing with unspecified service-level requirements, developing integration architectures, and cleaning up after poorly planned activities by business leaders who don't understand what capabilities exist within the software running inside the company. A global leader in Strategic IT Planning and Enterprise Architecture software, Troux has created an Enterprise Architecture repository that IT can use to put itself at the center of strategic planning.
Troux has been successful in implementing its repository at a number of companies. A partial list of Troux's customers can be found on the website. There are other enterprise-level repository vendors on the market. However, leading analysts, such as The Gartner Group and Forrester Research, have published recent studies ranking Troux as a leader in the IT strategy planning tools space.

Troux Transformation Platform

Troux's sophisticated integration and collaboration capabilities support multiple business initiatives such as handling mergers, aligning business and IT plans, and consolidating IT assets. The business-driven platform provides new levels of visibility into the complex web of IT resources, programs, and business strategy so the business can see instantly where IT spending and programs are redundant or out-of-step with business strategy. The business suite of applications helps IT to plan and execute faster with data assimilated from various trusted sources within the company. The platform provides information necessary to relevant stakeholders such as Business Analysts, Enterprise Architects, the Program Management Office, Solutions Architects, and executives within the business and IT. The transformation platform is not only designed to address today's urgent cost-restructuring agendas, but it also introduces an ongoing IT management discipline, allowing EA and business users to drive strategic growth initiatives. The integration platform provides visibility and control to:

Uncover and fix business/IT disconnects: Shows how IT directly supports business strategies and capabilities, and ensures that mismatched spending can be eliminated. Troux Alignment helps IT think like a CFO and demonstrate control and business purpose for the billions that are spent on IT assets, by ensuring that all stakeholders have valid and relevant IT information.

Identify and eliminate redundant IT spending: Uncovers the many untapped opportunities with Troux Optimization to free up needless spend and apply it either to the bottom line or to support new business initiatives.

Speed business response and simplify IT: Speeds the creation and deployment of a set of standard, reusable building blocks that are proven to work in agile business cycles. Troux Standards enables the use of IT standards in real time, thereby streamlining the process of IT governance.

Accelerate business transformation for government agencies: Helps federal agencies create an actionable Enterprise Architecture and comply with constantly changing mandates. Troux eaGov automatically identifies opportunities to reduce costs to business and IT risks, while fostering effective initiative planning and execution within or across agencies.

Support EA methodology: Companies adopting The Open Group Architecture Framework (TOGAF™) can use the Troux for TOGAF solution to streamline their efforts.

Unlock the full potential of IT portfolio investment: Unifies Strategic IT Planning, EA, and portfolio project management through a common IT governance process. The Troux CA Clarity Connection enables the first bi-directional integration in the market between CA Clarity Project Portfolio Management (PPM) and the Troux EA repository for enhanced IT investment portfolio planning, analysis, and control.

Understand your deployed IT assets: Using the out-of-the-box connection to HP's Universal Configuration Management Database (uCMDB), link software and hardware with the applications they support.
All of these capabilities are enabled through an open-architected platform that provides uncomplicated data integration tools. The platform provides architecture-modeling capabilities for IT Architects, an extensible database schema (or meta-model), and integration interfaces that are simple to automate and bring online with minimal programming effort.

Enterprise Architecture repository

The Troux Transformation Platform acts as the consolidation point across all the various IT management databases, and even some management systems outside the control of IT. By collecting data from across various areas, new insights are possible, leading to reductions in operating costs and improvements in service levels to the business. While it is possible to combine these using other products on the market, or even to develop a home-grown EA repository, Troux has created a very easy-to-use API for data collection purposes. In addition, Troux provides a database meta-model for the repository that is extensible. Meta-model extensibility makes the product adaptable to the other management systems across the company. Troux also supports a configurable user interface allowing for a customized view into the repository. This capability makes the catalog appear as if it were a part of the other control systems already in place at the company. Additionally, Troux provides an optional set of applications that support a variety of roles, out of the box, with no meta-model extensions or user interface configurations required. These include:

Troux Standards: This application supports the IT technology standards and lifecycle governance process usually conducted by the Enterprise Architecture department.

Troux Optimization: This application supports the application portfolio lifecycle management process conducted by the Enterprise Program Management Office (EPMO) and/or Enterprise Architecture.

Troux Alignment: This application supports the business and IT assets and application-planning processes conducted by IT Engineering, Corporate Finance, and Enterprise Architecture.

Even these three applications that are available out-of-the-box from Troux can be customized by extending their underlying meta-models and customizing the user interface. The EA repository provides output that is viewable online. Standard reports are provided, or custom reports can be developed as per the specific needs of the user community. Departments within or even outside of IT can use the customized views, standard reports, and custom reports to perform analyses. For example, the Enterprise Program Management Office (EPMO) can produce reports that link projects with business goals. The EPMO can review the project portfolio of the company to identify projects that do not support company goals. Decisions can be made about these projects, thereby stopping them, slowing them down, or completing them faster. Resources can be moved from the stopped or completed low-value projects to the higher-value projects, leading to increased revenue or reduced costs for the company. In a similar fashion, the Internal Audit department can check on the level of compliance with company IT standards or use the list of applications stored within the catalog to determine the best audit schedule to follow. Less time can be spent auditing applications with minimal impact on company operations or on applications and projects targeted as low value.
Application development can use data from the catalog to understand the current capabilities of the existing applications of the company. As staff changes or "off-shore" resources are applied to projects, knowing what existing systems do in advance of a new project can save many hours of work. Information can be extracted from the EA repository directly into requirements documentation, which is always the starting point for new applications, as well as maintenance projects on existing applications. One study performed at a major financial services company showed that over 40% of project development time was spent in the upfront work of documenting and explaining current application capabilities to business sponsors of projects. By supplying development teams with lists of application capabilities early in the project life cycle, time to gather and document requirements can be reduced significantly.

Of course, one of the biggest benefactors of the repository is the EA group. In most companies, EA's main charter is to be the steward of information about applications, databases, hardware, software, and network architecture. EA can perform analyses using the data from the repository, leading to recommendations for changes by middle and upper management. In addition, EA is responsible for collecting, setting, and managing the IT standards for the company. The repository supports a single source for IT standards, whether they are internal or external standards. The standards portion of the repository can be used as the centerpiece of IT governance. The function of the Architecture Review Board (ARB) is fully supported by Troux Standards.

Capacity Planning and IT Engineering functions will also gain substantially through the use of an EA repository. The useful life of IT assets can be analyzed to create a master plan for technical refresh or reengineering efforts. The annual spend on IT expenses can be reduced dramatically through increased levels of virtualization of IT assets, consolidation of platforms, and even consolidation of whole data centers. IT Engineering can review what is currently running across the company and recommend changes to reduce software maintenance costs, eliminate underutilized hardware, and consolidate federated databases.

Lastly, IT Operations can benefit from a consolidated view into the technical footprint running at any point in time. Even when system availability service levels call for near-real-time error correction, it may take hours for IT Operations personnel to diagnose problems. They tend not to have a full understanding of what applications run on what servers, which firewalls support which networks, and which databases support which applications. Problem determination time can be reduced by providing accurate technical architecture information to those focused on keeping systems running and meeting business service-level requirements.

Summary

This article identified the problem IT has with understanding what technologies it has under management. While many solutions are in place in many companies to gain a better view into the IT portfolio, none are designed to show the impact of IT assets in the aggregate. Without the capabilities provided by an EA repository, IT management has a difficult time answering tough questions asked by business leaders. Troux Technologies offers a solution to this problem using the Troux Transformation Platform.
The platform acts as a master metadata repository and becomes the focus of many efforts that IT may run to reduce significant costs and improve business service levels.

Further resources on this subject: Troux Enterprise Architecture: Managing the EA function [article]

Python Image Manipulation

Packt
12 Aug 2010
5 min read
(For more resources on Python, see here.)

So let's get on with it!

Installation prerequisites

Before we jump in to the main topic, it is necessary to install the following packages.

Python

In this article, we will use Python Version 2.6, or to be more specific, Version 2.6.4. It can be downloaded from the following location: http://python.org/download/releases/

Windows platform

For Windows, just download and install the platform-specific binary distribution of Python 2.6.4.

Other platforms

For other platforms, such as Linux, Python is probably already installed on your machine. If the installed version is not 2.6, build and install it from the source distribution. If you are using a package manager on a Linux system, search for Python 2.6. It is likely that you will find the Python distribution there. For instance, Ubuntu users can install Python from the command prompt as:

$ sudo apt-get install python2.6

Note that for this, you must have administrative permission on the machine on which you are installing Python.

Python Imaging Library (PIL)

We will learn image-processing techniques by making extensive use of the Python Imaging Library (PIL) throughout this article. PIL is an open source library. You can download it from http://www.pythonware.com/products/pil/. Install PIL Version 1.1.6 or later.

Windows platform

For Windows users, installation is straightforward—use the binary distribution PIL 1.1.6 for Python 2.6.

Other platforms

For other platforms, install PIL 1.1.6 from the source. Carefully review the README file in the source distribution for the platform-specific instructions. The libraries listed in the following table are required to be installed before installing PIL from the source. For some platforms like Linux, the libraries provided with the OS should work fine. However, if those do not work, install a pre-built "libraryName-devel" version of the library. For example, for JPEG support, the name will contain "jpeg-devel-", and something similar for the others. This is generally applicable to rpm-based distributions. For Linux flavors like Ubuntu, you can use the following command in a shell window:

$ sudo apt-get install python-imaging

However, you should make sure that this installs Version 1.1.6 or later. Check the PIL documentation for further platform-specific instructions. For Mac OS X, see if you can use fink to install these libraries. See http://www.finkproject.org/ for more details. You can also check the website http://pythonmac.org or the Darwin ports website http://darwinports.com/ to see if a binary package installer is available. If such a pre-built version is not available for any library, install it from the source. The PIL prerequisites for installing PIL from source are listed in the following table:

Library | URL | Version | Installation options (a) or (b)
libjpeg (JPEG support) | http://www.ijg.org/files | 7, 6a, or 6b | (a) Pre-built version, for example jpeg-devel-7; check if you can do sudo apt-get install libjpeg (works on some flavors of Linux). (b) Source tarball, for example jpegsrc.v7.tar.gz.
zlib (PNG support) | http://www.gzip.org/zlib/ | 1.2.3 or later | (a) Pre-built version, for example zlib-devel-1.2.3. (b) Install from the source.
freetype2 (OpenType/TrueType support) | http://www.freetype.org | 2.1.3 or later | (a) Pre-built version, for example freetype2-devel-2.1.3. (b) Install from the source.

PyQt4

This package provides Python bindings for the Qt libraries. We will use PyQt4 to generate the GUI for the image-processing application that we will develop later in this article.
The GPL version is available at: http://www.riverbankcomputing.co.uk/software/pyqt/download

Windows platform

Download and install the binary distribution pertaining to Python 2.6. For example, the executable file's name could be 'PyQt-Py2.6-gpl-4.6.2-2.exe'. Other than Python, it includes everything needed for GUI development using PyQt.

Other platforms

Before building PyQt, you must install the SIP Python binding generator. For further details, refer to the SIP homepage: http://www.riverbankcomputing.com/software/sip/. After installing SIP, download and install PyQt 4.6.2 or later from the source tarball. For Linux/Unix source, the filename will start with PyQt-x11-gpl-.. and for Mac OS X, PyQt-mac-gpl-... Linux users should also check if a PyQt4 distribution is already available through the package manager.

Summary of installation prerequisites

Package | Download location | Version | Windows platform | Linux/Unix/OS X platforms
Python | http://python.org/download/releases/ | 2.6.4 (or any 2.6.x) | Install using binary distribution | (a) Install from binary; also install additional developer packages (for example, with python-devel in the package name on rpm systems), OR (b) build and install from the source tarball. (c) Mac users can also check websites such as http://darwinports.com/ or http://pythonmac.org/.
PIL | http://www.pythonware.com/products/pil/ | 1.1.6 or later | Install PIL 1.1.6 (binary) for Python 2.6 | (a) Install prerequisites if needed; refer to Table #1 and the README file in the PIL source distribution. (b) Install PIL from source. (c) Mac users can also check websites like http://darwinports.com/ or http://pythonmac.org/.
PyQt4 | http://www.riverbankcomputing.co.uk/software/pyqt/download | 4.6.2 or later | Install using binary pertaining to Python 2.6 | (a) First install SIP 4.9 or later. (b) Then install PyQt4.
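Once everything is installed, a quick sanity check from the Python 2.6 interpreter confirms that both libraries are importable and at the expected versions. This is just a verification sketch, not part of the article's project code:

# Run in the Python 2.6 interpreter to verify the installation
import Image                    # PIL's top-level Image module
from PyQt4 import QtCore

print "PIL version:", Image.VERSION              # expect 1.1.6 or later
print "PyQt4 version:", QtCore.PYQT_VERSION_STR  # expect 4.6.2 or later

If both imports succeed and the printed versions match the table above, the environment is ready for the image-processing application developed later in this article.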

ColdFusion 9: Power CFCs and Web Forms

Packt
05 Aug 2010
13 min read
(For more resources on ColdFusion, see here.)

There used to be long pages of what we called "spaghetti code" because the page would go on and on. You had to follow the conditional logic by going through the page up and down, and then had to understand how things worked. This made writing, updating, and debugging a difficult task even for highly-skilled developers. CFCs allow you to encapsulate some part of the logic of a page inside an object. Encapsulation simply means packaged for reuse inside something. CFCs are the object-packaging method used in ColdFusion.

The practice of protecting access

In CFC methods, there is an attribute called "access". Some methods within a CFC are more examples of reuse. The sample code for _product.cfc is shown here. It is an example of a power CFC. There is a method inside the CFC called setDefaults(). The variable variables.field.names comes from another location in our CFC:

<cffunction name="setDefaults" access="private" output="false">
  <cfset var iAttr = 0>
  <cfloop from="1" to="#listLen(variables.field.names)#" index="iAttr">
    <cfscript>
      variables.attribute[#listGetAt(variables.field.names,iAttr)#] =
        setDefault(variables.field.names,iAttr);
    </cfscript>
  </cfloop>
</cffunction>

The logic for this would actually be used in more than one place inside the object. When the object is created during the first run, it would call the setDefaults() method and set the defaults. When you use the load method to insert another record inside the CFC, it will run this method. This will become simpler as you use CFCs and methods more often. This is a concept called refactoring, where we take common features and wrap them for reuse. This takes place even inside a CFC. Again, the setDefaults() function is just another method inside the same CFC.

Now, we look at the access attribute in the code example and note that it is set to private. This means that only this object can call the method. One of the benefits of CFCs is making code simpler. The interface to the outside world of the CFC is its methods. We can hide a method from the outside world, and also protect access to the method, by setting the access attribute to private. If you want to make sure that only CFCs in the same directory can access these CFCs' methods, then you will have to set the attribute to package. This is a value that is rarely used. The default value for the access attribute is public. This means that any code running on the web server can access the CFC. (Shared hosting companies block one account from being able to see the other accounts on the same server. If you are concerned about your hosting company, then you should either ask them about this issue or move to a dedicated or virtual hosting server.)

The last value for the access attribute is remote. This is actually how you create a number of "cool power" uses of the CFC. There is a technology on the Web called web services. Setting the CFC to remote allows access to the CFC as a web service. You can also connect to this CFC through Flash applications, Flex, or AIR, using the remote access value. This method also allows the CFC to respond to AJAX calls; a minimal sketch of a remote method follows at the end of this section. Now, we will learn to use more of the local power features.

Web forms introduction

Here, we will discuss web forms along with CFCs. Let us view our web form page. Web forms are the same in ColdFusion as they are in any other HTML scenario. You might even note that there is very little use for web forms until you have a server-side technology such as ColdFusion.
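Here is the promised sketch of a remotely callable method. The component and method names are invented for illustration and are not part of _product.cfc:

<!--- ping.cfc: a hypothetical component used only to illustrate access="remote" --->
<cfcomponent>
  <cffunction name="ping" access="remote" returntype="string" output="false">
    <!--- Callable as a web service, from Flash/Flex/AIR, or via an AJAX request --->
    <cfreturn "pong">
  </cffunction>
</cfcomponent>

Saved under the web root as, say, ping.cfc, such a method could be invoked as a web service or from an AJAX call, whereas the same function with access="private" would only be callable from within the component itself.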
This is because when the form is posted, you need some sort of program to handle the data posted back to the server.

<!--- Example: 3_1.cfm --->
<!--- Processing --->
<!--- Content --->
<form action="3_1.cfm" method="post">
  <table>
    <tr>
      <td>Name:</td>
      <td><input type="text" name="name" id="idName" value="" /></td>
    </tr>
    <tr>
      <td>Description:</td>
      <td><input type="text" name="description" id="idDescription" value="" /></td>
    </tr>
    <tr>
      <td>Price:</td>
      <td><input type="text" name="price" id="idPrice" value="" /></td>
    </tr>
    <tr>
      <td>&nbsp;</td>
      <td><input type="submit" name="submit" value="submit" /></td>
    </tr>
  </table>
</form>

First, notice that all of the information on the page is in the content section. Anything that goes from the server to the browser is considered content. You can fill in and submit the form, and you will observe that all of the form fields get cleared out. This is because this form posts back to the same page. Self-posting forms are a valid method of handling page flow on websites. The reason why nothing seems to be happening is because the server is not doing anything with the data being sent back from the browser. Let us now add <cfdump var="#form#"/> to the bottom of the content, below the form tag, and observe what we get when we post the form:

Now we see another common structure in ColdFusion. It is known as the form structure. There are two common types of requests that send data to the server. The first one is called get and the second one is called post. If you see the code, you will notice that the method of the form is post. The form post setting is the same as coding with the form variable in ColdFusion. You should also observe that there is one extra field in the form structure that is not shown in the URL structure variable. It is the FIELDNAMES variable. It returns a simple list of the field names that were returned with the form.

Let us edit the code and change the form tag's method attribute to get. Then, refresh the page and click on the submit button:

From the previous screenshot, it is evident that the browser looks at the get or post value of the form, and sends a get or post request back to the server accordingly. Post is the "form" method, and this is why ColdFusion translates posted variables to the form structure. Now change the dump tag to display the URL scope and observe the results. Fill out the form and submit it again with the new change. This displays the values in our structure as we would expect. This means you can either send URL-type data back to the server, or form-type data with forms. The advantage of sending form data is that form data can handle a larger volume of data being sent back to the server as compared to a get or URL request. Also, it is worth noting that this style of return prevents the form field values from being exposed in the URL. They can still be accessed, but are just not visible in the URL any more. So the method of choice for forms is post. Change both the form's method attribute and the value of the cfdump var back to form again.

The Description box is not ideal for entering product descriptions. So, we are going to use a text area in its place. Use the following code to accommodate a text area box. You can change the size of the form's objects using attributes and styles:

<tr>
  <td>Description:</td>
  <td>
    <textarea name="description" id="idDescription"></textarea>
  </td>
</tr>

Here, we see our form looking different. If you fill up the description with more content than the box can hold, it shows the scroll bars appropriately.
Managing our product data

Currently, we have a form that can be used for two purposes. It can be used to enter a new product as well as to edit existing ones. We are going to reuse this form. Reuse is the fastest path to making things easier. However, we must not think that it is the only way to do things. What we should think is that not reusing something requires a reason for doing it differently. In order to edit an existing product, we will have to create a page that shows the existing product records. Let us create the page:

<!--- Example: product_list.cfm --->
<!--- Processing --->
<cfscript>
  objProduct = createObject("component","product").init(dsn="cfb");
  rsProducts = objProduct.getRecordset();
</cfscript>
<!--- Content --->
<h3>Select a product to edit.</h3>
<ul>
  <cfoutput query="rsProducts">
    <li>
      <a href="product_edit.cfm?id=#rsProducts.id#">#rsProducts.name#</a>
    </li>
  </cfoutput>
</ul>

There is no new code here. This is the browser view that we get when we run this page. Here, we will post our edit page. Before you run the code, take the code from 3_1.cfm that we wrote at the beginning of the article and save a copy as product_edit.cfm to make the page work correctly when someone clicks on any of the products:

Now, we will click on a product. Let us manage the Watermelon Plant for now and observe what happens on the next page:

This is our edit page, and we will modify it so that it can get the data when we click through from our list page.

Getting data to our edit page

The current page looks similar to the page where we put the form. To get the data from our database onto the page, we need to do a few things here. First, let us change the action of the form tag to product_edit.cfm. We can modify the processing section of the page first, which will make things simpler. Add the following code to your product_edit.cfm page:

<!--- Processing --->
<cfparam name="url.id" default="0">
<cfscript>
  objProduct = createObject("component","product").init(dsn="cfb");
  objProduct.load(url.id);
</cfscript>

We need the default value set so that we do not receive an error message if the page is called without an id. After we set our default, we will see that we have created an object from our CFC object class. This time, we are passing the Data Source Name (dsn) into the object through the constructor method. This makes our code more portable and ready for reuse. Once we have an instance, we set the current record using the load method, passing the id of the data record to the method. Let us look at the minor changes that we will make to the content section. We will add the values of the object's protected attributes.

<!--- Content --->
<cfoutput>
  <form action="product_edit.cfm" method="post">
    <table>
      <tr>
        <td>Name:</td>
        <td>
          <input type="text" name="name" id="idName" value="#objProduct.get('name')#" />
        </td>
      </tr>
      <tr>
        <td>Description:</td>
        <td>
          <textarea name="description" id="idDescription">#objProduct.get('description')#</textarea>
        </td>
      </tr>
      <tr>
        <td>Price:</td>
        <td>
          <input type="text" name="price" id="idPrice" value="#objProduct.get('price')#" />
        </td>
      </tr>
      <tr>
        <td>&nbsp;</td>
        <td>
          <input type="submit" name="submit" value="submit" />
        </td>
      </tr>
    </table>
  </form>
</cfoutput>

Now, we will refresh the form and see how the results differ: Doesn't this look better? We can go back to the list page and retrieve an existing product from the edit form. If we submit back the same form, browsers tend to empty out the form. It should not do that, but the form is not posting the ID of the record back to the server.
This can lead to a problem: if we do not send the ID of the record back, the database will have no idea which record's details should be changed. Let us solve these issues first; we will learn to use a new tag, <cfinclude>, along the way. The first problem that we are going to solve is that the page is called with the ID value in the URL structure when it comes from the list page, but with the ID in the form structure when the form is posted. We are going to use a technique that has been widely used for years in the ColdFusion community: combining the two scopes into a new common structure. We will create a structure called attributes. First, we will check whether it exists; if it does not, then we will create the structure. After that, we will merge the URL structure, and then the FORM structure, into the attributes structure. We will put that code in a common page called request_attributes.cfm, so we can include it on any page we want, reusing the code. Do remember that the form and URL scopes always exist.

<!--- request_attributes.cfm --->
<cfscript>
    if (NOT isDefined("attributes")) {
        attributes = structNew();
    }
    structAppend(attributes, url);
    structAppend(attributes, form);
</cfscript>

Let us modify our edit page to take care of a couple of issues. We need to include the script that we have just created. We will modify the processing section of our edit page as highlighted here:

<!--- Processing --->
<cfinclude template="request_attributes.cfm">
<cfparam name="attributes.id" default="0">
<cfscript>
    objProduct = createObject("component", "product").init(dsn="cfb");
    objProduct.load(attributes.id);
</cfscript>

There is only one more thing we need now: our form has to carry the id value of the record that is being managed. We could put it in a textbox like the other fields, but the user does not need to see that information. Let us use a hidden input field and add it right after our form tag:

<!--- Content --->
<cfoutput>
    <form action="product_edit.cfm" method="post">
        <input type="hidden" name="id" value="#objProduct.get('id')#">

Refresh the screen, and it will work both when we use the form and when we choose an item from the product list page. We have now created our edit/add page.
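These pages lean on a product CFC whose implementation lies outside this excerpt. Purely as a hypothetical skeleton inferred from how the component is used here — the table name, column names, and internal record structure are assumptions — it might look something like this:

<!--- product.cfc: a hypothetical skeleton inferred from usage;
      the table and column names are assumptions --->
<cfcomponent output="false">

    <cffunction name="init" returnType="any" output="false">
        <cfargument name="dsn" type="string" required="true">
        <cfset variables.dsn = arguments.dsn>
        <!--- empty default record, so get() works before load() is called --->
        <cfset variables.record = structNew()>
        <cfset variables.record.id = 0>
        <cfset variables.record.name = "">
        <cfset variables.record.description = "">
        <cfset variables.record.price = 0>
        <cfreturn this>
    </cffunction>

    <cffunction name="load" returnType="void" output="false">
        <cfargument name="id" type="numeric" required="true">
        <cfset var qry = "">
        <cfquery name="qry" datasource="#variables.dsn#">
            SELECT id, name, description, price
            FROM product
            WHERE id = <cfqueryparam value="#arguments.id#"
                                     cfsqltype="cf_sql_integer">
        </cfquery>
        <cfif qry.recordCount>
            <cfset variables.record.id = qry.id>
            <cfset variables.record.name = qry.name>
            <cfset variables.record.description = qry.description>
            <cfset variables.record.price = qry.price>
        </cfif>
    </cffunction>

    <cffunction name="get" returnType="any" output="false">
        <cfargument name="field" type="string" required="true">
        <cfreturn variables.record[arguments.field]>
    </cffunction>

    <cffunction name="getRecordset" returnType="query" output="false">
        <cfset var qry = "">
        <cfquery name="qry" datasource="#variables.dsn#">
            SELECT id, name
            FROM product
            ORDER BY name
        </cfquery>
        <cfreturn qry>
    </cffunction>

</cfcomponent>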

Setting up GlassFish for JMS and Working with Message Queues

Packt
30 Jul 2010
4 min read
(For more resources on Java, see here.)

Setting up GlassFish for JMS

Before we start writing code to take advantage of the JMS API, we need to configure some GlassFish resources. Specifically, we need to set up a JMS connection factory, a message queue, and a message topic.

Setting up a JMS connection factory

The easiest way to set up a JMS connection factory is via GlassFish's web console. The web console can be accessed by starting our domain, by entering the following command in the command line:

asadmin start-domain domain1

Then point the browser to http://localhost:4848 and log in:

A connection factory can be added by expanding the Resources node in the tree at the left-hand side of the web console, expanding the JMS Resources node and clicking on the Connection Factories node, then clicking on the New... button in the main area of the web console. For our purposes, we can take most of the defaults. The only thing we need to do is enter a Pool Name and pick a Resource Type for our connection factory.

It is always a good idea to use a Pool Name starting with "jms/" when picking a name for JMS resources. This way JMS resources can be easily identified when browsing a JNDI tree.

In the text field labeled Pool Name, enter jms/GlassFishBookConnectionFactory. Our code examples later in this article will use this JNDI name to obtain a reference to this connection factory. The Resource Type drop-down menu has three options:

- javax.jms.TopicConnectionFactory: used to create a connection factory that creates JMS topics for JMS clients using the pub/sub messaging domain
- javax.jms.QueueConnectionFactory: used to create a connection factory that creates JMS queues for JMS clients using the PTP messaging domain
- javax.jms.ConnectionFactory: used to create a connection factory that creates either JMS topics or JMS queues

For our example, we will select javax.jms.ConnectionFactory. This way we can use the same connection factory for all our examples, both those using the PTP messaging domain and those using the pub/sub messaging domain. After entering the Pool Name for our connection factory, selecting a connection factory type, and optionally entering a description for our connection factory, we must click on the OK button for the changes to take effect. We should then see our newly created connection factory listed in the main area of the GlassFish web console.

Setting up a JMS message queue

A JMS message queue can be added by expanding the Resources node in the tree at the left-hand side of the web console, expanding the JMS Resources node and clicking on the Destination Resources node, then clicking on the New... button in the main area of the web console. In our example, the JNDI name of the message queue is jms/GlassFishBookQueue. The resource type for message queues must be javax.jms.Queue. Additionally, a Physical Destination Name must be entered. In this example, we use GlassFishBookQueue as the value for this field. After clicking on the New... button, entering the appropriate information for our message queue, and clicking on the OK button, we should see the newly created queue:

Setting up a JMS message topic

Setting up a JMS message topic in GlassFish is very similar to setting up a message queue. In the GlassFish web console, expand the Resources node in the tree at the left-hand side, then expand the JMS Resources node and click on the Destination Resources node, then click on the New... button in the main area of the web console. Our examples will use a JNDI Name of jms/GlassFishBookTopic.
As this is a message topic, Resource Type must be javax.jms.Topic. The Description field is optional. The Physical Destination Name property is required. For our example, we will use GlassFishBookTopic as the value for this property. After clicking on the OK button, we can see our newly created message topic:

Now that we have set up a connection factory, a message queue, and a message topic, we are ready to start writing code using the JMS API.
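Before moving on, here is a hedged sketch — not from the article — of what that first bit of JMS client code could look like: a servlet that injects the two resources created above and sends a text message to the queue. The package, class name, URL pattern, and message text are assumptions for illustration.

package net.ensode.glassfishbook.jms;

import java.io.IOException;
import javax.annotation.Resource;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = {"/sendmessage"})
public class MessageSenderServlet extends HttpServlet
{
    // inject the resources we configured in the web console
    @Resource(mappedName = "jms/GlassFishBookConnectionFactory")
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "jms/GlassFishBookQueue")
    private Queue queue;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
    {
        try
        {
            Connection connection = connectionFactory.createConnection();
            // non-transacted session with automatic acknowledgement
            Session session =
                connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("Hello, JMS!");
            producer.send(message);
            connection.close();
            response.getWriter().println("Message sent.");
        }
        catch (JMSException e)
        {
            throw new ServletException(e);
        }
    }
}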

New Features in JPA 2.0

Packt
28 Jul 2010
9 min read
(For more resources on Java, see here.) Version 2.0 of the JPA specification introduces some new features to make working with JPA even easier. In the following sections, we discuss some of these new features:

Criteria API

One of the main additions to JPA in the 2.0 specification is the introduction of the Criteria API. The Criteria API is meant as a complement to the Java Persistence Query Language (JPQL). Although JPQL is very flexible, it has some problems that make working with it more difficult than necessary. For starters, JPQL queries are stored as strings and the compiler has no way of validating JPQL syntax. Additionally, JPQL is not type safe. We could write a JPQL query in which our where clause has a string value for a numeric property, and our code would compile and deploy just fine.

To get around the JPQL limitations described in the previous paragraph, the Criteria API was introduced to JPA in version 2.0 of the specification. The Criteria API allows us to write JPA queries programmatically, without having to rely on JPQL. The following code example illustrates how to use the Criteria API in our Java EE 6 applications:

package net.ensode.glassfishbook.criteriaapi;

import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.persistence.TypedQuery;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Path;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;
import javax.persistence.metamodel.EntityType;
import javax.persistence.metamodel.Metamodel;
import javax.persistence.metamodel.SingularAttribute;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = {"/criteriaapi"})
public class CriteriaApiDemoServlet extends HttpServlet
{
    @PersistenceUnit(unitName = "customerPersistenceUnit")
    private EntityManagerFactory entityManagerFactory;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
    {
        PrintWriter printWriter = response.getWriter();
        List<UsState> matchingStatesList;

        EntityManager entityManager = entityManagerFactory.createEntityManager();
        CriteriaBuilder criteriaBuilder = entityManager.getCriteriaBuilder();
        CriteriaQuery<UsState> criteriaQuery =
            criteriaBuilder.createQuery(UsState.class);
        Root<UsState> root = criteriaQuery.from(UsState.class);

        Metamodel metamodel = entityManagerFactory.getMetamodel();
        EntityType<UsState> usStateEntityType = metamodel.entity(UsState.class);
        SingularAttribute<UsState, String> usStateAttribute =
            usStateEntityType.getDeclaredSingularAttribute("usStateNm",
                String.class);

        Path<String> path = root.get(usStateAttribute);
        Predicate predicate = criteriaBuilder.like(path, "New%");
        criteriaQuery = criteriaQuery.where(predicate);

        TypedQuery<UsState> typedQuery = entityManager.createQuery(criteriaQuery);
        matchingStatesList = typedQuery.getResultList();

        response.setContentType("text/html");
        printWriter.println("The following states match the criteria:<br/>");

        for (UsState state : matchingStatesList)
        {
            printWriter.println(state.getUsStateNm() + "<br/>");
        }
    }
}

This example takes advantage of the Criteria API.
When writing code using the Criteria API, the first thing we need to do is obtain an instance of a class implementing the javax.persistence.criteria.CriteriaBuilder interface. As we can see in the previous example, we obtain this instance by invoking the getCriteriaBuilder() method on our EntityManager.

From our CriteriaBuilder implementation, we need to obtain an instance of a class implementing the javax.persistence.criteria.CriteriaQuery interface. We do this by invoking the createQuery() method on our CriteriaBuilder implementation. Notice that CriteriaQuery is generically typed. The generic type argument dictates the type of result that our CriteriaQuery implementation will return upon execution. By taking advantage of generics in this way, the Criteria API allows us to write type-safe code.

Once we have obtained a CriteriaQuery implementation, we can obtain from it an instance of a class implementing the javax.persistence.criteria.Root interface. The Root implementation dictates which JPA entity we will be querying from. It is analogous to the FROM clause in JPQL (and SQL).

The next two lines in our example take advantage of another new addition to the JPA specification—the Metamodel API. In order to take advantage of the Metamodel API, we need to obtain an implementation of the javax.persistence.metamodel.Metamodel interface by invoking the getMetamodel() method on our EntityManagerFactory.

From our Metamodel implementation, we can obtain a generically typed instance of the javax.persistence.metamodel.EntityType interface. The generic type argument indicates the JPA entity our EntityType implementation corresponds to. EntityType allows us to browse the persistent attributes of our JPA entities at runtime. This is exactly what we do in the next line in our example. In our case, we are getting an instance of SingularAttribute, which maps to a simple, singular attribute in our JPA entity. EntityType has methods to obtain attributes that map to collections, sets, lists, and maps. Obtaining these types of attributes is very similar to obtaining a SingularAttribute, therefore we won't be covering those directly. Refer to the Java EE 6 API documentation at http://java.sun.com/javaee/6/docs/api/ for more information.

As we can see in our example, SingularAttribute contains two generic type arguments. The first argument dictates the JPA entity we are working with and the second one indicates the type of attribute. We obtain our SingularAttribute by invoking the getDeclaredSingularAttribute() method on our EntityType implementation and passing the attribute name (as declared in our JPA entity) as a String.

Once we have obtained our SingularAttribute implementation, we need to obtain an implementation of javax.persistence.criteria.Path by invoking the get() method on our Root instance and passing our SingularAttribute as a parameter.

In our example, we will get a list of all the "new" states in the United States (that is, all states whose names start with "New"). Of course, this is the job of a "like" condition. We can do this with the Criteria API by invoking the like() method on our CriteriaBuilder implementation. The like() method takes our Path implementation as its first parameter and the value to search for as its second parameter. CriteriaBuilder has a number of methods that are analogous to SQL and JPQL clauses, such as equal(), greaterThan(), lessThan(), and(), or(), and so on (for the complete list, refer to the Java EE 6 documentation at http://java.sun.com/javaee/6/docs/api/). These methods can be combined to create complex queries via the Criteria API, as the sketch below illustrates.
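This hedged fragment reuses the variables from the example above; the numeric id attribute is an assumption for illustration and is not part of the article's UsState entity:

// Hypothetical sketch: combining two predicates with and(). Assumes
// UsState also has a Long "id" attribute, which the article does not show.
SingularAttribute<UsState, Long> idAttribute =
    usStateEntityType.getDeclaredSingularAttribute("id", Long.class);

Predicate nameStartsWithNew = criteriaBuilder.like(path, "New%");
Predicate idGreaterThanTen =
    criteriaBuilder.greaterThan(root.get(idAttribute), 10L);

// and() collapses both conditions into the single predicate where() expects
criteriaQuery = criteriaQuery.where(
    criteriaBuilder.and(nameStartsWithNew, idGreaterThanTen));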
Returning to our example: the like() method in CriteriaBuilder returns an implementation of the javax.persistence.criteria.Predicate interface, which we need to pass to the where() method in our CriteriaQuery implementation. The where() method returns a new instance of CriteriaQuery, which we assign back to our criteriaQuery variable.

At this point, we are ready to build our query. When working with the Criteria API, we deal with the javax.persistence.TypedQuery interface, which can be thought of as a type-safe version of the Query interface we use with JPQL. We obtain an instance of TypedQuery by invoking the createQuery() method on EntityManager and passing our CriteriaQuery implementation as a parameter. To obtain our query results as a list, we simply invoke getResultList() on our TypedQuery implementation. It is worth reiterating that the Criteria API is type safe: attempting to assign the results of getResultList() to a list of the wrong type would result in a compilation error.

After building, packaging, and deploying our code, then pointing the browser to our servlet's URL, we should see all the "New" states displayed in the browser.

Bean Validation support

Another new feature introduced in JPA 2.0 is support for JSR 303, Bean Validation. Bean Validation support allows us to annotate our JPA entities with Bean Validation annotations. These annotations allow us to easily validate user input and perform data sanitization. Taking advantage of Bean Validation is very simple: all we need to do is annotate our JPA entity fields or getter methods with any of the validation annotations defined in the javax.validation.constraints package. Once our fields are annotated as appropriate, the EntityManager will prevent non-validating data from being persisted. The following code example is a modified version of the Customer JPA entity, updated to take advantage of Bean Validation in some of its fields.

package net.ensode.glassfishbook.jpa.beanvalidation;

import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

@Entity
@Table(name = "CUSTOMERS")
public class Customer implements Serializable
{
    @Id
    @Column(name = "CUSTOMER_ID")
    private Long customerId;

    @Column(name = "FIRST_NAME")
    @NotNull
    @Size(min = 2, max = 20)
    private String firstName;

    @Column(name = "LAST_NAME")
    @NotNull
    @Size(min = 2, max = 20)
    private String lastName;

    private String email;

    public Long getCustomerId()
    {
        return customerId;
    }

    public void setCustomerId(Long customerId)
    {
        this.customerId = customerId;
    }

    public String getEmail()
    {
        return email;
    }

    public void setEmail(String email)
    {
        this.email = email;
    }

    public String getFirstName()
    {
        return firstName;
    }

    public void setFirstName(String firstName)
    {
        this.firstName = firstName;
    }

    public String getLastName()
    {
        return lastName;
    }

    public void setLastName(String lastName)
    {
        this.lastName = lastName;
    }
}

In this example, we used the @NotNull annotation to prevent the firstName and lastName of our entity from being persisted with null values. We also used the @Size annotation to restrict the minimum and maximum length of these fields. This is all we need to do to take advantage of Bean Validation in JPA. If our code attempts to persist or update an instance of our entity that does not pass the declared validation, an exception of type javax.validation.ConstraintViolationException will be thrown and the entity will not be persisted.
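As a hedged sketch (not from the article), catching that exception and reporting each violation might look like this; the active EntityManager and transaction are assumed from the surrounding context:

// Hypothetical sketch: persisting an invalid Customer. Depending on the
// persistence provider, the exception surfaces at persist() or at flush/commit.
Customer customer = new Customer();
customer.setFirstName("X"); // violates @Size(min = 2);
                            // lastName is left null, violating @NotNull
try
{
    entityManager.persist(customer);
    entityManager.flush();
}
catch (javax.validation.ConstraintViolationException e)
{
    for (javax.validation.ConstraintViolation<?> violation
        : e.getConstraintViolations())
    {
        System.err.println(violation.getPropertyPath() + " "
            + violation.getMessage());
    }
}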
As we can see, Bean Validation pretty much automates data validation, freeing us from having to write validation code by hand. In addition to the two annotations discussed in the previous example, the javax.validation.constraints package contains several additional annotations that we can use to automate validation on our JPA entities. Refer to the Java EE 6 API documentation at http://java.sun.com/javaee/6/docs/api/ for the complete list.

Summary

In this article, we discussed some new JPA 2.0 features such as the Criteria API, which allows us to build JPA queries programmatically; the Metamodel API, which allows us to take advantage of Java's type safety when working with JPA; and Bean Validation, which allows us to easily validate input by simply annotating our JPA entity fields.

Further resources on this subject:
- Interacting with Databases through the Java Persistence API [article]
- Setting up GlassFish for JMS and Working with Message Queues [article]

Application, Session, and Request Scope in ColdFusion 9

Packt
27 Jul 2010
8 min read
(For more resources on ColdFusion, see here.)

The start methods

We will now have a look at the start methods and make some observations. Each method has its own set of arguments, and all Application.cfc methods return a Boolean value of true or false to declare whether they completed correctly. Any code you place inside a method executes when the matching start event occurs. We will also include some basic code that will help you build an application core that is good for reuse, and discuss what those features provide.

Application start method—onApplicationStart()

The following is the code structure of the application start method. You could place these methods in any order in the CFC; the order does not matter. Code that uses CFCs only requires the methods to exist: if they exist, they will be called. We order them in our code simply to make the structure easier for a human to read and understand.

<cffunction name="onApplicationStart" output="false">
    <cfscript>
        // create default stat structure and pre-request values
        application._stat = structNew();
        application._stat.started = now();
        application._stat.thisHit = now();
        application._stat.hits = 0;
        application._stat.sessions = 0;
    </cfscript>
</cffunction>

There are no arguments for the onApplicationStart() method. We have included some extra code to show an example of what can be done in this function. Please note that if we change the code in this method, it will only run the very first time the application is hit. To hit it again, we need to either change the application name or restart the ColdFusion server. The Application variables section that was previously explained shows how to change the application's name.

From the start methods, we can see that we can access the variable scopes that allow persistence of key information. To understand the power of this object, we will create some statistics that can be used in most situations—for debugging, logging, or any other appropriate use case. Again, be aware that this method only gets hit the first time a request is made to the ColdFusion server for this application. We will update many of our statistics in the request methods, and one of our variables in the session end method.

Session start method—onSessionStart()

The session start method only gets called when a request is made for a new session. It is good that ColdFusion can keep track of these things. The following example code keeps a record of session-based statistics, similar to the application-based statistics:

<cffunction name="onSessionStart" output="false">
    <cfscript>
        // create default session stat structure and pre-request values
        session._stat.started = now();
        session._stat.thisHit = now();
        session._stat.hits = 0;
        // at the start of each session, update the application-wide count
        application._stat.sessions += 1;
    </cfscript>
</cffunction>

You might have noticed that the previous code uses +=. In ColdFusion versions prior to 8, you had to write that line differently. The following two examples are functionally the same (the first works in all versions; the second works only in version 8 and higher):

Example 1: myTotal = myTotal + 3
Example 2: myTotal += 3

This syntax is common in JavaScript, ActionScript, and many other languages, and was added in ColdFusion version 8.
We update the count in the application scope because sessions are hidden from one another and cannot see each other. Therefore, we use the application structure to add to the count every time a new session starts.

Request start method—onRequestStart()

This is one of the longest methods in the article. The first thing you will notice is that the page being requested is passed to the onRequestStart() method by ColdFusion. In this example, we instruct ColdFusion to block execution of any script whose filename begins with an underscore when it is called remotely. This means that a .cfm or .cfc page with an underscore at the start cannot be requested from outside the local server; such files can still be run when called from pages inside the server. This makes all these files locally accessible only:

<cffunction name="onRequestStart" output="false">
    <cfargument name="thePage" type="string" required="true">
    <cfscript>
        var myReturn = true;
        // block pages whose filename starts with an underscore
        if (left(listLast(arguments.thePage, "/"), 1) EQ "_") {
            myReturn = false;
        }
        // update application stats on each request
        application._stat.lastHit = application._stat.thisHit;
        application._stat.thisHit = now();
        application._stat.hits += 1;
        // update session stats on each request
        session._stat.lastHit = session._stat.thisHit;
        session._stat.thisHit = now();
        session._stat.hits += 1;
    </cfscript>
    <cfreturn myReturn>
</cffunction>

This method also updates all the application and session statistics variables that need refreshing with each request. You should also notice that we record the last time the application or session was requested.

The end methods

Some of what these methods make possible could not be achieved in earlier versions of ColdFusion. It was possible to code an end-of-request function, but only a few programmers made use of it; with this object, many more people are taking advantage of these features. The new methods give us the ability to run code specifically when a session ends, and when an application ends. This allows us to do things that we could not do previously. For example, we can keep a record of how long a user is online without having to access the database with each request: when the session starts, we store the information in the session scope; when the session ends, we can take all that information and store it in a session log table, if logging is desired on your site.

Request end method—onRequestEnd()

We are not going to use every method that is available to us; as we have covered the concepts in the other sections, that would be redundant. This method works very much like onRequestStart(), with the exception that it occurs after the requested page has been processed. If you create content in this method and set the output attribute to true, it will be sent back to the browser. Here you can place code that logs information about our requests:

<cffunction name="onRequestEnd" returnType="void" output="false">
    <cfargument name="thePage" type="string" required="true">
</cffunction>

Session end method—onSessionEnd()

In the session end method, we can perform logging functions for analytical statistics that are specific to the end of a session, if desired for your site. You need to use the arguments scope to read both the application and session variables.
If you are changing application variables, as in our example code, then you must use the arguments scope for that.

<cffunction name="onSessionEnd" returnType="void" output="false">
    <cfargument name="SessionScope" type="struct" required="true">
    <cfargument name="ApplicationScope" type="struct" required="false">
    <cfscript>
        // NOTE: You must use the arguments scope to access the
        // application structure inside this method.
        arguments.ApplicationScope._stat.sessions -= 1;
    </cfscript>
</cffunction>

Application end method—onApplicationEnd()

This is our end method for applications. Here is where you can do logging activity. As in the session end method, you need to use the arguments scope in order to read the application variables. It is also good to note that, at this point, you can no longer access the session scope.

<cffunction name="onApplicationEnd" returnType="void" output="false">
    <cfargument name="applicationScope" required="true">
</cffunction>

On Error method—onError()

The following code demonstrates how flexibly we can manage errors sent to this method. If the error comes from Application.cfc itself, then the name of the method that had the issue will be contained in the arguments.eventname variable; otherwise, it will be an empty string. In our code, we change the label on our dump statement so that it is more obvious where the error was generated.

<cffunction name="onError" returnType="void" output="true">
    <cfargument name="exception" required="true">
    <cfargument name="eventname" type="string" required="true">
    <cfif arguments.eventName NEQ "">
        <cfdump var="#arguments.exception#" label="Application core exception">
    <cfelse>
        <cfdump var="#arguments.exception#" label="Application exception">
    </cfif>
</cffunction>
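To close the loop on the session-length idea mentioned earlier, here is a hedged sketch of an onSessionEnd() that logs how long the visitor was online, building on the _stat structure created in onSessionStart(); the log file name is an assumption:

<!--- Hypothetical sketch: logging session duration when the session ends --->
<cffunction name="onSessionEnd" returnType="void" output="false">
    <cfargument name="SessionScope" type="struct" required="true">
    <cfargument name="ApplicationScope" type="struct" required="false">
    <cfscript>
        // minutes between the first and the last hit of this session
        var sessionMinutes = dateDiff("n",
            arguments.SessionScope._stat.started,
            arguments.SessionScope._stat.thisHit);
        arguments.ApplicationScope._stat.sessions -= 1;
    </cfscript>
    <cflog file="sessionStats"
           text="Session ended: #sessionMinutes# minute(s), #arguments.SessionScope._stat.hits# request(s).">
</cffunction>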