
How-To Tutorials - Programming

1081 Articles

Working with Colors in Scribus

Packt
10 Dec 2010
10 min read
Scribus 1.3.5: Beginner's Guide Create optimum page layouts for your documents using productive tools of Scribus. Master desktop publishing with Scribus Create professional-looking documents with ease Enhance the readability of your documents using powerful layout tools of Scribus Packed with interesting examples and screenshots that show you the most important Scribus tools to create and publish your documents. Applying colors in Scribus Applying color is as basic as creating a frame or writing text. In this article we will often give color values. Each time we need to, we will use the first letter of the color followed by its value. For example, C75 will mean 75 percent of cyan. K will be used for black and B for blue. There are five main things you could apply colors to: Frame or Shape fill Frame or Shape border Line Text Text border You'd like to colorize pictures too. It's a very different method, using duotone or any equivalent image effect. Applying a color to a frame means that you will use the Colors tab of the PP, whereas applying color to text will require you to go to the Color & Effects expander in the Text tab. In both cases you'll find what's needed to apply color to the fill and the border, but the user interfaces are a bit different. (Move the mouse over the image to enlarge.) Time for action – applying colors to a Text Frame's text Colors on frames will use the same color list. Let's follow some steps to see how this is done. Draw a Text Frame where you want it on a page. Type some text inside like "colors of the world" or use Insert | Sample Text. Go to the Colors tab of the PP (F2). Click on the second button placed above the color list to specify that you want to apply the changes to the fill. Then click on the color you want in the list below, for example, Magenta. Click on the paintbrush button, and apply a black color that will be applied to the border (we could call it stroke too). Don't forget that applying a stroke color will need some border refinements in the Line tab to set the width and style of the border. If you need more information about these options. Now, you can select the text or some part of it and go to the Colors & Effects expander of the Text tab. Here you will again see the same icon we used previously. Each has its own color list. Let's choose Yellow for the text color. The stroke color cannot be changed. To change this, click on the Shadow button placed below, and now choose black as the stroke color. The text shadow should be black. What just happened? Color on text is quicker than frame colors in some ways because each has its own list. So, there is no need to click on any button, and you can see both at a glance. Just remember that text has no stroke color activated when setting it first. You need to add the stroke or shadow to a selection to activate the border color for that selection. Quick apply in story editor If, like me, you like the Story Editor (Edit | Edit Text), notice that colors can be applied from there. They are not displayed in the editor but will be displayed when the changes will be applied to the layout. This is much faster, but you need to know exactly what you're doing and need to be precise in your selection. If you need to apply the same color setting to a word in the overall document, you can alternatively use the Edit | Search/Replace window. 
You can set there the word you're looking for in the first field, and in the right-hand side, replace with the same word, and choose the Fill and Stroke color that you want to apply. Of course, it would be nice if this window could let us apply character styles to make future changes easier. The Scribus new document receives a default color list, which is the same all over your document. In this article, we will deal with many ways of adapting existing colors or creating new ones. Applying shade or transparency Shade and transparency are two ways of setting more precisely how a specific color will be applied on your items. Shades and transparencies are fake effects that will be interpreted by some later elements of the printing workflow, such as Raster Image Processors, to know how the set color can be rendered with pure colors. This is the key point of reproducing colors: if you want a gray, you'll generally have a black color for that. In offset printing which is the reference, the size of the point will vary relatively to the darkness of the black you chose. This will be optically interpreted by the reader. Using shades Each color property has a Shade value. The default is set to 100 percent, meaning that the color will be printed fully saturated. Reducing the shade value will produce a lighter color. When at 0 percent, the color, whatever it may be, will be interpreted as white. On a pure color item like any primary or spot, changing the shade won't affect the color composition. However, on processed colors that are made by mixing several primary colors, modifying the shade will proportionally change the amount of each ink used in the process. Our C75 M49 Y7 K12 at a 50 percent shade value will become a C37 M25 Y4 K6 color in the final PDF. Less color means less ink on the paper and more white (or paper color), which results in a lighter color. You should remember that Shade is a frame property and not a color property. So, if you apply a new color to the frame, the shade value will be kept and applied immediately. To change the shade of the color applied to some characters, it will be a bit different: we don't have a field to fill but a drop-down list with predefined values of 10 percent increments. If you need another value, just choose Other to display a window in which you'll add the amount that you exactly need. You can do the same in the Text Editor. Using transparency While shade is used to lighten a color, the Opacity value will tell you how the color will be less solid. Once again, the range goes from 0%, meaning the object is completely transparent and invisible, to 100% to make it opaque. The latter value is the default. When two objects overlap, the top object hides the bottom object. But when Opacity is decreased, the object at the bottom will become more and more visible. One difference to notice is that Opacity won't affect only the color rendering but the content too (if there is some). As for Shade, Opacity too is applied separately to the fill and to the stroke. So you'll need to set both if needed. One important aspect is that Shade and Opacity can both be applied on the frame and a value 50% of each will give a lighter color than if only one was used. Several opacity values applied to objects show how they can act and add to each other: The background for the text in the title, in the following screenshot, is done in the same color as the background at the top of the page. 
Using transparency or shade can help create this background and decrease the number of used colors. Time for action – transparency and layers Let's now use transparency and layers to create some custom effects over a picture, as can often be done for covers. Create a new document and display the Layers window from the Windows menu. This window will already contain a line called Background. You can add a layer by clicking on the + button at the bottom left-hand side of the window: it will be called New Layer 1. You can rename it by double-clicking on its name. On the first page of it, add an Image Frame that covers the entire page. Then draw a rectangular shape that covers almost half of the page height. Duplicate this layer by clicking on the middle button placed at the bottom of the Layers window. Select the visibility checkbox (it is the first column headed with an eye icon) of this layer to hide it. We'll modify the transparency of each object. Click on New Layer 1 to specify that you want to work on this layer; otherwise you won't be able to select its frames. The frames or shapes you'll create from now on will be added to this layer called New Layer 1. Select the black shape and decrease the Opacity value of the Colors tab of the PP to 50%. Do the same for the Image Frame. Now, hide this layer by clicking on its visibility icon and show the top layer. In the Layers window, verify if this layer is selected and decrease its opacity. What just happened? If there is a need to make several objects transparent at once, an idea would be to put them on a layer and set the layer Opacity. This way, the same amount of transparency will be applied to the whole. You can open the Layer window from the Window menu. When working with layers, it's important to have the right layer selected to work on it. Basically, any new layer will be added at the top of the stack and will be activated once created. When a layer is selected, you can change the Opacity of this layer by using the field on the top right-hand side of the Layer window. Since it is applied to the layer itself, all the objects placed on it will be affected, but their own opacity values won't be changed. If you look at the differences between the two layers we have made, you'll see that the area in the first black rectangle explicitly becomes transparent by itself because you can see the photo through it. This is not seen in the second. So using layer, as we have seen, can help us work faster when we need to apply the same opacity setting to several objects, but we have to take care, because the result is slightly different. Using layers to blend colors More rarely, layers can be used to mix colors. Blend Mode is originally set to Normal, which does no blending. But if you use any other mode on a layer, its colors will be mixed with colors of the item placed on a lower layer, relatively to the chosen mode. This can be very creative. If you need a more precise action, Blend Mode can be set to Single Object from the Colors tab of the PP. Just give it a try. Layers are still most commonly used to organize a document: a layer for text, a layer for images, a layer for each language for a multi-lingual document, and so on. They are a smart way to work, but are not necessary in your documents and really we can work without them in a layout program.
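Everything above is done through the user interface, but Scribus also ships with a Python-based Scripter that can automate the same colour, shade, and opacity settings. The short sketch below is not from the book; the function names follow the Scripter API as exposed in Script | Show Console, and the exact signatures and value ranges should be treated as assumptions to verify against the Scripter manual for your Scribus version.

```python
# Hedged sketch: automating the fill/stroke/shade steps from the Scripter console.
# Requires a document to be open in Scribus; run it from Script > Show Console.
import scribus

# Define a process colour. In the Scripter API the CMYK components are given
# in the 0-255 range, so 75 percent cyan is roughly 191 (assumption: check your docs).
scribus.defineColor("MyCyan75", 191, 0, 0, 0)

# Draw a text frame and colour it, mirroring the "Time for action" steps above.
frame = scribus.createText(50, 50, 300, 100)
scribus.setText("colors of the world", frame)
scribus.setFillColor("MyCyan75", frame)   # frame fill
scribus.setLineColor("Black", frame)      # frame border (stroke)

# Shade and opacity, as described in the "Applying shade or transparency" section.
scribus.setFillShade(50, frame)           # 50% shade -> lighter colour
scribus.setFillTransparency(0.5, frame)   # 0.0 fully transparent .. 1.0 opaque
```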

Scribus: Managing Colors

Packt
10 Dec 2010
7 min read
  Scribus 1.3.5: Beginner's Guide Create optimum page layouts for your documents using productive tools of Scribus. Master desktop publishing with Scribus Create professional-looking documents with ease Enhance the readability of your documents using powerful layout tools of Scribus Packed with interesting examples and screenshots that show you the most important Scribus tools to create and publish your documents. Time for action – managing new colors To define your own color set, you'll need to go to Edit | Colors. Here you will have several options. The most import will be the New button, which displays a window that will give you all that you need to define your color perfectly. Give a unique and meaningful name to your color; it will help you recognize it in the color lists later. For the color model, you'll need to choose between CMYK, RGB, or Web safe RGB. If you intend to print the document, choose CMYK. If you need to put it on a website, you can choose the RGB model. Web safe, will be more restricted but you'll be sure that the chosen colors will have a similar render on every kind of monitor. Old and New show an overview of the previous state of a color when editing an existing color and the state of the actual, chosen color. It's very practical to compare. To choose your color, everything is placed on the right-hand side. You can click in the color spectrum area, drag the primary sliders, or enter the value of each primary in the field if you already know exactly which color you want. The HSV Color Map on top is the setting that gives you the spectrum. If you choose another, you'll see predefined swatches. Most of them are RGB and should not be used directly for printed documents. Click on OK to validate it in the Edit Color window and in the Colors window too. (Move the mouse over the image to enlarge.) If no document is opened, the Colors window will have some more buttons that will be very helpful. The Current Color Set should be set to Scribus Basic, which is the simplest color set. You can choose any other set but they contain RGB colors only. Then you can add your own colors, if you haven't already done so. Click on Save Color Set and give it a name. Your set will now be listed in the list and will be available for every new document. What just happened? Creating colors is very simple and can be done in few steps. In fact, creating some colors is much faster than having to choose the same color from a long, default color list. My advice would be: don't lose your time looking for a color in a predefined swatch unless you really need this color (like a Pantone or any other spot). Consider the following points: You should know the average color you need before looking for it It will take some time to take a look at all the available colors The color might not be in a predefined swatch Don't use the set everybody uses, it will help you make your document recognizable If no document is opened, the color will be added to the default swatch unless you create your own color name. If a Scribus document is open, even empty, the color will be saved in the document. Let's see how to reuse it if needed. Reusing colors from other files If you already have the colors somewhere, there might be a way to pick it without having to create it again. If the color is defined in an imported vector (mainly EPS or SVG) file, the colors will automatically be added in the color list with a name beginning with FromEPS or FromSVG followed by hexadecimal values of the color. 
In an EPS, colors can be CMYK or spot, but in SVG they will be RGB. CMYK between Inkscape and Scribus Inkscape colors are RGB but this software is color managed, so you can have an accurate on screen-rendering and you can add a 5-digit color-profile value to the color style property. Actually, no software adds this automatically. Doing it manually in Inkscape through the XML editor will require some knowledge of SVG and CSS. It will be easier to simply get your RGB colors and then go, after import, to the Edit | Colors window and refine the colors by clicking on the Edit button. If your color is in an imported picture or is placed somewhere else, you can use the Eye Dropper tool (the last icon of the toolbar). When you click on a color, you will be asked for a name and the color will be added as RGB in the color list. If you want to use it in CMYK, just edit the color and change the color model. The last important use case is an internal Scribus case. The color list swatch defined in a document is available only in that document and saved within it. The bad point of this is that they won't automatically be available for future documents. But the good point is that you can send your file to anyone and your colors will still be there. You have several ways of doing this. Time for action – importing from a Scribus document We have already seen how to import style and master pages from other existing Scribus documents; importing colors will be very similar. The simplest method to reuse existing already defined colors is to go to Edit | Colors. Click on the Import button. Browse your directories to find the Scribus file that contains the colors you want and select it. All the colors of this document will be added to your new document swatch. If you don't need some colors, just select them in the Edit | Colors list and click on the Delete button. Scribus will ask you which color will replace this deleted color. If this color is unused in your new document, it doesn't matter. What just happened? The Edit Colors window provides a simple way to import the colors from another Scribus document: if the colors are already set in it, you just have to choose it. But there are many other ways to do it, especially because colors are considered as frame options and can be imported with them. In fact, if you really need the same colors, you certainly won't like importing them each time you create a new document. The best you can do is create a file with your master pages, styles, and colors defined and save it as a model. Each new document will be created from this model, so you'll get them easily each time. The same will happen if you use a scrapbook. Performing those steps can help you get in few seconds everything you have already defined in another similar document. Finally, you may need to reuse those colors but not in the same kind of document. You can create a swatch in GIMP .gpl format or use any EPS or AI file. GIMP .gpl format is very simple but can be only RGB. Give the value of each RGB color. Press the Tab key and write the name of the color (for example, medium grey would be: 127 127 127 grey50). Each color has to be alone on its line. GPL, EPS, and AI files have to be placed in the Scribus swatch install directory (on Linux /usr/lib/scribus/swatches, on Macs Applications/Scribus/Contents/lib/scribus/swatches, and on Microsoft Windows Programs/scribus/lib/scribus/swatches). When using an EPS file you might get too many colors. 
Create as many sample shapes as needed on a page and apply one of the colors you want to keep to each of them. Then go to Edit | Colors and click on Remove Unused. Finally, close the window and delete the sample shapes. The best method is whichever one suits your workflow: test them all, and perhaps you will find your own.
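As a concrete illustration of the GIMP palette format mentioned above, here is what a minimal .gpl swatch file might look like. The header lines are the standard GIMP palette header; the colour names other than grey50 are made up. Drop a file like this into the Scribus swatches directory listed earlier and it should appear as a selectable colour set.

```
GIMP Palette
Name: MyProjectColours
Columns: 1
# R G B values separated by spaces, then a tab and the colour name
127 127 127	grey50
204 0 51	brand-red
0 51 102	brand-blue
```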

Python Multimedia: Video Format Conversion, Manipulations and Effects

Packt
10 Dec 2010
11 min read
  Python Multimedia Learn how to develop Multimedia applications using Python with this practical step-by-step guide Use Python Imaging Library for digital image processing. Create exciting 2D cartoon characters using Pyglet multimedia framework Create GUI-based audio and video players using QT Phonon framework. Get to grips with the primer on GStreamer multimedia framework and use this API for audio and video processing.       Installation prerequisites We will use Python bindings of GStreamer multimedia framework to process video data. See Python Multimedia: Working with Audios for the installation instructions to install GStreamer and other dependencies. For video processing, we will be using several GStreamer plugins not introduced earlier. Make sure that these plugins are available in your GStreamer installation by running the gst-inspect-0.10 command from the console (gst-inspect-0.10.exe for Windows XP users). Otherwise, you will need to install these plugins or use an alternative if available. Following is a list of additional plugins we will use in this article: autoconvert: Determines an appropriate converter based on the capabilities. It will be used extensively used throughout this article. autovideosink: Automatically selects a video sink to display a streaming video. ffmpegcolorspace: Transforms the color space into a color space format that can be displayed by the video sink. capsfilter: It's the capabilities filter—used to restrict the type of media data passing down stream, discussed extensively in this article. textoverlay: Overlays a text string on the streaming video. timeoverlay: Adds a timestamp on top of the video buffer. clockoverlay: Puts current clock time on the streaming video. videobalance: Used to adjust brightness, contrast, and saturation of the images. It is used in the Video manipulations and effects section. videobox: Crops the video frames by specified number of pixels—used in the Cropping section. ffmux_mp4: Provides muxer element for MP4 video muxing. ffenc_mpeg4: Encodes data into MPEG4 format. ffenc_png: Encodes data in PNG format. Playing a video Earlier, we saw how to play an audio. Like audio, there are different ways in which a video can be streamed. The simplest of these methods is to use the playbin plugin. Another method is to go by the basics, where we create a conventional pipeline and create and link the required pipeline elements. If we only want to play the 'video' track of a video file, then the latter technique is very similar to the one illustrated for audio playback. However, almost always, one would like to hear the audio track for the video being streamed. There is additional work involved to accomplish this. The following diagram is a representative GStreamer pipeline that shows how the data flows in case of a video playback. In this illustration, the decodebin uses an appropriate decoder to decode the media data from the source element. Depending on the type of data (audio or video), it is then further streamed to the audio or video processing elements through the queue elements. The two queue elements, queue1 and queue2, act as media data buffer for audio and video data respectively. When the queue elements are added and linked in the pipeline, the thread creation within the pipeline is handled internally by the GStreamer. Time for action – video player! Let's write a simple video player utility. Here we will not use the playbin plugin. The use of playbin will be illustrated in a later sub-section. 
We will develop this utility by constructing a GStreamer pipeline. The key here is to use the queue as a data buffer. The audio and video data needs to be directed so that this 'flows' through audio or video processing sections of the pipeline respectively. Download the file PlayingVidio.py from the Packt website. The file has the source code for this video player utility. The following code gives an overview of the Video player class and its methods. import time import thread import gobject import pygst pygst.require("0.10") import gst import os class VideoPlayer: def __init__(self): pass def constructPipeline(self): pass def connectSignals(self): pass def decodebin_pad_added(self, decodebin, pad): pass def play(self): pass def message_handler(self, bus, message): pass # Run the program player = VideoPlayer() thread.start_new_thread(player.play, ()) gobject.threads_init() evt_loop = gobject.MainLoop() evt_loop.run() As you can see, the overall structure of the code and the main program execution code remains the same as in the audio processing examples. The thread module is used to create a new thread for playing the video. The method VideoPlayer.play is sent on this thread. The gobject.threads_init() is an initialization function for facilitating the use of Python threading within the gobject modules. The main event loop for executing this program is created using gobject and this loop is started by the call evt_loop.run(). Instead of using thread module you can make use of threading module as well. The code to use it will be something like: import threading threading.Thread(target=player.play).start() You will need to replace the line thread.start_new_thread(player.play, ()) in earlier code snippet with line 2 illustrated in the code snippet within this note. Try it yourself! Now let's discuss a few of the important methods, starting with self.contructPipeline: 1 def constructPipeline(self): 2 # Create the pipeline instance 3 self.player = gst.Pipeline() 4 5 # Define pipeline elements 6 self.filesrc = gst.element_factory_make("filesrc") 7 self.filesrc.set_property("location", 8 self.inFileLocation) 9 self.decodebin = gst.element_factory_make("decodebin") 10 11 # audioconvert for audio processing pipeline 12 self.audioconvert = gst.element_factory_make( 13 "audioconvert") 14 # Autoconvert element for video processing 15 self.autoconvert = gst.element_factory_make( 16 "autoconvert") 17 self.audiosink = gst.element_factory_make( 18 "autoaudiosink") 19 20 self.videosink = gst.element_factory_make( 21 "autovideosink") 22 23 # As a precaution add videio capability filter 24 # in the video processing pipeline. 25 videocap = gst.Caps("video/x-raw-yuv") 26 self.filter = gst.element_factory_make("capsfilter") 27 self.filter.set_property("caps", videocap) 28 # Converts the video from one colorspace to another 29 self.colorSpace = gst.element_factory_make( 30 "ffmpegcolorspace") 31 32 self.videoQueue = gst.element_factory_make("queue") 33 self.audioQueue = gst.element_factory_make("queue") 34 35 # Add elements to the pipeline 36 self.player.add(self.filesrc, 37 self.decodebin, 38 self.autoconvert, 39 self.audioconvert, 40 self.videoQueue, 41 self.audioQueue, 42 self.filter, 43 self.colorSpace, 44 self.audiosink, 45 self.videosink) 46 47 # Link elements in the pipeline. 
48 gst.element_link_many(self.filesrc, self.decodebin) 49 50 gst.element_link_many(self.videoQueue, self.autoconvert, 51 self.filter, self.colorSpace, 52 self.videosink) 53 54 gst.element_link_many(self.audioQueue,self.audioconvert, 55 self.audiosink) In various audio processing applications, we have used several of the elements defined in this method. First, the pipeline object, self.player, is created. The self.filesrc element specifies the input video file. This element is connected to a decodebin. On line 15, autoconvert element is created. It is a GStreamer bin that automatically selects a converter based on the capabilities (caps). It translates the decoded data coming out of the decodebin in a format playable by the video device. Note that before reaching the video sink, this data travels through a capsfilter and ffmpegcolorspace converter. The capsfilter element is defined on line 26. It is a filter that restricts the allowed capabilities, that is, the type of media data that will pass through it. In this case, the videoCap object defined on line 25 instructs the filter to only allow video-xraw-yuv capabilities. The ffmpegcolorspace is a plugin that has the ability to convert video frames to a different color space format. At this time, it is necessary to explain what a color space is. A variety of colors can be created by use of basic colors. Such colors form, what we call, a color space. A common example is an rgb color space where a range of colors can be created using a combination of red, green, and blue colors. The color space conversion is a representation of a video frame or an image from one color space into the other. The conversion is done in such a way that the converted video frame or image is a closer representation of the original one. The video can be streamed even without using the combination of capsfilter and the ffmpegcolorspace. However, the video may appear distorted. So it is recommended to use capsfilter and ffmpegcolorspace converter. Try linking the autoconvert element directly to the autovideosink to see if it makes any difference. Notice that we have created two sinks, one for audio output and the other for the video. The two queue elements are created on lines 32 and 33. As mentioned earlier, these act as media data buffers and are used to send the data to audio and video processing portions of the GStreamer pipeline. The code block 35-45 adds all the required elements to the pipeline. Next, the various elements in the pipeline are linked. As we already know, the decodebin is a plugin that determines the right type of decoder to use. This element uses dynamic pads. While developing audio processing utilities, we connected the pad-added signal from decodebin to a method decodebin_pad_added. We will do the same thing here; however, the contents of this method will be different. We will discuss that later. On lines 50-52, the video processing portion of the pipeline is linked. The self.videoQueue receives the video data from the decodebin. It is linked to an autoconvert element discussed earlier. The capsfilter allows only video-xraw-yuv data to stream further. The capsfilter is linked to a ffmpegcolorspace element, which converts the data into a different color space. Finally, the data is streamed to the videosink, which, in this case, is an autovideosink element. This enables the 'viewing' of the input video. Now we will review the decodebin_pad_added method. 
```python
1  def decodebin_pad_added(self, decodebin, pad):
2      compatible_pad = None
3      caps = pad.get_caps()
4      name = caps[0].get_name()
5      print "\n cap name is =%s" % name
6      if name[:5] == 'video':
7          compatible_pad = (
8              self.videoQueue.get_compatible_pad(pad, caps) )
9      elif name[:5] == 'audio':
10         compatible_pad = (
11             self.audioQueue.get_compatible_pad(pad, caps) )
12
13     if compatible_pad:
14         pad.link(compatible_pad)
```

This method captures the pad-added signal, which is emitted when the decodebin creates a dynamic pad. The media data arriving on such a pad can represent either audio or video, so when a dynamic pad is created on the decodebin we must check what caps it has. The get_name method of the caps object returns the type of media data handled; for example, the name can be of the form video/x-raw-rgb for video data or audio/x-raw-int for audio data. We just check the first five characters to see whether it is a video or an audio media type, which is done by the code block on lines 4-11 of the snippet. A decodebin pad with a video media type is linked with the compatible pad on the self.videoQueue element; similarly, a pad with audio caps is linked with the one on self.audioQueue.

Review the rest of the code in PlayingVideo.py. Make sure you specify an appropriate video file path for the variable self.inFileLocation, and then run the program from the command prompt as:

$ python PlayingVideo.py

This should open a GUI window in which the video is streamed, with the audio output synchronized to the playing video.

What just happened?

We created a command-line video player utility and learned how to build a GStreamer pipeline that plays synchronized audio and video streams. The example showed how the queue element buffers the audio and video data in the pipeline, and illustrated the use of GStreamer plugins such as capsfilter and ffmpegcolorspace. The knowledge gained in this section will be applied in the upcoming sections of this article.

Playing video using 'playbin'

The goal of the previous section was to introduce you to the fundamental method of processing input video streams; we will use that method one way or another in later discussions. If simple video playback is all you want, the easiest way to accomplish it is the playbin plugin. The video can be played just by replacing the VideoPlayer.constructPipeline method in PlayingVideo.py with the following code, where self.player is a playbin element and its uri property is set to the input video file path:

```python
def constructPipeline(self):
    self.player = gst.element_factory_make("playbin")
    self.player.set_property("uri",
                             "file:///" + self.inFileLocation)
```
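For quick experiments, the playbin approach can also stand on its own. The following is a minimal, self-contained sketch (not from the book) that plays a file passed on the command line, using the same pygst 0.10 bindings as the examples above; the file path argument is a placeholder and error handling is kept to a bare minimum.

```python
import sys
import thread

import gobject
import pygst
pygst.require("0.10")
import gst


class PlaybinPlayer:
    """Bare-bones player built around the playbin element."""

    def __init__(self, uri):
        # playbin assembles the demuxer, decoders, and sinks internally.
        self.player = gst.element_factory_make("playbin")
        self.player.set_property("uri", uri)
        bus = self.player.get_bus()
        bus.add_signal_watch()
        bus.connect("message", self.message_handler)

    def play(self):
        self.player.set_state(gst.STATE_PLAYING)

    def message_handler(self, bus, message):
        # Quit the main loop on end-of-stream or error.
        if message.type in (gst.MESSAGE_EOS, gst.MESSAGE_ERROR):
            self.player.set_state(gst.STATE_NULL)
            evt_loop.quit()


if __name__ == '__main__':
    # Usage: python playbin_player.py /full/path/to/video/file
    player = PlaybinPlayer("file:///" + sys.argv[1])
    thread.start_new_thread(player.play, ())
    gobject.threads_init()
    evt_loop = gobject.MainLoop()
    evt_loop.run()
```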

wxPython: Design Approaches and Techniques

Packt
09 Dec 2010
11 min read
wxPython 2.8 Application Development Cookbook Over 80 practical recipes for developing feature-rich applications using wxPython Develop flexible applications in wxPython. Create interface translatable applications that will run on Windows, Macintosh OSX, Linux, and other UNIX like environments. Learn basic and advanced user interface controls. Packed with practical, hands-on cookbook recipes and plenty of example code, illustrating the techniques to develop feature rich applications using wxPython.     Introduction Programming is all about patterns. There are patterns at every level, from the programming language itself, to the toolkit, to the application. Being able to discern and choose the optimal approach to use to solve the problem at hand can at times be a difficult task. The more patterns you know, the bigger your toolbox, and the easier it will become to be able to choose the right tool for the job. Different programming languages and toolkits often lend themselves to certain patterns and approaches to problem solving. The Python programming language and wxPython are no different, so let's jump in and take a look at how to apply some common design approaches and techniques to wxPython applications. Creating Singletons In object oriented programming, the Singleton pattern is a fairly simple concept of only allowing exactly one instance of a given object to exist at a given time. This means that it only allows for only one instance of the object to be in memory at any given time, so that all references to the object are shared throughout the application. Singletons are often used to maintain a global state in an application since all occurrences of one in an application reference the same exact instance of the object. Within the core wxPython library, there are a number of singleton objects, such as ArtProvider , ColourDatabase , and SystemSettings . This recipe shows how to make a singleton Dialog class, which can be useful for creating non-modal dialogs that should only have a single instance present at a given time, such as a settings dialog or a special tool window. How to do it... To get started, we will define a metaclass that can be reused on any class that needs to be turned into a singleton. We will get into more detail later in the How it works section. A metaclass is a class that creates a class. It is passed a class to it's __init__ and __call__ methods when someone tries to create an instance of the class. class Singleton(type): def __init__(cls, name, bases, dict): super(Singleton, cls).__init__(name, bases, dict) cls.instance = None def __call__(cls, *args, **kw): if not cls.instance: # Not created or has been Destroyed obj = super(Singleton, cls).__call__(*args, **kw) cls.instance = obj cls.instance.SetupWindow() return cls.instance Here we have an example of the use of our metaclass, which shows how easy it is to turn the following class into a singleton class by simply assigning the Singleton class as the __metaclass__ of SingletonDialog. The only other requirement is to define the SetupWindow method that the Singleton metaclass uses as an initialization hook to set up the window the first time an instance of the class is created. Note that in Python 3+ the __metaclass__ attribute has been replaced with a metaclass keyword argument in the class definition. 
class SingletonDialog(wx.Dialog): __metaclass__ = Singleton def SetupWindow(self): """Hook method for initializing window""" self.field = wx.TextCtrl(self) self.check = wx.CheckBox(self, label="Enable Foo") # Layout vsizer = wx.BoxSizer(wx.VERTICAL) label = wx.StaticText(self, label="FooBar") hsizer = wx.BoxSizer(wx.HORIZONTAL) hsizer.AddMany([(label, 0, wx.ALIGN_CENTER_VERTICAL), ((5, 5), 0), (self.field, 0, wx.EXPAND)]) btnsz = self.CreateButtonSizer(wx.OK) vsizer.AddMany([(hsizer, 0, wx.ALL|wx.EXPAND, 10), (self.check, 0, wx.ALL, 10), (btnsz, 0, wx.EXPAND|wx.ALL, 10)]) self.SetSizer(vsizer) self.SetInitialSize() How it works... There are a number of ways to implement a Singleton in Python. In this recipe, we used a metaclass to accomplish the task. This is a nicely contained and easily reusable pattern to accomplish this task. The Singleton class that we defined can be used by any class that has a SetupWindow method defined for it. So now that we have done it, let's take a quick look at how a singleton works. The Singleton metaclass dynamically creates and adds a class variable called instance to the passed in class. So just to get a picture of what is going on, the metaclass would generate the following code in our example: class SingletonDialog(wx.Dialog): instance = None Then the first time the metaclass's __call__ method is invoked, it will then assign the instance of the class object returned by the super class's __call__ method, which in this recipe is an instance of our SingletonDialog. So basically, it is the equivalent of the following: SingletonDialog.instance = SingletonDialog(*args,**kwargs) Any subsequent initializations will cause the previously-created one to be returned, instead of creating a new one since the class definition maintains the lifetime of the object and not an individual reference created in the user code. Our SingletonDialog class is a very simple Dialog that has TextCtrl, CheckBox, and Ok Button objects on it. Instead of invoking initialization in the dialog's __init__ method, we instead defined an interface method called SetupWindow that will be called by the Singleton metaclass when the object is initially created. In this method, we just perform a simple layout of our controls in the dialog. If you run the sample application that accompanies this topic, you can see that no matter how many times the show dialog button is clicked, it will only cause the existing instance of the dialog to be brought to the front. Also, if you make changes in the dialog's TextCtrl or CheckBox, and then close and reopen the dialog, the changes will be retained since the same instance of the dialog will be re-shown instead of creating a new one. Implementing an observer pattern The observer pattern is a design approach where objects can subscribe as observers of events that other objects are publishing. The publisher(s) of the events then just broadcasts the events to all of the subscribers. This allows the creation of an extensible, loosely-coupled framework of notifications, since the publisher(s) don't require any specific knowledge of the observers. The pubsub module provided by the wx.lib package provides an easy-to-use implementation of the observer pattern through a publisher/subscriber approach. Any arbitrary number of objects can subscribe their own callback methods to messages that the publishers will send to make their notifications. This recipe shows how to use the pubsub module to send configuration notifications in an application. How to do it... 
Here, we will create our application configuration object that stores runtime configuration variables for an application and provides a notification mechanism for whenever a value is added or modified in the configuration, through an interface that uses the observer pattern: import wx from wx.lib.pubsub import Publisher # PubSub message classification MSG_CONFIG_ROOT = ('config',) class Configuration(object): """Configuration object that provides notifications. """ def __init__(self): super(Configuration, self).__init__() # Attributes self._data = dict() def SetValue(self, key, value): self._data[key] = value # Notify all observers of config change Publisher.sendMessage(MSG_CONFIG_ROOT + (key,), value) def GetValue(self, key): """Get a value from the configuration""" return self._data.get(key, None) Now, we will create a very simple application to show how to subscribe observers to configuration changes in the Configuration class: class ObserverApp(wx.App): def OnInit(self): self.config = Configuration() self.frame = ObserverFrame(None, title="Observer Pattern") self.frame.Show() self.configdlg = ConfigDialog(self.frame, title="Config Dialog") self.configdlg.Show() return True def GetConfig(self): return self.config This dialog will have one configuration option on it to allow the user to change the applications font: class ConfigDialog(wx.Dialog): """Simple setting dialog""" def __init__(self, *args, **kwargs): super(ConfigDialog, self).__init__(*args, **kwargs) # Attributes self.panel = ConfigPanel(self) # Layout sizer = wx.BoxSizer(wx.VERTICAL) sizer.Add(self.panel, 1, wx.EXPAND) self.SetSizer(sizer) self.SetInitialSize((300, 300)) class ConfigPanel(wx.Panel): def __init__(self, parent): super(ConfigPanel, self).__init__(parent) # Attributes self.picker = wx.FontPickerCtrl(self) # Setup self.__DoLayout() # Event Handlers self.Bind(wx.EVT_FONTPICKER_CHANGED, self.OnFontPicker) def __DoLayout(self): vsizer = wx.BoxSizer(wx.VERTICAL) hsizer = wx.BoxSizer(wx.HORIZONTAL) vsizer.AddStretchSpacer() hsizer.AddStretchSpacer() hsizer.AddWindow(self.picker) hsizer.AddStretchSpacer() vsizer.Add(hsizer, 0, wx.EXPAND) vsizer.AddStretchSpacer() self.SetSizer(vsizer) Here, in the FontPicker's event handler, we get the newly-selected font and call SetValue on the Configuration object owned by the App object in order to change the configuration, which will then cause the ('config', 'font') message to be published: def OnFontPicker(self, event): """Event handler for the font picker control""" font = self.picker.GetSelectedFont() # Update the configuration config = wx.GetApp().GetConfig() config.SetValue('font', font) Now, here, we define the application's main window that will subscribe it's OnConfigMsg method as an observer of all ('config',) messages, so that it will be called whenever the configuration is modified: class ObserverFrame(wx.Frame): """Window that observes configuration messages""" def __init__(self, *args, **kwargs): super(ObserverFrame, self).__init__(*args, **kwargs) # Attributes self.txt = wx.TextCtrl(self, style=wx.TE_MULTILINE) self.txt.SetValue("Change the font in the config " "dialog and see it update here.") # Observer of configuration changes Publisher.subscribe(self.OnConfigMsg, MSG_CONFIG_ROOT) def __del__(self): # Unsubscribe when deleted Publisher.unsubscribe(self.OnConfigMsg) Here is the observer method that will be called when any message beginning with 'config' is sent by the pubsub Publisher. 
In this sample application, we just check for the ('config', 'font') message and update the font of the TextCtrl object to use the newly-configured one: def OnConfigMsg(self, msg): """Observer method for config change messages""" if msg.topic[-1] == 'font': # font has changed so update controls self.SetFont(msg.data) self.txt.SetFont(msg.data) if __name__ == '__main__': app = ObserverApp(False) app.MainLoop() How it works... This recipe shows a convenient way to manage an application's configuration by allowing the interested parts of an application to subscribe to updates when certain parts of the configuration are modified. Let's start with a quick walkthrough of how pubsub works. Pubsub messages use a tree structure to organize the categories of different messages. A message type can be defined either as a tuple ('root', 'child1', 'grandchild1') or as a dot-separated string ('root.child1.grandchild1'). Subscribing a callback to ('root',) will cause your callback method to be called for all messages that start with ('root',). This means that if a component publishes ('root', 'child1', 'grandchild1') or ('root', 'child1'), then any method that is subscribed to ('root',) will also be called Pubsub basically works by storing the mapping of message types to callbacks in static memory in the pubsub module. In Python, modules are only imported once any other part of your application that uses the pubsub module shares the same singleton Publisher object. In our recipe, the Configuration object is a simple object for storing data about the configuration of our application. Its SetValue method is the important part to look at. This is the method that will be called whenever a configuration change is made in the application. In turn, when this is called, it will send a pubsub message of ('config',) + (key,) that will allow any observers to subscribe to either the root item or more specific topics determined by the exact configuration item. Next, we have our simple ConfigDialog class. This is just a simple example that only has an option for configuring the application's font. When a change is made in the FontPickerCtrl in the ConfigPanel, the Configuration object will be retrieved from the App and will be updated to store the newly-selected Font. When this happens, the Configuration object will publish an update message to all subscribed observers. Our ObserverFrame is an observer of all ('config',) messages by subscribing its OnConfigMsg method to MSG_CONFIG_ROOT. OnConfigMsg will be called any time the Configuration object's SetValue method is called. The msg parameter of the callback will contain a Message object that has a topic and data attribute. The topic attribute will contain the tuple that represents the message that triggered the callback and the data attribute will contain any data that was associated with the topic by the publisher of the message. In the case of a ('config', 'font') message, our handler will update the Font of the Frame and its TextCtrl.
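A related point worth showing in code: because pubsub topics form a tree, an observer does not have to listen at the root. The small sketch below (not part of the recipe) subscribes only to the ('config', 'font') branch, using the same wx.lib.pubsub Publisher interface as above, so it is never called for other configuration keys; the class name is made up for illustration.

```python
from wx.lib.pubsub import Publisher

MSG_CONFIG_FONT = ('config', 'font')

class FontWatcher(object):
    """Observer that only reacts to font configuration changes."""

    def __init__(self, window):
        self.window = window
        # Narrow subscription: only messages whose topic starts with
        # ('config', 'font') reach OnFontMsg; other ('config', ...) updates
        # are ignored entirely.
        Publisher.subscribe(self.OnFontMsg, MSG_CONFIG_FONT)

    def OnFontMsg(self, msg):
        # msg.data carries the wx.Font published by Configuration.SetValue
        self.window.SetFont(msg.data)
```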

Introduction to JBoss Clustering

Packt
09 Dec 2010
6 min read
Clustering plays an important role in Enterprise applications as it lets you split the load of your application across several nodes, granting robustness to your applications. As we discussed earlier, for optimal results it's better to limit the size of your JVM to a maximum of 2-2.5GB, otherwise the dynamics of the garbage collector will decrease your application's performance. Combining relatively smaller Java heaps with a solid clustering configuration can lead to a better, scalable configuration plus significant hardware savings. The only drawback to scaling out your applications is an increased complexity in the programming model, which needs to be correctly understood by aspiring architects. JBoss AS comes out of the box with clustering support. There is no all-in-one library that deals with clustering but rather a set of libraries, which cover different kinds of aspects. The following picture shows how these libraries are arranged: The backbone of JBoss Clustering is the JGroups library, which provides the communication between members of the cluster. Built upon JGroups we meet two building blocks, the JBoss Cache framework and the HAPartition service. JBoss Cache handles the consistency of your application across the cluster by means of a replicated and transactional cache. On the other hand, HAPartition is an abstraction built on top of a JGroups Channel that provides support for making and receiving RPC invocations from one or more cluster members. For example HA-JNDI (High Availability JNDI) or HA Singleton (High Availability Singleton) both use HAPartition to share a single Channel and multiplex RPC invocations over it, eliminating the configuration complexity and runtime overhead of having each service create its own Channel. If you need more information about the HAPartition service you can consult the JBoss AS documentation https://developer.jboss.org/wiki/jBossAS5ClusteringGuide. In the next section we will learn more about the JGroups library and how to configure it to reach the best performance for clustering communication. Configuring JGroups transport Clustering requires communication between nodes to synchronize the state of running applications or to notify changes in the cluster definition. JGroups (http://jgroups.org/manual/html/index.html) is a reliable group communication toolkit written entirely in Java. It is based on IP multicast, but extends by providing reliability and group membership. Member processes of a group can be located on the same host, within the same Local Area Network (LAN), or across a Wide Area Network (WAN). A member can be in turn part of multiple groups. The following picture illustrates a detailed view of JGroups architecture: A JGroups process consists basically of three parts, namely the Channel, Building blocks, and the Protocol stack. The Channel is a simple socket-like interface used by application programmers to build reliable group communication applications. Building blocks are an abstraction interface layered on top of Channels, which can be used instead of Channels whenever a higher-level interface is required. Finally we have the Protocol stack, which implements the properties specified for a given channel. In theory, you could configure every service to bind to a different Channel. However this would require a complex thread infrastructure with too many thread context switches. For this reason, JBoss AS is configured by default to use a single Channel to multiplex all the traffic across the cluster. 
The Protocol stack contains a number of layers in a bi-directional list. All messages sent and received over the channel have to pass through all protocols. Every layer may modify, reorder, pass or drop a message, or add a header to a message. A fragmentation layer might break up a message into several smaller messages, adding a header with an ID to each fragment, and re-assemble the fragments on the receiver's side. The composition of the Protocol stack (that is, its layers) is determined by the creator of the channel: an XML file defines the layers to be used (and the parameters for each layer). Knowledge about the Protocol stack is not necessary when just using Channels in an application. However, when an application wishes to ignore the default properties for a Protocol stack, and configure their own stack, then knowledge about what the individual layers are supposed to do is needed. In JBoss AS, the configuration of the Protocol stack is located in the file, <server> deployclusterjgroups-channelfactory.sarMETA-INFjgroupschannelfactory- stacks.xml. The file is quite large to fit here, however, in a nutshell, it contains the following basic elements: The first part of the file includes the UDP transport configuration. UDP is the default protocol for JGroups and uses multicast (or, if not available, multiple unicast messages) to send and receive messages. A multicast UDP socket can send and receive datagrams from multiple clients. The interesting and useful feature of multicast is that a client can contact multiple servers with a single packet, without knowing the specific IP address of any of the hosts. Next to the UDP transport configuration, three protocol stacks are defined: udp: The default IP multicast based stack, with flow control udp-async: The protocol stack optimized for high-volume asynchronous RPCs udp-sync: The stack optimized for low-volume synchronous RPCs Thereafter, the TCP transport configuration is defined . TCP stacks are typically used when IP multicasting cannot be used in a network (for example, because it is disabled) or because you want to create a network over a WAN (that's conceivably possible but sharing data across remote geographical sites is a scary option from the performance point of view). You can opt for two TCP protocol stacks: tcp: Addresses the default TCP Protocol stack which is best suited to high-volume asynchronous calls. tcp-async: Addresses the TCP Protocol stack which can be used for low-volume synchronous calls. If you need to switch to TCP stack, you can simply include the following in your command line args that you pass to JBoss: -Djboss.default.jgroups.stack=tcp Since you are not using multicast in your TCP communication, this requires configuring the addresses/ports of all the possible nodes in the cluster. You can do this by using the property -Djgroups.tcpping. initial_hosts. For example: -Djgroups.tcpping.initial_hosts=host1[7600],host2[7600] Ultimately, the configuration file contains two stacks which can be used for optimising JBoss Messaging Control Channel (jbm-control) and Data Channel (jbm-data).
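To make the TCP option more concrete, the sketch below shows roughly what a trimmed-down TCP protocol stack definition looks like in the JGroups stack XML format. It is illustrative only: the protocol list and attribute values are not copied from any particular JBoss AS release, so start from the tcp stack already shipped in jgroups-channelfactory-stacks.xml and adjust it rather than pasting this in.

```xml
<stack name="tcp">
    <config>
        <!-- Plain TCP transport instead of UDP multicast -->
        <TCP bind_port="7600"/>
        <!-- Discovery without multicast: list the candidate members explicitly,
             matching -Djgroups.tcpping.initial_hosts=host1[7600],host2[7600] -->
        <TCPPING initial_hosts="${jgroups.tcpping.initial_hosts:host1[7600],host2[7600]}"
                 port_range="1"/>
        <MERGE2 min_interval="10000" max_interval="30000"/>
        <FD timeout="6000" max_tries="5"/>
        <VERIFY_SUSPECT timeout="1500"/>
        <pbcast.NAKACK use_mcast_xmit="false" retransmit_timeout="300,600,1200,2400,4800"/>
        <pbcast.STABLE desired_avg_gossip="50000"/>
        <pbcast.GMS join_timeout="3000" print_local_addr="true"/>
    </config>
</stack>
```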

Microsoft Enterprise Library: Authorization and Security Cache

Packt
09 Dec 2010
6 min read
  Microsoft Enterprise Library 5.0 Develop Enterprise applications using reusable software components of Microsoft Enterprise Library 5.0 Develop Enterprise Applications using the Enterprise Library Application Blocks Set up the initial infrastructure configuration of the Application Blocks using the configuration editor A step-by-step tutorial to gradually configure each Application Block and implement its functions to develop the required Enterprise Application           Read more about this book       (For more resources on Microsoft Enterprise Library, see here.) Understanding Authorization Providers An Authorization Provider is simply a class that provides authorization logic; technically it implements either an IAuthorizationProvider interface or an abstract class named AuthorizationProvider and provides authorization logic in the Authorize method. As mentioned previously, the Security Application Block provides two Authorization Providers out of the box, AuthorizationRuleProvider and AzManAuthorizationProvider both implementing the abstract class AuthorizationProvider available in the Microsoft.Practices.EnterpriseLibrary.Security namespace. This abstract class in turn implements the IAuthorizationProvider interface, which defines the basic functionality of an Authorization Provider; it exposes a single method named Authorize, which accepts an instance of the IPrincipal object and the name of the rule to evaluate. Custom providers can be implemented either by implementing the IAuthorizationProvider interface or an abstract class named AuthorizationProvider. An IPrincipal instance (GenericPrincipal, WindowsPrincipal, PassportPrincipal, and so on) represents the security context of the user on whose behalf the code is running; it also includes the user's identity represented as an instance of IIdentity (GenericIdentity, FormsIdentity, WindowsIdentity, PassportIdentity, and so on). The following diagram shows the members and inheritance hierarchy of the respective class and interface: Authorization Rule Provider The AuthorizationRuleProvider class is an implementation that evaluates Boolean expressions to determine whether the objects are authorized; these expressions or rules are stored in the configuration file. We can create authorization rules using the Rule Expression Editor part of the Enterprise Library configuration tool and validate them using the Authorize method of the Authorization Provider. This authorization provider is part of the Microsoft.Practices.EnterpriseLibrary.Security namespace. Authorizing using Authorization Rule Provider Authorization Rule Provider stores authorization rules in the configuration and this is one of the simplest ways to perform authorization. Basically, we need to configure to use the Authorization Rule Provider and provide authorization rules based on which the authorization will be performed. Let us add Authorization Rule Provider as our Authorization Provider; click on the plus symbol on the right side of the Authorization Providers and navigate to the Add Authorization Rule Provider menu item. The following screenshot shows the configuration options of the Add Authorization Rule Provider menu item: The following screenshot shows the default configuration of the newly added Authorization Provider; in this case, it is Authorization Rule Provider: Now we have the Authorization Rule Provider added to the configuration but we still need to add the authorization rules. 
Imagine that we have a business scenario where: We have to allow only users belonging to the administrator's role to add or delete products. We should allow all authenticated customers to view the products. This scenario is quite common where certain operations can be performed only by specific roles, basically role-based authorization. To fulfill this requirement, we will have to add three different rules for add, delete, and view operations. Right-click on the Authorization Rule Provider and click on the Add Authorization Rule menu item as shown on the following screenshot. The following screenshot shows the newly added Authorization Rule: Let us update the name of the rule to "Product.Add" to represent the operation for which the rule is configured. We will provide the rule using the Rule Expression Editor; click on the right corner button to open the Rule Expression Editor. The requirement is to allow only the administrator role to perform this action. The following action needs to be performed to configure the rule: Click on the Role button to add the Role expression: R. Enter the role name next to the role expression: R:Admin. Select the checkbox Is Authenticated to allow only authenticated users. The following screenshot displays the Rule Expression Editor dialog box with the expression configured to R:Admin. The following screenshot shows the Rule Expression property set to R:Admin. Now let us add the rule for the product delete operation. This rule is configured in a similar fashion. The resulting configuration will be similar to the configuration shown. The following screenshot displays the added authorization rule named Product.Delete with the configured Rule Expression: Alright, we now have to allow all authenticated customers to view the products. Basically we want the authorization to pass if the user is either of role Customer; also Admin role should have permission, only then the user will be able to view products. We will add another rule called Product.View and configure the rule expression using the Rule Expression Editor as given next. While configuring the rule, use the OR operator to specify that either Admin or Customer can perform this operation. The following screenshot displays the added authorization rule named Product.View with the configured Rule Expression: Now that we have the configuration ready, let us get our hands dirty with some code. Before authorizing we need to authenticate the user; based on the authentication requirement we could be using either out-of-the-box authentication mechanism or we might use custom authentication. Assuming that we are using the current Windows identity, the following steps will allow us to authorize specific operations by passing the Windows principal while invoking the Authorize method of the Authorization Provider. The first step is to get the IIdentity and IPrincipal based on the authentication mechanism. We are using current Windows identity for this sample. WindowsIdentity windowsIdentity = WindowsIdentity.GetCurrent();WindowsPrincipal windowsPrincipal = new WindowsPrincipal(windowsIdentity); Create an instance of the configured Authorization Provider using the AuthorizationFactory.GetAuthorizationProvider method; in our case we will get an instance of Authorization Rule Provider. 
IAuthorizationProvider authzProvider =
    AuthorizationFactory.GetAuthorizationProvider("Authorization Rule Provider");

Now use the instance of the Authorization Provider to authorize the operation by passing the IPrincipal instance and the rule name.

bool result = authzProvider.Authorize(windowsPrincipal, "Product.Add");

AuthorizationFactory.GetAuthorizationProvider also has a parameterless overload, which returns the default authorization provider set in the configuration.

AzMan Authorization Provider

The AzManAuthorizationProvider class lets us define the individual operations of an application, which can then be grouped together to form a task. Each individual operation or task can then be assigned the roles allowed to perform it. A strong point of Authorization Manager is that it provides an administration tool, as a Microsoft Management Console (MMC) snap-in, to manage users, roles, operations, and tasks. Policy administrators can configure an Authorization Manager policy store in Active Directory, in an Active Directory Application Mode (ADAM) store, or in an XML file. This authorization provider ships in the Microsoft.Practices.EnterpriseLibrary.Security.AzMan assembly.
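Whichever provider is configured, the calling pattern is the same: obtain the provider from AuthorizationFactory and call Authorize with a principal and a rule (or task) name. The following consolidated sketch simply stitches together the snippets shown above; it assumes the provider is registered under the name "Authorization Rule Provider" and that the Product.Add rule exists as configured earlier.

using System.Security.Principal;
using Microsoft.Practices.EnterpriseLibrary.Security;

public static class ProductAuthorization
{
    public static bool CanAddProducts()
    {
        // Authenticate: for this sample we take the current Windows identity.
        WindowsIdentity windowsIdentity = WindowsIdentity.GetCurrent();
        WindowsPrincipal windowsPrincipal = new WindowsPrincipal(windowsIdentity);

        // Resolve the configured Authorization Provider by name; the
        // parameterless overload would return the default provider instead.
        IAuthorizationProvider authzProvider =
            AuthorizationFactory.GetAuthorizationProvider("Authorization Rule Provider");

        // Evaluate the rule; true means the principal may perform the operation.
        return authzProvider.Authorize(windowsPrincipal, "Product.Add");
    }
}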
Microsoft Enterprise Library: Security Application Block

Packt
09 Dec 2010
5 min read
Microsoft Enterprise Library 5.0 Develop Enterprise applications using reusable software components of Microsoft Enterprise Library 5.0 Develop Enterprise Applications using the Enterprise Library Application Blocks Set up the initial infrastructure configuration of the Application Blocks using the configuration editor A step-by-step tutorial to gradually configure each Application Block and implement its functions to develop the required Enterprise Application The first step is the process of validating an identity against a store (Active Directory, Database, and so on); this is commonly called as Authentication. The second step is the process of verifying whether the validated identity is allowed to perform certain actions; this is commonly known Authorization. These two security mechanisms take care of allowing only known identities to access the application and perform their respective actions. Although, with the advent of new tools and technologies, it is not difficult to safeguard the application, utilizing these authentication and authorization mechanisms and implementing security correctly across different types of applications, or across different layers and in a consistent manner is pretty challenging for developers. Also, while security is an important factor, it's of no use if the application's performance is dismal. So, a good design should also consider performance and cache the outcome of authentication and authorization for repeated use. The Security Application Block provides a very simple and consistent way to implement authorization and credential caching functionality in our applications. Authorization doesn't belong to one particular layer; it is a best practice to authorize user action not only in the UI layer but also in the business logic layer. As Enterprise Library application blocks are layer-agnostic, we can leverage the same authorization rules and expect the same outcome across different layers bringing consistency. Authorization of user actions can be performed using an Authorization Provider; the block provides Authorization Rule Provider or AzMan Authorization Provider; it also provides the flexibility of implementing a custom authorization provider. Caching of security credentials is provided by the SecurityCacheProvider by leveraging the Caching Application Block and a custom caching provider can also be implemented using extension points. Both Authorization and Security cache providers are configured in the configuration file; this allows changing of provider any time without re-compilation. The following are the key features of the Security block: The Security Application Block provides a simple and consistent API to implement authorization. It abstracts the application code from security providers through configuration. It provides the Authorization Rule Provider to store rules in a configuration file and Windows Authorization Manager (AzMan) Authorization Provider to authorize against Active Directory, XML file, or database. Flexibility to implement custom Authorization Providers. It provides token generation and caching of authenticated IIdentity, IPrincipal and Profile objects. It provides User identity cache management, which improves performance while repeatedly authenticating users using cached security credentials. Flexibility to extend and implement custom Security Cache Providers. Developing an application We will explore each individual Security block feature and along the way we will understand the concepts behind the individual elements. 
This will help us to get up to speed with the basics. To get started, we will do the following: Reference the Validation block assemblies Add the required Namespaces Set up the initial configuration To complement the concepts and allow you to gain quick hands-on experience of different features of the Security Application Block, we have created a sample web application project with three additional projects, DataProvider, BusinessLayer, and BusinessEntities, to demonstrate the features. The application leverages SQL Membership, Role, and Profile provider for authentication, role management, and profiling needs. Before running the web application you will have to run the database generation script provided in the DBScript folder of the solution, and update the connection string in web.config appropriately. You might have to open the solution in "Administrator" mode based on your development environment. Also, create an application pool with an identity that has the required privileges to access the development SQL Server database, and map the application pool to the website. A screenshot of the sample application is shown as follows: (Move the mouse over the image to enlarge.) Referencing required/optional assemblies For the purposes of this demonstration we will be referencing non-strong-named assemblies but based on individual requirements Microsoft strong-named assemblies, or a modified set of custom assemblies can be referenced as well. The list of Enterprise Library assemblies that are required to leverage the Security Application Block functionality is given next. A few assemblies are optional based on the Authorization Provider and cache storage mechanism used. Use the Microsoft strong-named, or the non-strong-named, or a modified set of custom assemblies based on your referencing needs. The following table lists the required/optional assemblies: AssemblyRequired/OptionalMicrosoft.Practices.EnterpriseLibrary.Common.dllRequiredMicrosoft.Practices.ServiceLocation.dllRequiredMicrosoft.Practices.Unity.dllRequiredMicrosoft.Practices.Unity.Interception.dllRequiredMicrosoft.Practices.Unity.Configuration.dll Optional Useful while utilizing Unity configuration classes in our code Microsoft.Practices.EnterpriseLibrary.Security.dllRequiredMicrosoft.Practices.EnterpriseLibrary.Security.AzMan.dll Optional Used for Windows Authorization Manager Provider Microsoft.Practices.EnterpriseLibrary.Security.Cache.CachingStore.dll Optional Used for caching the User identity Microsoft.Practices.EnterpriseLibrary.Data.dll Optional Used for caching in Database Cache Storage Open Visual Studio 2008/2010 and create a new ASP.NET Web Application Project by selecting File | New | Project | ASP.NET Web Application; provide the appropriate name for the solution and the desired project location. Currently, the application will have a default web form and assembly references. In the Solution Explorer, right-click on the References section and click on Add Reference and go to the Browse tab. Next, navigate to the Enterprise Library 5.0 installation location; the default install location is %Program Files%Microsoft Enterprise Library 5.0Bin. Now select all the assemblies listed in the previous table, excluding the AzMan-related assembly (Microsoft.Practices.EnterpriseLibrary.Security.AzMan.dll). The final assembly selection will look similar to the following screenshot:
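With the references in place, the next step from the list above, adding the required namespaces, comes down to a few directives. A minimal sketch follows (shown in C#; a VB project would use the equivalent Imports statements):

// Core Security Application Block types (AuthorizationFactory, IAuthorizationProvider, ...)
using Microsoft.Practices.EnterpriseLibrary.Security;
// Configuration support used when the blocks are resolved from the configuration file
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
// IIdentity/IPrincipal implementations passed to Authorize()
using System.Security.Principal;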
Python 3: Object-Oriented Design

Packt
08 Dec 2010
8 min read
Python 3 Object Oriented Programming Harness the power of Python 3 objects Learn how to do Object Oriented Programming in Python using this step-by-step tutorial Design public interfaces using abstraction, encapsulation, and information hiding Turn your designs into working software by studying the Python syntax Raise, handle, define, and manipulate exceptions using special error objects Implement Object Oriented Programming in Python using practical examples      Object-oriented? Everyone knows what an object is: a tangible "something" that we can sense, feel, and manipulate. The earliest objects we interact with are typically baby toys. Wooden blocks, plastic shapes, and over-sized puzzle pieces are common first objects. Babies learn quickly that certain objects do certain things. Triangles fit in triangle-shaped holes. Bells ring, buttons press, and levers pull. The definition of an object in software development is not so very different. Objects are not typically tangible somethings that you can pick up, sense, or feel, but they are models of somethings that can do certain things and have certain things done to them. Formally, an object is a collection of data and associated behaviors. So knowing what an object is, what does it mean to be object-oriented? Oriented simply means directed toward. So object-oriented simply means, "functionally directed toward modeling objects". It is one of many techniques used for modeling complex systems by describing a collection of interacting objects via their data and behavior. If you've read any hype, you've probably come across the terms object-oriented analysis, object-oriented design, object-oriented analysis and design, and object-oriented programming. These are all highly related concepts under the general object-oriented umbrella. In fact, analysis, design, and programming are all stages of software development. Calling them object-oriented simply specifies what style of software development is being pursued. Object-oriented Analysis (OOA) is the process of looking at a problem, system, or task that somebody wants to turn into an application and identifying the objects and interactions between those objects. The analysis stage is all about what needs to be done. The output of the analysis stage is a set of requirements. If we were to complete the analysis stage in one step, we would have turned a task, such as, "I need a website", into a set of requirements, such as: Visitors to the website need to be able to (italic represents actions, bold represents objects): review our history apply for jobs browse, compare, and order our products Object-oriented Design (OOD) is the process of converting such requirements into an implementation specification. The designer must name the objects, define the behaviors, and formally specify what objects can activate specific behaviors on other objects. The design stage is all about how things should be done. The output of the design stage is an implementation specification. If we were to complete the design stage in one step, we would have turned the requirements into a set of classes and interfaces that could be implemented in (ideally) any object-oriented programming language. Object-oriented Programming (OOP) is the process of converting this perfectly defined design into a working program that does exactly what the CEO originally requested. Yeah, right! It would be lovely if the world met this ideal and we could follow these stages one by one, in perfect order like all the old textbooks told us to. 
As usual, the real world is much murkier. No matter how hard we try to separate these stages, we'll always find things that need further analysis while we're designing. When we're programming, we find features that need clarification in the design. In the fast-paced modern world, most development happens in an iterative development model. In iterative development, a small part of the task is modeled, designed, and programmed, then the program is reviewed and expanded to improve each feature and include new features in a series of short cycles. In this article we will cover the basic object-oriented principles in the context of design. This allows us to understand these rather simple concepts without having to argue with software syntax or interpreters. Objects and classes So, an object is a collection of data with associated behaviors. How do we tell two types of objects apart? Apples and oranges are both objects, but it is a common adage that they cannot be compared. Apples and oranges aren't modeled very often in computer programming, but let's pretend we're doing an inventory application for a fruit farm! As an example, we can assume that apples go in barrels and oranges go in baskets. Now, we have four kinds of objects: apples, oranges, baskets, and barrels. In object-oriented modeling, the term used for kinds of objects is class. So, in technical terms, we now have four classes of objects. What's the difference between an object and a class? Classes describe objects. They are like blueprints for creating an object. You might have three oranges sitting on the table in front of you. Each orange is a distinct object, but all three have the attributes and behaviors associated with one class: the general class of oranges. The relationship between the four classes of objects in our inventory system can be described using a Unified Modeling Language (invariably referred to as UML, because three letter acronyms are cool) class diagram. Here is our first class diagram: This diagram simply shows that an Orange is somehow associated with a Basket and that an Apple is also somehow associated with a Barrel. Association is the most basic way for two classes to be related. UML is very popular among managers, and occasionally disparaged by programmers. The syntax of a UML diagram is generally pretty obvious; you don't have to read a tutorial to (mostly) understand what is going on when you see one. UML is also fairly easy to draw, and quite intuitive. After all, many people, when describing classes and their relationships, will naturally draw boxes with lines between them. Having a standard based on these intuitive diagrams makes it easy for programmers to communicate with designers, managers, and each other. However, some programmers think UML is a waste of time. Citing iterative development, they will argue that formal specifications done up in fancy UML diagrams are going to be redundant before they're implemented, and that maintaining those formal diagrams will only waste time and not benefit anyone. This is true of some organizations, and hogwash in other corporate cultures. However, every programming team consisting of more than one person will occasionally have to sit down and hash out the details of part of the system they are currently working on. UML is extremely useful, in these brainstorming sessions, for quick and easy communication. Even those organizations that scoff at formal class diagrams tend to use some informal version of UML in their design meetings, or team discussions. 
Further, the most important person you ever have to communicate with is yourself. We all think we can remember the design decisions we've made, but there are always, "Why did I do that?" moments hiding in our future. If we keep the scraps of paper we did our initial diagramming on when we started a design, we'll eventually find that they are a useful reference. UML covers far more than class and object diagrams; it also has syntax for use cases, deployment, state changes, and activities. We'll be dealing with some common class diagram syntax in this discussion of object-oriented design. You'll find you can pick up the structure by example, and you'll subconsciously choose UML-inspired syntax in your own team or personal design sessions. Our initial diagram, while correct, does not remind us that apples go in barrels or how many barrels a single apple can go in. It only tells us that apples are somehow associated with barrels. The association between classes is often obvious and needs no further explanation, but the option to add further clarification is always there. The beauty of UML is that most things are optional. We only need to specify as much information in a diagram as makes sense for the current situation. In a quick whiteboard session, we might just quickly draw lines between boxes. In a formal document that needs to make sense in six months, we might go into more detail. In the case of apples and barrels, we can be fairly confident that the association is, "many apples go in one barrel", but just to make sure nobody confuses it with, "one apple spoils one barrel", we can enhance the diagram as shown: This diagram tells us that oranges go in baskets with a little arrow showing what goes in what. It also tells us the multiplicity (number of that object that can be used in the association) on both sides of the relationship. One Basket can hold many (represented by a *) Orange objects. Any one Orange can go in exactly one Basket. It can be easy to confuse which side of a relationship the multiplicity goes on. The multiplicity is the number of objects of that class that can be associated with any one object at the other end of the association. For the apple goes in barrel association, reading from left to right, many instances of the Apple class (that is many Apple objects) can go in any one Barrel. Reading from right to left, exactly one Barrel can be associated with any one Apple.
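The article deliberately stays at the design level here, but if you are curious how this one-to-many association might eventually be expressed in Python, the following minimal sketch is one possibility (class and attribute names are illustrative, not taken from later chapters):

class Orange:
    """One distinct orange; each orange sits in exactly one basket."""
    def __init__(self, weight):
        self.weight = weight
        self.basket = None          # the single Basket this orange belongs to


class Basket:
    """A basket can hold many oranges (the * end of the association)."""
    def __init__(self):
        self.oranges = []           # many Orange objects

    def add(self, orange):
        self.oranges.append(orange)
        orange.basket = self        # keep both ends of the association in sync


basket = Basket()
basket.add(Orange(140))
basket.add(Orange(155))
print(len(basket.oranges))          # 2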
Data Modeling and Scalability in Google App Engine

Packt
30 Nov 2010
12 min read
Google App Engine Java and GWT Application Development Build powerful, scalable, and interactive web applications in the cloud Comprehensive coverage of building scalable, modular, and maintainable applications with GWT and GAE using Java Leverage the Google App Engine services and enhance your app functionality and performance Integrate your application with Google Accounts, Facebook, and Twitter Safely deploy, monitor, and maintain your GAE applications A practical guide with a step-by-step approach that helps you build an application in stages         Read more about this book       In deciding how to design your application's data models, there are a number of ways in which your approach can increase the app's scalability and responsiveness. Here, we discuss several such approaches and how they are applied in the Connectr app. In particular, we describe how the Datastore access latency can sometimes be reduced; ways to split data models across entities to increase the efficiency of data object access and use; and how property lists can be used to support "join-like" behavior with Datastore entities. Reducing latency—read consistency and Datastore access deadlines By default, when an entity is updated in the Datastore, all subsequent reads of that entity will see the update at the same time; this is called strong consistency . To achieve it, each entity has a primary storage location, and with a strongly consistent read, the read waits for a machine at that location to become available. Strong consistency is the default in App Engine. However, App Engine allows you to change this default and use eventual consistency for a given Datastore read. With eventual consistency, the query may access a copy of the data from a secondary location if the primary location is temporarily unavailable. Changes to data will propagate to the secondary locations fairly quickly, but it is possible that an "eventually consistent" read may access a secondary location before the changes have been incorporated. However, eventually consistent reads are faster on average, so they trade consistency for availability. In many contexts, for example, with web apps such as Connectr that display "activity stream" information, this is an acceptable tradeoff—completely up-to-date freshness of information is not required. See http://googleappengine.blogspot.com/2010/03/ read-consistency-deadlines-more-control.html, http://googleappengine.blogspot.com/2009/09/migrationto- better-datastore.html, and http://code.google.com/ events/io/2009/sessions/TransactionsAcrossDatacenters. html for more background on this and related topics. In Connectr, we will add the use of eventual consistency to some of our feed object reads; specifically, those for feed content updates. We are willing to take the small chance that a feed object is slightly out-of-date in order to have the advantage of quicker reads on these objects. The following code shows how to set eventual read consistency for a query, using server.servlets.FeedUpdateFriendServlet as an example. Query q = pm.newQuery("select from " + FeedInfo.class.getName() + "where urlstring == :keys");//Use eventual read consistency for this queryq.addExtension("datanucleus.appengine.datastoreReadConsistency", "EVENTUAL"); App Engine also allows you to change the default Datastore access deadline. By default, the Datastore will retry access automatically for up to about 30 seconds. You can set this deadline to a smaller amount of time. 
It can often be appropriate to set a shorter deadline if you are concerned with response latency, and are willing to use a cached version of the data for which you got the timeout, or are willing to do without it. The following code shows how to set an access timeout interval (in milliseconds) for a given JDO query.

Query q = pm.newQuery("...");
// Set a Datastore access timeout
q.setTimeoutMillis(10000);

Splitting big data models into multiple entities to make access more efficient

Often, the fields in a data model can be divided into two groups: main and/or summary information that you need often or first, and details—the data that you might not need, or tend not to need immediately. If this is the case, then it can be productive to split the data model into multiple entities and set the details entity to be a child of the summary entity, for instance, by using JDO owned relationships. The child field will be fetched lazily, and so the child entity won't be pulled in from the Datastore unless needed.

In our app, the Friend model can be viewed like this: initially, only a certain amount of summary information about each Friend is sent over RPC to the app's frontend (the Friend's name). Only if there is a request to view or edit a particular Friend is more information needed. So, we can make retrieval more efficient by defining a parent summary entity and a child details entity. We do this by keeping the "summary" information in Friend, and placing the "details" in a FriendDetails object, which is set as a child of Friend via a JDO bidirectional, one-to-one owned relationship, as shown in Figure 1. We store the Friend's e-mail address and its list of associated URLs in FriendDetails. We'll keep the name information in Friend. That way, when we construct the initial 'FriendSummaries' list displayed on application load, and send it over RPC, we only need to access the summary object.

Figure 1: Splitting Friend data between a "main" Friend persistent class and a FriendDetails child class.

A details field of Friend points to the FriendDetails child, which we create when we create a Friend. In this way, the details will always be transparently available when we need them, but they will be lazily fetched—the details child object won't be initially retrieved from the database when we query Friend, and won't be fetched unless we need that information. As you may have noticed, the Friend model is already set up in this manner—this is the rationale for that design.

Discussion

When splitting a data model like this, consider the queries your app will perform and how the design of the data objects will support those queries. For example, if your app often needs to query for property1 == x and property2 == y, and especially if both individual filters can produce large result sets, you are probably better off keeping both those properties on the same entity (for example, retaining both fields on the "main" entity, rather than moving one to a "details" entity).

For persistent classes (that is, "data classes") that you often access and update, it is also worth considering whether any of their fields do not require indexes. This would be the case if you never perform a query that includes that field. The fewer the indexed fields of a persistent class, the quicker the writes of objects of that class.
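To make the summary/details split concrete, here is a stripped-down sketch of what the two persistent classes might look like. The book's actual classes use a bidirectional owned relationship and carry more fields; this simplified, unidirectional variant (with illustrative field names) only shows the parent/child split and the DataNucleus extension for leaving a property unindexed.

import javax.jdo.annotations.Extension;
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.IdentityType;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;
import com.google.appengine.api.datastore.Key;

@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable = "true")
public class Friend {

  @PrimaryKey
  @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
  private Key key;

  @Persistent
  private String name;            // summary data, sent to the frontend on load

  // Owned one-to-one child: persisted with this parent and fetched lazily,
  // so a plain query on Friend does not pull the details in.
  @Persistent
  private FriendDetails details;
}

@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable = "true")
class FriendDetails {

  @PrimaryKey
  @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
  private Key key;                // child key; its parent path is the Friend key

  // Detail data, needed only when a single Friend is viewed or edited.
  // gae.unindexed skips index writes for this property, speeding up puts.
  @Persistent
  @Extension(vendorName = "datanucleus", key = "gae.unindexed", value = "true")
  private String emailAddress;
}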
Splitting a model by creating an "index" and a "data" entity You can also consider splitting a model if you identify fields that you access only when performing queries, but don't require once you've actually retrieved the object. Often, this is the case with multi-valued properties. For example, in the Connectr app, this is the case with the friendKeys list of the server.domain.FeedIndex class. This multi-valued property is used to find relevant feed objects but is not used when displaying feed content information. With App Engine, there is no way for a query to retrieve only the fields that you need, so the full object must always be pulled in. If the multi-valued property lists are long, this is inefficient. To avoid this inefficiency, we can split up such a model into two parts, and put each one in a different entity—an index entity and a data entity. The index entity holds only the multi-valued properties (or other data) used only for querying, and the data entity holds the information that we actually want to use once we've identified the relevant objects. The trick to this new design is that the data entity key is defined to be the parent of the index entity key. More specifically, when an entity is created, its key can be defined as a "child" of another entity's key, which becomes its parent. The child is then in the same entity group as the parent. Because such a child key is based on the path of its parent key, it is possible to derive the parent key given only the child key, using the getParent() method of Key, without requiring the child to be instantiated. So with this design, we can first do a keys-only query on the index kind (which is faster than full object retrieval) to get a list of the keys of the relevant index entities. With that list, even though we've not actually retrieved the index objects themselves, we can derive the parent data entity keys from the index entity keys. We can then do a batch fetch with the list of relevant parent keys to grab all the data entities at once. This lets us retrieve the information we're interested in, without having to retrieve the properties that we do not need. See Brett Slatkin's presentation, Building scalable, complex apps on App Engine (http://code.google.com/events/ io/2009/sessions/BuildingScalableComplexApps. html) for more on this index/data design. Splitting the feed model into an "index" part (server.domain.FeedIndex) and a "data" part (server.domain.FeedInfo) Our feed model maps well to this design—we filter on the FeedIndex.friendKeys multi-valued property (which contains the list of keys of Friends that point to this feed) when we query for the feeds associated with a given Friend. But, once we have retrieved those feeds, we don't need the friendKeys list further. So, we would like to avoid retrieving them along with the feed content. With our app's sample data, these property lists will not comprise a lot of data, but they would be likely to do so if the app was scaled up. For example, many users might have the same friends, or many different contacts might include the same company blog in their associated feeds. So, we split up the feed model into an index part and a parent data part, as shown in Figure 2. The index class is server.domain.FeedIndex; it contains the friendKeys list for a feed. The data part, containing the actual feed content, is server.domain. FeedInfo. When a new FeedIndex object is created, its key will be constructed so that its corresponding FeedInfo object 's key is its parent key. 
This construction must, of course, take place at object creation, as Datastore entity keys cannot be changed. For a small-scale app, the payoff from this split model would perhaps not be worth it. But for the sake of example, let's assume that we expect our app to grow significantly. The FeedInfo persistent class—the parent class—simply uses an app-assigned String primary key, urlstring (the feed URL string). The server.domain.FeedIndex constructor, shown in the code below, uses the key of its FeedInfo parent—the URL string—to construct its key. This places the two entities into the same entity group and allows the parent FeedInfo key to be derived from the FeedIndex entity's key.

@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable="true")
public class FeedIndex implements Serializable {

  @PrimaryKey
  @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
  private Key key;
  ...

  public FeedIndex(String fkey, String url) {
    this.friendKeys = new HashSet<String>();
    this.friendKeys.add(fkey);
    KeyFactory.Builder keyBuilder =
        new KeyFactory.Builder(FeedInfo.class.getSimpleName(), url);
    keyBuilder.addChild(FeedIndex.class.getSimpleName(), url);
    Key ckey = keyBuilder.getKey();
    this.key = ckey;
  }

The following code, from server.servlets.FeedUpdateFriendServlet, shows how this model is used to efficiently retrieve the FeedInfo objects associated with a given Friend. Given a Friend key, a query is performed for the keys of the FeedIndex entities that contain this Friend key in their friendKeys list. Because this is a keys-only query, it is much more efficient than returning the actual objects. Then, each FeedIndex key is used to derive the parent (FeedInfo) key. Using that list of parent keys, a batch fetch is performed to fetch the FeedInfo objects associated with the given Friend. We did this without needing to actually fetch the FeedIndex objects.

// ... imports ...

@SuppressWarnings("serial")
public class FeedUpdateFriendServlet extends HttpServlet {

  private static Logger logger =
      Logger.getLogger(FeedUpdateFriendServlet.class.getName());

  public void doPost(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {

    PersistenceManager pm = PMF.get().getPersistenceManager();
    Query q = null;
    try {
      String fkey = req.getParameter("fkey");
      if (fkey != null) {
        logger.info("in FeedUpdateFriendServlet, updating feeds for: " + fkey);
        // query for matching FeedIndex keys
        q = pm.newQuery("select key from " + FeedIndex.class.getName() +
            " where friendKeys == :id");
        List ids = (List) q.execute(fkey);
        if (ids.size() == 0) {
          return;
        }
        // else, get the parent keys of the ids
        Key k = null;
        List<Key> parentlist = new ArrayList<Key>();
        for (Object id : ids) {
          // cast to Key
          k = (Key) id;
          parentlist.add(k.getParent());
        }
        // fetch the parents using the keys
        Query q2 = pm.newQuery("select from " + FeedInfo.class.getName() +
            " where urlstring == :keys");
        // allow eventual consistency on read
        q2.addExtension(
            "datanucleus.appengine.datastoreReadConsistency", "EVENTUAL");
        List<FeedInfo> results = (List<FeedInfo>) q2.execute(parentlist);
        if (results.iterator().hasNext()) {
          for (FeedInfo fi : results) {
            fi.updateRequestedFeed(pm);
          }
        }
      }
    } catch (Exception e) {
      logger.warning(e.getMessage());
    } finally {
      if (q != null) {
        q.closeAll();
      }
      pm.close();
    }
  }
} // end class
Ordered and Generic Tests in Visual Studio 2010

Packt
30 Nov 2010
5 min read
  Software Testing using Visual Studio 2010 Ordered tests The following screenshot shows the list of all the tests. You can see that the tests are independent and there is no link between the tests. We have different types of tests like Unit Test, Web Performance Test, and Load Test under the test project. Let's try to create an ordered test and place some of the dependent tests in an order so that the test execution happens in an order without breaking. Creating an ordered test There are different ways of creating ordered tests similar to the other tests: Select the test project from Solution Explorer, right-click and select Add Ordered Test, and then select ordered test from the list of different types of tests. Save the ordered test by choosing the File | Save option. Select the menu option Test then select New Test..., which opens a dialog with different test types. Select the test type and choose the test project from the Add to Test Project List drop-down and click on OK. Now the ordered test is created under the test project and the ordered test window is shown to select the existing tests from the test project and set the order. The preceding window shows different options for ordering the tests. The first line is the status bar, which shows the number of tests selected for the ordered test. The Select test list to view dropdown has the option to choose the display of tests in the available Test Lists. This dropdown has the default All Loaded Tests, which displays all available tests under the project. The other options in the dropdown are Lists of Tests and Tests Not in a List. The List of Tests will display the test lists created using the Test List Editor. It is easier to include the number of tests grouped together and order them. The next option, Tests Not in a List, displays the available tests, which are not part of any Test Lists. The Available tests list displays all the tests from the test project based on the option chosen in the dropdown. Selected tests contains the tests that are selected from the available tests list to be placed in order. The two right and left arrows are used for selecting and unselecting the tests from the Available tests list to the Selected Tests list. We can also select multiple tests by pressing the Ctrl key and selecting the tests. The up-down arrows on the right of the selected tests list are used for moving up or down the tests and setting the order for the testing in the Selected tests list. The last option, the Continue after failure checkbox at the bottom of the window, is to override the default behavior of the ordered tests, aborting the execution after the failure of any test. If the option Continue after failure is unchecked, and if any test in the order fails, then all remaining tests will get aborted. In case the tests are not dependent, we can check this option and override the default behavior to allow the application to continue running the remaining tests in order. Properties of an ordered test Ordered tests have properties similar to the other test types, in addition to some specific properties. To view the properties, select the ordered test in the Test View or Test List Editor window, right-click and select the Properties option. The Properties dialog box displays the available properties for the ordered test. The preceding screenshot shows that most of the properties are the same as the properties of the other test types. We can associate this test with the TFS work items, iterations, and area. 
Executing an ordered test An ordered test can be run like any other test. Open the Test View window or the Test List Editor and select the ordered test from the list, then right-click and choose the Run Selection option from Test View or Run Checked Tests from the Test List Editor. Once the option is selected, we can see the tests running one after the other in the same order in which they are placed in the ordered test. After the execution of the ordered tests, the Test Results window will show the status of the ordered test. If any of the tests in the list fails, then the ordered test status will be Failed. The summary of statuses of all the tests in the ordered test is shown in the following screenshot in the toolbar. The sample ordered test application had four tests in the ordered tests, but two of them failed and one had an error. Clicking the Test run failed hyperlink in the status bar shows a detailed view of the test run summary: The Test Results window also provides detailed information about the tests run so far. To get these details, choose the test from the Test Results window and then right-click and choose the option, View Test Results Details, which opens the details window and displays the common results information such as Test Name, Result, Duration of the test run, Start Time, End Time, and so on. The details window also displays the status of each and every test run within the ordered test. In addition it displays the duration for each test run, name, owner, and type of test in the list. Even though the second test in the list fails, the other tests continue to execute as if the Continue after failure option was checked.
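If you also need to run an ordered test outside the IDE, for example from a build script, the MSTest command-line runner accepts the .orderedtest file as a test container. A hedged example (file and path names are illustrative; run it from a Visual Studio command prompt):

MSTest.exe /testcontainer:OrderedTest1.orderedtest /resultsfile:OrderedTestResults.trx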
Using Datastore Transactions in Google App Engine

Packt
30 Nov 2010
12 min read
Google App Engine Java and GWT Application Development Build powerful, scalable, and interactive web applications in the cloud Comprehensive coverage of building scalable, modular, and maintainable applications with GWT and GAE using Java Leverage the Google App Engine services and enhance your app functionality and performance Integrate your application with Google Accounts, Facebook, and Twitter Safely deploy, monitor, and maintain your GAE applications A practical guide with a step-by-step approach that helps you build an application in stages        As the App Engine documentation states, A transaction is a Datastore operation or a set of Datastore operations that either succeed completely, or fail completely. If the transaction succeeds, then all of its intended effects are applied to the Datastore. If the transaction fails, then none of the effects are applied. The use of transactions can be the key to the stability of a multiprocess application (such as a web app) whose different processes share the same persistent Datastore. Without transactional control, the processes can overwrite each other's data updates midstream, essentially stomping all over each other's toes. Many database implementations support some form of transactions, and you may be familiar with RDBMS transactions. App Engine Datastore transactions have a different set of requirements and usage model than you may be used to. First, it is important to understand that a "regular" Datastore write on a given entity is atomic—in the sense that if you are updating multiple fields in that entity, they will either all be updated, or the write will fail and none of the fields will be updated. Thus, a single update can essentially be considered a (small, implicit) transaction— one that you as the developer do not explicitly declare. If one single update is initiated while another update on that entity is in progress, this can generate a "concurrency failure" exception. In the more recent versions of App Engine, such failures on single writes are now retried transparently by App Engine, so that you rarely need to deal with them in application-level code. However, often your application needs stronger control over the atomicity and isolation of its operations, as multiple processes may be trying to read and write to the same objects at the same time. Transactions provide this control. For example, suppose we are keeping a count of some value in a "counter" field of an object, which various methods can increment. It is important to ensure that if one Servlet reads the "counter" field and then updates it based on its current value, no other request has updated the same field between the time that its value is read and when it is updated. Transactions let you ensure that this is the case: if a transaction succeeds, it is as if it were done in isolation, with no other concurrent processes 'dirtying' its data. Another common scenario: you may be making multiple changes to the Datastore, and you may want to ensure that the changes either all go through atomically, or none do. For example, when adding a new Friend to a UserAccount, we want to make sure that if the Friend is created, any related UserAcount object changes are also performed. While a Datastore transaction is ongoing, no other transactions or operations can see the work being done in that transaction; it becomes visible only if the transaction succeeds. 
Additionally, queries inside a transaction see a consistent "snapshot" of the Datastore as it was when the transaction was initiated. This consistent snapshot is preserved even after the in-transaction writes are performed. Unlike some other transaction models, with App Engine, a within-transaction read after a write will still show the Datastore as it was at the beginning of the transaction. Datastore transactions can operate only on entities that are in the same entity group. We discuss entity groups later in this article. Transaction commits and rollbacks To specify a transaction, we need the concepts of a transaction commit and rollback. A transaction must make an explicit "commit" call when all of its actions have been completed. On successful transaction commit, all of the create, update, and delete operations performed during the transaction are effected atomically. If a transaction is rolled back, none of its Datastore modifications will be performed. If you do not commit a transaction, it will be rolled back automatically when its Servlet exits. However, it is good practice to wrap a transaction in a try/finally block, and explicitly perform a rollback if the commit was not performed for some reason. This could occur, for example, if an exception was thrown. If a transaction commit fails, as would be the case if the objects under its control had been modified by some other process since the transaction was started the transaction is automatically rolled back. Example—a JDO transaction With JDO, a transaction is initiated and terminated as follows: import javax.jdo.PersistenceManager; import javax.jdo.Transaction; ... PersistenceManager pm = PMF.get().getPersistenceManager(); Transaction tx; ... try { tx = pm.currentTransaction(); tx.begin(); // Do the transaction work tx.commit(); } finally { if (tx.isActive()) { tx.rollback(); } } A transaction is obtained by calling the currentTransaction() method of the PersistenceManager. Then, initiate the transaction by calling its begin() method . To commit the transaction, call its commit() method . The finally clause in the example above checks to see if the transaction is still active, and does a rollback if that is the case. While the preceding code is correct as far as it goes, it does not check to see if the commit was successful, and retry if it was not. We will add that next. App Engine transactions use optimistic concurrency In contrast to some other transactional models, the initiation of an App Engine transaction is never blocked. However, when the transaction attempts to commit, if there has been a modification in the meantime (by some other process) of any objects in the same entity group as the objects involved in the transaction, the transaction commit will fail. That is, the commit not only fails if the objects in the transaction have been modified by some other process, but also if any objects in its entity group have been modified. For example, if one request were to modify a FeedInfo object while its FeedIndex child was involved in a transaction as part of another request, that transaction would not successfully commit, as those two objects share an entity group. App Engine uses an optimistic concurrency model. This means that there is no check when the transaction initiates, as to whether the transaction's resources are currently involved in some other transaction, and no blocking on transaction start. 
The commit simply fails if it turns out that these resources have been modified elsewhere after initiating the transaction. Optimistic concurrency tends to work well in scenarios where quick response is valuable (as is the case with web apps) but contention is rare, and thus, transaction failures are relatively rare. Transaction retries With optimistic concurrency, a commit can fail simply due to concurrent activity on the shared resource. In that case, if the transaction is retried, it is likely to succeed. So, one thing missing from the previous example is that it does not take any action if the transaction commit did not succeed. Typically, if a commit fails, it is worth simply retrying the transaction. If there is some contention for the objects in the transaction, it will probably be resolved when it is retried. PersistenceManager pm = PMF.get().getPersistenceManager(); // ... try { for (int i =0; i < NUM_RETRIES; i++) { pm.currentTransaction().begin(); // ...do the transaction work ... try { pm.currentTransaction().commit(); break; } catch (JDOCanRetryException e1) { if (i == (NUM_RETRIES - 1)) { throw e1; } } } } finally { if (pm.currentTransaction().isActive()) { pm.currentTransaction().rollback(); } pm.close(); } As shown in the example above, you can wrap a transaction in a retry loop, where NUM_RETRIES is set to the number of times you want to re-attempt the transaction. If a commit fails, a JDOCanRetryException will be thrown. If the commit succeeds, the for loop will be terminated. If a transaction commit fails, this likely means that the Datastore has changed in the interim. So, next time through the retry loop, be sure to start over in gathering any information required to perform the transaction. Transactions and entity groups An entity's entity group is determined by its key. When an entity is created, its key can be defined as a child of another entity's key, which becomes its parent. The child is then in the same entity group as the parent. That child's key could in turn be used to define another entity's key, which becomes its child, and so on. An entity's key can be viewed as a path of ancestor relationships, traced back to a root entity with no parent. Every entity with the same root is in the same entity group. If an entity has no parent, it is its own root. Because entity group membership is determined by an entity's key, and the key cannot be changed after the object is created, this means that entity group membership cannot be changed either. As introduced earlier, a transaction can only operate on entities from the same entity group. If you try to access entities from different groups within the same transaction, an error will occur and the transaction will fail. In App Engine, JDO owned relationships place the parent and child entities in the same entity group. That is why, when constructing an owned relationship, you cannot explicitly persist the children ahead of time, but must let the JDO implementation create them for you when the parent is made persistent. JDO will define the keys of the children in an owned relationship such that they are the child keys of the parent object key. This means that the parent and children in a JDO owned relationship can always be safely used in the same transaction. (The same holds with JPA owned relationships). So in the Connectr app, for example, you could create a transaction that encompasses work on a UserAccount object and its list of Friends—they will all be in the same entity group. 
But, you could not include a Friend from a different UserAccount in that same transaction—it will not be in the same entity group. This App Engine constraint on transactions—that they can only encompass members of the same entity group—is enforced in order to allow transactions to be handled in a scalable way across App Engine's distributed Datastores. Entity group members are always stored together, not distributed.

Creating entities in the same entity group

As discussed earlier, one way to place entities in the same entity group is to create a JDO owned relationship between them; JDO will manage the child key creation so that the parent and children are in the same entity group. To explicitly create an entity with an entity group parent, you can use the App Engine KeyFactory.Builder class. This is the approach used in the FeedIndex constructor example shown previously. Recall that you cannot change an object's key after it is created, so you have to make this decision when you are creating the object. Your "child" entity must use a primary key of type Key or String-encoded Key; these key types allow parent path information to be encoded in them. As you may recall, it is required to use one of these two types of keys for JDO owned relationship children, for the same reason. If the data class of the object for which you want to create an entity group parent uses an app-assigned string ID, you can build its key as follows:

// You can construct a Builder from the parent entity's kind and ID:
KeyFactory.Builder keyBuilder =
    new KeyFactory.Builder(Class1.class.getSimpleName(), parentIDString);
// ...or, alternatively, construct it from the parent Key object itself:
// KeyFactory.Builder keyBuilder = new KeyFactory.Builder(pkey);
// Then construct the child key
keyBuilder.addChild(Class2.class.getSimpleName(), childIDString);
Key ckey = keyBuilder.getKey();

Create a new KeyFactory.Builder using the key of the desired parent. You may specify the parent key as either a Key object or via its entity name (the simple name of its class) and its app-assigned (String) or system-assigned (numeric) ID, as appropriate. Then, call the addChild method of the Builder with its arguments—the entity name and the app-assigned ID string that you want to use. Then, call the getKey() method of the Builder. The generated child key encodes parent path information. Assign the result to the child entity's key field. When the entity is persisted, its entity group parent will be that entity whose key was used as the parent. This is the approach we showed previously in the constructor of FeedIndex, creating its key using its parent FeedInfo key. See http://code.google.com/appengine/docs/java/javadoc/com/google/appengine/api/datastore/KeyFactory.Builder.html for more information on key construction.

If the data class of the object for which you want to create an entity group parent uses a system-assigned ID, then (because you don't know this ID ahead of time) you must go about creating the key in a different way. Create an additional field in your data class for the parent key, of the appropriate type for the parent key, as shown in the following code:

@PrimaryKey
@Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
private Key key;
...

@Persistent
@Extension(vendorName="datanucleus", key="gae.parent-pk", value="true")
private String parentKey;

Assign the parent key to this field prior to creating the object. When the object is persisted, the data object's primary key field will be populated using the parent key as the entity group parent.
You can use this technique with any child key type.
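Pulling these pieces together, the following sketch wraps a read-modify-write of a UserAccount (and therefore anything in its entity group, such as its owned Friend children) in a retried JDO transaction. The helper class and the field being updated are illustrative, not taken from the Connectr code; only the transaction and retry pattern mirrors the examples above.

import javax.jdo.JDOCanRetryException;
import javax.jdo.PersistenceManager;

public class AccountUpdater {

  private static final int NUM_RETRIES = 3;

  public void renameAccount(String accountKey, String newName) {
    PersistenceManager pm = PMF.get().getPersistenceManager();
    try {
      for (int i = 0; i < NUM_RETRIES; i++) {
        pm.currentTransaction().begin();
        // Everything read or written here must belong to one entity group;
        // a UserAccount and its owned children qualify.
        UserAccount account = pm.getObjectById(UserAccount.class, accountKey);
        account.setName(newName);   // illustrative field
        try {
          pm.currentTransaction().commit();
          break;                    // success, stop retrying
        } catch (JDOCanRetryException e) {
          if (i == NUM_RETRIES - 1) {
            throw e;                // give up after the last attempt
          }
          // otherwise loop: the entity is re-read inside the next
          // transaction, since the Datastore may have changed meanwhile
        }
      }
    } finally {
      if (pm.currentTransaction().isActive()) {
        pm.currentTransaction().rollback();
      }
      pm.close();
    }
  }
}

Note that each retry re-reads the UserAccount inside the new transaction, which is exactly the "start over in gathering any information" advice given earlier.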
Web Services in Microsoft Azure

Packt
29 Nov 2010
8 min read
A web service is not one single entity and consists of three distinct parts: An endpoint, which is the URL (and related information) where client applications will find our service A host environment, which in our case will be Azure A service class, which is the code that implements the methods called by the client application A web service endpoint is more than just a URL. An endpoint also includes: The bindings, or communication and security protocols The contract (or promise) that certain methods exist, how these methods should be called, and what the data will look like when returned A simple way to remember the components of an endpoint is A/B/C, that is, address/bindings/contract. Web services can fill many roles in our Azure applications—from serving as a simple way to place messages into a queue, to being a complete replacement for a data access layer in a web application (also known as a Service Oriented Architecture or SOA). In Azure, web services serve as HTTP/HTTPS endpoints, which can be accessed by any application that supports REST, regardless of language or operating system. The intrinsic web services libraries in .NET are called Windows Communication Foundation (WCF). As WCF is designed specifically for programming web services, it's referred to as a service-oriented programming model. We are not limited to using WCF libraries in Azure development, but we expect it to be a popular choice for constructing web services being part of the .NET framework. A complete introduction to WCF can be found at http://msdn.microsoft.com/en-us/netframework/aa663324.aspx. When adding WCF services to an Azure web role, we can either create a separate web role instance, or add the web services to an existing web role. Using separate instances allows us to scale the web services independently of the web forms, but multiple instances increase our operating costs. Separate instances also allow us to use different technologies for each Azure instance; for example, the web form may be written in PHP and hosted on Apache, while the web services may be written in Java and hosted using Tomcat. Using the same instance helps keep our costs much lower, but in that case we have to scale both the web forms and the web services together. Depending on our application's architecture, this may not be desirable. Securing WCF Stored data are only as secure as the application used for accessing it. The Internet is stateless, and REST has no sense of security, so security information must be passed as part of the data in each request. If the credentials are not encrypted, then all requests should be forced to use HTTPS. If we control the consuming client applications, we can also control the encryption of the user credentials. Otherwise, our only choice may be to use clear text credentials via HTTPS. For an application with a wide or uncontrolled distribution (like most commercial applications want to be), or if we are to support a number of home-brewed applications, the authorization information must be unique to the user. Part of the behind-the-services code should check to see if the user making the request can be authenticated, and if the user is authorized to perform the action. This adds additional coding overhead, but it's easier to plan for this up front. There are a number of ways to secure web services—from using HTTPS and passing credentials with each request, to using authentication tokens in each request. 
As it happens, using authentication tokens is part of the AppFabric Access Control, and we'll look more into the security for WCF when we dive deeper into Access Control. Jupiter Motors web service In our corporate portal for Jupiter Motors, we included a design for a client application, which our delivery personnel will use to update the status of an order and to decide which customers will accept delivery of their vehicle. For accounting and insurance reasons, the order status needs to be updated immediately after a customer accepts their vehicle. To do so, the client application will call a web service to update the order status as soon as the Accepted button is clicked. Our WCF service is interconnected to other parts of our Jupiter Motors application, so we won't see it completely in action until it all comes together. In the meantime, it will seem like we're developing blind. In reality, all the components would probably be developed and tested simultaneously. Creating a new WCF service web role When creating a web service, we have a choice to add the web service to an existing web role or create a new web role. This helps us deploy and maintain our website application separately from our web services. And in order for us to scale the web role independently from the worker role, we'll create our web service in a role separate from our web application. Creating a new WCF service web role is very simple—Visual Studio will do the "hard work" for us and allow us to start coding our services. First, open the JupiterMotors project. Create the new web role by right-clicking on the Roles folder in our project, choosing Add, and then select the New Web Role Project… option. When we do this, we will be asked what type of web role we want to create. We will choose a WCF Service Web Role, call it JupiterMotorsWCFRole, and click on the Add button. Because different services must have unique names in our project, a good naming convention to use is the project name concatenated with the type of role. This makes the different roles and instances easily discernable and complies with the unique naming requirement. This is where Visual Studio does its magic. It creates the new role in the cloud project, creates a new web role for our WCF web services, and creates some template code for us. The template service created is called "Service1". You will see both, a Service1.svc file as well as an IService1.vb file. Also, a web.config file (as we would expect to see in any web role) is created in the web role and is already wired up for our Service1 web service. All of the generated code is very helpful if you are learning WCF web services. This is what we should see once Visual Studio finishes creating the new project: We are going to start afresh with our own services—we can delete Service1.svc and IService1.vb. Also, in the web.config file, the following boilerplate code can be deleted (we'll add our own code as needed): <system.serviceModel> <services> <service name="JupiterMotorsWCFRole.Service1" behaviorConfiguration="JupiterMotorsWCFRole. Service1Behavior"> <!-- Service Endpoints --> <endpoint address="" binding="basicHttpBinding" contract="JupiterMotorsWCFRole.IService1"> <!-- Upon deployment, the following identity element should be removed or replaced to reflect the identity under which the deployed service runs. If removed, WCF will infer an appropriate identity automatically. 
        -->
        <identity>
          <dns value="localhost"/>
        </identity>
      </endpoint>
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="JupiterMotorsWCFRole.Service1Behavior">
        <!-- To avoid disclosing metadata information, set the value below to false
             and remove the metadata endpoint above before deployment -->
        <serviceMetadata httpGetEnabled="true"/>
        <!-- To receive exception details in faults for debugging purposes, set the value
             below to true. Set to false before deployment to avoid disclosing exception
             information -->
        <serviceDebug includeExceptionDetailInFaults="false"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>

Let's now add a WCF service to the JupiterMotorsWCFRole project. To do so, right-click on the project, choose Add, and select the New Item... option. We choose a WCF Service and name it ERPService.svc:

Just as when we created the web role, ERPService.svc and IERPService.vb files were created for us, and these are now wired into the web.config file. There is some generated code in the ERPService.svc and IERPService.vb files, but we will replace it with our own code in the next section.

When we create a web service, the actual service class is created with the name we specify. Additionally, an interface class is automatically created. We can specify a name for this class too; however, being an interface class, its name will always begin with the letter I. This is a special type of interface class, called a service contract. The service contract describes which methods and return types are available in our web service.
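Before we write the real ERP service, here is a minimal, hedged sketch of what a service contract and its implementing class look like in VB.NET. The interface, class, and method names are placeholders for illustration only; they are not the actual Jupiter Motors contract:

Imports System.ServiceModel

' Hypothetical contract: the names below are placeholders, not the real IERPService members.
<ServiceContract()> Public Interface IExampleService

    ' Only methods marked as operation contracts are exposed to client applications.
    <OperationContract()> Function GetOrderStatus(ByVal orderId As Integer) As String

End Interface

' The service class implements the contract; this is the code clients actually call.
Public Class ExampleService
    Implements IExampleService

    Public Function GetOrderStatus(ByVal orderId As Integer) As String Implements IExampleService.GetOrderStatus
        Return "Accepted"
    End Function

End Class

Client applications program against the interface (the contract), while the .svc file points WCF at the implementing class.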


An Overview of Oracle Advanced Pricing

Packt
23 Nov 2010
4 min read
Oracle E-Business Suite R12 Supply Chain Management
Drive your supply chain processes with Oracle E-Business R12 Supply Chain Management to achieve measurable business gains
Put supply chain management principles to practice with Oracle EBS SCM
Develop insight into the process and business flow of supply chain management
Set up all of the Oracle EBS SCM modules to automate your supply chain processes
A case study to learn how an Oracle EBS implementation takes place

Oracle Advanced Pricing is the pricing engine for the Oracle E-Business Suite. The pricing engine works through the following scenario:

What: the context of the product, finalized by the product attribute—all items, an item category, or an item code.
Who: the qualifier, which tells us who will be charged. At this step, the qualifier decides which modifier will determine the price.
How: how the modifiers apply for the selected qualifier. Modifiers can be used to give discounts at sales and promotions, or to apply special duties and charges for particular customers or locations, and so on.

After these three steps, the price for an item is finalized by the pricing engine.

The key functionalities of Oracle Advanced Pricing

The key functionalities of Oracle Advanced Pricing include the following:
Defining and assigning rules for pricing products
Applying different types of discounts and surcharges to pricing
Creating a price list for different pricing criteria
Creating formulas to calculate pricing
Creating conversion rates for the usage of multiple currencies
Integration with different EBS modules for optimized pricing
Supporting the TCA party hierarchy for price lists
Managing all business scenarios efficiently through the combined use of qualifiers, modifiers, and formulas
Targeting a specific item definition with the help of the pricing attribute
Making our own rules using qualifiers; for example, if today is Saturday, then there is a 15 percent discount on the product
Multi-level responsibilities, such as pricing administrator, pricing manager, and pricing user

Oracle Advanced Pricing process

The Oracle Advanced Pricing process normally initiates when a price for an item, created in the price list, is called by the application. The qualifier and pricing attributes are used to select the eligible price or modifier. The price, or the modified price adjustment in the form of a discount or surcharge, is applied and the final price is obtained. This final price is then applied to the item in the requesting application.

Price list

The price list is the list of prices for different items and products. Each price list can have one or more price lines for an item, and it contains qualifiers and pricing attributes. The prices of items in a price list can be constant values that are picked up at the time of ordering, or they can be derived using formulas and percentages.

Qualifier

Qualifiers are rules that control who gets a particular price. A qualifier contains a qualifier context and a qualifier attribute, which together create a logical grouping and define who is eligible for these prices. Qualifier attributes can be the order type, source type, order category, customer PO, and so on. Qualifiers also have operators that create conditions such as equal to, between, not equal to, and so on.

Modifiers

Modifiers allow us to adjust prices.
Using a modifier, we can either increase or decrease the current price-list price. The modifier types available to us (price adjustments, surcharges, promotions, and discounts) come from a list type code with a system access level.

Formulas

In Oracle Advanced Pricing, formulas are used to price items. Formulas contain the arithmetic and mathematical expressions used by the pricing process, and these arithmetic equations provide us with the final price of an item. If a formula is associated with a price list, then we cannot use constant and absolute values for that particular item.

Integration of Oracle Advanced Pricing with other modules

Oracle Advanced Pricing is fully integrated with other Oracle E-Business Suite modules. The following modules are integrated with Oracle Advanced Pricing:
Oracle Purchasing
Oracle Order Management
Oracle Service Contracts
Oracle Sales Contracts
Oracle iStore
Oracle Transportation
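To see how these pieces fit together, take the Saturday rule mentioned earlier and assume, purely for illustration, a price-list value of $100 for an item. The qualifier checks the condition (the order is placed on a Saturday), so the 15 percent discount modifier applies: 100 - (100 x 0.15) = 85, and the pricing engine returns $85 as the final price. If a formula were attached to the price list instead, the $100 starting value would itself be derived from that formula rather than being a constant.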

Apache Felix Gogo

Packt
10 Nov 2010
10 min read
OSGi and Apache Felix 3.0 Beginner's Guide
Build your very own OSGi applications using the flexible and powerful Felix Framework
Build a completely operational real-life application composed of multiple bundles and a web front end using Felix
Get yourself acquainted with the OSGi concepts, in an easy-to-follow progressive manner
Learn everything needed about the Felix Framework and get familiar with Gogo, its command-line shell, to start developing your OSGi applications
Simplify your OSGi development experience by learning about Felix iPOJO
A relentlessly practical beginner's guide that will walk you through making real-life OSGi applications while showing you the development tools (Maven, Eclipse, and so on) that will make the journey more enjoyable

The Tiny Shell Language

The command syntax for the shell interface is based on the Tiny Shell Language (TSL). It is simple enough to allow a lightweight implementation, yet provides features such as pipes, closures, variable setting and referencing, collection types such as lists and maps, and so on. The TSL syntax allows the creation of scripts that can be executed by the shell runtime service.

The introduction you will get here does not cover the complete syntax; instead, you will see the basic parts of it. For a review of the proposal in its initial state, see the OSGi 4.2 early draft (http://www.osgi.org/download/osgi-4.2-early-draft.pdf). You may also refer to the RFC 147 Overview on the Felix documentation pages (http://felix.apache.org/site/rfc-147-overview.html) for potential differences with the initial draft.

Chained execution

A program is a set of chained execution blocks. Blocks are executed in parallel, and the output of a block is streamed as input to the next. Blocks are separated by the pipe character (|). Each block is made up of a sequence of statements, separated by a semicolon (;).

For example, as we'll see in the next section, the bundles command lists the currently installed bundles and the grep command takes a parameter that it uses to filter its input. The program below:

bundles | grep gogo

is made of two statement blocks, namely bundles and grep gogo. The output of the bundles statement will be connected to the input of the grep gogo statement (here, each statement block contains one statement). Running this program on your Felix installation, in the state it is in now, will produce:

g! bundles | grep gogo
    2|Active     |    1|org.apache.felix.gogo.command (0.6.0)
    3|Active     |    1|org.apache.felix.gogo.runtime (0.6.0)
    4|Active     |    1|org.apache.felix.gogo.shell (0.6.0)
true

The grep statement has filtered the output of the bundles statement for lines containing the filter string gogo. In this case, the grep statement outputs the results of its execution to the shell, which prints them.

Executing the statement grep gogo on its own, without a piped block that feeds it input, will connect its input to the user command line. In that case, use Ctrl-Z to terminate your input:

g! grep gogo
line 1
line 2 gogo
line 2 gogo
line 3
^Z
true

Notice that line 2 gogo is repeated right after you have entered it, showing that the grep statement is running in parallel: it receives the input and processes it right after you enter it.

Variable assignment and referencing

A session variable is assigned a value using the equals character (=) and referenced using its name preceded by a dollar character ($). For example:

g! var1 = 'this is a string'
this is a string
g! echo $var1
this is a string

The assignment operation returns the assigned value.
Value types

We've seen the string type previously, which is indicated by surrounding text with single quotes (').

A list is a sequence of terms separated by whitespace characters and delimited by an opening and a closing square bracket. For example:

g! days = [ mon tue wed thu fri sat sun ]
mon tue wed thu fri sat sun

Here the variable days was created, assigned the list as a value, and stored in the session.

A map is a list of assignments in which each value is assigned to its key using the equals character (=). For example:

g! sounds = [ dog=bark cat=meow lion=roar ]
dog bark
cat meow
lion roar

Here, the variable sounds is assigned a map with the preceding key-value pairs.

Object properties and operations

The shell uses a mapping process that involves reflection to find the best operation to perform for a request. We're not going to go into the details of how this happens; instead, we'll give a few examples of the operations that can be performed. We'll see a few others as we go along.

In the same session in which days and sounds were defined previously, to retrieve an entry in the $days list:

g! $days get 1
tue

To retrieve an entry in the $sounds map:

g! $sounds get dog
bark

An example we've seen earlier is the bundles command used when illustrating piping: bundles is mapped to the method getBundles() of the Gogo Runtime bundle's BundleContext instance. Another property of this object that we'll use in the next section is bundle <id>, which gets a bundle object instance using getBundle(long).

Execution quotes

Similar to the UNIX back-quote syntax, but simpler for a lightweight parser, execution quotes are used to return the output of an executed program. For example:

g! (bundle 1) location
file:/C:/felix/bundle/org.apache.felix.bundlerepository-1.6.2.jar

Here, (bundle 1) has returned the bundle with ID 1, which we've re-used to retrieve the property location, making use of Gogo's reflection on beans (location is mapped to getLocation() on the Bundle object).

Commands and scopes

The Gogo Runtime command processor is extensible and allows any bundle to register the commands it needs to expose to the user. When the user types a command, the processor attempts to find the method that best fits the command name and the passed arguments. However, there are potential cases where two bundles would need to register the same command name. To avoid this clash, commands are registered with an optional scope. When there is no ambiguity as to which scope the command belongs to, the command can be used without its scope; otherwise, the scope must be included.

The scope of a command is specified by prepending it to the command name, separated by a colon (:). In the previous examples, we've used the grep command, which is in the gogo scope; in this case, grep and gogo:grep achieve the same result. We will look more closely at the command registration mechanism later; a rough sketch of what such a registration can look like is shown at the end of this article.

Let's take a tour of some of the commands available in the Felix distribution. At the time of writing this article, the Gogo bundles are at version 0.6.0, which means that they are not yet finalized and may change by the time they are released as version 1.0.

felix scope commands

One of the many powerful features of Felix (and OSGi-compliant applications in general) is that many actions can be applied to bundles without needing to restart the framework. Bundles can be installed, updated, uninstalled, and so on while the remaining functionality of the framework stays active.
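As a quick, hedged illustration of this (the bundle URL and the resulting ID below are hypothetical), a lifecycle session in the shell might look like the following, with the framework and all other bundles staying up the whole time:

g! install http://site.com/example-bundle-1.0.0.jar
Bundle ID: 7
g! start 7
g! stop 7
g! update 7
g! uninstall 7

The install, start, stop, update, and uninstall commands used here all live in the felix scope, as the help listing shown shortly confirms.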
The following are some of the available commands with a description of their usage. We will get to use many of them as we go along, so you need not worry about learning them by heart; just know that they exist.

Listing installed bundles: lb

One of the most frequently used shell commands is the list bundles command (lb), which gives a listing of the currently installed bundles, showing some information on each of them. Let's check what's running on our newly installed framework:

g! lb
START LEVEL 1
   ID|State      |Level|Name
    0|Active     |    0|System Bundle (3.0.1)
    1|Active     |    1|Apache Felix Bundle Repository (1.6.2)
    2|Active     |    1|Apache Felix Gogo Command (0.6.0)
    3|Active     |    1|Apache Felix Gogo Runtime (0.6.0)
    4|Active     |    1|Apache Felix Gogo Shell (0.6.0)

The listing provides the following useful information about each bundle:
Each bundle is given a unique ID on install—this ID is used by commands such as update or uninstall to apply actions to that bundle
The bundle's start level
The bundle's name and version

This command also takes a parameter for filtering the bundles list. For example, to include only bundles that have 'bundle' in their name:

g! lb bundle
START LEVEL 1
   ID|State      |Level|Name
    0|Active     |    0|System Bundle (3.0.1)
    1|Active     |    1|Apache Felix Bundle Repository (1.6.2)

help

The help command provides hints on the usage of commands. When called without any parameters, the help command gives a listing of the available commands:

g! help
felix:bundlelevel felix:cd felix:frameworklevel felix:headers felix:help
felix:inspect felix:install felix:lb felix:log felix:ls felix:refresh
felix:resolve felix:start felix:stop felix:uninstall felix:update felix:which
gogo:cat gogo:each gogo:echo gogo:format gogo:getopt gogo:gosh gogo:grep
gogo:not gogo:set gogo:sh gogo:source gogo:tac gogo:telnetd gogo:type gogo:until
obr:deploy obr:info obr:javadoc obr:list obr:repos obr:source

More help on the syntax of each command can be requested by typing help <command-name>. For example, for more help on the repos command:

g! help repos
repos - manage repositories
   scope: obr
   parameters:
      String ( add | list | refresh | remove )
      String[] space-delimited list of repository URL

When a command is available with multiple signatures, a help block is provided per signature, for example:

g! help help
help - displays information about a specific command
   scope: felix
   parameters:
      String target command
help - displays available commands
   scope: felix

Here, the help command has two syntaxes: one that takes a parameter (the target command) and another that takes no parameters. We've used the first one to get help on help.

Some commands may not have registered help content with the shell service. Those will show minimal information using help <command>. In most cases, they expose a separate help listing—usually <command> -? or <command> --help.

install

The install command is used to instruct Felix to install an external bundle. The syntax is as follows:

g! help install
install - install bundle using URLs
   scope: felix
   parameters:
      String[] target URLs

Each bundle is located using its URL and is downloaded to the local cache for installation. Once a bundle is installed, it is given a unique ID. This ID is used to refer to the bundle when using commands such as update or uninstall. For example:

g! install http://www.mysite.com/testbundle-1.0.0.jar
Bundle ID: 7

Here, the bundle I've just installed has the ID 7.
g! lb
START LEVEL 1
   ID|State      |Level|Name
    0|Active     |    0|System Bundle (3.0.1)
    1|Active     |    1|Apache Felix Bundle Repository (1.6.2)
    2|Active     |    1|Apache Felix Gogo Command (0.6.0)
    3|Active     |    1|Apache Felix Gogo Runtime (0.6.0)
    4|Active     |    1|Apache Felix Gogo Shell (0.6.0)
    7|Installed  |    1|Test Bundle (1.0.0)

In cases where many bundles are to be installed from the same base URL, you may want to set a session variable with the common base URL to simplify the task. For example, instead of executing:

g! install http://site.com/bundle1.jar http://site.com/bundle2.jar

you would write:

g! b = http://site.com
g! install $b/bundle1.jar $b/bundle2.jar
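As promised above, here is a rough sketch, under the assumption of a plain BundleActivator and with placeholder scope and command names, of how a bundle can register its own command with the Gogo runtime. Commands are ordinary OSGi services published with the osgi.command.scope and osgi.command.function service properties:

import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {

    // Hypothetical command implementation: each listed public method becomes a command.
    public static class GreetCommand {
        public String hello(String name) {
            return "Hello, " + name;
        }
    }

    public void start(BundleContext context) {
        Hashtable<String, Object> props = new Hashtable<String, Object>();
        props.put("osgi.command.scope", "example");                    // placeholder scope
        props.put("osgi.command.function", new String[] { "hello" });  // exposed functions
        context.registerService(GreetCommand.class.getName(), new GreetCommand(), props);
    }

    public void stop(BundleContext context) {
        // Nothing to clean up; the service is unregistered automatically when the bundle stops.
    }
}

Once such a bundle is installed and started, the command would be callable as example:hello (or simply hello when the name is unambiguous).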


Getting Started with Enterprise Library

Packt
10 Nov 2010
6 min read
Introducing Enterprise Library

Enterprise Library (EntLib) is a collection of reusable software components, or application blocks, designed to assist software developers with common enterprise development challenges. Each application block addresses a specific cross-cutting concern and provides highly configurable features, which results in higher developer productivity. EntLib is implemented and provided by the Microsoft patterns & practices group, a dedicated team of professionals who work on solving these cross-cutting concerns with active participation from the developer community. It is an open source project and thus freely available under the Microsoft Public License (Ms-PL) at the CodePlex open source community site (http://entlib.codeplex.com), which basically grants us a royalty-free copyright license to reproduce it, build derivative works, and distribute them. More information can be found at the Enterprise Library community site, http://www.codeplex.com/entlib.

Enterprise Library consists of nine application blocks; two are concerned with wiring things together and the remaining seven are functional application blocks. The following is the complete list of application blocks; these are briefly discussed in the next sections.

Wiring blocks:
Unity Dependency Injection
Policy Injection Application Block

Functional blocks:
Data Access Application Block
Logging Application Block
Exception Handling Application Block
Caching Application Block
Validation Application Block
Security Application Block
Cryptography Application Block

Wiring Application Blocks

Wiring blocks provide mechanisms to build highly flexible, loosely coupled, and maintainable systems. These blocks are mainly about wiring, or plugging together, different pieces of functionality. The following two blocks fall under this category:
Unity Dependency Injection
Policy Injection Application Block

Unity Application Block

The Unity Application Block is a lightweight, flexible, and extensible dependency injection container that supports interception and various injection mechanisms such as constructor, property, and method call injection. Unity is a standalone open source project that can be leveraged in our own applications, and it allows us to develop loosely coupled, maintainable, and testable applications. Enterprise Library itself leverages this block for wiring up the configured objects. More information on the Unity block is available at http://unity.codeplex.com.

Policy Injection Application Block

The Policy Injection Application Block is included in this release of Enterprise Library for backwards compatibility, and policy injection is implemented using the Unity interception mechanism. This block provides a mechanism to change object behavior by inserting code between the client and the target object, without modifying the code of the target object.

Functional Application Blocks

Enterprise Library consists of the following functional application blocks, which can be used individually or grouped together to address a specific cross-cutting concern:
Data Access Application Block
Logging Application Block
Exception Handling Application Block
Caching Application Block
Validation Application Block
Security Application Block
Cryptography Application Block

Data Access Application Block

Developing an application that stores and retrieves data in some kind of relational database is quite common; this involves performing CRUD (Create, Read, Update, Delete) operations on the database by executing T-SQL or stored procedure commands. But we often end up writing the same plumbing code over and over again to perform these operations: code for creating a connection object, opening and closing a connection, parameter caching, and so on. The following are the key benefits of the Data Access block:
The Data Access Application Block (DAAB) abstracts developers from the underlying database technology by providing a common interface for performing database operations.
DAAB also takes care of mundane tasks such as creating a connection object, opening and closing a connection, parameter caching, and so on.
It helps bring consistency to the application and allows the database type to be changed by modifying only the configuration.

Logging Application Block

Logging is an essential activity, required to understand what's happening behind the scenes while the application is running. It is especially helpful in identifying issues and tracing the source of a problem without debugging. The Logging Application Block provides a very simple, flexible, standard, and consistent way to log messages. Administrators have the power to change the log destination (file, database, e-mail, and so on), modify the message format, decide which categories are turned on or off, and so on.

Exception Handling Application Block

Handling exceptions appropriately, and allowing the user to either continue or exit gracefully, is essential for any application if user frustration is to be avoided. The Exception Handling Application Block adopts a policy-driven approach that allows developers and administrators to define how exceptions are handled. The following are the key benefits of the Exception Handling Block:
It provides the ability to log exception messages using the Logging Application Block.
It provides a mechanism to replace the original exception with another exception, which prevents the disclosure of sensitive information.
It provides a mechanism to wrap the original exception inside another exception so as to maintain contextual information.

Caching Application Block

Caching in general is a good practice for data with a long life span; caching is recommended if the possibility of the data being changed at the source is low and such a change doesn't have a significant impact on the application. The Caching Application Block allows us to cache data locally in our application; it also gives us the flexibility to cache the data in memory, in a database, or in isolated storage.

Validation Application Block

The Validation Application Block (VAB) provides various mechanisms to validate user input. As a rule of thumb, always assume user input is not valid unless proven otherwise. The Validation block allows us to perform validation in three different ways: we can use configuration, attributes, or code to provide the validation rules. Additionally, it includes adapters specifically targeting ASP.NET, Windows Forms, and Windows Communication Foundation (WCF).
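To give a feel for how little plumbing code these blocks require, here is a minimal, hedged VB.NET sketch that combines the Data Access, Logging, and Exception Handling blocks described above. The connection string name, stored procedure, logging category, and policy name are placeholders that would have to exist in the application's configuration:

Imports System.Data
Imports Microsoft.Practices.EnterpriseLibrary.Data
Imports Microsoft.Practices.EnterpriseLibrary.Logging
Imports Microsoft.Practices.EnterpriseLibrary.ExceptionHandling

Public Class OrderRepository

    Public Function GetOrders() As DataSet
        Try
            ' "JupiterMotors" is a placeholder connection string name from configuration.
            Dim db As Database = DatabaseFactory.CreateDatabase("JupiterMotors")
            ' "GetOrders" is a hypothetical stored procedure.
            Return db.ExecuteDataSet(CommandType.StoredProcedure, "GetOrders")
        Catch ex As Exception
            ' "General" and "Data Access Policy" are placeholder category and policy names.
            Logger.Write("Failed to load orders: " & ex.Message, "General")
            Dim rethrow As Boolean = ExceptionPolicy.HandleException(ex, "Data Access Policy")
            If rethrow Then
                Throw
            End If
            Return Nothing
        End Try
    End Function

End Class

Because the blocks are configuration-driven, changing the target database, the log destination, or the exception policy is then a configuration change rather than a code change.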
Security Application Block

The Security Application Block simplifies rule-based authorization and provides caching of the user's authorization and authentication data. Authorization can be performed against Microsoft Active Directory, Authorization Manager (AzMan), Active Directory Application Mode (ADAM), or a custom authorization provider. Decoupling the authorization code from the authorization provider allows administrators to change the provider in the configuration without changing the code.

Cryptography Application Block

The Cryptography Application Block provides a common API for performing basic cryptography operations without tying the code to any specific cryptography provider; the provider is configurable. Using this application block we can perform encryption and decryption, create hashes, and validate whether a hash matches some text.
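As a short, hedged illustration of the Cryptography block's API (the provider names below are placeholders that would be defined in the block's configuration):

Imports Microsoft.Practices.EnterpriseLibrary.Security.Cryptography

Module CryptographyExample

    Sub Demo()
        ' "hashProvider" and "symmetricProvider" are placeholder provider names from configuration.
        Dim hash As String = Cryptographer.CreateHash("hashProvider", "secret text")
        Dim matches As Boolean = Cryptographer.CompareHash("hashProvider", "secret text", hash)

        Dim cipherText As String = Cryptographer.EncryptSymmetric("symmetricProvider", "secret text")
        Dim plainText As String = Cryptographer.DecryptSymmetric("symmetricProvider", cipherText)
    End Sub

End Module

Because the providers are named in configuration, the hashing or symmetric algorithm can be swapped without touching this code.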