
How-To Tutorials - Programming

1081 Articles

Microsoft Azure Blob Storage

Packt
07 Jan 2011
5 min read
Microsoft Azure: Enterprise Application Development. Straight-talking advice on how to design and build enterprise applications for the cloud:

- Build scalable enterprise applications using Microsoft Azure
- The perfect fast-paced case study for developers and architects wanting to enhance core business processes
- Packed with examples to illustrate concepts
- Written in the context of building an online portal for the case-study application

Blobs in the Azure ecosystem

Blobs are one of the three simple storage options for Windows Azure, and are designed to store large files in binary format. There are two types of blobs: block blobs and page blobs. Block blobs are designed for streaming, and each can be up to 200 GB in size. Page blobs are designed for read/write access, and each can store up to 1 TB. If we're going to store images or video for use in our application, we'd store them in blobs.

On our local systems, we would probably store these files in different folders. In our Azure account, we place blobs into containers, and just as a local hard drive can contain any number of folders, each Azure account can have any number of containers. Similar to folders on a hard drive, access to blobs is set at the container level, where permissions can be either "public read" or "private". In addition to permission settings, each container can have 8 KB of metadata used to describe or categorize it (metadata are stored as name/value pairs). Each blob can be up to 1 TB, depending on its type, and can also have up to 8 KB of metadata. For data protection and scalability, each blob is replicated at least three times, and "hot" blobs are served from multiple servers.

Even though the cloud can accept blobs of up to 1 TB in size, Development Storage can accept blobs of only up to 2 GB. This is typically not an issue for development, but it is something to remember when developing locally.

Page blobs form the basis for Windows Azure Drive: a service that allows Azure storage to be mounted as a local NTFS drive on the Azure instance, allowing existing applications to run in the cloud and take advantage of Azure-based storage while requiring fewer changes to adapt to the Azure environment. Azure drives are individual virtual hard drives (VHDs) that can range in size from 16 MB to 1 TB. Each Windows Azure instance can mount up to 16 Azure drives, and these drives can be mounted or dismounted dynamically. A Windows Azure Drive can be mounted as readable/writable from a single instance of an Azure service, or as a read-only drive for multiple instances. At the time of writing, there was no driver that allowed direct access to the page blobs forming Azure drives, but the page blobs can be downloaded, used locally, and uploaded again using the standard blob API.

Creating Blob Storage

Blob Storage can be used independently of the other Azure services, and even if we've set up a Windows Azure or SQL Azure account, Blob Storage is not created for us automatically. To create a Blob Storage service, we need to follow these steps:

1. Log in to the Windows Azure Developer portal and select our project; we should then see the project page.
2. Click the New Service link on the application page to reach the service creation page.
3. Select Storage Account, then choose a name and description for our storage service. This information is used to identify our services in menus and listings.
4. Next, choose a unique name for our storage account. This name must be unique across all of Azure; it can include only lowercase letters and numbers, and must be at least three characters long.
5. If our account name is available, we then choose how to localize our data. Localization is handled by "affinity groups", which tie our storage service to the data centers in different geographic regions. For some applications, it may not matter where we locate our data. For other applications, we may want multiple affinity groups to provide timely content delivery. And for a few applications, regulatory requirements may mean we have to bind our data to a particular region.
6. Clicking the Create button creates our storage service, and when complete, a summary page is shown. The top half of the summary page reiterates the description of our service and provides the endpoints and 256-bit access keys. These access keys are very important: they are the authentication keys we need to pass in our requests if we want to access private storage or add/update a blob. The bottom portion of the page reiterates the affinity group the storage service belongs to. We can also enable a content delivery network and custom domain for our Blob Storage account.

Once we create a service, it's shown on the portal menu and in the project summary when we select a project. That's it! We now have our storage service created, and we're ready to look at blobs in a little more depth.
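The portal only provisions the account; containers and blobs themselves are created through the blob API. As a rough, modern illustration of the container and metadata concepts described above, here is a minimal sketch using the Python azure-storage-blob SDK, which postdates this article; the connection string, container name, and file name are placeholders:

from azure.storage.blob import BlobServiceClient

# Placeholder connection string; in practice it is assembled from the
# account name and one of the access keys shown on the summary page.
conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."
service = BlobServiceClient.from_connection_string(conn_str)

# Containers play the role of folders; the metadata is a small set of
# name/value pairs attached to the container itself.
container = service.create_container("images",
                                     metadata={"purpose": "portal-artwork"})

# Upload a file as a block blob inside the container.
with open("logo.png", "rb") as data:
    container.upload_blob(name="logo.png", data=data)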


Scribus: Manipulate and Place Objects in a Layout

Packt
07 Jan 2011
5 min read
Scribus 1.3.5: Beginner's Guide. Create optimum page layouts for your documents using the productive tools of Scribus:

- Master desktop publishing with Scribus
- Create professional-looking documents with ease
- Enhance the readability of your documents using powerful layout tools of Scribus
- Packed with interesting examples and screenshots that show you the most important Scribus tools to create and publish your documents

Resizing objects

It's time to work on the logo: it's really big, and we would like it to be aligned with the top part of the card. There are several ways to resize an object or frame.

Resizing with the mouse

When an object is selected (click on the logo, for example), you'll see a red rectangle outline. This doesn't affect the object's properties; it only shows that the object is selected. There are little red square handles at each corner and at the middle of each side. If the mouse moves over one of these handles, the cursor changes to a double arrow. If you press the left mouse button while the pointer is on one of them and then move the pointer, you'll see the size change according to the mouse movements. Just release the button when you're done. While you resize the frame, an information box appears near the pointer and displays the new width. You will notice that the proportions of the object are not kept, and that the logo is distorted. To avoid this, just press the Ctrl key while dragging the handles and the logo will be scaled proportionally.

Resizing with the Properties Palette

As an alternative, you can use the Width and Height fields of the XYZ tab in the Properties Palette. If you need to keep the ratio, make sure the chain button at the right-hand side of the field is activated. You can set the size in three ways:

- By scrolling the mouse wheel within the field. Pressing Ctrl or Shift while scrolling will increase or decrease the effect.
- By typing the size directly, if you already know it. This is mostly the case when you have a graphical charter that defines it, or when you're recreating an existing document.
- By using the small arrows at the right-hand side of the field (the same modifiers apply as described for the mouse wheel).

Resizing with the keyboard

Another way to resize objects is with the keyboard. It's useful when you're typing and need some extra space for more text, and don't want to take your hands off the keyboard. In this case, just:

1. Press Esc to enter the Layout mode and leave the Content mode.
2. Press Alt and one of the arrow keys at the same time.
3. Press E to go back to Content Edit mode.

If you do some tests, you'll find that each arrow controls a side: the left arrow affects the size by moving the left-hand side, the right arrow affects the right-hand side, and so on. Note that with this method the shape can only grow.

Have a go hero – vector circle style

Over the past two or three years, you might have noticed that shapes are increasingly used in their pure form. For example, check this easy sample and try to reproduce it as closely as you can: copy-paste, moving, and resizing are all you'll need to know.

Scaling objects

Scaling objects: what can be different here from resizing? Once more, it's on Text Frames that the difference is most evident. Compare the results you can get. The difference is simple: in the top example the content has been scaled with the frame, while in the second only the frame is scaled. Scaling, then, affects the content as well.
You can scale a Text Frame (along with its content) by pressing the Alt key while resizing with the mouse. As always, the Alt key applies while the mouse button is held during the resizing movement. So, did you notice something missing from our card?

Time for action – scaling the name of our company

Let's say that our company name is "GraphCo", as in the previous image, and that we want to add it to the card.

1. Take the Insert Text Frame tool and draw a little frame on the page. An alternative is to click on the page instead of dragging. Once you've clicked, the Object Size window is displayed; set about 12mm as the width and 6mm as the height, then click on OK to create the frame.
2. Double-click in the frame and type the name of the company.
3. Select the text and change the font family to one that you like (here the font is OpenDINSchriftenEngShrift), and decrease the size if the name is not completely visible.
4. Scale the frame until it is about 50mm wide. We can fix the width later.

What just happened?

Most of the time, you will use simple resizing instead of scaling. But when you want the text to fill some area and you don't want to fiddle endlessly with the font size setting, you may prefer the scaling functionality. The scale options make it very easy to resize the frame and the text visually, without hunting for the best font size in points, which can otherwise take quite a while.
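The same frame operations can also be scripted. Here is a minimal sketch using Scribus's built-in Python scripter console (Script | Show Console); it assumes an open document, the frame name "CompanyName" is arbitrary, and the exact scripter function set varies a little between Scribus versions:

import scribus

# Create a small text frame (x, y, width, height in the document's
# units) and give it a name we can refer to later.
name = scribus.createText(10, 10, 12, 6, "CompanyName")
scribus.setText("GraphCo", name)

# Resize the frame by giving it an explicit new width and height.
scribus.sizeObject(50, 6, name)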


Scribus: Creating a Layout

Packt
07 Jan 2011
9 min read
Scribus 1.3.5: Beginner's Guide. Create optimum page layouts for your documents using the productive tools of Scribus:

- Master desktop publishing with Scribus
- Create professional-looking documents with ease
- Enhance the readability of your documents using powerful layout tools of Scribus
- Packed with interesting examples and screenshots that show you the most important Scribus tools to create and publish your documents

Creating a new layout

Creating a layout in Scribus means dealing with the New Document window. It is not a complex window, but be aware that many things you set here should be considered definitive. Even if these settings look simple or obvious, they are all important. Some of them, like the page size, imply that you already have an idea of the final document, or at least that you've already made some choices that won't change after the document is created. Of course, Scribus will let you change them later if you change your mind, but many things you will have done in the meantime will simply have to be done again.

Time for action – setting page size, paper size, and margins

This window is the first that opens when you launch Scribus or when you go to the File | New menu. It contains several options that need to be set. First among these will certainly be the page size. For a business card, people usually use 54x85mm (in the USA: 51x89mm). When you type the measurements in the Width and Height fields, the Size option, which contains the common size presets, is automatically switched to Custom. If you want to use a different unit, just change the Default Unit value placed below. We usually prefer Millimeters (mm), which is quite precise without having too many significant decimals. Then you can set the margins for your document. Professional presses are very different from desktop printers in that they can print without margins; consider margins instead as helpers for placing objects. For a small document like a business card, small 4mm margins will be fine.

What just happened?

Some common page sizes are: the ISO A series (the biggest being A0 at 841x1189mm, that is 1 m², with each half step halving the sheet), and the US formats, especially letter (216x279mm), legal (216x356mm), and tabloid (approximately 279x432mm, or 11x17in), the latter commonly used in the UK for newspapers.

The best business card size

When choosing the size for the business card, consider the sizes already in common use. Is ISO 54x85.6mm better than the US 2x3.5in, the European 55x85mm, or the Australian 55x90mm, when only a few millimeters divide them? It is certainly best to match the size most commonly used in your country. Remember one thing: a business card must be easy to store and sort. Picking an uncommon format may simply mean that no one can fit your card in their wallet.

Presets are useful if you want to print locally, but don't forget that your print company crops the paper to the size you want, so feel free to be creative and do some testing. For example, you might print your final document on A3 paper, or at an A3+ real printing size so that you'll be able to use bleeds, as we'll explain in the following sections. Note that we're talking here about the page size and not the paper size, which can be double the page size if the Document Layout is set to any option but Single Page. For all folded documents, the page size differs from the paper size; keep that in mind.
For now, choose 54x85.6mm in landscape: just set 54 as the height, or change the orientation button if you haven't. The other setting that might interest you is the margin. In Scribus, consider the margin as a helper: nobody in the professional print process actually needs margins. They are useful for desktop printers, which can't print right up to the sheet border. As our example is much smaller than the usual paper size, we won't have any trouble with it. Scribus has some presets for margins, but these are available only when a layout other than Single Page is selected. For our model, 4mm on each side would be fine. If you want to set all the fields at once, just click on the chain button at the right-hand side of the margin fields.

But actually, we won't have much to write, and it would be nice if our margins could help position the text. So let's define the margins as follows:

- Left: 10mm
- Right: 40mm
- Top: 30mm
- Bottom: 2mm

Choosing a layout

We've already talked about this option several times, but here we are again. What kind of layout should you choose?

- Single Page simulates what you might have in a word processor. You can have as many pages as you want, but the document will be printed page after page, which is the result you'll get when printing with a desktop printer.
- Double-sided is the option you'll use when you need a folded document. This is useful for magazines, newsletters, books, and similar documents. In this layout, the reader sees two pages side by side at once, and you can easily manage elements that overlap both pages. The fold will be in the exact middle. Unless you have a small document size like A5 or smaller, this layout is usually intended to be printed professionally.
- 3-Fold and 4-Fold are more for small commercial brochures. Usually, you won't use them in Scribus; you'll prefer a Single Page layout that you divide later into three or four parts. Why? Because with the fold layouts, Scribus considers each "fold" a page and prints each of them on a separate sheet, which is a bit tricky.

You can see that for a business card, where no fold is needed, the Single Page layout is our choice. For the moment we won't need the other options, so you can click on OK. You'll get a white rectangle on a greyish workspace. The red outline is the selection indicator for the selected page and shows the borders of the page; the blue rectangle shows where the margins are placed.

Save the document as often as possible

"Save the document as often as possible": this is the first commandment of any software user, but in Scribus it is even more important, for several reasons:

- First of all, apologies: Scribus is a very nice piece of software, but still not perfect (then again, which one is?). It can crash sometimes, slightly more often than you'd wish, and never at a time you would expect or appreciate. Saving often spares you from redoing work you've already done during the day.
- The Scribus undo system acts on layout operations but not on text manipulations. Saving often can be helpful if you make mistakes that you can't undo.

In Scribus, we use File | Save As (or Ctrl + Shift + S) to set the document name and format. It's very simple, because you have no choice other than Scribus Documents *.sla. In the list you will also see sla.gz, which is used when the Compress File checkbox is selected. Usually a Scribus file is not that large, and there is no real need to compress it.
Of course, if the file already exists, Scribus asks whether you want to overwrite the previous one.

Scribus file version

Each Scribus release has enhanced the file format to be able to store the new possibilities in the file. When saving, however, you cannot choose a version: Scribus always uses the current one. Every document can be opened in future Scribus releases but not in older ones. So be careful when you need to send the file to someone else or when you're working on several computers.

Once you've used Save As, you'll just have to save (File | Save), or more magically use Ctrl + S, and the modifications will automatically be added to the saved document.

The extra Save as Template menu stores the current file in a special Scribus folder. When you want to create a new document with the same global aspect, you can go to the New from Template menu and grab it from the list. There are some default templates available there, but yours might be better. Saving as a template is not part of the usual saving process; it is done once, at the end, when the basics of your layout have been made. So we'll use it at the end of our tutorial.

Basic frames for text and images

The biggest part of a design job is adding frames, setting their visual aspect, and importing content into them. On our business card we'll need a logo, a name, and other information. You may also add a photo.

Time for action – adding the logo

There are several types of graphic elements in a layout, and the logo is of course one of the most important. Generally, we prefer using vector logos in SVG or EPS. Let's import a logo:

1. In the File menu choose File | Import | Get Vector File.
2. The cursor changes, and you can click on the page where you want to place the logo. Try to click at the upper-left corner of the margins. It will certainly not be placed exactly right, and the logo may be too big; we'll soon see how to change that.
3. A warning may appear informing you that some SVG features are not supported. There is no option other than clicking on OK, and everything should be fine.

What just happened?

The logo is the centerpiece of the card. It helps the recipient recognize where the contact comes from; in some ways, it is the most important identifying mark of a company. Usually, a logo is the only graphical element on the card. It can be put anywhere you want, but the upper left-hand corner is generally the place of choice.


wxPython 2.8: Advanced Building Blocks of a User Interface

Packt
30 Dec 2010
10 min read
Displaying collections of data and managing complex window layouts are tasks that most UI developers will be faced with at some point. wxPython provides a number of components to help developers meet the requirements of these more demanding interfaces. As the number of controls and the amount of data that an application must display in its user interface increases, so does the task of efficiently managing available screen real estate. Fitting this information into the available space requires some more advanced controls and containers, so let's dive in and begin our exploration of some of the more advanced controls that wxPython has to offer.

Listing data with a ListCtrl

The ListCtrl is a versatile control for displaying collections of text and/or images. The control supports many different display formats, although its most often-used display mode is report mode. Report mode has a visual representation very similar to a grid or spreadsheet, in that it can have multiple rows and columns with column headings. This recipe shows how to populate and retrieve data from a ListCtrl created in report mode.

How to do it...

The ListCtrl takes a little more setup than most basic controls, so we will start by creating a subclass that sets up the columns we wish to have in the control:

class MyListCtrl(wx.ListCtrl):
    def __init__(self, parent):
        super(MyListCtrl, self).__init__(parent, style=wx.LC_REPORT)

        # Add three columns to the list
        self.InsertColumn(0, "Column 1")
        self.InsertColumn(1, "Column 2")
        self.InsertColumn(2, "Column 3")

    def PopulateList(self, data):
        """Populate the list with the set of data. Data should be a
        list of tuples that have a value for each column in the list.
        [('hello', 'list', 'control'),]
        """
        for item in data:
            self.Append(item)

Next we will create an instance of our ListCtrl and put it on a Panel, and then use our PopulateList method to put some sample data into the control:

class MyPanel(wx.Panel):
    def __init__(self, parent):
        super(MyPanel, self).__init__(parent)

        # Attributes
        self.lst = MyListCtrl(self)

        # Setup
        data = [("row %d" % x, "value %d" % x, "data %d" % x)
                for x in range(10)]
        self.lst.PopulateList(data)

        # Layout
        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(self.lst, 1, wx.EXPAND)
        self.SetSizer(sizer)

        # Event Handlers
        self.Bind(wx.EVT_LIST_ITEM_SELECTED, self.OnItemSelected)

    def OnItemSelected(self, event):
        selected_row = event.GetIndex()
        val = list()
        for column in range(3):
            item = self.lst.GetItem(selected_row, column)
            val.append(item.GetText())
        # Show what was selected in the frame's status bar
        frame = self.GetTopLevelParent()
        frame.PushStatusText(",".join(val))

How it works...

There usually tends to be a fair amount of setup with the ListCtrl, so it is good to encapsulate its usage in a specialized subclass instead of using it directly. We kept things pretty basic in our ListCtrl class: we used the InsertColumn method to set our list up with three columns, and added the PopulateList method for convenience, to allow populating the ListCtrl from a Python list of data. It simply wraps the Append method of ListCtrl, which takes an iterable containing a string for each column in the list. The MyPanel class shows how to use the ListCtrl class we created. First we populate it with some data by generating a list of tuples and calling our PopulateList method.
To show how to retrieve data from the list, we created an event handler for EVT_LIST_ITEM_SELECTED, which is fired each time a new selection is made in the control. To retrieve a value from a ListCtrl, you need to know the row and column index of the cell you want the data from; you then call GetItem with the row and column to get the ListItem object that represents that cell, and retrieve the cell's string value by calling the GetText method of the ListItem.

There's more...

Depending on the style flags used to create it, a ListCtrl behaves in many different possible ways, so it is important to know some of the different style flags that can be used to create a ListCtrl:

- LC_LIST: In list mode, the control calculates the columns automatically, so there is no need to call InsertColumn. It can be used to display strings and, optionally, small icons.
- LC_REPORT: Single- or multi-column report view that can be shown with or without headers.
- LC_ICON: Large icon view that can optionally have labels.
- LC_SMALL_ICON: Small icon view that can optionally have labels.
- LC_EDIT_LABELS: Allow the item labels to be edited by users.
- LC_NO_HEADER: Hide the column headers (report mode).
- LC_SORT_ASCENDING: Sort items in ascending order (must provide a SortItems callback method).
- LC_SORT_DESCENDING: Sort items in descending order (must provide a SortItems callback method).
- LC_HRULE: Draw a horizontal line between rows (report mode).
- LC_VRULE: Draw a vertical line between columns (report mode).
- LC_SINGLE_SEL: Only allow a single item to be selected at a time (the default is to allow multiple selections).
- LC_VIRTUAL: Fetch items to display in the list on demand (report mode).

Virtual Mode

When a ListCtrl is created in virtual mode (using the LC_VIRTUAL style flag), it does not store the data internally; instead it asks a data source for the data whenever it needs to display it. This mode is useful when you have a very large data set for which preloading the control would cause performance issues. To use a ListCtrl in virtual mode, you must call SetItemCount to tell the control how many rows of data there are, and override the OnGetItemText method to return the text for a ListItem when the control asks for it.
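To make that concrete, here is a minimal sketch of a virtual-mode list (not part of the original recipe), backed by a plain Python list of row tuples:

class VirtualList(wx.ListCtrl):
    def __init__(self, parent, data):
        super(VirtualList, self).__init__(
            parent, style=wx.LC_REPORT | wx.LC_VIRTUAL)

        self.data = data  # list of tuples, one tuple per row
        self.InsertColumn(0, "Column 1")
        self.InsertColumn(1, "Column 2")

        # Tell the control how many rows exist; it will request each
        # row's text on demand as rows scroll into view.
        self.SetItemCount(len(data))

    def OnGetItemText(self, item, column):
        # Called by the control whenever it needs a cell's text
        return self.data[item][column]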
Browsing files with the CustomTreeCtrl

A TreeCtrl is a way of displaying hierarchical data in a user interface. The CustomTreeCtrl is a fully owner-drawn TreeCtrl that looks and functions much the same as the default TreeCtrl, but offers a number of additional features and customizations that the default native control cannot. This recipe shows how to make a custom file browser class using the CustomTreeCtrl.

How to do it...

To create this custom FileBrowser control, we use its constructor to set up the images for the folders and files in the tree:

import os
import wx
import wx.lib.customtreectrl as customtree

class FileBrowser(customtree.CustomTreeCtrl):
    FOLDER, ERROR, FILE = range(3)

    def __init__(self, parent, rootdir, *args, **kwargs):
        super(FileBrowser, self).__init__(parent, *args, **kwargs)
        assert os.path.exists(rootdir), "Invalid Root Directory!"
        assert os.path.isdir(rootdir), "rootdir must be a Directory!"

        # Attributes
        self._il = wx.ImageList(16, 16)
        self._root = rootdir
        self._rnode = None

        # Setup
        for art in (wx.ART_FOLDER, wx.ART_ERROR, wx.ART_NORMAL_FILE):
            bmp = wx.ArtProvider.GetBitmap(art, size=(16, 16))
            self._il.Add(bmp)
        self.SetImageList(self._il)
        self._rnode = self.AddRoot(os.path.basename(rootdir),
                                   image=FileBrowser.FOLDER,
                                   data=self._root)
        self.SetItemHasChildren(self._rnode, True)
        # Use Windows-Vista-style selections
        self.EnableSelectionVista(True)

        # Event Handlers
        self.Bind(wx.EVT_TREE_ITEM_EXPANDING, self.OnExpanding)
        self.Bind(wx.EVT_TREE_ITEM_COLLAPSED, self.OnCollapsed)

    def _GetFiles(self, path):
        try:
            files = [fname for fname in os.listdir(path)
                     if fname not in ('.', '..')]
        except OSError:
            files = None
        return files

The following two event handlers are used to update which files are displayed when a node is expanded or collapsed in the tree:

    def OnCollapsed(self, event):
        item = event.GetItem()
        self.DeleteChildren(item)

    def OnExpanding(self, event):
        item = event.GetItem()
        path = self.GetPyData(item)
        files = self._GetFiles(path)

        # Handle Access Errors
        if files is None:
            self.SetItemImage(item, FileBrowser.ERROR)
            self.SetItemHasChildren(item, False)
            return

        for fname in files:
            fullpath = os.path.join(path, fname)
            if os.path.isdir(fullpath):
                self.AppendDir(item, fullpath)
            else:
                self.AppendFile(item, fullpath)

The following methods are added as an API for working with the control, to add items and retrieve their on-disk paths:

    def AppendDir(self, item, path):
        """Add a directory node"""
        assert os.path.isdir(path), "Not a valid directory!"
        name = os.path.basename(path)
        nitem = self.AppendItem(item, name,
                                image=FileBrowser.FOLDER, data=path)
        self.SetItemHasChildren(nitem, True)

    def AppendFile(self, item, path):
        """Add a file to a node"""
        assert os.path.isfile(path), "Not a valid file!"
        name = os.path.basename(path)
        self.AppendItem(item, name, image=FileBrowser.FILE, data=path)

    def GetSelectedPath(self):
        """Get the selected path"""
        sel = self.GetSelection()
        path = self.GetItemPyData(sel)
        return path

    def GetSelectedPaths(self):
        """Get a list of selected paths"""
        sels = self.GetSelections()
        paths = [self.GetItemPyData(sel) for sel in sels]
        return paths

How it works...

With just a few lines of code we have created a pretty useful little widget for displaying and working with the file system. Let's take a quick look at how it works. In the class's constructor, we added a root node with the control's AddRoot method. A root node is a top-level node with no parent nodes above it. The first argument is the text to be shown, the image argument specifies the default image for the TreeItem, and the data argument attaches arbitrary data to the item; in this case we set a string holding the item's path. We then called SetItemHasChildren on the item so that it gets a button next to it allowing it to be expanded. The last thing we did in the constructor was to Bind the control to two events so that we can update the tree when one of its nodes is being expanded or collapsed.

Immediately before a node is expanded, our handler for EVT_TREE_ITEM_EXPANDING is called. This is where we find all the files and folders under a directory node, and add them as children of that node by calling AppendItem, which works just like AddRoot but is used to add items to already-existing nodes in the tree. Conversely, when a node in the tree is about to be collapsed, our EVT_TREE_ITEM_COLLAPSED event handler is called.
Here we simply call DeleteChildren to remove the child items from the node, so that we can update them more easily the next time the node is expanded. Otherwise, we would have to work out what was different the next time it was expanded, removing the items that have been deleted and inserting new items that may have been added to the directory. The last two methods in our class are for getting the file paths of the selected items, which, since we store the file path in each node, is simply a matter of getting the data from each of the currently selected TreeItems with a call to GetPyData.

There's more...

Most of what we did in this recipe could also be replicated with the standard TreeCtrl. The difference is in the amount of extra customization that the CustomTreeCtrl provides: since it is a fully owner-drawn control, nearly all of its visible attributes can be customized through its additional styling functions.


Working with Geo-Spatial Data in Python

Packt
30 Dec 2010
7 min read
Python Geospatial Development. If you want to follow through the examples in this article, make sure you have the following Python libraries installed on your computer:

- GDAL/OGR version 1.7 or later (http://gdal.org)
- pyproj version 1.8.6 or later (http://code.google.com/p/pyproj)
- Shapely version 1.2 or later (http://trac.gispython.org/lab/wiki/Shapely)

Reading and writing geo-spatial data

In this section, we will look at some examples of tasks you might want to perform that involve reading and writing geo-spatial data in both vector and raster format.

Task: Calculate the bounding box for each country in the world

In this slightly contrived example, we will make use of a Shapefile to calculate the minimum and maximum latitude/longitude values for each country in the world. This "bounding box" can be used, among other things, to generate a map of a particular country. Start by downloading the World Borders Dataset from:

http://thematicmapping.org/downloads/world_borders.php

Decompress the .zip archive and place the various files that make up the Shapefile (the .dbf, .prj, .shp, and .shx files) together in a suitable directory. We next need to create a Python program that can read the borders of each country. Fortunately, using OGR to read through the contents of a Shapefile is trivial:

import osgeo.ogr

shapefile = osgeo.ogr.Open("TM_WORLD_BORDERS-0.3.shp")
layer = shapefile.GetLayer(0)
for i in range(layer.GetFeatureCount()):
    feature = layer.GetFeature(i)

The feature consists of a geometry and a set of fields. For this data, the geometry is a polygon that defines the outline of the country, while the fields contain various pieces of information about the country. According to the Readme.txt file, the fields in this Shapefile include the ISO-3166 three-letter code for the country (in a field named ISO3) as well as the name of the country (in a field named NAME). This allows us to obtain the country code and name like this:

countryCode = feature.GetField("ISO3")
countryName = feature.GetField("NAME")

We can also obtain the country's border polygon using:

geometry = feature.GetGeometryRef()

There are all sorts of things we can do with this geometry, but in this case we want to obtain the bounding box or envelope for the polygon:

minLong,maxLong,minLat,maxLat = geometry.GetEnvelope()

Let's put all this together into a complete working program:

# calcBoundingBoxes.py

import osgeo.ogr

shapefile = osgeo.ogr.Open("TM_WORLD_BORDERS-0.3.shp")
layer = shapefile.GetLayer(0)

countries = []  # List of (name,code,minLat,maxLat,
                # minLong,maxLong) tuples.

for i in range(layer.GetFeatureCount()):
    feature = layer.GetFeature(i)
    countryCode = feature.GetField("ISO3")
    countryName = feature.GetField("NAME")
    geometry = feature.GetGeometryRef()
    minLong,maxLong,minLat,maxLat = geometry.GetEnvelope()
    countries.append((countryName, countryCode,
                      minLat, maxLat, minLong, maxLong))

countries.sort()

for name,code,minLat,maxLat,minLong,maxLong in countries:
    print "%s (%s) lat=%0.4f..%0.4f, long=%0.4f..%0.4f" \
        % (name, code, minLat, maxLat, minLong, maxLong)

Running this program produces the following output:

% python calcBoundingBoxes.py
Afghanistan (AFG) lat=29.4061..38.4721, long=60.5042..74.9157
Albania (ALB) lat=39.6447..42.6619, long=19.2825..21.0542
Algeria (DZA) lat=18.9764..37.0914, long=-8.6672..11.9865
...
Task: Save the country bounding boxes into a Shapefile

While the previous example simply printed out the latitude and longitude values, it might be more useful to draw the bounding boxes onto a map. To do this, we have to convert the bounding boxes into polygons, and save these polygons into a Shapefile. Creating a Shapefile involves the following steps:

1. Define the spatial reference used by the Shapefile's data. In this case, we'll use the WGS84 datum and unprojected geographic coordinates (that is, latitude and longitude values). This is how you would define this spatial reference using OGR:

import osgeo.osr

spatialReference = osgeo.osr.SpatialReference()
spatialReference.SetWellKnownGeogCS('WGS84')

We can now create the Shapefile itself using this spatial reference:

import osgeo.ogr

driver = osgeo.ogr.GetDriverByName("ESRI Shapefile")
dstFile = driver.CreateDataSource("boundingBoxes.shp")
dstLayer = dstFile.CreateLayer("layer", spatialReference)

2. After creating the Shapefile, define the various fields that will hold the metadata for each feature. In this case, let's add two fields to store the country name and its ISO-3166 code:

fieldDef = osgeo.ogr.FieldDefn("COUNTRY", osgeo.ogr.OFTString)
fieldDef.SetWidth(50)
dstLayer.CreateField(fieldDef)

fieldDef = osgeo.ogr.FieldDefn("CODE", osgeo.ogr.OFTString)
fieldDef.SetWidth(3)
dstLayer.CreateField(fieldDef)

3. Create the geometry for each feature: in this case, a polygon defining the country's bounding box. A polygon consists of one or more linear rings; the first linear ring defines the exterior of the polygon, while additional rings define "holes" inside the polygon. In this case, we want a simple polygon with a square exterior and no holes:

linearRing = osgeo.ogr.Geometry(osgeo.ogr.wkbLinearRing)
linearRing.AddPoint(minLong, minLat)
linearRing.AddPoint(maxLong, minLat)
linearRing.AddPoint(maxLong, maxLat)
linearRing.AddPoint(minLong, maxLat)
linearRing.AddPoint(minLong, minLat)

polygon = osgeo.ogr.Geometry(osgeo.ogr.wkbPolygon)
polygon.AddGeometry(linearRing)

You may have noticed that the coordinate (minLong, minLat) was added to the linear ring twice. This is because we are defining line segments rather than just points: the first call to AddPoint() defines the starting point, and each subsequent call to AddPoint() adds a new line segment to the linear ring. In this case, we start in the lower-left corner and move counter-clockwise around the bounding box until we reach the lower-left corner again.

4. Once we have the polygon, we can use it to create a feature:

feature = osgeo.ogr.Feature(dstLayer.GetLayerDefn())
feature.SetGeometry(polygon)
feature.SetField("COUNTRY", countryName)
feature.SetField("CODE", countryCode)
dstLayer.CreateFeature(feature)
feature.Destroy()

Notice how we use the SetField() method to store the feature's metadata. We also have to call the Destroy() method to close the feature once we have finished with it; this ensures that the feature is saved into the Shapefile.

5. Finally, we call the Destroy() method to close the output Shapefile:

dstFile.Destroy()

Putting all this together, and combining it with the code from the previous recipe to calculate the bounding boxes for each country in the World Borders Dataset Shapefile, we end up with the following complete program:

# boundingBoxesToShapefile.py

import os, os.path, shutil

import osgeo.ogr
import osgeo.osr

# Open the source shapefile.
srcFile = osgeo.ogr.Open("TM_WORLD_BORDERS-0.3.shp")
srcLayer = srcFile.GetLayer(0)

# Open the output shapefile.
if os.path.exists("bounding-boxes"): shutil.rmtree("bounding-boxes") os.mkdir("bounding-boxes") spatialReference = osgeo.osr.SpatialReference() spatialReference.SetWellKnownGeogCS('WGS84') driver = osgeo.ogr.GetDriverByName("ESRI Shapefile") dstPath = os.path.join("bounding-boxes", "boundingBoxes.shp") dstFile = driver.CreateDataSource(dstPath) dstLayer = dstFile.CreateLayer("layer", spatialReference) fieldDef = osgeo.ogr.FieldDefn("COUNTRY", osgeo.ogr.OFTString) fieldDef.SetWidth(50) dstLayer.CreateField(fieldDef) fieldDef = osgeo.ogr.FieldDefn("CODE", osgeo.ogr.OFTString) fieldDef.SetWidth(3) dstLayer.CreateField(fieldDef) # Read the country features from the source shapefile. for i in range(srcLayer.GetFeatureCount()): feature = srcLayer.GetFeature(i) countryCode = feature.GetField("ISO3") countryName = feature.GetField("NAME") geometry = feature.GetGeometryRef() minLong,maxLong,minLat,maxLat = geometry.GetEnvelope() # Save the bounding box as a feature in the output # shapefile. linearRing = osgeo.ogr.Geometry(osgeo.ogr.wkbLinearRing) linearRing.AddPoint(minLong, minLat) linearRing.AddPoint(maxLong, minLat) linearRing.AddPoint(maxLong, maxLat) linearRing.AddPoint(minLong, maxLat) linearRing.AddPoint(minLong, minLat) polygon = osgeo.ogr.Geometry(osgeo.ogr.wkbPolygon) polygon.AddGeometry(linearRing) feature = osgeo.ogr.Feature(dstLayer.GetLayerDefn()) feature.SetGeometry(polygon) feature.SetField("COUNTRY", countryName) feature.SetField("CODE", countryCode) dstLayer.CreateFeature(feature) feature.Destroy() # All done. srcFile.Destroy() dstFile.Destroy() The only unexpected twist in this program is the use of a sub-directory called bounding-boxes to store the output Shapefile. Because a Shapefile is actually made up of multiple files on disk (a .dbf file, a .prj file, a .shp file, and a .shx file), it is easier to place these together in a sub-directory. We use the Python Standard Library module shutil to delete the previous contents of this directory, and then os.mkdir() to create it again. If you aren't storing the TM_WORLD_BORDERS-0.3.shp Shapefile in the same directory as the script itself, you will need to add the directory where the Shapefile is stored to your osgeo.ogr.Open() call. You can also store the boundingBoxes.shp Shapefile in a different directory if you prefer, by changing the path where this Shapefile is created. Running this program creates the bounding box Shapefile, which we can then draw onto a map. For example, here is the outline of Thailand along with a bounding box taken from the boundingBoxes.shp Shapefile:  


Inheritance in Python

Packt
30 Dec 2010
8 min read
Python 3 Object Oriented Programming. Harness the power of Python 3 objects:

- Learn how to do Object Oriented Programming in Python using this step-by-step tutorial
- Design public interfaces using abstraction, encapsulation, and information hiding
- Turn your designs into working software by studying the Python syntax
- Raise, handle, define, and manipulate exceptions using special error objects
- Implement Object Oriented Programming in Python using practical examples

Basic inheritance

Technically, every class we create uses inheritance. All Python classes are subclasses of the special class named object. This class provides very little in terms of data and behaviors (the behaviors it does provide are all double-underscore methods intended for internal use only), but it does allow Python to treat all objects in the same way. If we don't explicitly inherit from a different class, our classes will automatically inherit from object. However, we can openly state that our class derives from object using the following syntax:

class MySubClass(object):
    pass

This is inheritance! And since Python 3 automatically inherits from object when we don't explicitly provide a different superclass, this definition is equivalent to one with no parent class listed at all. A superclass, or parent class, is a class that is being inherited from. A subclass is a class that is inheriting from a superclass. In this case, the superclass is object, and MySubClass is the subclass. A subclass is also said to be derived from its parent class, or to extend the parent.

As you've probably figured out from the example, inheritance requires a minimal amount of extra syntax over a basic class definition: simply include the name of the parent class inside a pair of parentheses after the class name, but before the colon terminating the class definition. This is all we have to do to tell Python that the new class should be derived from the given superclass.

How do we apply inheritance in practice? The simplest and most obvious use of inheritance is to add functionality to an existing class. Let's start with a simple contact manager that tracks the name and e-mail address of several people. The Contact class is responsible for maintaining a list of all contacts in a class variable, and for initializing the name and address:

class Contact:
    all_contacts = []

    def __init__(self, name, email):
        self.name = name
        self.email = email
        Contact.all_contacts.append(self)

This example introduces class variables. The all_contacts list, because it is part of the class definition, is shared by all instances of this class. This means that there is only one Contact.all_contacts list, and if we access self.all_contacts on any one object, it will refer to that single list. The code in the initializer ensures that whenever we create a new contact, the list automatically has the new object added. Be careful with this syntax: if you ever set the variable using self.all_contacts, you will actually create a new instance variable on that object; the class variable will remain unchanged and accessible as Contact.all_contacts.

This is a very simple class that allows us to track a couple of pieces of data about our contacts. But what if some of our contacts are also suppliers that we need to order supplies from? We could add an order method to the Contact class, but that would allow people to accidentally order things from contacts who are customers or family friends.
Instead, let's create a new Supplier class that acts like a Contact, but has an additional order method:

class Supplier(Contact):
    def order(self, order):
        print("If this were a real system we would send "
              "{} order to {}".format(order, self.name))

Now, if we test this class in our trusty interpreter, we see that all contacts, including suppliers, accept a name and e-mail address in their __init__, but only suppliers have a functional order method:

>>> c = Contact("Some Body", "[email protected]")
>>> s = Supplier("Sup Plier", "[email protected]")
>>> print(c.name, c.email, s.name, s.email)
Some Body [email protected] Sup Plier [email protected]
>>> c.all_contacts
[<__main__.Contact object at 0xb7375ecc>,
 <__main__.Supplier object at 0xb7375f8c>]
>>> c.order("I need pliers")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Contact' object has no attribute 'order'
>>> s.order("I need pliers")
If this were a real system we would send I need pliers order to Sup Plier

So now our Supplier class can do everything a Contact can do (including adding itself to the list of all_contacts) plus all the special things it needs to handle as a supplier. This is the beauty of inheritance.

Extending built-ins

One of the most interesting uses of this kind of inheritance is adding functionality to built-in classes. In the Contact class seen earlier, we are adding contacts to a list of all contacts. What if we also wanted to search that list by name? Well, we could add a method on the Contact class to search it, but it feels like this method actually belongs on the list itself. We can do this using inheritance:

class ContactList(list):
    def search(self, name):
        '''Return all contacts that contain the search
        value in their name.'''
        matching_contacts = []
        for contact in self:
            if name in contact.name:
                matching_contacts.append(contact)
        return matching_contacts

class Contact:
    all_contacts = ContactList()

    def __init__(self, name, email):
        self.name = name
        self.email = email
        self.all_contacts.append(self)

Instead of instantiating a normal list as our class variable, we create a new ContactList class that extends the built-in list. Then we instantiate this subclass as our all_contacts list. We can test the new search functionality as follows:

>>> c1 = Contact("John A", "[email protected]")
>>> c2 = Contact("John B", "[email protected]")
>>> c3 = Contact("Jenna C", "[email protected]")
>>> [c.name for c in Contact.all_contacts.search('John')]
['John A', 'John B']

Are you wondering how we changed the built-in syntax [] into something we can inherit from? Creating an empty list with [] is actually shorthand for creating an empty list using list(); the two syntaxes are identical:

>>> [] == list()
True

So, the list data type is like a class that we can extend, not unlike object. As a second example, we can extend the dict class, which is the long way of creating a dictionary (the {:} syntax):

class LongNameDict(dict):
    def longest_key(self):
        longest = None
        for key in self:
            if not longest or len(key) > len(longest):
                longest = key
        return longest

This is easy to test in the interactive interpreter:

>>> longkeys = LongNameDict()
>>> longkeys['hello'] = 1
>>> longkeys['longest yet'] = 5
>>> longkeys['hello2'] = 'world'
>>> longkeys.longest_key()
'longest yet'

Most built-in types can be similarly extended. Commonly extended built-ins are object, list, set, dict, file, and str. Numerical types such as int and float are also occasionally inherited from.
Overriding and super

So inheritance is great for adding new behavior to existing classes, but what about changing behavior? Our Contact class allows only a name and an e-mail address. This may be sufficient for most contacts, but what if we want to add a phone number for our close friends? We can do this easily by just setting a phone attribute on the contact after it is constructed. But if we want to make this third variable available on initialization, we have to override __init__. Overriding is altering or replacing a method of the superclass with a new method (with the same name) in the subclass. No special syntax is needed to do this; the subclass's newly created method is automatically called instead of the superclass's method. For example:

class Friend(Contact):
    def __init__(self, name, email, phone):
        self.name = name
        self.email = email
        self.phone = phone

Any method can be overridden, not just __init__. Before we go on, however, we need to correct some problems in this example. Our Contact and Friend classes have duplicate code to set up the name and email properties; this can make maintenance complicated, as we have to update the code in two or more places. More alarmingly, our Friend class is neglecting to add itself to the all_contacts list we have created on the Contact class. What we really need is a way to call code on the parent class. This is what the super function does; it returns the object as an instance of the parent class, allowing us to call the parent method directly:

class Friend(Contact):
    def __init__(self, name, email, phone):
        super().__init__(name, email)
        self.phone = phone

This example first gets the instance of the parent object using super, and calls __init__ on that object, passing in the expected arguments. It then does its own initialization, namely setting the phone attribute. A super() call can be made inside any method, not just __init__, so all methods can be modified via overriding and calls to super. Nor does the call to super have to be the first line in the method; we may, for example, need to manipulate the incoming parameters before forwarding them to the superclass.
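As a quick illustration of that last point, here is a hypothetical subclass (not from the original text) that cleans up its arguments before handing them to Contact's initializer:

class NormalizedContact(Contact):
    def __init__(self, name, email):
        # Tidy the inputs first, then let the parent class perform
        # its usual initialization with the cleaned-up values.
        name = name.strip().title()
        email = email.strip().lower()
        super().__init__(name, email)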

Geo-Spatial Data in Python: Working with Geometry

Packt
29 Dec 2010
8 min read
Python Geospatial Development. Build a complete and sophisticated mapping application from scratch using Python tools for GIS development:

- Build applications for GIS development using Python
- Analyze and visualize geo-spatial data
- Comprehensive coverage of key GIS concepts
- Recommended best practices for storing spatial data in a database
- Draw maps, place data points onto a map, and interact with maps
- A practical tutorial with plenty of step-by-step instructions to help you develop a mapping application from scratch

Working with Shapely geometries

Shapely is a very capable library for performing various calculations on geo-spatial data. Let's put it through its paces with a complex, real-world problem.

Task: Identify parks in or near urban areas

The U.S. Census Bureau makes available a Shapefile containing something called Core Based Statistical Areas (CBSAs), which are polygons defining urban areas with a population of 10,000 or more. At the same time, the GNIS website provides lists of placenames and other details. Using these two data sources, we will identify any parks within or close to an urban area. Because of the volume of data we are potentially dealing with, we will limit our search to California. Feel free to download the larger data sets if you want, though you will have to optimize the code or your program will take a very long time to check all the CBSA polygon/placename combinations.

Let's start by downloading the necessary data:

1. Go to the TIGER website at http://census.gov/geo/www/tiger
2. Click on the 2009 TIGER/Line Shapefiles Main Page link, then follow the Download the 2009 TIGER/Line Shapefiles now link.
3. Choose California from the pop-up menu on the right, and click on Submit. A list of the California Shapefiles will be displayed; the Shapefile you want is labelled Metropolitan/Micropolitan Statistical Area. Click on this link, and you will download a file named tl_2009_06_cbsa.zip. Once the file has downloaded, uncompress it and place the resulting Shapefile into a convenient location so that you can work with it.
4. You now need to download the GNIS placename data for California. Go to the GNIS website: http://geonames.usgs.gov/domestic
5. Click on the Download Domestic Names hyperlink, and then choose California from the pop-up menu. You will be prompted to save the CA_Features_XXX.zip file. Do so, then decompress it and place the resulting CA_Features_XXX.txt file into a convenient place. The XXX in the above file name is a date stamp, and will vary depending on when you download the data. Just remember the name of the file, as you'll need to refer to it in your source code.

We're now ready to write the code. Let's start by reading through the CBSA urban area Shapefile and extracting the polygons that define the boundary of each urban area:

shapefile = osgeo.ogr.Open("tl_2009_06_cbsa.shp")
layer = shapefile.GetLayer(0)
for i in range(layer.GetFeatureCount()):
    feature = layer.GetFeature(i)
    geometry = feature.GetGeometryRef()
    ...

Make sure you add directory paths to your osgeo.ogr.Open() statement (and to the file() statement below) to match where you've placed these files. Using what we learned in the previous section, we can convert this geometry into a Shapely object so that we can work with it:

wkt = geometry.ExportToWkt()
shape = shapely.wkt.loads(wkt)

Next, we need to scan through the CA_Features_XXX.txt file to identify the features marked as a park.
For each of these features, we want to extract the name of the feature and its associated latitude and longitude. Here's how we might do this:

f = file("CA_Features_XXX.txt", "r")
for line in f.readlines():
    chunks = line.rstrip().split("|")
    if chunks[2] == "Park":
        name = chunks[1]
        latitude = float(chunks[9])
        longitude = float(chunks[10])
        ...

Remember that the GNIS placename database is a pipe-delimited text file. That's why we have to split the line up using line.rstrip().split("|").

Now comes the fun part: we need to figure out which parks are within or close to each urban area. There are two ways we could do this, either of which will work:

- We could use the shape.distance() method to calculate the distance between the shape and a Point object representing the park's location.
- We could dilate the polygon using the shape.buffer() method, and then see if the resulting polygon contains the desired point.

The second option is faster when dealing with a large number of points, as we can pre-calculate the dilated polygons and then use them to compare against each point in turn. Let's take this option:

# findNearbyParks.py

import osgeo.ogr
import shapely.geometry
import shapely.wkt

MAX_DISTANCE = 0.1  # Angular distance; approx 10 km.

print "Loading urban areas..."

urbanAreas = {}  # Maps area name to Shapely polygon.

shapefile = osgeo.ogr.Open("tl_2009_06_cbsa.shp")
layer = shapefile.GetLayer(0)
for i in range(layer.GetFeatureCount()):
    feature = layer.GetFeature(i)
    name = feature.GetField("NAME")
    geometry = feature.GetGeometryRef()
    shape = shapely.wkt.loads(geometry.ExportToWkt())
    dilatedShape = shape.buffer(MAX_DISTANCE)
    urbanAreas[name] = dilatedShape

print "Checking parks..."

f = file("CA_Features_XXX.txt", "r")
for line in f.readlines():
    chunks = line.rstrip().split("|")
    if chunks[2] == "Park":
        parkName = chunks[1]
        latitude = float(chunks[9])
        longitude = float(chunks[10])
        pt = shapely.geometry.Point(longitude, latitude)
        for urbanName,urbanArea in urbanAreas.items():
            if urbanArea.contains(pt):
                print parkName + " is in or near " + urbanName
f.close()

Don't forget to change the name of the CA_Features_XXX.txt file to match the actual name of the file you downloaded. You may also need to change the path names to the tl_2009_06_cbsa.shp file and the CA_Features file if you placed them in a different directory. If you run this program, you will get a master list of all the parks that are in or close to an urban area:

% python findNearbyParks.py
Loading urban areas...
Checking parks...
Imperial National Wildlife Refuge is in or near El Centro, CA
Twin Lakes State Beach is in or near Santa Cruz-Watsonville, CA
Admiral William Standley State Recreation Area is in or near Ukiah, CA
Agate Beach County Park is in or near San Francisco-Oakland-Fremont, CA
...

Note that our program uses angular distances to decide whether a park is in or near a given urban area. An angular distance is the angle (in decimal degrees) between two rays going out from the center of the Earth to the Earth's surface. Because a degree of angular measurement (at least for the latitudes we are dealing with here) roughly equals 100 km on the Earth's surface, an angular measurement of 0.1 roughly equals a real distance of 10 km. Using angular measurements makes the distance calculation easy and quick, though it doesn't give an exact distance on the Earth's surface.
If your application requires exact distances, you could start by using an angular distance to filter out the features that are obviously too far away, and then obtain an exact result for the remaining features by calculating the point on the polygon's boundary that is closest to the desired point, and then calculating the linear distance between the two points. You would then discard the points that exceed your desired exact linear distance.

Converting and standardizing units of geometry and distance

Imagine that you have two points on the Earth's surface with a straight line drawn between them. Each point can be described as a coordinate using some arbitrary coordinate system (for example, using latitude and longitude values), while the length of the straight line could be described as the distance between the two points. Given any two coordinates, it is possible to calculate the distance between them. Conversely, you can start with one coordinate, a desired distance, and a direction, and then calculate the coordinates for the other point.

Of course, because the Earth's surface is not flat, we aren't really dealing with straight lines at all. Rather, we are calculating geodetic or Great Circle distances across the surface of the Earth.

The pyproj Python library allows you to perform these types of calculations for any given datum. You can also use pyproj to convert from projected coordinates back to geographic coordinates, and vice versa, allowing you to perform these sorts of calculations for any desired datum, coordinate system, and projection.

Ultimately, a geometry such as a line or a polygon consists of nothing more than a list of connected points. This means that, using the above process, you can calculate the geodetic distance between each of the points in any polygon and total the results to get the actual length for any geometry. Let's use this knowledge to solve a real-world problem.
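To make the pyproj idea concrete, here is a minimal sketch of the kind of geodetic calculation described above; the two coordinates are illustrative assumptions, not data from this article:

# A minimal sketch of a geodetic distance calculation with pyproj.
# The two coordinates below are illustrative, not from the article's data.
import pyproj

geod = pyproj.Geod(ellps="WGS84")  # use the WGS84 datum

lon1, lat1 = -122.4194, 37.7749  # San Francisco (approximate)
lon2, lat2 = -118.2437, 34.0522  # Los Angeles (approximate)

# inv() returns the forward azimuth, the back azimuth, and the
# geodetic distance in meters.
fwd_azimuth, back_azimuth, distance = geod.inv(lon1, lat1, lon2, lat2)
print "Geodetic distance: %0.1f km" % (distance / 1000.0)

Summing such distances over each consecutive pair of points in a geometry's coordinate list gives the geodetic length of the whole geometry.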


Python Testing: Mock Objects

Packt
28 Dec 2010
13 min read
How to install Python Mocker

Python Mocker isn't included in the standard Python distribution. That means that we need to download and install it.

Time for action – installing Python Mocker

At the time of this writing, Python Mocker's home page is located at http://labix.org/mocker, while its downloads are hosted at https://launchpad.net/mocker/+download. Go ahead and download the newest version, and we'll see about installing it.

1. The first thing that needs to be done is to unzip the downloaded file. It's a .tar.bz2, which should just work for Unix, Linux, or OS X users. Windows users will need a third-party program (7-Zip works well: http://www.7-zip.org/) to uncompress the archive. Store the uncompressed files in some temporary location.
2. Once you have the files unzipped somewhere, go to that location via the command line. Now, to do this next step, you either need to be allowed to write files into your Python installation's site-packages directory (which you are, if you're the one who installed Python in the first place) or you need to be using Python version 2.6 or higher.
3. If you can write to site-packages, type:

$ python setup.py install

4. If you can't write to site-packages, but you're using Python 2.6 or higher, type:

$ python setup.py install --user

Sometimes, a tool called easy_install can simplify the installation process of Python modules and packages. If you want to give it a try, download and install setuptools from http://pypi.python.org/pypi/setuptools, according to the directions on that page, and then run the command easy_install mocker. Once that command is done, you should be ready to use Mocker.

Once you have successfully run the installer, Python Mocker is ready for use.

What is a mock object in software testing?

"Mock" in this sense means "imitation," and that's exactly what a mock object does. Mock objects imitate the real objects that make up your program, without actually being those objects or relying on them in any way. Instead of doing whatever the real object would do, a mock object performs predefined simple operations that look like what the real object should do. That means its methods return appropriate values (which you told it to return) or raise appropriate exceptions (which you told it to raise). A mock object is like a mockingbird, imitating the calls of other birds without comprehending them.

We've already used one mock object in our earlier work, when we replaced time.time with an object (in Python, functions are objects) that returned an increasing series of numbers. The mock object was like time.time, except that it always returned the same series of numbers, no matter when we ran our test or how fast the computer was that we ran it on. In other words, it decoupled our test from an external variable.

That's what mock objects are all about: decoupling tests from external variables. Sometimes those variables are things like the external time or processor speed, but usually the variables are the behavior of other units.

Python Mocker

The idea is pretty straightforward, but one look at that mock version of time.time shows that creating mock objects without using a toolkit of some sort can be a dense and annoying process, and can interfere with the readability of your tests. This is where Python Mocker (or any of several other mock object toolkits, depending on preference) comes in.

Time for action – exploring the basics of Mocker

We'll walk through some of the simplest—and most useful—features of Mocker.
To do that, we'll write tests that describe a class representing a specific mathematical operation (multiplication), which can be applied to the values of arbitrary other mathematical operation objects. In other words, we'll work on the guts of a spreadsheet program (or something similar). We're going to use Mocker to create mock objects to stand in place of the real operation objects.

1. Create a text file to hold the tests, and add the following at the beginning (assuming that all the mathematical operations will be defined in a module called operations):

>>> from mocker import Mocker
>>> import operations

2. We've decided that every mathematical operation class should have a constructor accepting the objects representing the new object's operands. It should also have an evaluate function that accepts a dictionary of variable bindings as its parameter and returns a number as the result. We can write the tests for the constructor fairly easily, so we do that first (note that we've included some explanation in the test file, which is always a good idea):

We're going to test out the constructor for the multiply operation, first.
Since all that the constructor has to do is record all of the operands,
this is straightforward.

>>> mocker = Mocker()
>>> p1 = mocker.mock()
>>> p2 = mocker.mock()
>>> mocker.replay()
>>> m = operations.multiply(p1, p2)
>>> m.operands == (p1, p2)
True
>>> mocker.restore()
>>> mocker.verify()

3. The tests for the evaluate method are somewhat more complicated, because there are several things we need to test. This is also where we start seeing the real advantages of Mocker:

Now we're going to check the evaluate method for the multiply operation.
It should raise a ValueError if there are fewer than two operands, it
should call the evaluate methods of all operations that are operands of
the multiply, and of course it should return the correct value.

>>> mocker = Mocker()
>>> p1 = mocker.mock()
>>> p1.evaluate({}) #doctest: +ELLIPSIS
<mocker.Mock object at ...>
>>> mocker.result(97.43)
>>> mocker.replay()
>>> m = operations.multiply(p1)
>>> m.evaluate({})
Traceback (most recent call last):
ValueError: multiply without at least two operands is meaningless
>>> mocker.restore()
>>> mocker.verify()

>>> mocker = Mocker()
>>> p1 = mocker.mock()
>>> p1.evaluate({}) #doctest: +ELLIPSIS
<mocker.Mock object at ...>
>>> mocker.result(97.43)
>>> p2 = mocker.mock()
>>> p2.evaluate({}) #doctest: +ELLIPSIS
<mocker.Mock object at ...>
>>> mocker.result(-16.25)
>>> mocker.replay()
>>> m = operations.multiply(p1, p2)
>>> round(m.evaluate({}), 2)
-1583.24
>>> mocker.restore()
>>> mocker.verify()

4. If we run the tests now, we get a list of failed tests, most of them caused by the test code being unable to import the operations module, which we haven't written yet.
5. Finally, we'll write some code in the operations module that passes these tests, producing the following:

class multiply:
    def __init__(self, *operands):
        self.operands = operands

    def evaluate(self, bindings):
        vals = [x.evaluate(bindings) for x in self.operands]
        if len(vals) < 2:
            raise ValueError('multiply without at least two '
                             'operands is meaningless')
        result = 1.0
        for val in vals:
            result *= val
        return result

6. Now when we run the tests, none of them should fail.

What just happened?
The difficulty in writing the tests for something like this comes (as it often does) from the need to decouple the multiplication class from all of the other mathematical operation classes, so that the results of the multiplication test only depend on whether multiplication works correctly. We addressed this problem by using the Mocker framework for mock objects.

The way Mocker works is that you first create an object representing the mocking context, by doing something such as mocker = Mocker(). The mocking context will help you create mock objects, and it will store information about how you expect them to be used. Additionally, it can help you temporarily replace library objects with mocks (like we've previously done with time.time) and restore the real objects to their places when you're done. We'll see more about doing that in a little while.

Once you have a mocking context, you create a mock object by calling its mock method, and then you demonstrate how you expect the mock objects to be used. The mocking context records your demonstration, so later on, when you call its replay method, it knows what usage to expect for each object and how it should respond. Your tests (which use the mock objects instead of the real objects that they imitate) go after the call to replay. Finally, after the test code has been run, you call the mocking context's restore method to undo any replacements of library objects, and then verify to check that the actual usage of the mocks was as expected.

Our first use of Mocker was straightforward. We tested our constructor, which is specified to be extremely simple. It's not supposed to do anything with its parameters, aside from storing them away for later. Did we gain anything at all by using Mocker to create mock objects to use as the parameters, when the parameters aren't even supposed to do anything? In fact, we did. Since we didn't tell Mocker to expect any interactions with the mock objects, it will report nearly any usage of the parameters (storing them doesn't count, because storing them isn't actually interacting with them) as errors during the verify step. When we call mocker.verify(), Mocker looks back at how the parameters were really used and reports a failure if our constructor tried to perform some action on them. It's another way to embed our expectations into our tests.

We used Mocker twice more, except in those later uses we told Mocker to expect a call to an evaluate method on the mock objects (that is, p1 and p2), and to expect an empty dictionary as the parameter to each of the mock objects' evaluate calls. For each call we told it to expect, we also told it that its response should be to return a specific floating point number. Not coincidentally, that mimics the behavior of an operation object, and we can use the mocks in our tests of multiply.evaluate.

If multiply.evaluate hadn't called the evaluate methods of the mocks, or if it had called one of them more than once, our mocker.verify call would have alerted us to the problem. This ability to describe not just what should be called, but also how often each thing should be called, is a very useful tool that makes our descriptions of what we expect much more complete. When multiply.evaluate calls the evaluate methods of the mocks, the values that get returned are the ones that we specified, so we know exactly what multiply.evaluate ought to do. We can test it thoroughly, and we can do it without involving any of the other units of our code.
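To recap the workflow just described, here is a minimal sketch of the record/replay/verify cycle; the method name and return value are invented for illustration:

# A minimal sketch of the Mocker lifecycle described above.
# The method name and result below are illustrative assumptions.
from mocker import Mocker

mocker = Mocker()               # create the mocking context
obj = mocker.mock()             # create a mock object
obj.some_method("argument")     # demonstrate the expected call...
mocker.result(42)               # ...and the response it should produce
mocker.replay()                 # stop recording, start replaying

assert obj.some_method("argument") == 42   # exercise the mock as a test would

mocker.restore()                # undo any library-object replacements
mocker.verify()                 # fail if actual usage didn't match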
Try changing how multiply.evaluate works, and see what mocker.verify says about it.

Mocking functions

Normal objects (that is to say, objects with methods and attributes, created by instantiating a class) aren't the only things you can make mocks of. Functions are another kind of object that can be mocked, and it turns out to be pretty easy. During your demonstration, if you want a mock object to represent a function, just call it. The mock object will recognize that you want it to behave like a function, and it will make a note of what parameters you passed it, so that it can compare them against what gets passed to it during the test.

For example, the following code creates a mock called func, which pretends to be a function that, when called once with the parameters 56 and "hello", returns the number 11. The second part of the example uses the mock in a very simple test:

>>> from mocker import Mocker
>>> mocker = Mocker()
>>> func = mocker.mock()
>>> func(56, "hello") # doctest: +ELLIPSIS
<mocker.Mock object at ...>
>>> mocker.result(11)
>>> mocker.replay()
>>> func(56, "hello")
11
>>> mocker.restore()
>>> mocker.verify()

Mocking containers

Containers are another category of somewhat special objects that can be mocked. Like functions, containers can be mocked by simply using a mock object as if it were a container during your example. Mock objects are able to understand examples that involve the following container operations: looking up a member, setting a member, deleting a member, finding the length, and getting an iterator over the members. Depending on the version of Mocker, membership testing via the in operator may also be available.

In the following example, all of the above capabilities are demonstrated, but the in tests are disabled for compatibility with versions of Mocker that don't support them. Keep in mind that even though, after we call replay, the object called container looks like an actual container, it's not. It's just responding to the stimuli we told it to expect, in the way we told it to respond. That's why, when our test asks for an iterator, it returns None instead. That's what we told it to do, and that's all it knows.

>>> from mocker import Mocker
>>> mocker = Mocker()
>>> container = mocker.mock()
>>> container['hi'] = 18
>>> container['hi'] # doctest: +ELLIPSIS
<mocker.Mock object at ...>
>>> mocker.result(18)
>>> len(container)
0
>>> mocker.result(1)
>>> 'hi' in container # doctest: +SKIP
True
>>> mocker.result(True)
>>> iter(container) # doctest: +ELLIPSIS
<...>
>>> mocker.result(None)
>>> del container['hi']
>>> mocker.result(None)
>>> mocker.replay()
>>> container['hi'] = 18
>>> container['hi']
18
>>> len(container)
1
>>> 'hi' in container # doctest: +SKIP
True
>>> for key in container:
...     print key
Traceback (most recent call last):
TypeError: iter() returned non-iterator of type 'NoneType'
>>> del container['hi']
>>> mocker.restore()
>>> mocker.verify()

Something to notice in the above example is that, during the initial phase, a few of the demonstrations (for example, the call to len) did not return a mocker.Mock object, as we might have expected. For some operations, Python enforces that the result is of a particular type (for example, container lengths have to be integers), which forces Mocker to break its normal pattern. Instead of returning a generic mock object, it returns an object of the correct type, although the value of the returned object is meaningless.
Fortunately, this only applies during the initial phase, when you're showing Mocker what to expect, and only in a few cases, so it's usually not a big deal. There are times when the returned mock objects are needed, though, so it's worth knowing about the exceptions.
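As a closing footnote, earlier we mentioned that the mocking context can temporarily replace library objects such as time.time and restore them afterward. A sketch of that replace workflow might look like the following; the return value 1000.0 is made up for illustration:

>>> from mocker import Mocker
>>> mocker = Mocker()
>>> mock_time = mocker.replace('time.time')  # swap the real time.time for a mock
>>> mock_time()                              # doctest: +ELLIPSIS
<mocker.Mock object at ...>
>>> mocker.result(1000.0)                    # the value 1000.0 is invented
>>> mocker.replay()
>>> import time
>>> time.time()                              # the real name now hits the mock
1000.0
>>> mocker.restore()                         # put the real time.time back
>>> mocker.verify()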


wxPython 2.8: Window Layout and Design

Packt
24 Dec 2010
8 min read
wxPython 2.8 Application Development Cookbook

Over 80 practical recipes for developing feature-rich applications using wxPython

- Develop flexible applications in wxPython
- Create interface-translatable applications that will run on Windows, Macintosh OS X, Linux, and other UNIX-like environments
- Learn basic and advanced user interface controls
- Packed with practical, hands-on cookbook recipes and plenty of example code, illustrating the techniques to develop feature-rich applications using wxPython

Once you have an idea of how the interface of your application should look, there comes the time to put it all together. Being able to take your vision and translate it into code can be a tricky and often tedious task. A window's layout is defined on a two-dimensional plane, with the origin at the window's top-left corner. All positioning and sizing of widgets, no matter what their onscreen appearance, is based on rectangles. Clearly understanding these two basic concepts goes a long way towards being able to understand and work efficiently with the toolkit.

Traditionally, in older applications, window layout was commonly done by setting explicit static sizes and positions for all the controls contained within a window. This approach, however, can be rather limiting: the windows will not be resizable; they may not fit on the screen under different resolutions; supporting localization becomes more difficult, because labels and other text will differ in length in different languages; the native widgets will often be different sizes on different platforms, making it difficult to write platform-independent code; and the list goes on.

So, you may ask, what is the solution? In wxPython, the method of choice is to use the Sizer classes to define and manage the layout of controls. Sizers are classes that manage the size and positioning of controls. A sizer queries each control that has been added to it for its recommended best minimal size, and for its ability to stretch if the amount of available space increases, such as when a user makes a dialog bigger. Sizers also handle cross-platform widget differences. For example, buttons on GTK tend to have an icon and be generally larger than the buttons on Windows or OS X; using a sizer to manage the button's layout allows the rest of the dialog to be proportionally sized correctly to handle this, without the need for any platform-specific code.

So let us begin our adventure into the world of window layout and design by taking a look at a number of the tools that wxPython provides to facilitate this task.

Using a BoxSizer

A BoxSizer is the most basic of the Sizer classes. It supports a layout that goes in a single direction—either a vertical column or a horizontal row. Even though it is the most basic to work with, a BoxSizer is one of the most useful Sizer classes, and it tends to produce more consistent cross-platform behavior than some of the other Sizer types.

This recipe creates a simple window with two text controls stacked in a vertical column, each with a label to its left. It illustrates the simplest usage of a BoxSizer for managing the layout of a window's controls.

How to do it...
1. Here we define our top-level Frame, which will use a BoxSizer to manage the size of its Panel:

class BoxSizerFrame(wx.Frame):
    def __init__(self, parent, *args, **kwargs):
        super(BoxSizerFrame, self).__init__(parent, *args, **kwargs)

        # Attributes
        self.panel = BoxSizerPanel(self)

        # Layout
        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(self.panel, 1, wx.EXPAND)
        self.SetSizer(sizer)
        self.SetInitialSize()

2. The BoxSizerPanel class is the next layer in the window hierarchy, and is where we will perform the main layout of the controls:

class BoxSizerPanel(wx.Panel):
    def __init__(self, parent, *args, **kwargs):
        super(BoxSizerPanel, self).__init__(parent, *args, **kwargs)

        # Attributes
        self._field1 = wx.TextCtrl(self)
        self._field2 = wx.TextCtrl(self)

        # Layout
        self._DoLayout()

3. Just to help reduce clutter in the __init__ method, we will do all the layout in a separate _DoLayout method:

def _DoLayout(self):
    """Layout the controls"""
    vsizer = wx.BoxSizer(wx.VERTICAL)
    field1_sz = wx.BoxSizer(wx.HORIZONTAL)
    field2_sz = wx.BoxSizer(wx.HORIZONTAL)

    # Make the labels
    field1_lbl = wx.StaticText(self, label="Field 1:")
    field2_lbl = wx.StaticText(self, label="Field 2:")

    # Make the first row by adding the label and field
    # to the first horizontal sizer
    field1_sz.AddSpacer(50)
    field1_sz.Add(field1_lbl)
    field1_sz.AddSpacer(5) # put 5px of space between
    field1_sz.Add(self._field1)
    field1_sz.AddSpacer(50)

    # Do the same for the second row
    field2_sz.AddSpacer(50)
    field2_sz.Add(field2_lbl)
    field2_sz.AddSpacer(5)
    field2_sz.Add(self._field2)
    field2_sz.AddSpacer(50)

    # Now finish the layout by adding the two sizers
    # to the main vertical sizer.
    vsizer.AddSpacer(50)
    vsizer.Add(field1_sz)
    vsizer.AddSpacer(15)
    vsizer.Add(field2_sz)
    vsizer.AddSpacer(50)

    # Finally assign the main outer sizer to the panel
    self.SetSizer(vsizer)

How it works...

The previous code shows the basic pattern for creating a simple window layout programmatically, using sizers to manage the controls. Let's start by taking a look at the BoxSizerPanel class's _DoLayout method, as this is where the majority of the layout in this example takes place.

We started off by creating three BoxSizer objects: one with a vertical orientation, and two with a horizontal orientation. The layout we want for this window requires three BoxSizer objects, and here is why. If you break down what we want to do into simple rectangles, you will see that:

- We wanted two TextCtrl objects, each with a label to its left, which can simply be thought of as two horizontal rectangles.
- We wanted the TextCtrl objects stacked vertically in the window, which is just a vertical rectangle that contains the other two rectangles.

Each of the Panel's three BoxSizers manages one of these rectangles.

In the section where we populate the first horizontal sizer (field1_sz), we use two of the BoxSizer methods to add items to the layout. The first is AddSpacer, which does just as its name says and adds a fixed amount of empty space on the left-hand side of the sizer. Then we use the Add method to add our StaticText control to the right of the spacer, and continue from there to add the other items that complete this row. As you can see, these methods add items to the layout from left to right in the sizer. After this, we again do the same thing with the other label and TextCtrl in the second horizontal sizer.
The last part of the Panel's layout is done by adding the two horizontal sizers to the vertical sizer. This time, since the sizer was created with a VERTICAL orientation, the items are added from top to bottom. Finally, we use the Panel's SetSizer method to assign the main outer BoxSizer as the Panel's sizer.

The BoxSizerFrame also uses a BoxSizer to manage the layout of its Panel. The only difference here is that we used the Add method's proportion and flags parameters to tell it to make the Panel expand to use the entire space available. After setting the Frame's sizer, we used its SetInitialSize method, which queries the window's sizer and its descendants to get and set the best minimal size for the window. We will go into more detail about these other parameters and their effects in the next recipe.

There's more...

Below is a little more information about adding spacers and items to a sizer's layout.

Spacers

AddSpacer adds a square spacer that is X pixels wide by X pixels tall to the BoxSizer, where X is the value passed to the AddSpacer method. Spacers of other dimensions can be added by passing a tuple as the first argument to the BoxSizer's Add method:

someBoxSizer.Add((20, 5))

This will add a 20x5 pixel spacer to the sizer. This can be useful when you don't want the vertical space to be increased by as much as the horizontal space, or vice versa.

AddMany

The AddMany method can be used to add an arbitrary number of items to the sizer in one call. AddMany takes a list of tuples that contain values in the same order as the Add method expects:

someBoxSizer.AddMany([(staticText,),
                      ((10, 10),),
                      (txtCtrl, 0, wx.EXPAND)])

This will add three items to the sizer: the first two items only specify the one required parameter, and the third specifies the proportion and flags parameters.
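To tie these pieces together, here is a minimal, self-contained sketch (not from the recipe itself) showing AddMany and the proportion/flags parameters in a runnable window; the class name and label text are made up for illustration:

# A minimal runnable sketch combining AddMany with proportion/flags.
# The class name and labels are illustrative, not from the recipe.
import wx

class DemoFrame(wx.Frame):
    def __init__(self):
        super(DemoFrame, self).__init__(None, title="AddMany demo")
        panel = wx.Panel(self)
        label = wx.StaticText(panel, label="Field 1:")
        field = wx.TextCtrl(panel)

        sizer = wx.BoxSizer(wx.HORIZONTAL)
        # Each tuple mirrors the Add() parameter order:
        # (item, proportion, flags) or (spacer size,)
        sizer.AddMany([(label, 0, wx.ALIGN_CENTER_VERTICAL),
                       ((10, 10),),
                       (field, 1, wx.EXPAND)])
        panel.SetSizer(sizer)
        self.SetInitialSize()

if __name__ == "__main__":
    app = wx.App(False)
    DemoFrame().Show()
    app.MainLoop()

Here the text field has a proportion of 1 and the wx.EXPAND flag, so it stretches to absorb any extra horizontal space when the window is resized, while the label keeps its best minimal size.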


Python Built-in Functions

Packt
24 Dec 2010
10 min read
Python 3 Object Oriented Programming

Harness the power of Python 3 objects

- Learn how to do Object Oriented Programming in Python using this step-by-step tutorial
- Design public interfaces using abstraction, encapsulation, and information hiding
- Turn your designs into working software by studying the Python syntax
- Raise, handle, define, and manipulate exceptions using special error objects
- Implement Object Oriented Programming in Python using practical examples

There are numerous functions in Python that perform a task or calculate a result on certain objects without being methods on the class. Their purpose is to abstract common calculations that apply to many types of classes. This is applied duck typing; these functions accept objects with certain attributes or methods that satisfy a given interface, and are able to perform generic tasks on the object.

Len

The simplest example is the len() function. This function counts the number of items in some kind of container object, such as a dictionary or list. For example:

>>> len([1,2,3,4])
4

Why don't these objects have a length property instead of having to call a function on them? Technically, they do. Most objects that len() will apply to have a method called __len__() that returns the same value. So len(myobj) seems to call myobj.__len__().

Why should we use the function instead of the method? Obviously the method is a special method with double underscores, suggesting that we shouldn't call it directly. There must be an explanation for this. The Python developers don't make such design decisions lightly.

The main reason is efficiency. When we call __len__ on an object, the object has to look the method up in its namespace, and, if the special __getattribute__ method (which is called every time an attribute or method on an object is accessed) is defined on that object, it has to be called as well. Furthermore, __getattribute__ may have been written to do something nasty, like refusing to give us access to special methods such as __len__! The len() function doesn't encounter any of this. It actually calls the __len__ function on the underlying class, so len(myobj) maps to MyObj.__len__(myobj).

Another reason is maintainability. In the future, the Python developers may want to change len() so that it can calculate the length of objects that don't have a __len__, for example, by counting the number of items returned in an iterator. They'll only have to change one function instead of countless __len__ methods across the board.

Reversed

The reversed() function takes any sequence as input, and returns a copy of that sequence in reverse order. It is normally used in for loops when we want to loop over items from back to front.

Similar to len, reversed calls the __reversed__() function on the class of the parameter. If that method does not exist, reversed builds the reversed sequence itself using calls to __len__ and __getitem__.
We only need to override __reversed__ if we want to somehow customize or optimize the process:

normal_list = [1, 2, 3, 4, 5]

class CustomSequence:
    def __len__(self):
        return 5

    def __getitem__(self, index):
        return "x{0}".format(index)

class FunkyBackwards(CustomSequence):
    def __reversed__(self):
        return "BACKWARDS!"

for seq in normal_list, CustomSequence(), FunkyBackwards():
    print("\n{}: ".format(seq.__class__.__name__), end="")
    for item in reversed(seq):
        print(item, end=", ")

The for loops at the end print the reversed versions of a normal list, and instances of the two custom sequences. The output shows that reversed works on all three of them, but has very different results when we define __reversed__ ourselves:

list: 5, 4, 3, 2, 1,
CustomSequence: x4, x3, x2, x1, x0,
FunkyBackwards: B, A, C, K, W, A, R, D, S, !,

Note: the above two classes aren't very good sequences, as they don't define a proper version of __iter__, so a forward for loop over them will never end.

Enumerate

Sometimes when we're looping over an iterable object in a for loop, we want access to the index (the current position in the list) of the current item being processed. The for loop doesn't provide us with indexes, but the enumerate function gives us something better: it creates a sequence of tuples, where the first object in each tuple is the index and the second is the original item.

This is useful if we want to use index numbers directly. Consider some simple code that outputs all the lines in a file with line numbers:

import sys
filename = sys.argv[1]

with open(filename) as file:
    for index, line in enumerate(file):
        print("{0}: {1}".format(index+1, line), end='')

Running this code on itself as the input file shows how it works:

1: import sys
2: filename = sys.argv[1]
3:
4: with open(filename) as file:
5:     for index, line in enumerate(file):
6:         print("{0}: {1}".format(index+1, line), end='')

The enumerate function returns a sequence of tuples, our for loop splits each tuple into two values, and the print statement formats them together. It adds one to the index for each line number, since enumerate, like all Python sequences, is zero-based.

Zip

The zip function is one of the least object-oriented functions in Python's collection. It takes two or more sequences and creates a new sequence of tuples. Each tuple contains one element from each list. This is easily explained by an example; let's look at parsing a text file.

Text data is often stored in tab-delimited format, with a "header" row as the first line in the file, and each line below it describing data for a unique record. A simple contact list in tab-delimited format might look like this:

first   last    email
john    smith   [email protected]
jane    doan    [email protected]
david   neilson [email protected]

A simple parser for this file can use zip to create lists of tuples that map headers to values. These lists can be used to create a dictionary, a much easier object to work with in Python than a file!

import sys

filename = sys.argv[1]
contacts = []

with open(filename) as file:
    header = file.readline().strip().split('\t')
    for line in file:
        line = line.strip().split('\t')
        contact_map = zip(header, line)
        contacts.append(dict(contact_map))

for contact in contacts:
    print("email: {email} -- {last}, {first}".format(
        **contact))

What's actually happening here? First we open the file, whose name is provided on the command line, and read the first line. We strip the trailing newline, and split what's left into a list of three elements. We pass '\t' into the split method to indicate that the string should be split at tab characters.
The resulting header list looks like ["first", "last", "email"]. Next, we loop over the remaining lines in the file (after the header). We split each line into three elements. Then, we use zip to create a sequence of tuples for each line. The first sequence would look like [("first", "john"), ("last", "smith"), ("email", "[email protected]")].

Pay attention to what zip is doing. The first list contains headers; the second contains values. The zip function created a tuple of header/value pairs for each matchup.

The dict constructor takes the list of tuples, and maps the first element to a key and the second to a value to create a dictionary. The result is added to a list.

At this point, we are free to use dictionaries to do all sorts of contact-related activities. For testing, we simply loop over the contacts and output them in a different format. The format line, as usual, takes variable arguments and keyword arguments. The use of **contact automatically converts the dictionary to a bunch of keyword arguments (we'll understand this syntax before the end of the chapter). Here's the output:

email: [email protected] -- smith, john
email: [email protected] -- doan, jane
email: [email protected] -- neilson, david

If we provide zip with lists of different lengths, it will stop at the end of the shortest list. There aren't many useful applications of this feature, but zip will not raise an exception if that is the case. We can always check the list lengths and add empty values to the shorter list, if necessary.

The zip function is actually the inverse of itself. It can take multiple sequences and combine them into a single sequence of tuples. Because tuples are also sequences, we can "unzip" a zipped list of tuples by zipping it again. Huh? Have a look at this example:

>>> list_one = ['a', 'b', 'c']
>>> list_two = [1, 2, 3]
>>> zipped = zip(list_one, list_two)
>>> zipped = list(zipped)
>>> zipped
[('a', 1), ('b', 2), ('c', 3)]
>>> unzipped = zip(*zipped)
>>> list(unzipped)
[('a', 'b', 'c'), (1, 2, 3)]

First we zip the two lists and convert the result into a list of tuples. We can then use parameter unpacking to pass these individual sequences as arguments to the zip function. zip matches the first value in each tuple into one sequence and the second value into a second sequence; the result is the same two sequences we started with!

Other functions

Another key function is sorted(), which takes an iterable as input, and returns a list of the items in sorted order. It is very similar to the sort() method on lists, the difference being that it works on all iterables, not just lists. Like list.sort, sorted accepts a key argument that allows us to provide a function to return a sort value for each input. It can also accept a reverse argument.

Three more functions that operate on sequences are min, max, and sum. These each take a sequence as input, and return the minimum or maximum value, or the sum of all values in the sequence. Naturally, sum only works if all values in the sequence are numbers. The max and min functions use the same kind of comparison mechanism as sorted and list.sort, and allow us to define a similar key function. For example, the following code uses enumerate, max, and min to return the indices of the values in a list with the maximum and minimum value:

def min_max_indexes(seq):
    minimum = min(enumerate(seq), key=lambda s: s[1])
    maximum = max(enumerate(seq), key=lambda s: s[1])
    return minimum[0], maximum[0]

The enumerate call converts the sequence into (index, item) tuples.
The lambda function passed in as a key tells the function to look at the second item in each tuple (the original item). The minimum and maximum variables are then set to the appropriate tuples returned by enumerate. The return statement takes the first value (the index from enumerate) of each tuple and returns the pair. The following interactive session shows how the returned values are, indeed, the indices of the minimum and maximum values:

>>> alist = [5,0,1,4,6,3]
>>> min_max_indexes(alist)
(1, 4)
>>> alist[1], alist[4]
(0, 6)

We've only touched on a few of the more important Python built-in functions. There are numerous others in the standard library, including:

- all and any, which accept an iterable and return True if all, or any, of the items evaluate to true (that is, a non-empty string or list, a non-zero number, an object that is not None, or the literal True); a short example session appears at the end of this article
- eval, exec, and compile, which execute a string as code inside the interpreter
- hasattr, getattr, setattr, and delattr, which allow attributes on an object to be manipulated as string names
- And many more! See the interpreter help documentation for each of the functions listed in dir(__builtins__)

Summary

In this article we took a look at many useful built-in functions.
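As the quick appendix promised above, here is a short illustrative session with all and any; the values are invented for demonstration:

>>> any([0, "", None, 3])     # at least one truthy item
True
>>> all([1, "text", [2]])     # every item is truthy
True
>>> all([1, "", [2]])         # the empty string evaluates to false
False
>>> any([])                   # no items at all, so nothing is true
False
>>> all([])                   # vacuously true for an empty iterable
True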

Using Groovy Closures Instead of Template Method

Packt
23 Dec 2010
3 min read
Groovy for Domain-Specific Languages

Extend and enhance your Java applications with Domain-Specific Languages in Groovy

- Build your own Domain-Specific Languages on top of Groovy
- Integrate your existing Java applications using Groovy-based Domain-Specific Languages (DSLs)
- Develop a Groovy scripting interface to Twitter
- A step-by-step guide to building Groovy-based Domain-Specific Languages that run seamlessly in the Java environment

Template Method Pattern Overview

The template method pattern often arises from the thought: "Well, I have a piece of code that I want to use again, but I can't use it 100%. I want to change a few lines to make it useful." In general, using this pattern involves creating an abstract class and varying its implementation through abstract hook methods. Subclasses implement these abstract hook methods to solve their specific problem. This approach is very effective and is used extensively in frameworks. However, closures provide an elegant alternative.

Sample HttpBuilder Request

It is best to illustrate the closure approach with an example. Recently I was developing a consumer of REST web services with HttpBuilder. With HttpBuilder, the client simply creates the class and issues an HTTP call. The framework waits for a response and provides hooks for processing. Many of the requests being made were very similar to one another; only the URI was different. In addition, each request needed to process the returned XML differently, as the XML received would vary. I wanted to use the same request code, but vary the XML processing. To summarize the problem:

- HttpBuilder code should be reused
- Different URIs should be sent out with the same HttpBuilder code
- Different XML should be processed with the same HttpBuilder code

Here is my first draft of HttpBuilder code. Note the call to convertXmlToCompanyDomainObject(xml).

static String URI_PREFIX = '/someApp/restApi/'

private List issueHttpBuilderRequest(RequestObject requestObj, String uriPath) {
    def http = new HTTPBuilder("http://localhost:8080/")
    def parsedObjectsFromXml = []

    http.request(Method.POST, ContentType.XML) { req ->
        // set uri path on the delegate
        uri.path = URI_PREFIX + uriPath
        uri.query = [
            company: requestObj.company,
            date: requestObj.date,
            type: requestObj.type
        ]
        headers.'User-Agent' = 'Mozilla/5.0'

        // when response is a success, parse the gpath xml
        response.success = { resp, xml ->
            assert resp.statusLine.statusCode == 200
            // store the list
            parsedObjectsFromXml = convertXmlToCompanyDomainObject(xml)
        }

        // called only for a 404 (not found) status code:
        response.'404' = { resp ->
            log.info 'HTTP status code: 404 Not found'
        }
    }
    parsedObjectsFromXml
}

private List convertXmlToCompanyDomainObject(GPathResult xml) {
    def list = []
    // .. implementation to parse the xml and turn into objects
}

As you can see, the URI is passed as a parameter to issueHttpBuilderRequest. This solves the problem of sending different URIs, but what about parsing the different XML formats that are returned?

Using Template Method Pattern

In summary, applying the template method pattern to this problem means moving the issueHttpBuilderRequest code into an abstract class, and providing an abstract method convertXmlToDomainObjects(). Subclasses would provide the appropriate XML conversion implementation.


Visual Studio 2010 Test Types

Packt
21 Dec 2010
9 min read
Software Testing using Visual Studio 2010

A step-by-step guide to understanding the features and concepts of testing applications using Visual Studio.

- Master all the new tools and techniques in Visual Studio 2010 and Team Foundation Server for testing applications
- Customize reports with Team Foundation Server
- Get to grips with the new Test Manager tool for maintaining test cases
- Take full advantage of new Visual Studio features for testing an application's user interface
- Packed with real-world examples and step-by-step instructions to get you up and running with application testing

Software testing in Visual Studio 2010

Before getting into the details of actual testing with Visual Studio 2010, let us look at the different tools it provides and how they are used; then we can execute the actual tests.

Visual Studio 2010 provides different tools for testing and test management, such as the Test List Editor and the Test View. The test projects and the actual test files are maintained in Team Foundation Server (TFS), which manages version control of the source and the history of changes. Using the Test List Editor, we can group similar tests, create any number of test lists, and add or delete tests from a test list.

The other aspect of this article is the different file types generated in Visual Studio during testing. Most of these files are in XML format and are created automatically whenever a new test is created. For readers new to Visual Studio, there is a brief overview of each of those windows. As we go through the windows and their purposes, we can see the Integrated Development Environment (IDE) and how the tools integrate into Visual Studio 2010.

Testing as part of the Software Development Life Cycle

The main objective of testing is to find defects early in the SDLC. If a defect is found early, the cost of fixing it is lower than when it is found during the production or implementation stage. Moreover, testing is carried out to assure the quality and reliability of the software. In order to find defects as soon as possible, testing activities should start early, that is, in the Requirement phase of the SDLC, and continue till the end of the SDLC.

In the Coding phase, various testing activities take place. Based on the design, the developers start coding the modules. Static and dynamic testing is carried out by the developers. Code reviews and code walkthroughs are conducted by the team. Once the coding is complete, the Validation phase begins, where different phases or forms of testing are performed:

Unit Testing: This is the first stage of testing in the SDLC, and is performed by the developer to check whether the developed code meets the stated functionality. If any defects are found during this testing, they are logged against the code and the developer fixes them. The code is retested, and then moved to the testers once it is confirmed to provide the expected functionality. This phase may identify a lot of code defects, which reduces the cost and time involved in testing the application by testers, fixing the code, and retesting the fix.

Integration Testing: This type of testing is carried out on two or more modules or functions together, with the intention of finding interface defects between them. This testing is completed as a part of unit or functional testing and, sometimes, becomes its own standalone test phase.
On a larger scale, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. Defects found are logged and later fixed by the developers. There are different approaches to integration testing, such as top-down and bottom-up. The top-down approach tests the highest-level components first, integrating them to exercise the high-level logic and flow; the low-level components are tested later. The bottom-up approach is exactly the opposite: the low-level functionalities are tested and integrated first, and then the high-level functionalities are tested. The disadvantage of this approach is that the high-level, or most complex, functionalities are tested later. The umbrella approach uses both the top-down and bottom-up patterns: the inputs for functions are integrated in the bottom-up approach, and then the outputs for functions are integrated in the top-down approach.

System Testing: This type of testing compares the system specifications against the actual system. The system test design is derived from the system design documents and is used in this phase. Sometimes system testing is automated using testing tools. Once all the modules are integrated, several errors may arise; testing done at this stage is called system testing. Defects found in this type of testing are logged by the testers and fixed by the developers.

Regression Testing: This type of testing is carried out in all the phases of the testing life cycle, once the defects logged by the testers are fixed by the developers, or if any functionality changes because of those fixes. The main objective of this type of testing is to determine whether bug fixes have been successful and have not created any new defects. This type of testing is also done to ensure that no degradation of baseline functionality has occurred, and to check whether any new functionality introduced in the software has caused prior bugs to resurface.

Types of testing

Visual Studio provides a range of testing types and tools for testing software applications. The following are some of those types:

- Unit test
- Manual test
- Web Performance Test
- Coded UI Test
- Load Test
- Generic test
- Ordered test

In addition to these types, there are tools provided to manage, list, order, and execute the tests created in Visual Studio. Some of these are the Test View, Test List Editor, and Test Results windows. We will look at the details of these testing tools and the supporting tools for managing testing in Visual Studio 2010.

Unit test

Unit testing is one of the earliest phases of testing the application. In this phase, the developers have to make sure the code produces the expected result as per the stated functionality. It is extremely important to run unit tests to catch defects in the early stages of the software development cycle. The main goal of unit testing is to isolate each piece of the code or individual functionality, and to test that the method returns the expected result for different sets of parameter values. A unit test is a functional class-method test that calls a method with the appropriate parameters, exercises it, and compares the results with the expected outcome to ensure the correctness of the implemented code.

Visual Studio 2010 has great support for unit testing through the integrated automated unit test framework, which enables the team to write and run unit tests.
Visual Studio can automatically generate unit test classes and methods during the implementation of a class. Visual Studio generates the test methods, or the base code for the test methods, but it remains the responsibility of the developer or the team to modify the generated test methods and to include the code for the actual testing. The generated unit testing code will contain several attributes to identify the test class, the test methods, and the test project; these attributes are assigned when the unit test code is generated from the original source code.

A unit test is used by developers to identify functionality changes and code defects. We can run a unit test any number of times and make sure the code delivers the expected functionality and is not affected by a new code change or defect fix. All the methods and classes generated for automated unit testing inherit from the namespace Microsoft.VisualStudio.TestTools.UnitTesting.

Manual test

Manual testing is the oldest and simplest type of testing, yet it is crucial for software testing. The tester writes the test cases based on the functional and non-functional requirements, and then tests the application against each test case. Manual testing helps us validate whether the application meets the various standards defined for effective and efficient accessibility and usage. Manual testing comes into play in the following scenarios:

- There is not enough budget for automation
- The tests are too complicated or too difficult to convert into automated tests
- There is not enough time to automate the tests
- Automated tests would be time-consuming to create and run
- The tested code hasn't stabilized sufficiently for cost-effective automation

We can create manual tests using Visual Studio 2010 very easily. The most important step in a manual test is to document all the required test steps for the scenario, with supporting information, which could be kept in a separate file. Once all the test cases are created, we should add them to the Test Plan so that we can run the tests and gather the test results every time we run them. The new Microsoft Test Manager tool helps when adding or editing the test cases in the Test Plan.

The following are additional manual testing features that are supported by Visual Studio 2010:

- Running the manual test multiple times with different data by adding parameters
- Creating multiple test cases from an existing test case, using it as the base and then customizing or modifying the copies
- Sharing test steps between multiple test cases
- Removing test cases from the test if not required
- Adding or copying test steps from Microsoft Excel, Microsoft Word, or any other supported tool

There are a lot of other manual testing features that are supported in Visual Studio 2010.


Testing Tools and Techniques in Python

Packt
20 Dec 2010
10 min read
Python Testing: Beginner's Guide

An easy and convenient approach to testing your powerful Python projects

- Covers everything you need to test your code in Python
- The easiest and most enjoyable approach to learning Python testing
- Write, execute, and understand the results of tests in the unit test framework
- Packed with step-by-step examples and clear explanations

So let's get on with it!

Code coverage

Tests tell you when the code you're testing doesn't work the way you thought it would, but they don't tell you a thing about the code that you're not testing. They don't even tell you that the code you're not testing isn't being tested.

Code coverage is a technique which can be used to address that shortcoming. A code coverage tool watches while your tests are running, and keeps track of which lines of code are (and aren't) executed. After the tests have run, the tool will give you a report describing how well your tests cover the whole body of code.

It's desirable to have the coverage approach 100%, as you probably figured out already. Be careful not to focus on the coverage number too intensely, though; it can be a bit misleading. Even if your tests execute every line of code in the program, they can easily not test everything that needs to be tested. That means you can't take 100% coverage as certain proof that your tests are complete. On the other hand, there are times when some code really, truly doesn't need to be covered by the tests—some debugging support code, for example—and so less than 100% coverage can be completely acceptable.

Code coverage is a tool to give you insight into what your tests are doing, and what they may be overlooking. It's not the definition of a good test suite.

coverage.py

We're going to be working with a module called coverage.py, which is—unsurprisingly—a code coverage tool for Python. Since coverage.py isn't built in to Python, we'll need to download and install it. You can download the latest version from the Python Package Index at http://pypi.python.org/pypi/coverage. As before, users of Python 2.6 or later can install the package by unpacking the archive, changing to the directory, and typing:

$ python setup.py install --user

Users of older versions of Python need write permission to the system-wide site-packages directory, which is part of the Python installation. Anybody who has such permission can install coverage by typing:

$ python setup.py install

We'll walk through the steps of using coverage.py here, but if you want more information you can find it on the coverage.py home page at http://nedbatchelder.com/code/coverage/.

Time for action – using coverage.py

We'll create a little toy code module with tests, and then apply coverage.py to find out how much of the code the tests actually use.

1. Place the following test code into test_toy.py. There are several problems with these tests, which we'll discuss later, but they ought to run.

from unittest import TestCase
import toy

class test_global_function(TestCase):
    def test_positive(self):
        self.assertEqual(toy.global_function(3), 4)

    def test_negative(self):
        self.assertEqual(toy.global_function(-3), -2)

    def test_large(self):
        self.assertEqual(toy.global_function(2**13), 2**13 + 1)

class test_example_class(TestCase):
    def test_timestwo(self):
        example = toy.example_class(5)
        self.assertEqual(example.timestwo(), 10)

    def test_repr(self):
        example = toy.example_class(7)
        self.assertEqual(repr(example), '<example param="7">')

2. Put the following code into toy.py. Notice the if __name__ == '__main__' clause at the bottom.
We haven't dealt with one of those in a while, so I'll remind you that the code inside that block runs doctest if we run the module directly with python toy.py.

def global_function(x):
    r"""
    >>> global_function(5)
    6
    """
    return x + 1

class example_class:
    def __init__(self, param):
        self.param = param

    def timestwo(self):
        return self.param * 2

    def __repr__(self):
        return '<example param="%s">' % self.param

if __name__ == '__main__':
    import doctest
    doctest.testmod()

3. Go ahead and run Nose. It should find the tests, run them, and report that all is well. The problem is, some of the code isn't ever tested.
4. Let's run it again, only this time we'll tell Nose to use coverage.py to measure coverage while it's running the tests:

$ nosetests --with-coverage --cover-erase

What just happened?

In step 1, we have a couple of TestCase classes with some very basic tests in them. These tests wouldn't be much use in a real-world situation, but all we need them for is to illustrate how the code coverage tool works.

In step 2, we have the code that satisfies the tests from step 1. Like the tests themselves, this code wouldn't be much use, but it serves as an illustration.

In step 4, we passed --with-coverage and --cover-erase as command-line parameters when we ran Nose. What did they do? Well, --with-coverage is pretty straightforward: it told Nose to look for coverage.py and to use it while the tests execute. That's just what we wanted. The second parameter, --cover-erase, tells Nose to forget about any coverage information that was acquired during previous runs. By default, coverage information is aggregated across all of the uses of coverage.py. This allows you to run a set of tests using different testing frameworks or mechanisms, and then check the cumulative coverage. You still want to erase the data from previous test runs at the beginning of that process, though, and the --cover-erase option is how you tell Nose to tell coverage.py that you're starting anew.

What the coverage report tells us is that 9/12 (in other words, 75%) of the executable statements in the toy module were executed during our tests, and that the missing lines were line 16 and lines 19 through 20. Looking back at our code, we see that line 16 is the __repr__ method. We really should have tested that, so the coverage check has revealed a hole in our tests that we should fix. Lines 19 and 20 are just code to run doctest, though. They're not something that we ought to be using under normal circumstances, so we can just ignore that coverage hole.

Code coverage can't detect problems with the tests themselves, in most cases. In the above test code, the test for the timestwo method violates the isolation of units and invokes two different methods of example_class. Since one of the methods is the constructor, this may be acceptable, but the coverage checker isn't in a position to even see that there might be a problem. All it saw was more lines of code being covered. That's not a problem—it's how a coverage checker ought to work—but it's something to keep in mind. Coverage is useful, but high coverage doesn't equal good tests.

Version control hooks

Most version control systems have the ability to run a program that you've written in response to various events, as a way of customizing the version control system's behavior. These programs are commonly called hooks. Version control systems are programs for keeping track of changes to a source code tree, even when those changes are made by different people.
Version control hooks

Most version control systems have the ability to run a program that you've written in response to various events, as a way of customizing the version control system's behavior. These programs are commonly called hooks.

Version control systems are programs for keeping track of changes to a source code tree, even when those changes are made by different people. In a sense, they provide a universal undo history and change log for the whole project, going all the way back to the moment you started using the version control system. They also make it much easier to combine work done by different people into a single, unified entity, and to keep track of different editions of the same project.

You can do all kinds of things by installing the right hook programs, but we'll only focus on one use. We can make the version control program automatically run our tests when we commit a new version of the code to the version control repository. This is a fairly nifty trick, because it makes it difficult for test-breaking bugs to get into the repository unnoticed. As with code coverage, though, there's potential for trouble if this becomes a matter of policy rather than simply being a tool to make your life easier.

In most systems, you can write the hooks such that it's impossible to commit code that breaks tests. That may sound like a good idea at first, but it's really not. One reason for this is that one of the major purposes of a version control system is communication between developers, and interfering with that tends to be unproductive in the long run. Another reason is that it prevents anybody from committing partial solutions to problems, which means that things tend to get dumped into the repository in big chunks. Big commits are a problem because they make it hard to keep track of what changed, which adds to the confusion. There are better ways to make sure you always have a working codebase socked away somewhere, such as version control branches.

Bazaar

Bazaar is a distributed version control system, which means that it is capable of operating without a central server or master copy of the source code. One consequence of the distributed nature of Bazaar is that each user has their own set of hooks, which can be added, modified, or removed without involving anyone else. Bazaar is available on the Internet at http://bazaar-vcs.org/. If you don't have Bazaar already installed, and don't plan on using it, you can skip this section.

Time for action – installing Nose as a Bazaar post-commit hook

Bazaar hooks go in your plugins directory. On Unix-like systems, that's ~/.bazaar/plugins/, while on Windows it's C:\Documents and Settings\<username>\Application Data\Bazaar\<version>\plugins\. In either case, you may have to create the plugins subdirectory, if it doesn't already exist.

1. Place the following code into a file called run_nose.py in the plugins directory. Bazaar hooks are written in Python:

    from bzrlib import branch
    from os.path import join, sep
    from os import chdir
    from subprocess import call

    def run_nose(local, master, old_num, old_id, new_num, new_id):
        try:
            base = local.base
        except AttributeError:
            base = master.base
        if not base.startswith('file://'):
            return
        try:
            chdir(join(sep, *base[7:].split('/')))
        except OSError:
            return
        call(['nosetests'])

    branch.Branch.hooks.install_named_hook('post_commit', run_nose,
                                           'Runs Nose after each commit')

2. Make a new directory in your working files, and put the following code into it in a file called test_simple.py. These simple (and silly) tests are just to give Nose something to do, so that we can see that the hook is working.
    from unittest import TestCase

    class test_simple(TestCase):
        def test_one(self):
            self.assertNotEqual("Testing", "Hooks")

        def test_two(self):
            self.assertEqual("Same", "Same")

3. Still in the same directory as test_simple.py, run the following commands to create a new repository and commit the tests to it. The output you see might differ in details, but it should be quite similar overall.

    $ bzr init
    $ bzr add
    $ bzr commit

Notice that there's a Nose test report after the commit notification. From now on, any time you commit to a Bazaar repository, Nose will search for and run whatever tests it can find within that repository.

What just happened?

Bazaar hooks are written in Python, so we've written our hook as a function called run_nose. Our run_nose function checks to make sure that the repository we're working on is local, and then it changes into the repository directory and runs Nose. We registered run_nose as a hook by calling branch.Branch.hooks.install_named_hook.
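The same idea carries over to other version control systems, although the details of hook installation differ. As a rough sketch, Git (if you happen to use it) runs an executable file named post-commit in the repository's .git/hooks directory after every commit, so a minimal equivalent of our Bazaar hook could be:

    #!/usr/bin/env python
    # .git/hooks/post-commit: run the test suite after every Git commit.
    # Git invokes this from the top of the working tree, so Nose will
    # discover whatever tests live in the repository.
    import subprocess

    subprocess.call(['nosetests'])

Remember to mark the file as executable (chmod +x .git/hooks/post-commit), or Git will silently skip it.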
Web Frameworks for Python Geo-Spatial Development

Packt
16 Dec 2010
11 min read
Python Geospatial Development
Build a complete and sophisticated mapping application from scratch using Python tools for GIS development

- Build applications for GIS development using Python
- Analyze and visualize geo-spatial data
- Comprehensive coverage of key GIS concepts
- Recommended best practices for storing spatial data in a database
- Draw maps, place data points onto a map, and interact with maps
- A practical tutorial with plenty of step-by-step instructions to help you develop a mapping application from scratch

The "Slippy Map" Stack

The "slippy map" is a concept popularized by Google Maps: a zoomable map where the user can click and drag to scroll around, and double-click to zoom in. An example is a Google Maps slippy map showing a portion of Europe (image copyright Google; map data copyright Europa Technologies, PPWK, Tele Atlas).

Slippy maps have become extremely popular, and much of the work done on geo-spatial web application development has been focused on creating and working with them. The slippy map experience is typically implemented using a custom software stack. Starting at the bottom, the raw map data is typically stored in a Shapefile or database. This is then rendered using a tool such as Mapnik, and a tile cache is used to speed up repeated access to the same map images. A user-interface library such as OpenLayers is then used to display the map in the user's web browser, and to respond when the user clicks on the map. Finally, a web server is used to allow web browsers to access and interact with the slippy map. Let's take a closer look at each of these pieces.

Spatially-enabled Databases

In a sense, almost any database can be used to store geo-spatial data: simply convert a geometry to WKT format and store the result in a text column. But while this would allow you to store geo-spatial data in a database, it wouldn't let you query it in any useful way. All you could do is retrieve the raw WKT text and convert it back to a geometry object, one record at a time.

A spatially-enabled database, on the other hand, is aware of the notion of space, and allows you to work with spatial objects and concepts directly. In particular, a spatially-enabled database allows you to:

- Store spatial data types (points, lines, polygons, and so on) directly in the database, in the form of a geometry column.
- Perform spatial queries on your data. For example: select all landmarks within 10 km of the city named "San Francisco".
- Perform spatial joins on your data. For example: select all cities and their associated countries by joining cities and countries on (city inside country).
- Create new spatial objects using various spatial functions. For example: set "danger_zone" to the intersection of the "flooded_area" and "urban_area" polygons.

As you can imagine, a spatially-enabled database is an extremely powerful tool for working with your geo-spatial data. By using spatial indexes and other optimizations, spatial databases can quickly perform these types of operations, and can scale to support vast amounts of data, something that simply isn't feasible with other data-storage schemes. The sketch below shows what one of these spatial queries looks like in practice.
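Here is the "landmarks within 10 km" query, run from Python with psycopg2 against a hypothetical PostGIS database. The landmarks and cities tables, their geom columns, and the database name are invented for the example; ST_DWithin and the geography cast are standard PostGIS features, with the distance given in meters:

    # A sketch of a spatial query against a hypothetical PostGIS database.
    import psycopg2

    connection = psycopg2.connect("dbname=geodata")
    cursor = connection.cursor()
    cursor.execute("""
        SELECT landmarks.name
          FROM landmarks, cities
         WHERE cities.name = %s
           AND ST_DWithin(landmarks.geom::geography,
                          cities.geom::geography,
                          10000)
    """, ("San Francisco",))
    for (name,) in cursor.fetchall():
        print name

Behind the scenes, the database uses a spatial index on the geom columns to avoid comparing every landmark against the city, which is what lets queries like this scale to large datasets.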
Map Rendering

Mapnik is an example of a Python library for generating good-looking maps. Within the context of a web application, map rendering is usually performed by a web service which takes a request and returns the rendered map image file. For example, your application might include a map renderer at the relative URL /render which accepts the following parameters:

- minX, maxX, minY, maxY: The minimum and maximum latitude and longitude for the area to include on the map.
- width, height: The pixel width and height for the generated map image.
- layers: A comma-separated list of layers which are to be included on the map. The available predefined layers are: coastline, forest, waterways, urban, and street.
- format: The desired image format. Available formats are: PNG, JPEG, GIF.

This hypothetical /render web service would return the rendered map image back to the caller. Once this has been set up, the web service would act as a black box providing map images upon request for other parts of your web application.

As an alternative to hosting and configuring your own map renderer, you can choose to use an openly available external renderer. For example, OpenStreetMap provides a freely-available map renderer for OpenStreetMap data at http://dev.openstreetmap.de/staticmap.

Tile Caching

Because creating an image out of raw map data is a time- and processor-intensive operation, your entire web application can be overloaded if you get too many requests for data at any one time. While there is a lot you can do to improve the speed of the map-generation process, there are still limits on how many maps your application can render in a given time period.

Because the map data is generally quite static, you can get a huge improvement in your application's performance by caching the generated images. This is generally done by dividing the world up into tiles, rendering tile images as required, and then stitching the tiles together to produce the desired map.

Tile caches work in exactly the same way as any other cache (a minimal sketch of one appears at the end of this section):

- When a tile is requested, the tile cache checks to see if it contains a copy of the rendered tile. If so, the cached copy is returned right away.
- Otherwise, the map rendering service is called to generate the tile, and the newly-rendered tile is added to the cache before being returned to the caller.
- As the cache grows too big, tiles which haven't been requested for a long time are removed to make room for new tiles.

Of course, tile caching will only work if the underlying map data doesn't change. You can't use a tile cache where the rendered image varies from one request to the next.

One interesting use of a tile cache is to combine it with map overlays to improve performance even when the map data does change. Because the outlines of countries and other physical features on a map don't change, it is possible to use a map generator with a tile cache to generate the base map, onto which changing features are then drawn as an overlay. The final map could be produced using Mapnik, by drawing the overlay onto the base map, which is accessed using a RasterDataSource and displayed using a RasterSymbolizer. If you have enough disk space, you could even pre-calculate all of the base map tiles and have them available for quick display. Using Mapnik in this way is a fast and efficient way of combining changing and non-changing map data onto a single view, though there are other ways of overlaying data onto a map, for example using vector and raster layers within the OpenLayers library.
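Here is a minimal in-memory tile cache, sketched in Python to illustrate the behavior described above. The render_tile callable stands in for whatever rendering service you use (a Mapnik wrapper, a call to a /render service, and so on); a production tile cache would normally store tiles on disk and be shared between server processes:

    from collections import OrderedDict

    class TileCache(object):
        """A tiny least-recently-used cache for rendered map tiles."""

        def __init__(self, render_tile, max_tiles=1000):
            self.render_tile = render_tile   # callable: (zoom, x, y) -> image data
            self.max_tiles = max_tiles
            self._tiles = OrderedDict()      # insertion order doubles as LRU order

        def get_tile(self, zoom, x, y):
            key = (zoom, x, y)
            try:
                tile = self._tiles.pop(key)             # cache hit
            except KeyError:
                tile = self.render_tile(zoom, x, y)     # cache miss: render it now
            self._tiles[key] = tile                     # (re-)insert as most recent
            if len(self._tiles) > self.max_tiles:
                self._tiles.popitem(last=False)         # evict least recently used
            return tile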
User Interface Libraries

While it is easy to build a simple web-based interface in HTML, users are increasingly expecting web applications to compete with desktop applications in terms of their user interface. Selecting objects by clicking on them, drawing images with the mouse, and dragging-and-dropping are no longer actions restricted to desktop applications. AJAX (Asynchronous JavaScript and XML) is the technology typically used to build complex user interfaces in a web application. In particular, running JavaScript code in the user's web browser allows the application to dynamically respond to user actions, and make the web page behave in ways that simply can't be achieved with static HTML pages.

While JavaScript is ubiquitous, it is also hard to program in. The various web browsers in which the JavaScript code can run all have their own quirks and limitations, making it hard to write code that runs the same way on every browser. JavaScript code is also very low-level, requiring detailed manipulation of the web page contents to achieve a given effect. For example, implementing a pop-up menu requires the creation of an element that contains the menu, formatting it appropriately (typically using CSS), and making it initially invisible. When the user clicks on the page, the pop-up menu should be shown by making the associated element visible. You then need to respond to the user mousing over each item in the menu by visually highlighting that item and un-highlighting the previously-highlighted item. Then, when the user clicks, you have to hide the menu again before responding to the user's action.

All this detailed low-level coding can take weeks to get right, especially when dealing with multiple types of browsers and different browser versions. Since all you want to do in this case is have a pop-up menu that allows the user to choose an action, it just isn't worth doing all this low-level work yourself. Instead, you would typically make use of one of the available user interface libraries to do all the hard work for you.

These user interface libraries are written in JavaScript, and you typically add them to your web site by making the JavaScript library file(s) available for download, and then adding a line like the following to your HTML page to import the JavaScript library:

    <script type="text/javascript" src="library.js"></script>

If you are writing your own web application from scratch, you would then make calls to the library to implement the user interface for your application. However, many of the web application frameworks that include a user interface library will write the necessary code for you, making even this step unnecessary.

Web Servers

In many ways a web server is the least interesting part of a web application: the web server listens for incoming HTTP requests from web browsers, and returns either static content or the dynamic output of a program in response to these requests.

There are many different types of web servers, ranging from the pure-Python SimpleHTTPServer included in the Python Standard Library, through more fully-featured servers such as CherryPy, and of course the most popular industrial-strength web server of them all: Apache.

One of the main consequences of your choice of web server is how fast your application will run. Obviously, a pure-Python web server will be slower than a compiled high-performance server such as Apache. In addition, writing CGI scripts in Python will cause the entire Python interpreter to be started up every time a request is received, so even if you are running a high-performance web server, your application can still run slowly if you don't structure your application correctly.
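To give a sense of the pure-Python end of that spectrum, this is everything needed to serve the files in the current directory using the standard library alone (Python 2 module names; convenient for development, but not something to put in front of real traffic):

    # Serve the current directory over HTTP on port 8000.
    import SocketServer
    import SimpleHTTPServer

    httpd = SocketServer.TCPServer(("", 8000),
                                   SimpleHTTPServer.SimpleHTTPRequestHandler)
    httpd.serve_forever()       # visit http://localhost:8000/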
A slow web server doesn't just affect your application's responsiveness: if your server runs slowly, it won't take many requests to overload the server.

Another consequence of your choice of web server is how your application's code interacts with the end user. The HTTP protocol itself is stateless; that is, each incoming request is completely separate, and a web page handler has no way of knowing what the user has done previously unless you explicitly code your pages in such a way that the application's state is passed from one request to the next (for example, using hidden HTML form fields). Because some web servers run your Python code only when a request comes in, there is often no way of having a long-running process sitting in the background that keeps track of the user's state or performs other capabilities for your web page handlers. For example, an in-memory cache might be used to improve performance, but you can't easily use such a cache with CGI scripts, as the entire interpreter is restarted for every incoming HTTP request.

While the choice of web server will often have been made for you, for example by the server you are running your system on, or by the web application framework you are using, it is still worthwhile understanding the issues and consequences involved in the choice of a web server, and how your application design can affect the performance and scalability of your overall system.

Summary

While you could choose to use Google Maps to create a slippy map experience for your users, it is just as easy to combine your own geo-spatial data source, such as a spatially-enabled database, with a map renderer, a tile cache, a user-interface library, and a web server to create your own custom slippy map stack. Implementing your own stack means you aren't limited by Google's licensing requirements, and can use your own map data and render the maps in whatever way you like. You also have far more flexibility in terms of what information gets displayed, and can go beyond what is possible using Google Maps, for example by implementing geo-spatial editing and analysis tools directly within your slippy map.

Further resources on this subject:

- Python Graphics: Animation Principles [Article]
- Python 3: Object-Oriented Design [Article]
- Animating Graphic Objects using Python [Article]
- Python Testing: Beginner's Guide [Book]

Tips and Tricks for Effectively Using ASP.NET

Packt
15 Dec 2010
4 min read
ASP.NET Site Performance Secrets
Simple and proven techniques to quickly speed up your ASP.NET website

- Speed up your ASP.NET website by identifying performance bottlenecks that hold back your site's performance, and fixing them
- Tips and tricks for writing faster code and pinpointing those areas in the code that matter most, thus saving time and energy
- Drastically reduce page load times
- Configure and improve compression, the single most important way to improve your site's performance
- Written in a simple problem-solving manner, with a practical hands-on approach and just the right amount of theory you need to make sense of it all

Monitor the performance of your site continuously

Tip: The performance of your website is affected both by things you control, such as code changes, and by things you cannot control, such as increases in the number of visitors or server problems. Because of this, it makes sense to monitor the performance of your site continuously. That way, you find out that the site is becoming too slow before your manager does.

Reduce the "time to first byte"

Tip: The "time to first byte" is the time it takes your server to generate a page, plus the time taken to move the first byte over the Internet to the browser. Reducing that time is important for visitor retention: you want to give visitors something to look at, and provide confidence that they'll have the page in their browser soon. Reducing it involves making better use of system resources such as memory and CPU.

Use caching to improve website performance

Tip: Caching allows you to store individual objects, parts of web pages, or entire web pages in memory, either in the browser, a proxy, or the server. That way, those objects or pages do not have to be generated again for each request, giving you:

- Reduced response time
- Reduced memory and CPU usage
- Less load on the database
- Fewer round trips to the server, when using browser or proxy caching
- Reduced retrieval times when the content is served from a proxy cache, by bringing the contents closer to the browser

Build your projects in release mode

Tip: If your site is a web-application project rather than a website, or if your website is part of a solution containing other projects, be sure to build your releases in release mode. This removes debugging overhead from your code, so it uses less CPU and memory.

Reduce round trips between browser and server

Tip: Round trips between browser and server can take a long time, increasing wait times for the visitor, so it pays to cut down on them. The same goes for round trips between the web server and the database server.

Use permanent redirects

Tip: If you are redirecting visitors to a new page because the old page is outdated, use a permanent 301 redirect. Browsers and proxies will update their caches, and search engines will use the new page as well. That way, you reduce traffic to the old page. You can issue a 301 redirect programmatically:

    Response.StatusCode = 301;
    Response.AddHeader("Location", "NewPage.aspx");
    Response.End();

For .NET 4 or higher:

    Response.RedirectPermanent("NewPage.aspx");

Avoid hotlinking

Tip: Hotlinking is the practice of showing images on one site by linking directly to the image files on another site. If this happens to your images, another webmaster gets to show your images on their site, while you pay for the additional bandwidth and incur the additional load on your server.
A great little module that prevents hotlinking is the LeechGuard Hot-Linking Prevention Module, at http://www.iis.net/community/default.aspx?tabid=34&i=1288&g=6.

Reduce the memory taken by session state

Tip: If you decide that session state is taking too much memory, here are some solutions:

- Reduce the session state lifetime
- Reduce the space taken by session state
- Use another session mode
- Stop using session state

Keep the number of cookies down

Tip: ASP.NET disables all output caching if you set a cookie, to make sure the cookie isn't sent to the wrong visitor. Since setting cookies and proxy caching also don't go together performance-wise, you'll want to keep the number of cookies you set to a minimum. This can be done by not setting a cookie on every request, but only when strictly needed.

Minimize the duration of locks

Tip: Acquire locks on shared resources just before you access them, and release them immediately after you are finished with them. By limiting the time each resource is locked, you minimize the time threads need to wait for resources to become available.
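As a sketch of that pattern in C# (the cache dictionary and the ExpensiveLookup method are invented for the example), note how the slow work happens outside the lock, which is held only long enough to read or publish a value:

    using System.Collections.Generic;

    class ResourceCache
    {
        private readonly object cacheLock = new object();
        private readonly Dictionary<string, object> cache =
            new Dictionary<string, object>();

        public object Get(string key)
        {
            object value;
            lock (cacheLock)                  // hold the lock only for the lookup
            {
                if (cache.TryGetValue(key, out value))
                    return value;
            }
            value = ExpensiveLookup(key);     // slow work, done without the lock
            lock (cacheLock)                  // re-acquire briefly to publish
            {
                cache[key] = value;
            }
            return value;
        }

        private object ExpensiveLookup(string key)
        {
            // Stand-in for a database call or other slow operation.
            return key.ToUpper();
        }
    }

With this scheme, two threads can occasionally duplicate the expensive call, but neither ever blocks the other for longer than a dictionary operation, which is usually the better trade.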