
How-To Tutorials - Programming

1081 Articles

Tcl/Tk: Handling String Expressions

Packt
02 Mar 2011
11 min read
Tcl/Tk 8.5 Programming Cookbook: Over 100 great recipes to effectively learn Tcl/Tk 8.5

• The quickest way to solve your problems with Tcl/Tk 8.5
• Understand the basics and fundamentals of the Tcl/Tk 8.5 programming language
• Learn graphical user interface development with the Tcl/Tk 8.5 widget set
• Get a thorough and detailed understanding of the concepts with a real-world address book application
• Each recipe is a carefully organized sequence of instructions to efficiently learn the features and capabilities of the Tcl/Tk 8.5 language

When I first started using Tcl, everything I read or researched stressed the mantra "Everything is a string". Coming from a strongly typed coding environment, I was used to declaring variable types, and in Tcl this was not needed. A set command could (and still does) create the variable and assign the type on the fly. For example, set variable "7" and set variable 7 will both create a variable containing 7. However, with Tcl, you can still print the variable containing a numeric 7 and add 1 to the variable containing a string representation of 7. It still holds true today that everything in Tcl is a string. When we explore the Tk toolkit and widget creation, you will rapidly see that widgets themselves have a set of string values that determine their appearance and/or behavior.

As a prerequisite for the recipes in this article, launch the Tcl shell as appropriate for your operating system. You can access Tcl from the command line to execute the commands.

As with everything else we have seen, Tcl provides a full suite of commands to assist in handling string expressions. However, due to the sheer number of commands and subsets, I won't be listing every item individually in the following section. Instead, we will create numerous recipes and examples to explore in the following sections. A general list of the commands is as follows:

  Command   Description
  string    Contains multiple keywords allowing for string manipulation and data-gathering functions.
  append    Appends to a string variable.
  format    Formats a string in the same manner as C sprintf.
  regexp    Regular expression matching.
  regsub    Performs substitution, based on regular expression matching.
  scan      Parses a string using conversion specifiers in the same manner as C sscanf.
  subst     Performs backslash, command, and variable substitution on a string.

Using the commands listed in the table, a developer can address all their needs as applies to strings. In the following sections, we will explore these commands as well as many subsets of the string command.

Appending to a string

Creating a string in Tcl using the set command is the starting point for all string commands. This will be the first command for most, if not all, of the following recipes. As we have seen previously, entering set variable value on the command line does this. However, to fully implement strings within a Tcl script, we need to interact with these strings from time to time, for example, with an open channel to a file or HTTP pipe. To accomplish this, we will need to read from the channel and append to the original string. To accomplish appending to a string, Tcl provides the append command. The append command is as follows:

  append variable value value value...

How to do it…

In the following example, we will create a string of comma-delimited numbers using the for control construct. Return values from the commands are provided for clarity.
Enter the following commands:

  % set var 0
  0
  % for {set x 1} {$x <= 10} {incr x} { append var , $x }
  % puts $var
  0,1,2,3,4,5,6,7,8,9,10

How it works…

The append command accepts a named variable to contain the resulting string and a space-delimited list of strings to append. As you can see, the append command accepted our variable argument and a string containing the comma. These values were used to append to the original variable (containing a starting value of 0). The resulting string, output with the puts command, displays our newly appended variable complete with commas.

Formatting a string

Strings, as we all know, are our primary way of interacting with the end user. Whether presented in a message box or simply directed to the Tcl shell, they need to be as fluid as possible in the values they present. To accomplish this, Tcl provides the format command. This command allows us to format a string with variable substitution in the same manner as the ANSI C sprintf procedure. The format command is as follows:

  format string argument argument argument...

The format command accepts a string containing the value to be formatted as well as % conversion specifiers. The arguments contain the values to be substituted into the final string. Each conversion specifier may contain up to six sections: an XPG2 position specifier, a set of flags, a minimum field width, a numeric precision specifier, a size modifier, and a conversion character. The conversion specifiers are as follows:

  Specifier   Description
  d or i      Converts an integer to a signed decimal string.
  u           Converts an integer to an unsigned decimal string.
  o           Converts an integer to an unsigned octal string.
  x or X      Converts an integer to an unsigned hexadecimal string. The lowercase x produces lowercase hexadecimal notation; the uppercase X produces uppercase hexadecimal notation.
  c           Converts an integer to the Unicode character it represents.
  s           No conversion is performed.
  f           Converts the number provided to a signed decimal string of the form xxx.yyy, where the number of y's is determined by the precision (6 decimal places by default).
  e or E      Scientific notation; if the uppercase E is used, it appears in the string in place of the lowercase e.
  g or G      If the exponent is less than -4 or greater than or equal to the precision, converts the number as with %e or %E; otherwise converts in the same manner as %f.
  %           Performs no conversion; it merely inserts a % character into the string.

There are three differences between the Tcl format command and the ANSI C sprintf procedure:

• The %p and %n conversion switches are not supported.
• The %c conversion only accepts an integer value.
• Size modifiers are ignored when formatting floating-point values.

How to do it…

In the following example, we format a long date string for output on the command line. Return values from the commands are provided for clarity. Enter the following commands:

  % set month May
  May
  % set weekday Friday
  Friday
  % set day 5
  5
  % set extension th
  th
  % set year 2010
  2010
  % puts [format "Today is %s, %s %d%s %d" $weekday $month $day $extension $year]
  Today is Friday, May 5th 2010

How it works…

The format command successfully replaced the conversion-specifier regions of the string with the assigned variables.
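Beyond plain %s and %d substitution, the width, precision, and zero-padding flags are where format earns its keep. The following transcript is a minimal sketch of these flags; the values shown are illustrative and not part of the recipe above:

  % format "%05d" 42
  00042
  % format "%8.2f" 1234.5678
   1234.57
  % format "%-10s|" Total
  Total     |
  % format "%x" 255
  ff

The first call zero-pads to a width of 5, the second rounds to two decimal places in a field of 8, the third left-justifies within a width of 10, and the last renders an integer in lowercase hexadecimal.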
Matching a regular expression within a string

Regular expressions provide us with a powerful method to locate an arbitrarily complex pattern within a string. The regexp command is similar to a Find function in a text editor. You search a string for a character or a pattern of characters; the command returns a Boolean value indicating success or failure and populates a list of optional variables with any matched strings. The -indices and -inline switches modify this behavior. But it doesn't stop there; by providing switches, you can control the behavior of regexp. The switches are as follows:

  Switch       Behavior
  -about       No actual matching is made. Instead, regexp returns a list containing information about the regular expression, where the first element is a subexpression count and the second is a list of property names describing various attributes of the expression.
  -expanded    Allows the use of expanded regular expressions, wherein whitespace and comments are ignored.
  -indices     Returns a list of two decimal strings, containing the indices in the string of the first and last characters in the matched range.
  -line        Enables newline-sensitive matching, similar to passing the -linestop and -lineanchor switches.
  -linestop    Changes the behavior of [^] bracket expressions and the "." character so that they stop at newline characters.
  -lineanchor  Changes the behavior of ^ and $ (anchors) so that they match the beginning and end of a line.
  -nocase      Treats uppercase characters in the search string as lowercase.
  -all         Causes the command to match as many times as possible and returns the count of the matches found.
  -inline      Causes regexp to return a list of the data that would otherwise have been placed in match variables. Match variables may NOT be used if -inline is specified.
  -start       Allows us to specify a character index from which searching should start.
  --           Denotes the end of switches being passed to regexp. Any argument following this switch will be treated as an expression, even if it starts with a "-".

Now that we have a background in the switches, let's look at the command itself:

  regexp ?switches? expression string ?matchvar? ?submatchvar ...?

The regexp command determines if the expression matches part or all of the string and returns 1 if the match exists or 0 if it is not found. If variables (for example, myNumber or myData) are passed after the string, they are used to store the matched strings. Keep in mind that if the -inline switch has been passed, no match variables should be included in the command.

Getting ready

To complete the following example, we will need to create a Tcl script file in your working directory. Open the text editor of your choice and follow the next set of instructions.

How to do it…

A common use for regexp is to accept a string containing multiple words and split it into its constituent parts. In the following example, we will create a string containing an IP address and assign the octet values to named variables. Note that the literal dots are escaped as \. so that they are not treated as the regular expression "match any character" wildcard, and the pattern is enclosed in braces so that the Tcl interpreter does not perform its own backslash substitution first. Enter the following commands:

  % set ip 192.168.1.65
  192.168.1.65
  % regexp {([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})} $ip all first second third fourth
  1
  % puts "$all \n$first \n$second \n$third \n$fourth"
  192.168.1.65
  192
  168
  1
  65

How it works…

As you can see, the IP address has been split into its individual octet values. What regexp has done is match groupings of decimal characters [0-9] of a varying length of 1 to 3 characters {1,3}, delimited by a literal "." character. The original IP address is assigned to the first variable (all), while the octet values are assigned to the remaining variables (first, second, third, and fourth).
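The -all and -inline switches combine nicely when you want every match returned as a list rather than counted into variables. Here is a minimal sketch; the sample string is invented for illustration:

  % set data "widgets: 4, gadgets: 7, gizmos: 12"
  widgets: 4, gadgets: 7, gizmos: 12
  % regexp -all -inline {[0-9]+} $data
  4 7 12

Because -inline returns the matches as a Tcl list, the result can be fed directly into foreach or other list commands without any intermediate match variables.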
Performing character substitution on a string

If regexp is a Find function, then regsub is the equivalent of Find and Replace. The regsub command accepts a string and, using regular expression pattern matching, locates and, if desired, replaces the pattern with the desired value. The syntax of regsub is similar to regexp, as are the switches. However, additional control over the substitution is added. The switches are as listed next:

  Switch       Description
  -all         Causes the command to perform substitution for each match found. The & and \n sequences are handled for each substitution.
  -expanded    Allows the use of expanded regular expressions, wherein whitespace and comments are ignored.
  -line        Enables newline-sensitive matching, similar to passing the -linestop and -lineanchor switches.
  -linestop    Changes the behavior of [^] bracket expressions so that they stop at newline characters.
  -lineanchor  Changes the behavior of ^ and $ (anchors) so that they match the beginning and end of a line.
  -nocase      Treats uppercase characters in the search string as lowercase.
  -start       Allows specification of a character offset in the string from which to start matching.

Now that we have a background in the switches as they apply to the regsub command, let's look at the command:

  regsub ?switches? expression string substitution ?variable?

The regsub command matches the expression against the string provided and either copies the string to the variable or returns the string if a variable is not provided. If a match is located, the portion of the string that matched is replaced by substitution. Whenever the substitution contains an & or a \0 sequence, it is replaced with the portion of the string that matched the expression. If the substitution contains \n (where n represents a numeric value between 1 and 9), it is replaced with the portion of the string that matched the nth parenthesized subexpression. Additional backslashes may be used in the substitution to prevent interpretation of the &, \0, and \n sequences and of the backslashes themselves. As both the regsub command and the Tcl interpreter perform backslash substitution, you should enclose the pattern and substitution in curly braces to prevent unintended substitution.

How to do it…

In the following example, we will substitute every instance of the word one with the word three. Return values from the commands are provided for clarity. Enter the following commands:

  % set original "one two one two one two"
  one two one two one two
  % regsub -all {one} $original three new
  3
  % puts $new
  three two three two three two

How it works…

As you can see, the value returned from the regsub command is the number of matches found. The string original has been copied into the string new, with the substitutions completed. With additional switches, you can easily parse a lengthy string variable and perform bulk updates. I have used this to rapidly parse a large text file prior to importing the data into a database.
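The \n backreferences are what make regsub useful for reordering text rather than just replacing it. Here is a minimal sketch that swaps a "last, first" name around; the sample data is invented for illustration:

  % regsub {(\w+), (\w+)} "Doe, John" {\2 \1} name
  1
  % puts $name
  John Doe

The braces around the substitution keep the Tcl interpreter from consuming the backslashes before regsub sees them, exactly as recommended above.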


GnuCash: Payroll Management, Depreciation, and Owner's Drawing

Packt
02 Mar 2011
5 min read
Gnucash 2.4 Small Business Accounting: Beginner's Guide

Employees and payroll

Payroll is a financial record of salary and benefits provided to an employee. This is one of the more complex transactions in terms of accounting, simply because there are many different deductions and matching payments to be made to various tax authorities, health insurance providers, and other vendors. Payroll is an expense. Deductions may have to be stored in a short-term liability account. This is useful for things such as taxes, which may be paid to the government at a different time from paying employee salaries.

Time for action – making payroll entries in GnuCash

We are going to enter the payroll accounting entries for one employee, with appropriate deductions for federal and state income tax and FICA tax:

1. We have created a spreadsheet of the calculations that we are going to use in making the payroll entries. Take a moment to study the following screenshot:
2. Create the expense accounts: You need two expense accounts, one Payroll Expenses for gross pay and another Employer FICA Tax for the company contribution to FICA tax.
3. Create the liability accounts: The tax amounts deducted from the employee's gross pay are owed to the appropriate government agencies. We need to create liability accounts to hold these amounts until they are due. Go ahead and create three accounts, Federal Income Tax, VA Income Tax, and FICA Tax, of Account Type Liability with Liabilities as the Parent Account, as shown in the following screenshot:

What just happened?

Monthly salaried employees are typically hired with a gross pay. However, when it comes to making payment, there will be several deductions. When you record payroll in GnuCash, it is done with a single split transaction. This split transaction will populate the appropriate expense and liability accounts. If you want to look up any of the payroll details for a particular employee at any time, you can simply open and view the split transaction.

Net pay

For example, most employees in the US will typically have the following deductions:

• Federal income tax
• State income tax
• FICA tax

There will be other deductions, such as county or local taxes, separate deductions for health, dental, and vision insurance, 401(k) or other retirement plan contributions, and so on. The net pay thus calculated becomes payable to the employee, and it becomes an expense to the business.

Liability accounts

The business owes these deducted amounts to the respective tax authorities. In addition, the bookkeeping system must keep track of the company contribution to social security tax, Medicare tax, health insurance, 401(k), and so on. These are also employee-related expenses to the business. However, these payments are not made at the same time as the payroll. So, these amounts must be accumulated in the respective liability accounts so that the correct amounts can be paid when they become due.

Calculation spreadsheet

As we said, GnuCash doesn't have an integrated payroll module. Any calculation of deductions and company contributions must be made outside of GnuCash. This is the reason why we used a payroll calculation spreadsheet in the above tutorial. The spreadsheet can have all the formulas and lookup tables set up so that you can enter the gross salary in one cell and get all the computed values ready to be posted into GnuCash.
Split transaction map

The following split transaction map covers just the three taxes listed previously, of which the federal and state income taxes are entirely payable by the employee, while the FICA tax has an employee contribution and an equal company contribution:

  Account                           Increase             Decrease
  CurrentAssets:Checking                                 Net Salary
  Expenses:Salaries                 Gross Salary
  Liabilities:Federal Income Tax    Federal Income Tax
  Liabilities:VA Income Tax         VA Income Tax
  Liabilities:FICA Tax              Employee FICA Tax
  Expenses:FICA Tax                 Company FICA Tax
  Liabilities:FICA Tax              Company FICA Tax

Payroll FAQ

Here is a list of frequently asked questions about the payroll process, and our answers:

Q: If I use a single Payroll account for all employees, how will I see per-employee information?
A: Use reports to view the information for each employee.

Q: How do I print payroll checks?
A: When making the payroll entry, enter only the employee name in the Description field. If you decide to use GnuCash's check printing capabilities, the check will automatically be made out to the employee name correctly. If you want to record other information in the transaction besides the employee name, use the Notes field.

Employee and expense voucher

When employees spend their own money on behalf of the business, or they draw a cash advance from the business and need to account for expenses incurred, or they use a company card for business expenses, they need to submit an expense voucher to account for the amounts. Under the Business menu you will find the Employee menu item with the Employee, Expense Voucher, and Process Payment modules.

Have a go hero – adding more deductions to payroll

Create a payroll transaction showing a deduction for health insurance premium as well.
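To make the split transaction map concrete, here is a hypothetical worked example; the figures are invented for illustration and your actual rates will come from your own payroll spreadsheet. Assume a gross monthly salary of $1,000, federal withholding of $150, VA withholding of $50, and FICA at 7.65% for both the employee and the employer:

  Gross salary (Expenses:Salaries)               $1,000.00
  Federal income tax withheld (liability)          $150.00
  VA income tax withheld (liability)                $50.00
  Employee FICA, 7.65% (liability)                  $76.50
  Net salary paid from Checking                    $723.50
  Company FICA, 7.65% (expense and liability)       $76.50

The transaction balances: the expense side (1,000.00 gross salary + 76.50 company FICA = 1,076.50) equals the sum of the payment and liabilities (723.50 + 150.00 + 50.00 + 76.50 + 76.50 = 1,076.50).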


Sage ACT! 2011: Creating a Quick Report

Packt
24 Feb 2011
6 min read
Sage ACT! 2011 Dashboard and Report Cookbook

Introduction

The ACT! program provides two different types of reports: Quick Reports and standard reports. The standard report requires that a template be prepared in advance. The template may be brand new, an existing template, or a modified version of an existing template. While the standard report's template design is very flexible, it does require significant effort to design and organize the template. For complex reporting needs, or for reports that are frequently required, standard reports are the best choice.

This article shows how to run the various Quick Reports available in the ACT! program. You'll be shown how to set them up, control headers and footers, and run the various Quick Reports. The ease of creating a Quick Report makes them ideal for single-use reports. The configuration of a Quick Report can't be saved, so if a Quick Report configuration is frequently required, you should consider creating a standard report template instead. The ACT! Demo database is used for the tasks in this article.

Setting preferences for the quick reports

The preferences for all the Quick Reports can be individually set in the ACT! general preferences. Unless blocked, they can also be set at the time you run the Quick Report. Here we will set the preferences for the Contact List Quick Report.

Getting ready

Because we are setting global preferences, there isn't any preparation required other than to have an ACT! database open.

How to do it...

1. From any screen in the ACT! program, click on the Tools menu and choose Preferences....
2. In the Preferences dialog, click on the Communication tab.
3. In the Printing section, click on the Quick Print Preferences... button.
4. In the Views section, select Contact List.
5. For Print orientation, click on the radio button for Landscape.
6. For Print sizing, click on the radio button for Actual size.
7. For Other options, check Same font in my list view.
8. Check Show Quick Print Options when printing.
9. Click the Header Options button. Check Page Number, Print Date, Print Time, and My Record. Click OK.
10. Click the Footer Options button. Uncheck all options and click OK.
11. Click OK to close the Quick Print Preferences dialog.
12. Click OK to close the Preferences dialog.

How it works...

At this point, we are setting the general options for the Quick Print reports. In the Quick Print Preferences dialog's Views section, all of the possible Quick Print report types are shown, and each can be configured separately. This task used the Contact List Quick Report as an example. The Portrait/Landscape option determines the orientation of the printed report. As a general rule, the Print sizing option should be set to Actual size, because the Fit to page option shrinks the report both horizontally and vertically to fit on one page; used with the Contact List Quick Report, this could result in a final report that is not legible. The font selection would most likely be the same as used on the contact list view, but the option does provide for changing the font if desired. The header and footer options are the same, and typically one or the other would be used, but not both. The My Record option prints the name of the user running the report. The Show Quick Print Options when printing option allows for adjusting the report options when running the report; unchecked, the report will go directly to the printer.

There's more...

The Quick Reports don't provide any means for setting the page margins.
While the Fit to page option should typically be set to Actual size for a list view Quick Report, you will likely want to use the Fit to page option when doing a Quick Print of a detail view.

Selecting and organizing the columns for a Contact List Quick Report

In this task, we continue working with the Contact List Quick Report. The contact list is able to display all the fields in the contact table. While possible, in most cases this would be impractical. To create a meaningful report, you need to decide which fields you want to show in the report. Because you will be printing the contact list, make sure the contact list is showing the fields that you want and that they are arranged in the desired order. In this task, we will set up a name and address list.

Getting ready

There isn't any preparation required for the Contact List Quick Report other than having an ACT! database open.

How to do it...

1. In the navigation bar, on the right-hand side of the screen, click on Contacts.
2. In the toolbar, click on the List View button.
3. Anywhere in the List View, right-click and select Customize Columns....
4. In the Customize Columns dialog, add fields to the list by clicking on the desired field in the Available fields box and then clicking the top arrow button (points right >).
5. Remove unwanted fields by clicking on the field in the Show as columns in this order box and then clicking on the second-from-the-top arrow button (points left <).
6. Adjust the order of the fields by clicking on a field in the Show as columns in this order box and clicking on the Move Up or Move Down buttons to move the field to the desired location.
7. Click the OK button to close and save the field configuration.
8. In the title bar of the contact list, point the cursor at the division between columns so that the cursor becomes a double-ended arrow. Drag the division line to adjust the column width.
9. Adjust the remaining columns in the same manner.

How it works...

Because the Quick Reports are basically screen-image reports, it's necessary to make the screen look the way you want the report to look. For the Contact List Quick Report (and most list reports), this requires selecting the fields (columns) that you want in the report and arranging and sizing them so they look the way you want them to look on the printed report. The Customize Columns dialog provides the mechanism for selecting the fields to include and positioning them in the desired order. Sizing the columns is a bit harder. The column divisions can be dragged to widen or narrow a column. The column being sized is the column to the left of the column division.

There's more...

The Quick Reports don't provide any means for directly filtering the output. The best means of filtering the contacts included in the Contact List Quick Report is to use a lookup of the contacts to include in the report. You can also apply a simple sort on the contact list by clicking on the column title of the column you want to use for sorting. Each time you click on the column title, you reverse the sort order.


How to set up Oracle Order Management

Packt
21 Feb 2011
4 min read
In order to set up Oracle Order Management, there are some mandatory and some optional steps. Most of the information that is required while setting up Oracle Order Management is shared with other modules. Some common features include the following:

• Inventory organization
• Key and descriptive flexfields
• Unit of Measure (UOM)
• Price list
• Customer
• Picking rules
• System options

System options

System options are the key values that are used for setting up the Oracle Order Management Suite. These parameters contain a list of values that should be used as per our business requirement. We can see some common parameter values in the following figure:

Profile options

Profile options are the system profiles that we assign as per our requirement. These profiles fulfill critical business requirements. We can use these profiles on four different levels, as follows:

• Site
• Application
• Responsibility
• User

Document sequence

The document sequence is used for generating sequential numbers for orders. Using the document sequence, an automatic document sequence number will be generated. These document numbers are user defined. We can identify where new document sequencing should start and where it is going to end. Also, we can have a unique number sequence for a particular time period. Using the document sequence, we differentiate our document sequencing for sales order documents. We can assign these document sequences to particular transaction types. Each transaction type has its own document sequence numbering.

In the Name field, we will give a unique name for the number sequence. Select the application for which the document sequence will be used, and enter the start and end dates between which this sequence will be applicable. If we want to keep this document sequence for an unspecified period, we will leave the To field blank. For automatic number generation, select Automatic in the Type field. For the assignment of the document sequence, we will again select Order Management in the Application field, which we selected at the time of defining the new sequence. In the Category field, we will select the order type for which we require the document sequence. In the Ledger field, select the ledger, and select Automatic in the Method Type field. Under the Assignment tab, we will again select the sequence that should be used for the transaction type and the Start Date from which this assignment will be applicable.

Transaction type

We use transaction types to manage different types of sales orders. These transaction types can be defined according to business requirements (how we want to differentiate our orders). There are various options by which we can classify a new transaction type, as follows:

• Export sales
• Local sales
• Territory-based
• Price-based

Workflows are assigned to transaction types. We can assign price lists, payment terms, invoicing rules, and the inventory organization from where the items against the order will be picked and shipped. To create a new transaction type in Oracle Order Management, navigate to Setup | Transaction Types. Here we will give the name of the new transaction type, such as Standard Order Type and so on. Now we will attach the Fulfillment Flow and Negotiation Flow to this transaction type. We will also assign an effective date to this transaction type in order to start working from that date. Also, we can assign the price list and the picking rule to this transaction type. Now, under the Shipping tab, we will provide the information for the Warehouse from where the inventory should be picked.
We can also leave that blank if we have specified it at the picking-rule level, or we can specify it at the order-entry level. We can specify the FOB field. We can attach freight terms to the transaction type, as well as specify the shipping method at the transaction type level. Under the Finance tab, we enter the information that will be required by Oracle Accounts Receivable at the time of invoice creation. We can also specify the account for Cost of Goods Sold (COGS) at the transaction type level; otherwise, we have the option to pick it from the inventory organization. The Invoice Source type will be the source type used for invoices interfaced to Accounts Receivable. We can also specify a particular invoicing rule for the transaction type.


Sage ACT! 2011: Working with the ACT! Dashboards

Packt
18 Feb 2011
6 min read
In ACT! CRM, dashboards allow you to access key information in the form of a graphical interface. You can filter a Dashboard so that it contains just the information you need, or you can tweak the various elements of a Dashboard to give it a different look. Administrators and Managers of your ACT! database can create brand-new Dashboards if they're required. If you want more details about the information you see in a Dashboard, you can drill down into the Dashboard with a simple double-click to access all the juicy details. At that point, you can edit or add to your information, and the Dashboard will update automatically. You can even print out a hard copy of a Dashboard to preserve the contents for posterity. Because many of the Dashboards consist of pie charts and graphs, you might even want to copy one of them and paste it into other applications such as Word, Excel, or PowerPoint.

Getting familiar with the Dashboard layouts

Quite simply, a Dashboard is a graphical interface that gives you a visual snapshot of a part of your business. ACT! Dashboards let you view and work with the various information contained in your database in one easy-to-access location. Dashboards can take the form of charts, graphs, or even lists. Dashboards are associated with a database and not a user; therefore, all users share the same Dashboards. However, you can select your own Dashboard view in much the same way that you can select a layout. You can also filter the information that is shown in your Dashboard. A Dashboard consists of two parts:

• The Dashboard layout: The Dashboard layout determines which Dashboard components you see and how the filters are set. ACT! comes with six Dashboard layouts. However, Managers and Administrators can create additional Dashboard layouts using the Dashboard Designer or make permanent changes to the existing ones.
• Dashboard components: Each Dashboard layout consists of one or more components. A component displays different types of data from the ACT! database. For example, a Dashboard layout might include a component that lists a user's top 10 current sales opportunities, another component that graphs the activities of a specific user, and a component that displays a pie chart of the company's current pipeline. A layout can have a maximum of six components.

Getting ready

In order to really take advantage of the ACT! Dashboard, you'll need to make sure that your database contains a variety of information. Specifically, you'll need to make sure that your database contains a few Contacts, Activities, and Opportunities if you're going to view any of those Dashboards.

How to do it...

1. Click the Dashboard icon on ACT!'s navigation bar. The following screenshot shows you an example of the default Dashboard page:
2. Choose the Dashboard layout you want from the Dashboard drop-down list, at the top-left side of the Dashboard.

How it works...

Many of the layouts consist of the exact same components. However, the components in each of the layouts are filtered differently, giving the layouts a bit of variety.

There's more...

Out of the box there are five Dashboard layouts:

• ACT! Activities Dashboard: Shows the activities of the user currently signed into ACT!, like the one you see in the following screenshot:
• ACT! Administration Dashboard: Lists the database users and shows when they've logged in and out of the database. Also lists any remote sync users, the date of their last sync, and how many days they have before their remote database expires if they don't synchronize.
• ACT! Contact Dashboard: Gives you a list of recently created contacts, recently edited contacts, and the number of fields that have changed.
• ACT! Default Dashboard: Includes three activity and three opportunity components.
• ACT! Opportunities Dashboard: Provides you with four different opportunity components, including sales analysis by stage, value, and product.

Each Dashboard layout is actually a file ending with the .dsh extension. You'll find them safely filed in the Dashboards sub-folder of the database files folder associated with your database.

Accessing information from Dashboards

Once you've become familiar with the various Dashboard layouts, your next step is to start exploring the components found in each layout to see what data they contain. The components are generally arranged in a grid of two columns by three rows, for a total of six components per Dashboard layout. The basic components include:

• My Schedule At-A-Glance: Found on the Default and Activities Dashboards, this component lists the activities of the current user for the current day.
• Activities by User: Found on the Default and Activities Dashboards, this component displays the activities for the current user for the current month, sorted by type, in a bar chart. The total numbers of activities are shown in the following screenshot. In addition, you can hover your mouse over a bar section to see the number of activities for that activity type.
• Activity List: Found on the Activities Dashboard, this component is identical to the My Schedule At-A-Glance component, except that it lists all the activities for the current month.
• Opportunity Pipeline by Stage: Found on the Opportunities and Default Dashboards, this component displays your open opportunities in the ACT! Sales Cycle process for the current month, sorted by stage, in a pie chart, and includes a recap on the side like the one you see in the following screenshot:
• Contact History Count by User Type: Found on the Contacts Dashboard, this component displays history items created by database users within a specified number of days, similar to what's shown in the following screenshot:
• Opportunities - Open by Product: Found on the Opportunities Dashboard, this component displays a pie chart of the open opportunities by product and by user.
• Top 10 Opportunities: Found on the Opportunities and Default Dashboards, this component displays a list, by company and opportunity name, of the top ten open opportunities in the ACT! Sales Cycle process for the current month.
• Closed Sales to Date: Found on the Opportunities and Default Dashboards, this component displays the weighted and total value of opportunities in the ACT! Sales Cycle process that you closed and won, in the form of a chart. Optionally, you can customize the component to indicate a sales goal, like the one shown in the following screenshot:


Managing Events using CiviCRM

Packt
17 Feb 2011
10 min read
Using CiviCRM

Registrant processing and event promotion are often the most challenging, time-consuming, and important pieces of the management process. CiviCRM provides flexible event management tools to define the nature of the event, determine the data and fees that must be collected, track participants as they register online or are entered by staff through the administrative tools, and develop the lists, nametags, and other resources you need to present an outstanding professional event.

Why host events?

Before digging further into things, it's worth taking a minute to ask the questions: why would you host events, and what exactly is an "event" as it relates to CiviCRM? Many not-for-profits exist as organizations that people donate to, become members of, or support in other ways. The purpose and services many organizations provide are not oriented around "walk-in" support or other face-to-face interactions with constituents, apart from events. Events provide an important (and often, the only) in-person interaction with your supporters, members, and others committed to the mission and vision of your organization. As such, the face and voice of the organization is most clearly seen and realized during events.

So what exactly is an "event"? In CiviCRM, event management tools allow you to do the following:

• Publish and advertise an event description including date, time, and location
• Register participants
• Calculate and collect fees
• Collect data about participants
• Track registrants' status
• Use more advanced elements, such as self-registration forms on your site with automated waiting lists

You are not likely to use the event tools for very small group meetings, such as a one-to-one meeting with a constituent, board meetings, committee meetings, and other such no-fee, basic events. For these events, it will be easier and simpler to use a CiviCRM activity, either with the pre-configured meeting activity type or through your own custom activity types.

Building and promoting your event

The first step in event management is to configure your event in the system. Once configured, you can begin collecting registrations and tracking participants. The event tools all reside under the navigation bar's Events menu. As with other areas of CiviCRM, the event administration tools are not always provided in a simple, task-oriented workflow. You are likely to find that the creation, configuration, registration, tracking, reporting, and managing happen in a more iterative pattern, rather than a linear one. We will begin by walking through the event creation process, touching on the various options available, and may circle back later to delve deeper into various areas.

Use the New Event option to begin the process of setting up an event. This opens an initial information and basic settings form. After saving it, you will be directed to a tabbed interface where you can optionally configure four other settings areas (or return to adjust this first form). You should carefully work through each tab in a sequential, wizard-style way when first becoming familiar with the event tools.

Information and settings

The first form defines basic information about the event, including its category (type), title, description, dates, and so on. You may notice that the very first field on this form is for selecting an event template. If no event template exists, a notice to this effect will appear at the top of the page.
Event templates are very useful if your organization hosts a number of very similar events, such as a monthly breakfast seminar series or a bi-monthly training workshop. From one event to the next, the structure, fees, location, and perhaps even the basic description remain similar. In such cases, you can save time and reduce errors by setting up an event template under Events | Event Templates. The template creation tool is almost identical to the five-tabbed form we will be working through now, with the exception of a few fields removed (such as event dates) that are presumed to be unique for each event. When creating a template, complete only those fields which are common to all (or most) events of this type. For example, if your monthly breakfast seminar hosts different speakers on different topics, you will want to leave the summary and description fields empty.

When a previously configured template is selected while on the Info and Settings form for a new event, the event is pre-populated with any data stored in the template. You will have the opportunity to review, adjust, and fill in any fields in your event. Once a template has been selected for a new event, there is no association back to the template. In other words, changing template settings at a later date will have no impact on existing events that used the template. Its only purpose is to pre-populate the event, and in doing so, to facilitate the event creation process.

Returning to the initial event creation form, we see event type and participant role fields. The event type is used to categorize your events, which can be useful when analyzing and managing events. You can, for example, define an event type for your annual fall conference and later run searches based on the event type, such as displaying all constituents who participated in any of the fall conferences over the last five years. The event type options are managed under Administer | CiviEvent | Event Types. Custom data fields used to collect information about your registrants can be configured based on these event types. Since events of the same type will often have similar data collection needs, this is a useful and efficient way to repurpose fields for multiple, similarly structured events. We will discuss custom data later in this section when we review the use of profiles. They are created and configured through Administer | Customize | Custom Data.

Participant roles categorize the nature of the participant. For example, your event may have attendees, guests, speakers, staff, volunteers, and other types of registrants you will track, but also want to classify. Participant roles are managed through Administer | CiviEvent | Participant Roles. Similar to event types, custom data sets may be constructed based on the participant's role. In this way, speakers can be prompted to provide a title and description of their talks, exhibitors can be asked questions about their needs for space, power, and equipment, volunteers can be asked their preferences for an assignment, and basic attendees can be asked their breakout session preferences.

When defining a participant role, you have the option of deciding if a role will be counted. We will see later in the event management tools that CiviCRM provides up-to-date counts of your participants, organized into various categories, one of which is the main participant count.
Typically, you will want to "count" any attendees who have paid or have committed to pay, but exclude from your count any attendees who have canceled or are in a non-attendee role. Many organizations will not count speakers or staff when reporting the number of attendees for a conference. By creating a staff role and choosing not to count it, you can still track the fact that they were present at the conference and generate name tags from the system for them, but exclude them from the counts provided to your Events Committee or Board of Directors in various reports. Be clear on who is included in your counts, as there are some purposes for which you will want a "full" count, such as the number of meals required at the event.

Returning to our form, the selection of a participant role in this location determines what role will be assigned when participants register using the event registration form. In most cases, you will select the Attendee role, as that is the standard role intended for event participants.

Peer pressure may be a useful tool in your event promotion toolbox. CiviCRM provides the option of exposing a participant listing to site visitors. When visited, a current list of all participants will be displayed. There are three listing templates included in the standard installation, namely Name Only, Name and Email, and Name, Status and Register Date. Unfortunately, at this time, there is no interface for setting up alternative collections of fields to be included in listings. If you have the ability and resources to edit PHP and Smarty .tpl templating files, you can create more templates and tell the system about them through Administer | CiviEvent | Participant Listing Templates. While participant listing pages may serve as an effective peer-pressure promotion tool, they may also be perceived as invasive to privacy by your attendees. Make sure the nature of the event (and the nature of your constituents) supports displaying such a list. In particular, be sensitive to the use of the name and e-mail template, as some contacts may not want their e-mail information disseminated in this way.

Enter the title of your event, keeping in mind that it will be publicly visible in the event information and registration pages. You will also want the event to be uniquely named to avoid confusion with other events. For example, if you host an annual conference, you might name it: Annual Conference 2010, Annual Conference 2011, and so on.

Use the Event Summary and Complete Description fields to describe the event, such as the topics covered, speakers, audience, and so on. As the name suggests, the Event Summary field should be brief and succinct. It will be included in RSS and iCal feeds generated by the system, which you may use for promotional efforts. The Complete Description field should be more complete, and is displayed on the event information page. Your RSS feeds allow other websites to automatically pull content from your site and display it on their site. The event feeds include information about your events, such as links to send their visitors to your registration pages. iCal feeds are another similar format for publishing your event information. People can set up their Google Calendar, Outlook, and other similar calendaring applications to automatically read in these feeds and display your events on the right date and time.

Define the event's start and end date/time. For a single-day event, it is sufficient to complete just the start date field.
We'll spend a bit of time in a moment on the waitlist option. Proceed to the final set of fields on this first step of the event wizard, and click to enable or disable the following options:

• Include Map to Event Location?: Inserts a Google or Yahoo! map on the information page with the event location plotted. You must have a location defined on the second step of the wizard and must have geocoding configured in your Global Settings to make use of this function.
• Public Event?: Determines if the event appears in RSS/iCal feeds, in the HTML event listing page, and in Drupal's CiviCRM Upcoming Events block. Turn this off if you have "invitation-only" events that you don't want to publicize through the automated methods.
• Is the Event Active?: Enables or disables the event. A disabled event will be hidden from RSS/iCal/HTML listings and cannot be visited through event information pages or registration forms.

Be careful about disabling past events, thinking they would not be accessed by site visitors. If you have older content articles on your site that reference the event information page, it's possible that people will visit older events to learn what information was covered. We will see later on that there are options for limiting the date/time window when people may register for the event, so you do not need to be concerned about people registering for past events inadvertently.

Creating a Budget for your Business with Gnucash

Packt
16 Feb 2011
4 min read
Why do you need to create a budget?

There are two main reasons why you may want to create budgets. The first is that you want a trip planner for your business: you will use this on a day-to-day basis to run your business and make decisions. The second reason is that you are seeking outside finance for your business from a bank, investor, or other lender. They will require you to submit your business plan along with projected financials.

Time for action – creating a budget for your business

You are going to create a budget for the next three months to serve as a guide for your operations. Typically, investors, banks, and other lenders will need financial projections for a longer period. As a minimum, they will need one-year projections, which may go up to 3 to 5 years in many cases.

1. From the menu select Actions | Budget | New Budget. A new budget screen will open.
2. Click on the Options toolbar button. The Budget Options dialog will open. For this tutorial, we are going to select a beginning date three months back. This is only for the purposes of this tutorial and will allow us to quickly run Budget vs. Actual reports. In the Budget Period pane, change the beginning date to a date three months ago.
3. Change the Number of Periods to 3. Type in the Budget Name MACS Jun-Aug Budget as shown in the following screenshot and click on OK. The screen will show a list of accounts with a column for each month. The date shown in the title of each column is the beginning of that period.
4. Now enter the values by simply clicking on the cell and entering the amount as shown in the following screenshot.

Using the Tab key while entering budget amounts: Don't use the Tab key. The value entered in the previous field seems to vanish into thin air if you use the Tab key. Instead, use the Enter key and the mouse. When you are done entering all the values, don't forget to save your changes.

Now that the budget has been created, you are ready to run the reports:

1. From the menu select Reports | Budget | Budget Report.
2. In the Options dialog, select all the Income and Expenses accounts in the Accounts tab.
3. Check Show Difference in the Display tab and click on OK to see the report as shown in the following screenshot:

We are going to create the Cash Flow Budget in a spreadsheet. Go ahead and copy the data from the preceding report to the spreadsheet of your choice. Put in additional rows and formulas along the lines shown below. We are showing the cash flow for a six-month period in the following screenshot to make it easier for you to see some of the trends and business challenges more clearly:

What just happened?

What if you had tomorrow's news... TODAY? His name is Gary Hobson. He gets tomorrow's newspaper today. He doesn't know how. He doesn't know why. All he knows is when the early edition hits his doorstep, he has twenty-four hours to set things right. You may recall that in the TV series Early Edition, Kyle Chandler, who plays the role of Gary Hobson, uses this knowledge to prevent terrible events each day. What if we told you that you can get tomorrow's news for your business today? You can prevent terrible events from happening to your business. You can get tomorrow's sales, expenses, and cash flow in the form of a budget. Mistakes are far less costly when made on paper than with actual dollars. Sometimes budgets are referred to as projections. For example, banks, investors, and lenders will ask for a business plan with profit and loss, balance sheet, and cash flow projections.
Other times, these are called forecasts, especially when referring to sales forecasts. Regardless of whether we call them budgets, projections, or forecasts, we are referring to the future. Unlike the rest of bookkeeping, which is concerned with the past, budgeting is the one area which tries to look into the crystal ball and attempts to see what the future might look like, or what you are committing to make it look like. If you are running a business without a budget, I am sure there are times when the thought flashes through your mind, "I wish I had known that earlier." Your budget is the crystal ball that enables you to see the future, and do something about it. Generally, when you complete a budget, you will have a number of revelations. For example, you might find that your cash flow is going into negative territory in the third month. The budget allows you to perceive problems before they occur and alter your plans to prevent those problems.
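As an illustration of the kind of revelation a budget can surface, here is a hypothetical three-month cash-flow projection; the figures are invented and are not taken from the book's MACS example:

  Month              Jun       Jul       Aug
  Cash inflows      8,000     6,500     5,000
  Cash outflows     7,000     7,500     7,000
  Net cash flow     1,000    -1,000    -2,000
  Opening balance   1,500     2,500     1,500
  Closing balance   2,500     1,500      -500

Even though each month's sales look respectable on their own, the closing balance goes negative in the third month, which is exactly the kind of problem you want to spot on paper rather than at the bank.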


An Overview of the Tcl Shell

Packt
15 Feb 2011
10 min read
Tcl/Tk 8.5 Programming Cookbook: Over 100 great recipes to effectively learn Tcl/Tk 8.5

• The quickest way to solve your problems with Tcl/Tk 8.5
• Understand the basics and fundamentals of the Tcl/Tk 8.5 programming language
• Learn graphical user interface development with the Tcl/Tk 8.5 widget set
• Get a thorough and detailed understanding of the concepts with a real-world address book application
• Each recipe is a carefully organized sequence of instructions to efficiently learn the features and capabilities of the Tcl/Tk 8.5 language

Introduction

So, you've installed Tcl, written some scripts, and now you're ready to get a deeper understanding of Tcl and all that it has to offer. So, why are we starting with the shell when it is the most basic tool in the Tcl toolbox?

When I started using Tcl, I needed to rapidly deliver a Graphical User Interface (GUI) to display video from IP-based network cameras. The solution had to run on Windows and Linux, and it could not be browser-based due to the end user's security concerns. The client needed it quickly, and our sales team had, as usual, committed to a delivery date without speaking to the developer in advance. So, with the requirement document in hand, I researched the open source tools available at the time, and Tcl/Tk was the only language that met the challenge. The original solution quickly evolved into a full-featured IP video security system with the ability to record and display historic video as well as to attach to live video feeds from the cameras. Next, search capabilities were added to review the stored video, along with a method to navigate to specific dates and times. The final version included configuring advanced recording settings such as resolution, color levels, frame rate, and variable-speed playback. All was accomplished with Tcl.

Due to the time constraints, I was not able to get a full appreciation of the capabilities of the shell. I saw it as a basic tool to interact with the interpreter, to run commands, and to access the file system. When I had the time, I returned to the shell and realized just how valuable a tool it is and how many capabilities I had failed to make use of. When used to its fullest, the shell provides much more than an interface to the Tcl interpreter, especially in the early stages of the development process. Need to isolate and test a procedure in a program? Need a quick debugging tool? Need real-time notification of the values stored in a variable? The Tcl shell is the place to go.

Since then, I have learned countless uses for the shell that would not only have sped up the development process, but also saved me several headaches in debugging the GUI and video collection. I relied on numerous dialog boxes to pop up values, or turned to writing debugging information to error logs. While this was an excellent way to get what I needed, I could have minimized the overhead in terms of coding by simply relying on the shell to display the desired information in the early stages. While dialog windows and error logs are irreplaceable, I now add in quick debugging by using the commands the shell has to offer. If something isn't proceeding as expected, I drop in a command to write to standard out and voila! I have my answer. The shell continues to provide me with a reliable method to isolate issues with a minimum investment of time.

The Tcl shell

The Tcl shell (tclsh) provides an interface to the Tcl interpreter that accepts commands from both standard input and text files. Much like the Windows command line or a Linux terminal, the Tcl shell allows a developer to rapidly invoke a command and observe the return value or error messages in standard output. The shell differs based on the operating system in use. For Unix/Linux systems, this is the standard terminal console, while on a Windows system, the shell is launched separately via an executable.

If invoked with no arguments, the shell interface runs interactively, accepting commands from the native command line. The input line is demarked with a percent sign (%), with the prompt located at the start position. If the shell is invoked from the command line (Windows DOS or a Unix/Linux terminal) and arguments are passed, the interpreter will accept the first as the name of a file to be read. Any additional arguments are processed as variables. The shell will run until the exit command is invoked or until it has reached the end of the text file. When invoked with arguments, the shell sets several Tcl variables that may be accessed within your program, much like the C family of languages. These variables are:

  Variable          Explanation
  argc              Contains the number of arguments passed in, with the exception of the script file name. A value of 0 is returned if no arguments were passed in.
  argv              Contains a Tcl list whose elements detail the arguments passed in. An empty string is returned if no arguments were provided.
  argv0             Contains the filename (if specified) or the name used to invoke the Tcl shell.
  tcl_interactive   Contains 1 if tclsh is running in interactive mode; otherwise it contains 0.
  env               Maintained automatically as an array in Tcl; created at startup to hold the environment variables on your system.
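To see these variables in action, save a few lines to a script and invoke it with arguments. This is a minimal sketch; the script name printargs.tcl is invented for illustration:

  # printargs.tcl -- echo the shell-provided argument variables
  puts "Invoked as: $argv0"
  puts "Argument count: $argc"
  foreach arg $argv {
      puts "Argument: $arg"
  }

Running it from the operating system command line:

  $ tclsh printargs.tcl alpha beta
  Invoked as: printargs.tcl
  Argument count: 2
  Argument: alpha
  Argument: beta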
Much like the Windows Command Line or Linux Terminal, the Tcl shell allows a developer to rapidly invoke a command and observe the return value or error messages in standard output. The shell differs based on the Operating System in use. On Unix/Linux systems, this is the standard terminal console, while on a Windows system the shell is launched separately via an executable.

If invoked with no arguments, the shell interface runs interactively, accepting commands from the native command line. The input line is marked with a percent sign (%), with the prompt located at the start position. If the shell is invoked from the command line (Windows DOS or Unix/Linux terminal) and arguments are passed, the interpreter will accept the first as the name of a file to be read; any additional arguments are processed as variables. The shell will run until the exit command is invoked or until it has reached the end of the text file.

When invoked with arguments, the shell sets several Tcl variables that may be accessed within your program, much like the C family of languages. These variables are:

argc: Contains the number of arguments passed in, excluding the script file name. A value of 0 is returned if no arguments were passed in.
argv: Contains a Tcl list whose elements are the arguments passed in. An empty string is returned if no arguments were provided.
argv0: Contains the filename (if specified) or the name used to invoke the Tcl shell.
tcl_interactive: Contains 1 if tclsh is running in interactive mode, otherwise 0.
env: Maintained automatically as an array in Tcl; created at startup to hold the environment variables on your system.

Writing to the Tcl console

The following recipe illustrates a basic command invocation. In this example, we will use the puts command to output a "Hello World" message to the console.

Getting ready

To complete the following example, launch your Tcl shell as appropriate, based on your operating platform. For example, on Windows, you would launch the executable contained in the bin directory of the Tcl installation location, while on a Unix/Linux installation you would enter tclsh at the command line, provided this is the executable name for your particular system. To check the name, locate the executable in the bin directory of your installation.

How to do it…

Enter the following command:

% puts "Hello World"
Hello World

How it works…

As you can see, the puts command writes what it was passed as an argument to standard out. Although this is a basic "Hello World" recipe, you can easily see how this 'simple' command can be used for rapid tracking of the location within a procedure where a problem may have arisen. Add in variable values and some error handling, and you can rapidly isolate issues and correct them without the additional effort of creating a Dialog Window or writing to an error log.

Mathematical expressions

The expr command is used to evaluate mathematical expressions. This command can address everything from simple addition and subtraction to advanced computations, such as sine and cosine, eliminating the need to make system calls to perform advanced mathematical functions. The expr command evaluates the input and arguments, and returns an integer or floating-point value. A Tcl expression consists of a combination of operators, operands, and parenthetical containers (parentheses, braces, or brackets).
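Before looking at expr operands in detail, here is a minimal sketch that ties together the shell variables described above; the filename and sample arguments are illustrative. Save the following as args.tcl:

# args.tcl -- prints the predefined shell variables
puts "Script name : $argv0"
puts "Arg count   : $argc"
foreach arg $argv {
    puts "Argument    : $arg"
}
puts "Interactive : $tcl_interactive"

Invoking tclsh args.tcl red green blue prints the script name, a count of 3, each of the three arguments, and 0 for interactive mode, since the shell is reading from a file rather than from standard input.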
There are no strict typing requirements, and any white space is stripped by the command automatically. Tcl supports non-numeric and string comparisons as well as Tcl-specific operators.

Tcl expr operands

Tcl operands are treated as integers, where feasible. They may be specified as decimal, binary (the first two characters must be 0b), hexadecimal (the first two characters must be 0x), or octal (the first two characters must be 0o). Care should be taken when passing integers with a leading 0, for example 08, as the interpreter would evaluate 08 as an illegal octal value. If no integer format applies, the command will evaluate the operand as a floating-point numeric value. For scientific notation, the character e (or E) is inserted as appropriate. If no numeric interpretation is feasible, the value will be evaluated as a string; in this case, the value must be enclosed within double quotes or braces. Please note that not all operands are accepted by all operators. To avoid inadvertent variable substitution, it is always best to enclose the operands within braces. For example, note how parentheses change the result:

expr 1+1*3 will return a value of 4.
expr (1+1)*3 will return a value of 6.

Operands may be presented in any of the following forms:

Numeric: Integer and floating-point values may be passed directly to the command.
Boolean: All standard Boolean values (true, false, yes, no, 0, or 1) are supported.
Tcl variable: Any referenced variable (in Tcl, a variable is referenced using the $ notation; for example, myVariable is a named variable, whereas $myVariable is the referenced variable).
Strings (in double quotes): Strings contained within double quotes may be passed with no need to include backslash, variable, or command substitution, as these are handled automatically.
Strings (in braces): Strings contained within braces will be used with no substitution.
Tcl commands: Tcl commands must be enclosed within square brackets. The command will be executed and the mathematical function is performed on the return value.
Named functions: Functions such as sine, cosine, and so on.

Tcl supports a subset of the C programming language math operators and treats them in the same manner and precedence. If a named function (such as sine) is encountered, expr automatically makes a call to the mathfunc namespace to minimize the syntax required to obtain the value. The expr operators, in descending order of precedence, are:

- + ~ !: Unary minus, unary plus, bitwise NOT, and logical NOT. Cannot be applied to string operands; bitwise NOT may be applied only to integers.
**: Exponentiation. Numeric operands only.
* / %: Multiply, divide, and remainder. Numeric operands only.
+ -: Add and subtract. Numeric operands only.
<< >>: Left shift and right shift. Integer operands only. A right shift always propagates the sign bit.
< > <= >=: Boolean Less, Boolean Greater, Boolean Less Than or Equal To, and Boolean Greater Than or Equal To (a value of 1 is returned if the condition is true, otherwise 0). If applied to strings, string comparison is used.
== !=: Boolean Equal and Boolean Not Equal (a value of 1 is returned if the condition is true, otherwise 0).
eq ne: Boolean String Equal and Boolean String Not Equal (a value of 1 is returned if the condition is true, otherwise 0). Any operand provided will be interpreted as a string.
in ni: List Containment and Negated List Containment (a value of 1 is returned if the condition is true, otherwise 0). The first operand is treated as a string value, the second as a list.
&: Bitwise AND. Integers only.
^: Bitwise Exclusive OR. Integers only.
|: Bitwise OR. Integers only.
&&: Logical AND (a value of 1 is returned if both operands are non-zero, otherwise 0). Boolean and numeric (integer and floating-point) operands only.
||: Logical OR (a value of 1 is returned if either operand is non-zero, otherwise 0). Boolean and numeric operands only.
x?y:z: If-then-else (if x evaluates to non-zero, the return is the value of y, otherwise the value of z). The x operand must have a Boolean or numeric value.
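A few examples run in an interactive shell illustrate the operator groups above; the value of x is set purely for demonstration, and return values are shown for clarity:

% expr {2**10}
1024
% expr {17 % 5}
2
% expr {"apple" eq "apples"}
0
% expr {"b" in {a b c}}
1
% set x 7
7
% expr {$x > 5 ? "big" : "small"}
big
% expr {sin(1.0)}
0.8414709848078965

Note that each expression is enclosed in braces, following the advice above, and that the call to sin is resolved through the mathfunc namespace automatically.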


An Overview of Microsoft Sure Step

Packt
03 Feb 2011
9 min read
Microsoft Dynamics Sure Step 2010 The smart guide to the successful delivery of Microsoft Dynamics Business Solutions Learn how to effectively use Microsoft Dynamics Sure Step to implement the right Dynamics business solution with quality, on-time and on-budget results. Leverage the Decision Accelerator offerings in Microsoft Dynamics Sure Step to create consistent selling motions while helping your customer ascertain the best solution to fit their requirements. Understand the review and optimization offerings available from Microsoft Dynamics Sure Step to further enhance your business solution delivery during and after go-live. Gain knowledge of the project and change management content provided in Microsoft Dynamics Sure Step. Familiarize yourself with the approach to adopting the Microsoft Dynamics Sure Step methodology as your own. Includes a Foreword by Microsoft Dynamics Sure Step Practitioners.

The success of a business solution, and specifically an Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) solution, isn't solely about technology. Experience tells us that it is as much about the people and processes as it is about the software. Software is often viewed as the enabler, with the key to success lying in how the solution is implemented and how the implementations are managed. The shift in emphasis from the technology itself, in the early days of business software, to the solution as an enabler of business transformation has only been reinforced by independent ERP/CRM reports that catalog deployment failures in detail.

What stands out very clearly in these reports is the fact that ERP and CRM solution delivery is characterized by uncertainties and risks. Service providers have to balance time and budget constraints, while delivering the business value of the solution to their customers. Customer organizations need to understand that their involvement and collaboration are critical for the success of the delivery. They will need to invest time, provide relevant and accurate information, and manage the organizational changes to ensure that the solution is delivered as originally envisioned.

The need for seamless implementation and deployment of business software is even more accentuated in the current state of the economy, with enterprise software sales going through a prolonged period of negative to stagnant growth over the last several quarters. Sales cycles are taking longer to execute, especially as customers take advantage of the buyer's market and force software providers to prove their solution in the sales cycle before signing off on the purchase. In this market, a good solution delivery approach is critical. We have consistently heard words such as in-scope, within-budget, and on-time being tossed around in the industry. Service providers are still facing these demands; however, in the current context, budgets are tighter, timeframes are shorter, and the demand for a quick return on investment is becoming increasingly critical.

Microsoft has always understood that the value of the software is only as good as its implementation and adoption. Accordingly, Microsoft Dynamics Sure Step was developed as the methodology for positioning and deploying the Microsoft Dynamics ERP/CRM suite of products—AX, CRM, GP, NAV, and SL. In the vision of Sure Step, project management is not the prerogative of the project manager only.
Sure Step is a partnership of consulting and customer resources, representing a very important triangulation of the collaboration between the software vendor, implementer, and customer, with the implementation methodology becoming a key element of the implemented application.

The business solutions market

The 2010 calendar year began with the global economy trying to crawl out of a recession. Still, businesses continued to invest in solutions, to leverage the power of information technology to drive down redundancy and waste in their internal processes. This was captured in a study by Gartner of the top industry CIOs, published in their annual report titled Gartner Perspective: IT Spending 2010. In spite of the recessionary pressures, organizations continued to list improving business processes, reducing costs, better use of information, and improving workforce effectiveness as their priorities for IT spending.

The Gartner study listed the following top 10 business priorities based on 2009 findings:

1. Business process improvement
2. Reducing enterprise costs
3. Improving enterprise workforce effectiveness
4. Attracting and retaining new customers
5. Increasing the use of information/analytics
6. Creating new products or services (innovation)
7. Targeting customers and markets more effectively
8. Managing change initiatives
9. Expanding current customer relationships
10. Expanding into new markets and geographies

The Gartner study listed the following top 10 technology priorities based on 2009 findings:

1. Business intelligence
2. Enterprise applications (ERP, CRM, and others)
3. Servers and storage technologies (virtualization)
4. Legacy application modernization
5. Collaboration technologies
6. Networking, voice, and data communications
7. Technical infrastructure
8. Security technologies
9. Service-oriented applications and architecture
10. Document management

The source document for the previous two lists is: Gartner Executive Programs – CIO Agenda 2010.

These are also some of the many reasons that companies, regardless of scale, implement ERP and CRM software, which again is evident from the top 10 technology priorities of the CIOs listed above. These demands, however, happen to be articulated even more strongly by small and medium businesses. For these businesses, an ERP/CRM solution can be a sizable percentage of their overall expense outlay, so they have to be especially vigilant about their spending—they just can't afford time and cost overruns as are sometimes visible in the Enterprise market. At the same time, the deployment of rich functionality software must realize a significant and clear advantage for their business. These trends are picked up and addressed by the IT vendors, who are constantly seeking and exploring new technological ingredients to address the Small-to-Medium Enterprise market demands.

The importance of a methodology

Having a predictable and reliable methodology is important for both the service provider (the implementer) and the users of the solution (the customer). This is especially true for ERP/CRM solution deployment, which can happen at intervals of anywhere from a couple of months to a couple of years, and where the implementation team often comprises multiple individuals from the service provider and the customer. Therefore, it is very important that all the individuals are working off the same sheet of music, so to speak.
Methodology can be defined as:

- The methods, rules, and hypotheses employed by, and the theory behind, a given discipline, or
- The systematic study of the methods and processes applied within the discipline over time

Methodology can also be described as a collection of theories, concepts, and processes pertaining to a specific discipline or field. Rather than just a compilation of methods, methodology refers to the scientific method and the rationale behind it, as well as the assumptions underlying the definitions and components of the method.

The definitions we just saw are particularly relevant to the design/architecture of a methodology for ERP/CRM and business solutions. For these solutions, the methodology should not just provide the processes; it should also provide a connection to the various disciplines and roles that are involved in the execution of the methodology. It should provide detailed guidance and assumptions for each of the components, so that the consumers of the methodology can discern to what extent they will need to employ all or certain aspects of it on a given engagement. As such, a solid approach provides more than just a set of processes for solution deployment.

For the service provider, a viable methodology can provide:

- End-to-end process flows for solution development and deployment, creating a repeatable process leading to excellence in execution
- The ability to link shell and sample templates, reference architecture, and other similar documentation to key activities
- A structure for creating an effective Knowledge Management (KM) system, facilitating easier harvesting, storing, retrieval, and reuse of content created by the field on customer engagements
- The ability to develop a rational structure for training of the consulting team members, including ramp-up of new employees
- The ability to align the quality assurance approach to the deployment process, which is important in organizations that use an independent QA process as oversight for consulting efforts
- The ability to develop a structured estimation process for solution development and deployment
- Creation of a structure for project scope control and management, and a process for early risk identification and mediation

For the customer, a viable methodology can provide:

- Clear end-to-end process flows for solution development that can be followed by the customer's key users and Subject Matter Experts (SMEs) assigned to the project
- Consistent terminology and taxonomy, especially where the SMEs may not have had prior experience with implementing systems of such magnitude, thus making it easier for everybody to be on the same page
- The ability to develop a good Knowledge Management system to capture lessons learned for future projects/upgrades
- The ability to develop a rational structure and documentation for end-user training and new employee ramp-up
- Creation of a structure for ensuring that the project stays within scope, including a process for early risk identification and mediation

In addition to the points listed here, having a "full lifecycle methodology" provides additional benefits in the sales-to-implementation continuum.
The benefits for the service providers include:

- Better alignment of the consulting teams with the sales teams
- A more scientific deal management and approval process that takes into account the potential risks
- Better processes to facilitate the transfer of customer knowledge, ascertained during the sales cycle, to the solution delivery team
- The ability to show the customer how the service provider has "done it before" and effectively establish trust that they can deliver the envisioned solution
- Clearly illustrating the business value of the solution to the customer
- The ability to integrate multiple software packages into an overall solution for the customer
- The ability to deliver the solution as originally envisioned within scope, on time, and within the established budget

The benefits for the customers include:

- The ability to understand and articulate the business value of the solution to all stakeholders in the organization
- Ensuring that there is a clear solution blueprint established
- Ensuring that the solution is delivered as originally envisioned within scope, on time, and within the established budget
- Ensuring an overall solution that can integrate multiple software packages

In summary, a good methodology creates a better overall ecosystem for the organizations involved. The points noted in the earlier lists are an indication of some of the ways that the benefits are manifested; as you leverage methodologies in your own organization, you may realize other benefits as well.


Upgrading with Microsoft Sure Step

Packt
24 Jan 2011
11 min read
Upgrade assessment and the diagnostic phase

In this section, we will discuss the process, and particularly the Upgrade Assessment Decision Accelerator offering, in more detail. We begin by reintroducing the diagram showing the flow of activities and Decision Accelerator offerings for an existing customer. You may recall that the flow is very similar to the one for a prospect, with the only difference being the Upgrade Assessment DA offering replacing the Requirements and Process Review DA.

As noted before, the flow for the existing customer also begins with Diagnostic Preparation, similar to that for a prospect. The guidance in the activity page can be leveraged to explain/understand the capabilities and features of the new version of the corresponding Microsoft Dynamics solution that is being considered. When interest is established in moving the existing solution to the current version, the next step is the Upgrade Assessment DA offering, which is the key step in this process.

The Upgrade Assessment Decision Accelerator offering

The Upgrade Assessment DA is the most important step in the process for an existing Microsoft Dynamics customer. The Upgrade Assessment DA is executed by the services delivery team to get an understanding of the existing solution being used by the customer, determine the components that need to be upgraded to the current release of the product, and determine if any other features need to be enabled as part of the upgrade engagement. In combination with the Scoping Assessment DA offering, the delivery team will also determine the optimal approach, resource plan and estimate, and overall timeline to upgrade the solution to the current product version.

Before initiating the Upgrade Assessment DA, the services delivery team should meet with the customer to ascertain and confirm that there is interest in performing the upgrade. Especially where delivery resources are in high demand, this is an important step that the sales teams need to carry out before involving delivery resources such as solution architects and senior application consultants. Sales personnel can use the resources in the Sure Step Diagnostic Preparation activity to understand and position the current capabilities of the corresponding Microsoft Dynamics solution.

Once customer interest in upgrading has been determined, the services delivery team can employ the Upgrade Assessment DA offering. The aim of the Upgrade Assessment is to identify the complexity of upgrading the existing solution and to highlight areas of feature enhancements, complexities, and risks. The steps performed in the execution of the Upgrade Assessment are shown in the following diagram.

The delivery team begins the Upgrade Assessment by understanding the overall objectives for the upgrade.
Teams can leverage the product-specific questionnaires provided in Sure Step for Microsoft Dynamics AX, CRM, GP, NAV, and SL. These questionnaires also include specific sections and questions for interfaces, infrastructure, and so on, so they can also be leveraged in the following steps.

One of the important tasks at the outset is to review the upgrade path for Microsoft Dynamics and any associated ISV software, to determine whether the upgrade from the customer's existing product version to the targeted version of Microsoft Dynamics is supported. This will have a bearing on how the upgrade can be executed—can you follow a supported upgrade path, or is it essentially a full re-implementation of the solution?

The next step in executing the Upgrade Assessment is to assess the existing solution's configurations and customizations. In this step, the delivery team reviews which features of Microsoft Dynamics have been enabled for the customer, including which ones have been configured to meet the customer's needs and which ones have been customized. This will allow the delivery team to take the overall objectives for the upgrade and determine which of these configurations and customizations will need to be ported over to the new solution, and which ones should be retired. For example, the older version may have necessitated customizations in areas where the solution did not have corresponding functionality. Or perhaps the solution needed a specific ISV solution to meet a need. If the current product version provides these features as standard functionality, these customizations or ISV solutions no longer need to be part of the new solution.

The next Upgrade Assessment step is to examine the custom interfaces for the existing solution. This includes assessing any custom code written to interface the solution to third-party solutions, such as an external database for reporting purposes. This step is followed by reviewing the existing infrastructure and architecture configuration so that the delivery team can understand the hardware components that can be leveraged for the new solution. The delivery team can confirm whether the existing infrastructure can support the upgraded application or whether additional infrastructure components may be necessary.

The final step of the Upgrade Assessment DA offering is for the delivery team to complete the detailed analysis of the customer's existing solution and generate a report of their findings. The report, to be presented to the customer for approval, will include the following topics:

- The scope of the upgrade, including a list of functional and technical areas that will be enhanced in the new solution
- A list of the functional areas of the application, categorized to show the expected complexity involved in upgrading them; if there are areas of the existing implementation that will require further examination or additional effort to upgrade successfully due to their inherent complexity, they must be highlighted
- Areas of the current solution that could be remapped to new functionality in the current version of the base Microsoft Dynamics product
- An overall recommended approach to the upgrade, including alternatives to address any new functionality desired

The Upgrade Assessment provides the customer with early identification of issues and risks that could occur during an upgrade, so that appropriate mitigating actions can be initiated.
The customer also gains confidence that an appropriate level of project governance for the upgrade is available, and that the correct upgrade approach will be undertaken by the delivery team. In the next sections, we will discuss how the Upgrade Assessment DA becomes the basis for completing the customer's due diligence, and how it sets the stage for a quality upgrade of the customer's solution.

When to use the other Decision Accelerator offerings

After the Upgrade Assessment DA has been executed, the remaining DA offerings may also be needed in the due diligence process for the existing Microsoft Dynamics customer. In this section, we will discuss the scenarios that may call for the usage of the DA offerings, and which ones would apply to each particular scenario.

From the Upgrade Assessment DA, the delivery team determines the existing business functions and requirements that need to be upgraded to the new release. Using the Fit Gap and Solution Blueprint DA offering, they can then determine and document how these requirements will be ported over. If meeting a requirement involves more than implementing standard features, the approach may be a re-configuration, a custom code rewrite, or a workflow setup. Additionally, if new features are required as part of the upgrade, these requirements should also be classified in the Fit Gap worksheet either as Fit or as Gap. They should be further classified as Standard, Configuration, or Workflow, as the case may be, for the Fits, and as Customization for the Gaps.

The Architecture Assessment DA can be used to determine the new hardware configuration for the upgraded solution. It can also be used to address any performance issues up-front through the execution of the Proof of Concept Benchmark sub-offering. The Scoping Assessment DA can be used to determine the effort, timeline, and resources needed to execute the upgrade. If it was determined with the Upgrade Assessment DA that new functionality will be introduced, the delivery team and the customer must also determine the Release plan. We will discuss upgrade approaches and Release planning in more detail in the next section.

It is important to note that all three of the above Decision Accelerator offerings (the Fit Gap and Solution Blueprint, the Architecture Assessment, and the Scoping Assessment) can be executed together with the Upgrade Assessment DA as one engagement for the customer. The point of this section is not that each of these offerings needs to be positioned individually for the customer. On the contrary, depending on the scope, the delivery team could easily perform the exercise in tandem. The point of emphasis for the reader is that if you are assessing an upgrade for the customer, you should be able to leverage the templates in each of the DA offerings, and combine them as you deem fit for your engagement.

Lastly, the Proof of Concept DA offering and the Business Case DA offering may also apply to an upgrade engagement, but typically only for a small subset of customers. Examples include customers who may be on a very old version of the Microsoft Dynamics solution, such that they essentially need a re-implementation with the new version of the product, or customers that need complex functionality to be enabled as part of the upgrade. In both cases, the customer may request the delivery team to prove out certain components of the solution prior to embarking on a full upgrade, in which case the Proof of Concept DA may be executed.
They may also request assistance from the delivery team to assess the return on investment for the upgraded solution, in which case the Business Case DA may be employed.

Determining the upgrade approach and release schedule

As noted in the previous section, the customer and the delivery team should work together to select the right approach for the upgrade during the course of the upgrade diagnostics. Sure Step recommends two approaches to upgrades:

- Technical upgrade: Use this approach if the upgrade mostly applies to application components, such as executable files, code components, and DLLs. This approach can be used to bring a customized solution to the latest release, provided the application functionality and business workflow stay relatively the same.
- Functional upgrade: Use this approach if new application functionality or major changes in the existing business workflows are desired during the course of the upgrade. Additional planning, testing, and rework of the existing solution are inherent in this more complex upgrade process, which is why it is aligned to a Functional upgrade. Functional upgrades are typically performed in multiple Releases.

The following diagram depicts the two upgrade approaches and the Release schedules.

Depending on the scope of the upgrade, the customer engagement may have one or more delivery Releases. If, for example, the customer's solution is on a supported upgrade path, the Technical upgrade may be delivered in a single Release using the Sure Step Upgrade project type. If the new solution requires several new processes to be enabled, the Functional upgrade may be delivered in two or more Releases. For example, if the customer needs advanced supply chain functionality, such as production scheduling and/or advanced warehousing, to be enabled as part of the upgrade, the recommended approach is to first complete the Technical upgrade using the Sure Step Upgrade project type to port the existing functionality over to the new product version in Release 1, and then add the advanced supply chain functionality using the Rapid, Standard, Agile, or Enterprise project types in Release 2.

As noted earlier, the DA offerings can be executed individually or in combination, depending on the customer engagement. Regardless of how they are executed, it is imperative that the customer and delivery team select the right approach and develop the necessary plans, such as the Project Plan, Resource Plan, Project Charter, and/or Communication Plan. These documents should form the basis for the upgrade delivery Proposal. When the Proposal and Statement of Work are approved, it is time to begin the execution of the solution upgrade.

Manage SQL Azure Databases with the Web Interface 'Houston'

Packt
21 Jan 2011
2 min read
Microsoft SQL Azure Enterprise Application Development Build enterprise-ready applications and projects with SQL Azure Develop large scale enterprise applications using Microsoft SQL Azure Understand how to use the various third party programs such as DB Artisan, RedGate, ToadSoft etc developed for SQL Azure Master the exhaustive Data migration and Data Synchronization aspects of SQL Azure. Includes SQL Azure projects in incubation and more recent developments including all 2010 updates

Appendix

In order to use this program and follow the article, you should have an account on the Windows Azure Platform, preferably with an SQL Azure server already provisioned. This also implies that you have a Windows Live ID to access the portal. As mentioned, in this article we look at some of the features of this web-based tool and carry out a few tasks.

Click the Launch Houston button in the Project Houston CTP1 page shown here on the SQLAzureLabs portal page. This brings up a world map displaying the current Windows Azure Data Centers available, and you have to choose the data center on which you have an account. For the present article, we will use the Southeast Asia data center and sometimes the North Central US data center. Click on the Southeast Asia location. The Silverlight application gets launched from the URL https://manage-sgp.cloudapp.net/, displaying the license information that you need to agree to before going forward. When you click OK, the login page is displayed as shown. You need to enter the server information at the Southeast Asia data center as shown. Click Connect. The connection gets established to the above SQL Azure server as shown in the next image. This is much better looking than the somewhat 'drab' looking SSMS interface (albeit fully mature) shown here for comparison.

Changing the database

If you need to work with a different database, click on Connect DB at the top left of the 'Houston' user interface, as shown in the next image. The connection interface comes up again, where you indicate the name of the database as shown. Here the database has been changed to master. Clicking Connect now connects you to the master database as shown.
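For readers who prefer to verify the same connection from code rather than the web UI, the following is a minimal C# sketch; the server name, user, and password are placeholders, and the query simply lists the databases visible from master:

using System;
using System.Data.SqlClient;

class ListAzureDatabases
{
    static void Main()
    {
        // Placeholder server and credentials; SQL Azure logins use the user@server form
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "yourserver.database.windows.net",
            InitialCatalog = "master",
            UserID = "youruser@yourserver",
            Password = "yourpassword",
            Encrypt = true   // SQL Azure requires encrypted connections
        };

        using (var conn = new SqlConnection(builder.ConnectionString))
        {
            conn.Open();
            using (var cmd = new SqlCommand("SELECT name FROM sys.databases", conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}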


ASP.NET MVC 2: Validating MVC

Packt
21 Jan 2011
5 min read
ASP.NET MVC 2 Cookbook A fast-paced cookbook with recipes covering all that you wanted to know about developing with ASP.NET MVC Solutions to the most common problems encountered with ASP.NET MVC development Build and maintain large applications with ease using ASP.NET MVC Recipes to enhance the look, feel, and user experience of your web applications Expand your MVC toolbox with an introduction to lots of open source tools Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Introduction

ASP.NET MVC provides a simple, but powerful, framework for validating forms. In this article, we'll start by creating a simple form, and then incrementally extend the functionality of our project to include client-side validation, custom validators, and remote validation.

Basic input validation

The moment you create an action to consume a form post, you're validating. Or at least the framework is. Whether it is a textbox validating to a DateTime, or a checkbox to a Boolean, we can start making assumptions about what should be received and making provisions for what shouldn't. Let's create a form.

How to do it...

1. Create an empty ASP.NET MVC 2 project and add a master page called Site.Master to Views/Shared.
2. In the Models folder, create a new model called Person. This model is just an extended version of the Person class.

Models/Person.cs:

public class Person
{
    [DisplayName("First Name")]
    public string FirstName { get; set; }
    [DisplayName("Middle Name")]
    public string MiddleName { get; set; }
    [DisplayName("Last Name")]
    public string LastName { get; set; }
    [DisplayName("Birth Date")]
    public DateTime BirthDate { get; set; }
    public string Email { get; set; }
    public string Phone { get; set; }
    public string Postcode { get; set; }
    public string Notes { get; set; }
}

3. Create a controller called HomeController and amend the Index action to return a new instance of Person as the view model.

Controllers/HomeController.cs:

public ActionResult Index()
{
    return View(new Person());
}

4. Build, and then right-click on the action to create an Index view. Make it an empty view that strongly types to our Person class.
5. Create a basic form in the Index view.

Views/Home/Index.aspx:

<% using (Html.BeginForm()) {%>
    <%: Html.EditorForModel() %>
    <input type="submit" name="submit" value="Submit" />
<% } %>

6. We'll go back to the home controller now to capture the form submission. Create a second action called Index, which accepts only POSTs.

Controllers/HomeController.cs:

[HttpPost]
public ActionResult Index(...

7. At this point, we have options. We can consume our form in a few different ways; let's have a look at a couple of them now:

Controllers/HomeController.cs (Example):

// Individual Parameters
public ActionResult Index(string firstName, DateTime birthdate...

// Model
public ActionResult Index(Person person) {

Whatever technique you choose, the resolution of the parameters is roughly the same. The technique that I'm going to demonstrate relies on a method called UpdateModel. But first we need to differentiate our POST action from our first catch-all action. Remember, actions are just methods, and overloads need to take sufficiently different parameters to prevent ambiguity.
We will do this by taking a single parameter of type FormCollection, though we won't necessarily make use of it.

Controllers/HomeController.cs:

[HttpPost]
public ActionResult Index(FormCollection form)
{
    var person = new Person();
    UpdateModel(person);
    return View(person);
}

The UpdateModel technique is a touch more long-winded, but comes with advantages. The first is that if you add a breakpoint on the UpdateModel line, you can see the exact point when an empty model becomes populated with the form collection, which is great for demonstration purposes. The main reason I go back to UpdateModel time and time again is the optional second parameter, includeProperties. This parameter allows you to selectively update the model, thereby bypassing validation on certain properties that you might want to handle independently.

Build, run, and submit your form. If your page validates, your info should be returned to you. However, enter your birth date in an unrecognized format and watch it bomb. UpdateModel is a temperamental beast. Switch your UpdateModel for TryUpdateModel and see what happens. TryUpdateModel will return a Boolean indicating the success or failure of the submission. However, the most interesting thing is happening in the browser.

How it works...

With ASP.NET MVC, it sometimes feels like you're stripping the development process back to basics. I think this is a good thing; more control to render the page you want is good. But there is a lot of clever stuff going on in the background, starting with Model Binders. When you send a request (GET, POST, and so on) to an ASP.NET MVC application, the query string, route values, and the form collection are passed through model binding classes, which result in usable structures (for example, your action's input parameters). These model binders can be overridden and extended to deal with more complex scenarios, but since ASP.NET MVC 2, I've rarely made use of this. A good starting point for further investigation would be DefaultModelBinder and IModelBinder.

What about that validation message in the last screenshot; where did it come from? Apart from LabelFor and EditorFor, we also have ValidationMessageFor. If the model binders fail at any point to build our input parameters, the model binder will add an error message to the model state. The model state is picked up and displayed by the ValidationMessageFor method, but more on that later.
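To make the failure path explicit, here is a minimal sketch of the POST action rewritten around TryUpdateModel; the redirect target is illustrative, and the view line assumes you are adding per-field messages next to your editors:

Controllers/HomeController.cs (sketch):

[HttpPost]
public ActionResult Index(FormCollection form)
{
    var person = new Person();
    if (!TryUpdateModel(person))
    {
        // Binding failed; the errors are already in ModelState,
        // so redisplay the form and let the helpers render them.
        return View(person);
    }
    return RedirectToAction("Index"); // illustrative success path
}

In the view, a per-field message can be rendered with:

<%: Html.ValidationMessageFor(m => m.BirthDate) %>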


Python Multimedia: Enhancing Images

Packt
20 Jan 2011
5 min read
Adjusting brightness and contrast

One often needs to tweak the brightness and contrast level of an image. For example, you may have a photograph that was taken with a basic camera when there was insufficient light. How would you correct that digitally? The brightness adjustment helps make the image brighter or darker, whereas the contrast adjustment emphasizes differences between the color and brightness levels within the image data. An image can be made lighter or darker using the ImageEnhance module in PIL. The same module provides a class that can auto-contrast an image.

Time for action – adjusting brightness and contrast

Let's learn how to modify the image brightness and contrast. First, we will write code to adjust brightness. The ImageEnhance module makes our job easier by providing the Brightness class. Download the image 0165_3_12_Before_BRIGHTENING.png and rename it to Before_BRIGHTENING.png. Use the following code:

1 import Image
2 import ImageEnhance
3
4 brightness = 3.0
5 peak = Image.open("C:\images\Before_BRIGHTENING.png")
6 enhancer = ImageEnhance.Brightness(peak)
7 bright = enhancer.enhance(brightness)
8 bright.save("C:\images\BRIGHTENED.png")
9 bright.show()

On line 6 in the code snippet, we created an instance of the class Brightness. It takes an Image instance as an argument. Line 7 creates a new image, bright, by using the specified brightness value. A value between 0.0 and less than 1.0 gives a darker image, whereas a value greater than 1.0 makes it brighter. A value of 1.0 keeps the brightness of the image unchanged. The original and resultant images are shown in the next illustration.

Comparison of images before and after brightening.

Let's move on and adjust the contrast of the brightened image. We will append the following lines of code to the code snippet that brightened the image:

10 contrast = 1.3
11 enhancer = ImageEnhance.Contrast(bright)
12 con = enhancer.enhance(contrast)
13 con.save("C:\images\CONTRAST.png")
14 con.show()

Thus, similar to what we did to brighten the image, the image contrast was tweaked by using the ImageEnhance.Contrast class. A contrast value of 0.0 creates a black image. A value of 1.0 keeps the current contrast. The resultant image is compared with the original in the following illustration.

The original image compared with the image displaying increased contrast.

In the preceding code snippet, we were required to specify a contrast value. If you would prefer PIL to decide an appropriate contrast level, there is a way to do this. The ImageOps.autocontrast functionality sets an appropriate contrast level. This function normalizes the image contrast. Let's use this functionality now. Use the following code:

import ImageOps
bright = Image.open("C:\images\BRIGHTENED.png")
con = ImageOps.autocontrast(bright, cutoff = 0)
con.show()

The line calling ImageOps.autocontrast is where the contrast is automatically set. The autocontrast function computes a histogram of the input image. The cutoff argument represents the percentage of lightest and darkest pixels to be trimmed from this histogram. The image is then remapped.

What just happened?

Using the classes and functionality in the ImageEnhance module, we learned how to increase or decrease the brightness and the contrast of an image. We also wrote code to auto-contrast an image using functionality provided in the ImageOps module.

Tweaking colors

Another useful operation performed on an image is adjusting the colors within it. An image may contain one or more bands containing image data.
The image mode contains information about the depth and type of the image pixel data. The most common modes we will use are RGB (true color, 3x8-bit pixel data), RGBA (true color with transparency mask, 4x8-bit), and L (black and white, 8-bit). In PIL, you can easily get information about the band data within an image. To get the name and number of bands, the getbands() method of the class Image can be used. Here, img is an instance of the class Image:

>>> img.getbands()
('R', 'G', 'B', 'A')

Time for action – swap colors within an image!

To understand some basic concepts, let's write code that simply swaps the image band data. Download the image 0165_3_15_COLOR_TWEAK.png and rename it to COLOR_TWEAK.png. Type the following code:

1 import Image
2
3 img = Image.open("C:\images\COLOR_TWEAK.png")
4 img = img.convert('RGBA')
5 r, g, b, alpha = img.split()
6 img = Image.merge("RGBA", (g, r, b, alpha))
7 img.show()

Let's analyze this code now. On line 3, the Image instance is created as usual. Then, on line 4, we change the mode of the image to RGBA. Here we should check whether the image already has that mode, or whether this conversion is possible; you can add that check as an exercise (a sketch of such a check follows this section). Next, the call to Image.split() creates separate instances of the Image class, each containing a single band's data. Thus, we have four Image instances (r, g, b, and alpha) corresponding to the red, green, and blue bands, and the alpha channel, respectively.

The code on line 6 does the main image processing. Image.merge takes the mode as its first argument and, as its second argument, a tuple of image instances containing the band information. All of the bands must be the same size. As you can notice, we have swapped the order of the band data in the Image instances r and g while specifying the second argument. The original and resultant images thus obtained are compared in the next illustration. The color of the flower now has a shade of green, and the grass behind the flower is rendered with a shade of red.

Please download and refer to the supplementary PDF file Chapter 3 Supplementary Material.pdf. Here, the color images are provided that will help you see the difference.

Original (left) and the color swapped image (right).

What just happened?

We accomplished creating an image with its band data swapped. We learned how to use PIL's Image.split() and Image.merge() to achieve this. However, this operation was performed on the whole image. In the next section, we will learn how to apply color changes to a specific color region.
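As for the exercise suggested above, a minimal sketch of the mode check might look like the following; the fallback behavior is an assumption, and the exception type assumed here is ValueError, which PIL raises for conversions it cannot perform:

import Image

img = Image.open("C:\images\COLOR_TWEAK.png")
if img.mode != 'RGBA':
    try:
        # Attempt the conversion only when the image is not already RGBA
        img = img.convert('RGBA')
    except ValueError:
        # Conversion not possible for this mode; handle as appropriate
        raise SystemExit("Cannot convert mode %s to RGBA" % img.mode)
r, g, b, alpha = img.split()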

Working with Master Pages in ASP.NET MVC 2

Packt
17 Jan 2011
6 min read
How to create a master page

In this recipe, we will take a look at how to create a master page and associate it with our view. Part of creating a master page is defining placeholders for use in the view. We will then see how to utilize the content placeholders that we defined in the master page.

How to do it...

1. Start by creating a new ASP.NET MVC application. Then add a new master page to your solution called Custom.Master. Place it in the Views/Shared directory.
2. Notice that there is a placeholder already placed in the middle of our page. Let's wrap that placeholder with a table. We will put a column to the left and the right of the existing placeholder. Then we will rename the placeholder to MainContent.

Views/Shared/Custom.Master:

<table>
  <tr>
    <td>
    </td>
    <td>
      <asp:ContentPlaceHolder ID="MainContent" runat="server"></asp:ContentPlaceHolder>
    </td>
    <td>
    </td>
  </tr>
</table>

3. Next, we will copy the placeholder into the first and the third columns, giving the copies new IDs.

Views/Shared/Custom.Master:

<table>
  <tr>
    <td>
      <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server"></asp:ContentPlaceHolder>
    </td>
    <td>
      <asp:ContentPlaceHolder ID="MainContent" runat="server"></asp:ContentPlaceHolder>
    </td>
    <td>
      <asp:ContentPlaceHolder ID="ContentPlaceHolder2" runat="server"></asp:ContentPlaceHolder>
    </td>
  </tr>
</table>

4. Next, we need to add a new action to the HomeController.cs file, from which we will create a new view. Do this by opening the HomeController.cs file, then add a new action named CustomMasterDemo.

Controllers/HomeController.cs:

public ActionResult CustomMasterDemo()
{
    return View();
}

5. Then right-click on CustomMasterDemo, choose Add View, and select the new Custom.Master page that we created. Next, you need to set the ContentPlaceHolderID box to the center placeholder's name, MainContent. Then hit Add and you should see a new view with four placeholders.

Views/Home/CustomMasterDemo.aspx:

<asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
  <h2>Custom Master Demo</h2>
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="head" runat="server">
  <meta name="description" content="Here are some keywords for our page description.">
</asp:Content>
<asp:Content ID="Content3" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
  <div style="width:200px;height:200px;border:1px solid #ff0000;">
    <ul>
      <li>Home</li>
      <li>Contact Us</li>
      <li>About Us</li>
    </ul>
  </div>
</asp:Content>
<asp:Content ID="Content4" ContentPlaceHolderID="ContentPlaceHolder2" runat="server">
  <div style="width:200px;height:200px;border:1px solid #000000;">
    <b>News</b><br/>
    Here is a blurb of text on the right!
  </div>
</asp:Content>

You should now see a page similar to this:

How it works...

This particular feature is a server-side carry-over from web forms. It works just as it always has.
Before being sent down to the client, the view is merged into the master file and processed according to the matching placeholder IDs.

Determining the master page in the ActionResult

In the previous recipe, we took a look at how to build a master page. In this recipe, we are going to take a look at how to control which master page is used, programmatically. There are all sorts of reasons for using different master pages. For example, you might want to use different master pages based on the time of day, on whether a user is logged in, or for different areas of your site (blog, shopping, forum, and so on).

How to do it...

1. We will get started by first creating a new MVC web application.
2. Next, we need to create a second master page. We can do this quickly by making a copy of the default master page that is provided. Name it Site2.Master.
3. Next, we need to make sure we can tell these two master pages apart. The easiest way to do this is to change the contents of the H1 tag to say Master 1 and Master 2 in each of the master pages respectively.
4. Now we can take a look at the HomeController. We will check whether we are in an even or odd second and, based on that, return one master page or the other. We do this by specifying the name of the master page that we want to use when we return the view.

Controllers/HomeController.cs:

public ActionResult Index()
{
    ViewData["Message"] = "Welcome to ASP.NET MVC!";

    string masterName = "";
    if (DateTime.Now.Second % 2 == 0)
        masterName = "Site2";
    else
        masterName = "Site";

    return View("Index", masterName);
}

5. Now you can run the application. Refreshing the home page should alternate between the two master pages now and then. (Remember that this is based on the current second, so it is not a strict alternation.)

How it works...

This method of controlling which master page is used by the view is built into the MVC framework and is the easiest way of performing this type of control. However, having to dictate this type of logic in every single action would create quite a bit of fluff code in our controller. This option might be appropriate for certain needs, though!
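One way to keep that logic out of individual actions is to move it into an action filter. The following is a minimal sketch under the assumption that attribute-based selection suits your scenario; the attribute name is invented, but OnResultExecuting and ViewResult.MasterName are standard ASP.NET MVC 2 members:

using System.Web.Mvc;

// Hypothetical attribute that applies a master page to any ViewResult
public class UseMasterAttribute : ActionFilterAttribute
{
    private readonly string _masterName;

    public UseMasterAttribute(string masterName)
    {
        _masterName = masterName;
    }

    public override void OnResultExecuting(ResultExecutingContext filterContext)
    {
        var viewResult = filterContext.Result as ViewResult;
        if (viewResult != null)
        {
            // Override the master page just before the view renders
            viewResult.MasterName = _masterName;
        }
    }
}

Decorating an action (or a whole controller) with [UseMaster("Site2")] would then swap the master page without touching the action body.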


Introduction to cloud computing with Microsoft Azure

Packt
13 Jan 2011
6 min read
What is an enterprise application?

Before we hop into the cloud, let's talk about who this book is for. Who are "enterprise developers"? In the United States, over half of the economy is made up of small businesses, usually privately owned, with a couple dozen employees and revenues up to the millions of dollars. The applications that run these businesses have lower requirements because of smaller data volumes and a low number of application users. A single server may host several applications. Many of the business needs for these companies can be met with off-the-shelf software requiring little to no modification.

The minority of the United States economy is made up of huge publicly owned corporations—think Microsoft, Apple, McDonald's, Coca-Cola, Best Buy, and so on. These companies have thousands of employees and revenues in the billions of dollars. Because these companies are publicly owned, they are subject to tight regulatory scrutiny. The applications utilized by these companies must faithfully keep track of an immense amount of data to be utilized by hundreds or thousands of users, and must comply with all manner of regulations. The infrastructure for a single application may involve dozens of servers. A team of consultants is often retained to install and maintain the critical systems of a business, and there is often an ecosystem of internal applications built around the enterprise systems that are just as critical. These are the applications we consider to be "enterprise applications", and the people who develop and extend them are "enterprise developers". The high availability of cloud platforms makes them attractive for hosting these critical applications, and there are many options available to the enterprise developer.

What is cloud computing?

At its most basic, cloud computing is moving applications accessible from our internal network onto an internet (cloud)-accessible space. We're essentially renting virtual machines in someone else's data center, with the capabilities for immediate scale-out, failover, and data synchronization. In the past, having an Internet-accessible application meant we were building a website with a hosted database. Cloud computing changes that paradigm—our application could be a website, or it could be a client installed on a local PC accessing a common data store from anywhere in the world. The data store could be internal to our network or itself hosted in the cloud.

The following diagram outlines three ways in which cloud computing can be utilized for an application. In option 1, both the data and the application are hosted in the cloud; in option 2, the application is hosted in the cloud and the data locally; and in option 3, the data is hosted in the cloud and the application locally.

The expense (or cost) model is also very different. On our local network, we have to buy the hardware and software licenses, install and configure the servers, and finally maintain them. All this comes in addition to building and maintaining the application! In cloud computing, the host usually handles all the installation, configuration, and maintenance of the servers, allowing us to focus mostly on the application. The direct costs of running our application in the cloud are only for each machine-hour of use and for storage utilization.

The individual pieces of cloud computing have all been around for some time. Shared mainframes and supercomputers have long billed end users based on each user's resource consumption.
Space for websites can be rented on a monthly basis. Providers offer specialized application hosting and, relatively recently, leased virtual machines have also become available. If there is anything revolutionary about cloud computing, it is its ability to combine all the best features of these different components into a single affordable service offering.

Some benefits of cloud computing

Cloud computing sounds great so far, right? So, what are some of the tangible benefits of cloud computing? Does cloud computing merit all the attention? Let's have a look at some of the advantages:

- Low up-front cost: At the top of the benefits list is probably the low up-front cost. With cloud computing, someone else is buying and installing the servers, switches, and firewalls, among other things. In addition to the hardware, software licenses and assurance plans are also expensive at the enterprise level, even with a purchasing agreement. In most cloud services, including Microsoft's Azure platform, we do not need to purchase separate licenses for operating systems or databases. In Azure, the costs include licenses for the Windows Azure OS and SQL Azure. As a corollary, someone else is responsible for the maintenance and upkeep of the servers—no more tape backups that must be rotated and sent to off-site storage, no extensive strategies and lost weekends bringing servers up to the current release level, and no more counting the minutes until the early morning delivery of a hot swap fan to replace the one that burned out the previous afternoon.
- Easier disaster recovery and storage management: With synchronized storage across multiple data centers, located in different regions of the same country or even in different countries, disaster recovery planning becomes significantly easier. If capacity needs to be increased, it can be done quite easily by logging into a control panel and turning on an additional VM. It would be a rare instance indeed when our provider doesn't sell us additional capacity. When the need for capacity passes, we can simply turn off the VMs we no longer need and pay only for the uptime and storage utilization.
- Simplified migration: Migration from a test to a production environment is greatly simplified. In Windows Azure, we can test an updated version of our application in a local sandbox environment. When we're ready to go live, we deploy our application to a staged environment in the cloud and, with a few mouse clicks in the control panel, we turn off the live virtual machine and activate the staging environment as the live machine—we barely miss a beat! The migration can be performed well in advance of the cut-over, so daytime migrations and midnight cut-overs can become routine. Should something go wrong, the environments can be easily reversed and the issues analyzed the following day.
- Familiar environment: Finally, the environment we're working in is very familiar. In Azure's case, the environment can include the capabilities of IIS and .NET (or Java or PHP and Apache), with Windows and SQL Server or MySQL. One of the great features of Windows is that it can be configured in so many ways, and, to an extent, Azure can also be configured in many ways, supporting a rich and familiar application environment.