How-To Tutorials - Data

Oracle E-Business Suite: Creating Items in Inventory

Packt
18 Aug 2011
7 min read
Oracle E-Business Suite 12 Financials Cookbook: take the hard work out of your daily interactions with E-Business Suite financials by using the 50+ recipes from this cookbook.

Oracle E-Business Suite 12 Financials is a solution that provides out-of-the-box features to meet global financial reporting and tax requirements with one accounting, tax, banking, and payments model, and makes it easy to operate shared services across businesses and regions. In this article by Yemi Onigbode, author of Oracle E-Business Suite 12 Financials Cookbook, we will start with recipes for creating Items. We will cover:

- Creating Items
- Exploring Item attributes
- Creating Item templates
- Exploring Item controls

Introduction

An organization's operations include the buying and selling of products and services. Items can represent the products and services that are purchased and sold in an organization. Let's start by looking at the Item creation process. The following diagram details the process for creating Items:

The Item Requester (the person who requests an Item) completes an Item Creation Form, which should contain information such as:

- Costing information
- Pricing information
- Item and Product Categories
- Details of some of the Item attributes
- The inventory organization details

Once complete, a message is sent to the Master Data Manager (the person who maintains the master data) to create the Item. The message could be sent by fax, e-mail, and so on. The Master Data Manager reviews the form and enters the details of the Item into Oracle E-Business Suite by creating the Item. Once complete, a message is sent to the Item Requester, who then reviews the Item setup on the system.

Let's look at how Items are created and explore the underlying concepts concerning the creation of Items.

Creating Items

Oracle Inventory provides us with the functionality to create Items. Sets of attributes are assigned to an Item, and the attributes define the characteristics of the Item. A group of attribute values defines a template, and a template can be assigned to an Item to automatically set that group of attribute values. An Item template defines the Item Type. For example, a Finished Good template will identify certain characteristics that define the Item as a finished good, with attributes such as "Inventory Item" and "Stockable" set to a value of "Yes". Let's look at how to create an Item in Oracle Inventory. We will also assign a Finished Good template to the Item.

Getting ready

Log in to Oracle E-Business Suite R12 with the username and password assigned to you by the System Administrator. If you are working on the Vision demonstration database, you can use OPERATIONS/WELCOME as the USERNAME/PASSWORD. Then:

1. Select the Inventory Responsibility.
2. Select the V1 Inventory Organization.

How to do it...

Let's list the steps required to create an Item:

1. Navigate to Items | Master Items. Please note that Items are defined in the Master Organization.
2. Enter the Item code, for example, PRD20001.
3. Enter a description for the Item.
4. Select Copy From from the Tools menu (or press Alt+T). We are going to copy the attributes from the Finished Good template. We could also copy attributes from an existing Item.
5. Enter Finished Good, click on the Apply button (or press Alt+A), and then click on the Done button.
6. Save the Item definition by clicking on the Save icon (or press Ctrl+S).

How it works...

Items contain attributes, and attributes contain information about an Item.
Attributes can be controlled centrally at the Master Organization level or at the Inventory Organization level.

There's more...

Once the Item is created, we need to assign it to a category and an inventory organization.

Assigning Items to inventory organizations

For us to be able to perform transactions with the Item in the inventory, we need to assign the Item to an inventory organization. We can also use the organization Item form to change the attributes at the organization level. For example, an Item may be classified as raw materials in one organization and finished goods in another organization.

1. From the Tools menu, select Organization Assignment.
2. Select the inventory organization for the Item. For example, A1–ACME Corporation.
3. Click on the Assigned checkbox.
4. Save the assignment.

Assigning Items to categories

When an Item is created, it is assigned to a default category. However, you may want to perform transactions with the Item in more than one functional area, such as Inventory, Purchasing, Cost Management, Service, Engineering, and so on. You need to assign the Item to the relevant functional area. A category within a functional area is a logical classification of Items with similar characteristics.

1. From the Tools menu, select Categories.
2. Select the Category Set, Control Level, and the Category combination to assign to the Item.
3. Save the assignment.

Exploring Item attributes

There are more than 250 Item attributes grouped into 17 main attribute groups. In this recipe, we will explore the main groups that are used within the financial modules.

How to do it...

Let's explore some Item attributes:

1. Search for the Finished Good Item by navigating to Items | Master Items. Click on the Find icon, then enter the Item code and click on the Find button to search for the Item.
2. Select the tabs to review each of the attribute groups.
3. In the Main tab, check that the Item Status is Active. We can also enter a long description in the Long Description field. The default value of the primary Unit of Measure (UOM) can be defined in the INV: Default Primary Unit of Measure profile option. The value can be overwritten when creating the Item. The Primary UOM is the default UOM used in other modules. For example, in Receivables it is used for invoices and credit memos.
4. In the Inventory tab, check that the following are enabled:
   - Inventory Item: It enables the Item to be transacted in Inventory. The default Inventory Item category is automatically assigned to the Item, if enabled.
   - Stockable: It enables the Item to be stocked in Inventory.
   - Transactable: Order Management uses this flag to determine how returns are transacted in Inventory.
   - Reservable: It enables the reservation of Items during transactions. For example, during order entry in Order Management.
5. In the Costing tab, check that the following are enabled:
   - Costing: Enables the accounting for Item costs. It can be overridden in the Cost Management module, if average costing is used.
   - Cost of Goods Sold Account: The cost of goods sold account is entered. This is a general ledger account. The value defaults from the Organization parameters.
6. In the Purchasing tab, enter a Default Buyer for the purchase orders, a List Price, and an Expense Account. Check that the following are enabled:
   - Purchased: It enables us to purchase and receive the Item.
   - Purchasable: It enables us to create a Purchase Order for the Item.
   - Allow Description Update: It enables us to change the description of the Item when raising the Purchase Order.
   - RFQ Required: Set this value to Yes to require a quotation for this Item.
   - Taxable: Set this value to Yes with the Input Tax Classification Code as VAT–15%. This can be used with the default rules in E-Tax.
   - Invoice Matching: Receipt Required–Yes. This is to allow for three-way matching.
7. In the Receiving tab, review the controls.
8. In the Order Management tab, check that the following are enabled:
   - Customer Ordered: This enables us to define prices for an Item assigned to a price list.
   - Customer Orders Enabled: This enables us to sell the Item.
   - Shippable: This enables us to ship the Item to the Customer.
   - Internal Ordered: This enables us to order an Item via internal requisitions.
   - Internal Orders Enabled: This enables us to temporarily exclude an Item from internal requisitions.
   - OE Transactable: This is used for demand management of an Item.
9. In the Invoicing tab, enter values for the Accounting Rule, Invoicing Rule, Output Tax Classification Code, and Payment Terms. Enter the Sales Account code and check that the Invoiceable Item and Invoice Enabled checkboxes are enabled.

Terms and Concepts Related to MDX

Packt
09 Aug 2011
8 min read
MDX with Microsoft SQL Server 2008 R2 Analysis Services Cookbook: more than 80 recipes for enriching your Business Intelligence solutions with high-performance MDX calculations and flexible MDX queries.

Parts of an MDX query

This section contains a brief explanation of the basic elements of MDX queries: members, sets, tuples, axes, properties, and so on.

Regular members

Regular dimension members are members sourced from the underlying dimension tables. They are the building blocks of dimensions, fully supported in any type of drill operation, drillthrough, scopes, subqueries, and probably all SSAS front-ends. They can have children and be organized in multilevel user hierarchies. Some of a regular member's properties can be dynamically changed using scopes in the MDX script or cell calculations in queries: color, format, font, and so on. Measures are a type of regular member, found on the Measures dimension/hierarchy. The other type of member is the calculated member.

Calculated members

Calculated members are artificial members created in a query, session, or MDX script. They do not exist in the underlying dimension table and as such are not supported in drillthrough and scopes. In subqueries, they are only supported if the connection string includes one of these settings: Subqueries = 1 or Subqueries = 2. See here for examples: http://tinyurl.com/ChrisSubquerySettings

They also have a limited set of properties compared to regular members, and worse support than regular members in some SSAS front-ends. An often practiced workaround is creating dummy regular members in a dimension table and then using MDX script assignments to provide the calculation for them. They are referred to as "dummy" because they never occur in the fact table, which also explains the need for assignments.

Tuples

A tuple is a coordinate in the multidimensional cube. That coordinate can be huge, and often is. For example, a cube with 10 dimensions, each having 5 attributes, is a 51-dimensional object (measures being the extra one). To fully define a coordinate we would have to reference every single attribute in that cube. Fortunately, in order to simplify their usage, tuples may be written using only a part of the full coordinate. The rest of the coordinate inside the tuple is implicitly evaluated by the SSAS engine, either using the current members (for unrelated hierarchies) or through the mechanism known as strong relationships (for related hierarchies). It's worth mentioning that the initial current members are the cube's default members; any subsequent current members are derived from the current context of the query or the calculation. Evaluation of implicit members can sometimes lead to unexpected problems. We can prevent those problems by explicitly specifying all the hierarchies we want to have control over, thereby not letting implicit evaluation occur for those hierarchies.

Contrary to members and sets, tuples are not objects that can be defined in the WITH part of the query or in the MDX script. They are non-persistent. Tuples can be found in sets, during iteration, or in calculations. They are often used to set or overwrite the current context, in other words, to jump out of the current context and get the value in another coordinate of the cube.

Another important aspect of tuples is their dimensionality. When building a set from tuples, two or more tuples can be combined only if they are built from the same hierarchies, specified in the exact same order.
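As a minimal sketch of a partial coordinate, the query below (assuming the Adventure Works sample cube used in the other recipes in this book; the calculated measure name is made up for illustration) uses a tuple to overwrite only the Calendar Year while every other hierarchy keeps its current member:

    WITH MEMBER [Measures].[Reseller Sales in CY 2006] AS
        -- a tuple: a partial coordinate built from a measure and a year;
        -- all remaining hierarchies are evaluated implicitly (current members)
        ( [Measures].[Reseller Sales Amount],
          [Date].[Calendar Year].[Calendar Year].&[2006] )
    SELECT
        { [Measures].[Reseller Sales Amount],
          [Measures].[Reseller Sales in CY 2006] } ON 0,
        { [Sales Territory].[Sales Territory].[Country].MEMBERS } ON 1
    FROM [Adventure Works]

Each row keeps its own Sales Territory context; only the year part of the coordinate is overwritten in the second column, which is the "jump out of the current context" behavior described above.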
That's their dimensionality. You should know that rearranging the order of hierarchies in a tuple doesn't change its value, so reordering can be the first step in making tuples compatible. The other step is adding the current members of hierarchies present only in the other tuple, to match that tuple's dimensionality.

Named sets

A named set is a user-defined collection of members or, more precisely, tuples. Named sets are found in queries, sessions, and MDX scripts. Query-based named sets are equivalent to dynamic sets in the MDX script: they both react to the context of the subquery and slicer. Contrary to them, static sets are constant, independent of any context. Only sets that have the same dimensionality can be combined together, because what we really combine are the tuples they are built from. It is possible to extract one or more hierarchies from a set. It is also possible to expand a set by crossjoining it with hierarchies not present in its tuples. These processes are known as reducing and increasing the dimensionality of a set.

Set alias

Set aliases can be defined in calculations only, as a part of that calculation, and not in the WITH part of the query as a named set. This is done by identifying a part of the calculation that represents a set and giving a name to that expression inside the calculation, using the AS keyword. This way the set can be used in other parts of the calculation, or even in other calculations of the query or MDX script. Set aliases enable truly dynamic evaluation of sets in a query because they can be evaluated for each cell if used inside a calculated measure. The positive effect is that they are cached: calculated only once and used many times in the calculation or query. The downside is that they prevent block-computation mode, because the above-mentioned evaluation is performed for each cell individually. In short, set aliases can be used in long calculations where the same set appears multiple times, or when that set needs to be truly dynamic. At the same time, they are to be avoided in iterations of any kind.

Axis

An axis is the part of the query that a set is projected onto. A query can have up to 128 axes, although most queries have 1 or 2. A query with no axes is also a valid query, but almost never used. The important thing to remember is that axes are evaluated independently. The SSAS engine knows in which order to calculate them if there is a dependency between them. One way to create such a dependency is to refer to the current member of a hierarchy on the other axis. The other option would be to use the Axis() function. Some SSAS front-ends generate MDX queries that break the axis dependencies established through calculations; the workaround calculations can be very hard, if not impossible.

Slicer

The slicer, also known as the filter axis or the WHERE clause, is the part of the query that sets the context for the evaluation of members and sets on axes and in the WITH part of the query. The slicer, which can be anything from a single tuple up to a multidimensional set, interacts with sets on axes. A single member of a hierarchy in the slicer forces the coordinate and reduces the related sets on axes by removing all non-existing combinations. Multiple members of the same hierarchy are not that strong: in their case, individual members in sets on axes overwrite the context of the slicer during their evaluation. Finally, the context established by the slicer can be overwritten in calculations using tuples.
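A hedged sketch of the two slicer cases just described, again assuming the Adventure Works cube (run the two queries one at a time):

    -- a single member in the slicer forces the CY 2006 coordinate
    SELECT
        { [Measures].[Reseller Sales Amount] } ON 0,
        { [Sales Territory].[Sales Territory].[Country].MEMBERS } ON 1
    FROM [Adventure Works]
    WHERE ( [Date].[Calendar Year].[Calendar Year].&[2006] )

    -- multiple members of the same hierarchy form a set in the slicer;
    -- values on axes are aggregated over that set, and individual year members
    -- used on axes or in calculations can overwrite this weaker context
    SELECT
        { [Measures].[Reseller Sales Amount] } ON 0,
        { [Sales Territory].[Sales Territory].[Country].MEMBERS } ON 1
    FROM [Adventure Works]
    WHERE ( { [Date].[Calendar Year].[Calendar Year].&[2005],
              [Date].[Calendar Year].[Calendar Year].&[2006] } )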
Subquery

The subquery, also known as the subselect, is a part of the query that executes first and determines the cube space to be used in the query. Unlike the slicer, the subquery doesn't set the coordinate for the query. In other words, the current members of all hierarchies (related, unrelated, and even the same hierarchy used in the subquery) remain the same. What the subquery does is apply the VisualTotals operation to members of hierarchies used in the subquery. The VisualTotals operation replaces each member's value with the aggregated value of its children, but only those present in the subquery.

Because the slicer and the subquery behave differently, one should not be used as a replacement for the other. Whenever you need to set the context for the whole query, use the slicer; that will adjust the total for all hierarchies in the cube. If you only need to adjust the total for some hierarchies in the cube and not for others, the subquery is the way to go: specify those hierarchies in the subquery. This is also an option if you need to prevent any attribute interaction between your subquery and the query. The areas where the subquery is particularly good are grouping of non-granular attributes, advanced set logic, and restricting members on hierarchies.

Cell properties

Cell properties are properties that can be used to get specific behaviors for cells, for example colors, font sizes, types and styles, and so on. Unless explicitly asked for, only the Cell_Ordinal, Value, and Formatted_Value properties are returned by an MDX query.

Dimension properties

Dimension properties are a set of member properties that return extra information about members on axes. The intrinsic member properties are Key, Name, and Value; the others are those defined by the user for a particular hierarchy in the Dimension Designer. In client tools, dimension properties are often shown in the grid next to the attribute they are bound to, or in a hint over that attribute.
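As a minimal sketch of requesting both kinds of properties explicitly (again assuming the Adventure Works cube; only intrinsic properties are used, since user-defined member properties depend on the dimension design):

    SELECT
        { [Measures].[Internet Sales Amount] } ON 0,
        { [Customer].[Customer].[Customer].MEMBERS }
          -- intrinsic member properties returned with each row member
          DIMENSION PROPERTIES MEMBER_KEY, MEMBER_VALUE ON 1
    FROM [Adventure Works]
    -- cell properties returned for each cell; VALUE and FORMATTED_VALUE are
    -- among the defaults, while FORMAT_STRING has to be asked for explicitly
    CELL PROPERTIES VALUE, FORMATTED_VALUE, FORMAT_STRING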

Oracle Integration and Consolidation Products

Packt
09 Aug 2011
11 min read
Oracle Information Integration, Migration, and Consolidation: the definitive guide to Oracle information integration and migration in a heterogeneous world.

Data services

Data services are at the leading edge of data integration. Traditional data integration involves moving data to a central repository or accessing data virtually through SQL-based interfaces. Data services are a means of making data a 'first class' citizen in your SOA. Recently, the idea of SOA-enabled data services has taken off in the IT industry. This is not any different from accessing data using SQL, JDBC, or ODBC; what is new is that your service-based architecture can now view any database access service as a web service. Service Component Architecture (SCA) plays a big role in data services, as data services created and deployed using Oracle BPEL, Oracle ESB, and other Oracle SOA products can now be part of an end-to-end data services platform. No longer do data services deployed in one of the SOA products have to be deployed in another Oracle SOA product: SCA makes it possible to call a BPEL component from Oracle Service Bus and vice versa.

Oracle Data Integration Suite

Oracle Data Integration (ODI) Suite includes the Oracle Service Bus for publish and subscribe messaging capabilities. Process orchestration capabilities are provided by Oracle BPEL Process Manager, and can be configured to support rule-based, event-based, and data-based delivery services. The Oracle Data Quality for Data Integrator and Oracle Data Profiling products, together with Oracle Hyperion Data Relationship Manager, provide best-in-class capabilities for data governance, change management, and hierarchical data management, and provide the foundation for reference data management of any kind.

ODI Suite allows you to create data services that can be used in your SCA environment. These data services can be created in ODI, Oracle BPEL, or the Oracle Service Bus. You can surround your SCA data services with Oracle Data Quality and Hyperion Data Relationship to cleanse your data and provide master data management. ODI Suite effectively serves two purposes:

- Bundle Oracle data integration solutions, as most customers will need ODI, Oracle BPEL, Oracle Service Bus, and data quality and profiling in order to build a complete data services solution
- Compete with similar offerings from IBM (InfoSphere Information Server) and Microsoft (BizTalk 2010) that offer complete EII solutions in one offering

The ODI Suite data service sources and targets, along with the development languages and tools supported, are:

- Data source: ERPs, CRMs, B2B systems, flat files, XML data, LDAP, JDBC, ODBC
- Data target: Any data source
- Development languages and tools: SQL, Java, GUI

The most likely instances or use cases where ODI Suite would be the Oracle product or tool selected are:

- SCA-based data services
- An end-to-end EII and data migration solution

Data services can be used to expose any data source as a service. Once a data service is created, it is accessible and consumable by any web service-enabled product. In the case of Oracle, this is the entire set of products in the Oracle Fusion Middleware Suite.

Data consolidation

The mainframe was the ultimate solution when it came to data consolidation. All data in an enterprise resided in one or several mainframes that were physically located in a data center. The rise of the hardware and software appliance has created a 'what is old is new again' situation: a hardware and software solution that is sold as one product.
Oracle has released the Oracle Exadata appliance, IBM acquired the pure-play data warehouse appliance company Netezza, HP and Microsoft announced work on a SQL Server database appliance, and even companies like SAP, EMC, and Cisco are talking about the benefits of database appliances. The difference is (and it is a big difference) that the present architecture is based upon open standards hardware platforms, operating systems, client devices, network protocols, interfaces, and databases. So, you now have a database appliance that is not based upon proprietary operating systems, hardware, network components, software, and data disks. Another very important difference is that enterprise software COTS packages, management tools, and other software infrastructure tools will work across any of these appliance solutions.

One of the challenges for customers that run their business on the mainframe is that they are 'locked into' vendor-specific sorting, reporting, job scheduling, system management, and other products usually only offered by IBM, CA, BMC, or Compuware. Mainframe customers also suffer from a lack of choice when it comes to COTS applications. Since appliances are based upon open systems, there is an incredibly large software ecosystem.

Oracle Exadata

Oracle Exadata is the only database appliance that runs both data warehouse and OLTP applications. Oracle Exadata is an appliance that includes every component an IT organization needs to process information—from a grid database down to the power supply. It is a hardware and software solution that can be up and running in an enterprise in weeks instead of the months typical for IT database solutions. Exadata provides high-speed data access using a combination of hardware and a database engine that runs at the storage tier. Typical database solutions have to use indexes to retrieve data from storage and then pull large volumes of data into the core database engine, which churns through millions of rows of data to send a handful of row results to the client. Exadata eliminates the need for indexes and data engine processing by placing a lightweight database engine at the storage tier. Therefore, the database engine is only provided with the end result and does not have to use complicated indexing schemes, or large amounts of CPU and memory, to produce the end result set.

Exadata's capability to run large OLTP and data warehouse applications, or a large number of smaller OLTP and data warehouse applications, on one machine makes it a great platform for data consolidation. The first release of Oracle Exadata was based upon HP hardware and was for data warehouses only. The second release came out shortly before Oracle acquired Sun. This release was based upon Sun hardware, but ironically not on Sun Sparc or Solaris (Solaris is now an OS option).

The Exadata sources and targets, along with the development languages and tools supported, are:

- Data source: Any (depending upon the data source, this may involve an intensive migration effort)
- Data target: Oracle Exadata
- Development languages and tools: SQL, PL/SQL, Java

The most likely instances or use cases where Exadata would be the Oracle product or tool selected are:

- A move from hundreds of standalone database hardware and software nodes to one database machine
- A reduction in hardware and software vendors, and one vendor for hardware and software support

Keepin It Real

The database appliance has become the latest trend in the IT industry.
Data warehouse appliances like Netezza have been around for a number of years; Oracle has been the first vendor to offer an open systems database appliance for both DW and OLTP environments.

Data grid

Instead of consolidating databases physically or accessing the data where it resides, a data grid places the data into an in-memory middle tier. Like physical federation, the data is placed into a centralized data repository. Unlike physical federation, the data is not placed into a traditional RDBMS (an Oracle database), but into a high-speed, memory-based data grid. Oracle offers both a Java-based and a SQL-based data grid solution. The decision of which product to implement often depends on where the corporation's system, database, and application developer skills are strongest. If your organization has strong Java or .NET skills and is more comfortable with application servers than databases, then Oracle Coherence is typically the product of choice. If you have strong database administration and SQL skills, then Oracle TimesTen is probably a better solution.

The Oracle Exalogic solution takes the data grid to another level by placing Oracle Coherence, along with other Oracle hardware and software solutions, into an appliance. This appliance provides an 'end-to-end' solution, or data grid 'in a box'. It reduces management, increases performance, reduces TCO, and eliminates the need for the customer to build their own hardware and software solution using multiple vendor solutions that may not be certified to work together.

Oracle Coherence

Oracle Coherence is an in-memory data grid solution that offers next-generation Extreme Transaction Processing (XTP). Organizations can predictably scale mission-critical applications by using Oracle Coherence to provide fast and reliable access to frequently used data. Oracle Coherence enables customers to push data closer to the application for faster access and greater resource utilization. By automatically and dynamically partitioning data in memory across multiple servers, Oracle Coherence enables continuous data availability and transactional integrity, even in the event of a server failure.

Oracle Coherence was purchased from Tangosol Software in 2007. Coherence was an industry-leading middle tier caching solution. The product only offered a Java solution at the time of acquisition, but a .NET offering was already scheduled before the acquisition took place.

The Oracle Coherence sources and targets, along with the development languages and tools supported, are:

- Data source: JDBC, any data source accessible through Oracle SOA adapters
- Data target: Coherence
- Development languages and tools: Java, .NET

The most likely instances or use cases where Oracle Coherence would be the Oracle product or tool selected are:

- When it is necessary to replace custom, hand-coded solutions that cache data in middle tier Java or .NET application servers
- When your company's strengths are in Java or .NET application servers

Oracle TimesTen

Oracle TimesTen is a data grid/cache offering that has similar characteristics to Oracle Coherence. Both solutions offer a product that caches data in the middle tier for high throughput and high transaction volumes. The technology implementations, however, are much different. TimesTen is an in-memory database solution that is accessed through SQL, and the data storage mechanism is a relational database.
The TimesTen data grid can be implemented across a wide area network (WAN), and the nodes that make up the data grid are kept in sync with your back-end Oracle database using Oracle Cache Connect. Cache Connect is also used to automatically refresh the TimesTen database on a push or pull basis from your Oracle back-end database, and can keep TimesTen databases spread across the globe in sync. Oracle TimesTen offers both read and update support, unlike other in-memory database solutions. This means that Oracle TimesTen can be used to run your business even if your back-end database is down; the transactions that occur during the downtime are queued and applied to your back-end database once it is restored. The other similarity between Oracle Coherence and TimesTen is that they both were acquired technologies: Oracle TimesTen was acquired from the company TimesTen in 2005.

The Oracle TimesTen sources and targets, along with the development languages and tools supported, are:

- Data source: Oracle
- Data target: TimesTen
- Development languages and tools: SQL, CLI

The most likely instances or use cases where Oracle TimesTen would be the Oracle product or tool selected are:

- Web-based, read-only applications that require a millisecond response and data close to where the request is made
- Applications where updates need not be reflected back to the user in real time

Oracle Exalogic

A simplified explanation of Oracle Exalogic is that it is Exadata for the middle tier application infrastructure. While Exalogic is optimized for enterprise Java, it is also a suitable environment for the thousands of third-party and custom Linux and Solaris applications widely deployed on Java, .NET, Visual Basic, PHP, or any other programming language. The core software components of Exalogic are WebLogic, Coherence, JRockit or Java HotSpot, and Oracle Linux or Solaris. Oracle Exalogic has an optimized version of WebLogic to run Java applications more efficiently and faster than a typical WebLogic implementation. Oracle Exalogic is branded with the Oracle Elastic Cloud as an enterprise application consolidation platform. This means that applications can be added on demand and in real time. Data can be cached in Oracle Coherence for a high-speed, centralized data grid sharable on the cloud.

The Exalogic sources and targets, along with the development languages and tools supported, are:

- Data source: Any data source
- Data target: Coherence
- Development languages and tools: Any language

The most likely instances or use cases where Exalogic would be the Oracle product or tool selected are:

- Enterprise consolidated application server platform
- Cloud-hosted solution
- Upgrade and consolidation of hardware or software

Oracle Coherence is the product of choice for development shops versed in Java and .NET. Oracle TimesTen is more applicable to database-centric shops more comfortable with SQL.

Performing Common MDX-related Tasks

Packt
05 Aug 2011
6 min read
MDX with Microsoft SQL Server 2008 R2 Analysis Services Cookbook: more than 80 recipes for enriching your Business Intelligence solutions with high-performance MDX calculations and flexible MDX queries.

Skipping axis

There are situations when we want to display just a list of members and no data associated with them. Naturally, we expect to get that list on rows, so that we can scroll through them nicely. However, the rules of MDX say we can't skip axes. If we want something on rows (which is AXIS(1), by the way), we must use all previous axes as well (columns in this case, also known as AXIS(0)). The reason why we want the list to appear on axis 1 and not axis 0 is that a horizontal list is not as easy to read as a vertical one. Is there a way to display those members on rows and have nothing on columns? Sure! This recipe shows how.

Getting ready

Follow these steps to set up the environment for this recipe:

1. Start SQL Server Management Studio (SSMS) or any other application you use for writing and executing MDX queries, and connect to your SQL Server Analysis Services (SSAS) 2008 R2 instance (localhost or servername\instancename).
2. Click on the New Query button and check that the target database is Adventure Works DW 2008R2.

How to do it...

Follow these steps to get a one-dimensional query result with members on rows:

1. Put an empty set on columns (AXIS(0)). The notation for an empty set is: {}.
2. Put some hierarchy on rows (AXIS(1)). In this case we used the largest hierarchy available in this cube – the Customer hierarchy of the same dimension.
3. Run the following query:

    SELECT
        { } ON 0,
        { [Customer].[Customer].[Customer].MEMBERS } ON 1
    FROM [Adventure Works]

How it works...

Although we can't skip axes, we are allowed to provide an empty set on them. This trick allows us to get what we need – nothing on columns and a set of members on rows.

There's more...

Notice that this type of query is very convenient for parameter selection of another query as well as for search. See how it can be modified to include only those customers whose name contains the phrase "John":

    SELECT
        { } ON 0,
        { Filter( [Customer].[Customer].[Customer].MEMBERS,
                  InStr( [Customer].[Customer].CurrentMember.Name, 'John' ) > 0 ) } ON 1
    FROM [Adventure Works]

In the final result, you will notice the "John" phrase in various positions in member names.

The idea behind

If you put a cube measure or a calculated measure with a non-constant expression on axis 0 instead, you'll slow down the query. Sometimes it won't be so obvious, sometimes it will. It will depend on the measure's definition and the number of members in the hierarchy being displayed. For example, if you put the Sales Amount measure on columns, that measure will have to be evaluated for each member on rows. Do we need those values? No, we don't. The only thing we need is a list of members; hence we've used an empty set. That way, the SSAS engine doesn't have to go into cube space. It can stay in dimension space, which is much smaller, and the query is therefore more efficient.

Possible workarounds

In the case of a third-party application or a control which has problems with this kind of MDX statement (that is, it expects something on columns and does not work with an empty set), we can define a constant measure (a measure returning null, 0, 1, or any other constant) and place it on columns instead of that empty set, as sketched below.
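A minimal sketch of that workaround as a query-scoped calculated measure (the measure name [Measures].[Dummy One] is made up for illustration):

    WITH MEMBER [Measures].[Dummy One] AS 1
    SELECT
        { [Measures].[Dummy One] } ON 0,
        { [Customer].[Customer].[Customer].MEMBERS } ON 1
    FROM [Adventure Works]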
For example, we can define a calculated measure in the MDX script whose definition is 1, or any other constant value, and use that measure on the columns axis. It might not be as efficient as an empty set, but it is a much better solution than the one with a regular (non-constant) cube measure like the Sales Amount measure.

Handling division by zero errors

Another common task is handling errors, especially division by zero type of errors. This recipe offers a way to solve that problem.

Not all versions of the Adventure Works database have the same date range. If you're not using the recommended version, the one for SSAS 2008 R2, you might have problems with the queries. Older versions of the Adventure Works database have dates up to the year 2006 or even 2004. If that's the case, make sure you adjust the examples by offsetting the years in the query by a fixed number. For example, the year 2006 should become 2002, and so on.

Getting ready

Start a new query in SQL Server Management Studio and check that you're working on the Adventure Works database. Then write and execute this query:

    WITH
    MEMBER [Date].[Calendar Year].[CY 2006 vs 2005 Bad] AS
        [Date].[Calendar Year].[Calendar Year].&[2006] /
        [Date].[Calendar Year].[Calendar Year].&[2005],
        FORMAT_STRING = 'Percent'
    SELECT
        { [Date].[Calendar Year].[Calendar Year].&[2005],
          [Date].[Calendar Year].[Calendar Year].&[2006],
          [Date].[Calendar Year].[CY 2006 vs 2005 Bad] } *
        [Measures].[Reseller Sales Amount] ON 0,
        { [Sales Territory].[Sales Territory].[Country].MEMBERS } ON 1
    FROM [Adventure Works]

This query returns 6 rows with countries and 3 columns with years, the third column being the ratio of the previous two, as its definition says. The problem is that we get 1.#INF on some cells. To be precise, that value (the formatted value of infinity) appears on rows where CY 2005 is null. Here's a solution for that.

How to do it...

Follow these steps to handle division by zero errors:

1. Copy the calculated member and paste it as another calculated member. While doing so, replace the term Bad with Good in its name, just to differentiate those two members.
2. Copy the denominator.
3. Wrap the expression in an outer IIF() statement.
4. Paste the denominator in the condition part of the IIF() statement and compare it against 0.
5. Provide a null value for the True part. Your initial expression should be in the False part.
6. Don't forget to include the new member on columns and execute the query:

    MEMBER [Date].[Calendar Year].[CY 2006 vs 2005 Good] AS
        IIF( [Date].[Calendar Year].[Calendar Year].&[2005] = 0,
             null,
             [Date].[Calendar Year].[Calendar Year].&[2006] /
             [Date].[Calendar Year].[Calendar Year].&[2005] ),
        FORMAT_STRING = 'Percent'

The result shows that the new calculated measure corrects the problem – we don't get errors (the rightmost column, compared to the one on its left).

How it works...

A division by zero error occurs when the denominator is null or zero and the numerator is not null. In order to prevent this error, we must test the denominator before the division and handle the case when it is null or zero. That is done using an outer IIF() statement. It is enough to test just for zero because null = 0 returns True.

There's more...
SQLCAT's SQL Server 2008 Analysis Services Performance Guide has lots of interesting details regarding the IIF() function: http://tinyurl.com/PerfGuide2008

Additionally, you may find Jeffrey Wang's blog article useful in explaining the details of the IIF() function: http://tinyurl.com/IIFJeffrey

Earlier versions of SSAS

If you're using a version of SSAS prior to 2008 (that is, 2005), the performance will not be as good. See Mosha Pasumansky's article for more info: http://tinyurl.com/IIFMosha

How to Perform Iteration on Sets in MDX

Packt
05 Aug 2011
5 min read
MDX with Microsoft SQL Server 2008 R2 Analysis Services Cookbook: more than 80 recipes for enriching your Business Intelligence solutions with high-performance MDX calculations and flexible MDX queries.

Iteration is a very natural way of thinking for us humans. We set a starting point, we step into a loop, and we end when a condition is met. While we're looping, we can do whatever we want: check, take, leave, and modify items in that set. Being able to break a problem down into steps makes us feel that we have things under control. However, by breaking down the problem, the query performance often breaks down as well. Therefore, we have to be extra careful with iterations where data is concerned. If there's a way to manipulate a collection of members as one item, one set, without cutting that set into small pieces and iterating on individual members, we should use it. It's not always easy to find that way, but we should at least try.

Iterating on a set in order to reduce it

Getting ready

Start a new query in SSMS and check that you're working on the right database. Then write the following query:

    SELECT
        { [Measures].[Customer Count],
          [Measures].[Growth in Customer Base] } ON 0,
        NON EMPTY
        { [Date].[Fiscal].[Month].MEMBERS } ON 1
    FROM [Adventure Works]
    WHERE ( [Product].[Product Categories].[Subcategory].&[1] )

The query returns fiscal months on rows and two measures: a count of customers and their growth compared to the previous month. Mountain bikes are in the slicer. Now let's see how we can get the number of days the growth was positive for each period.

How to do it...

Follow these steps to reduce the initial set:

1. Create a new calculated measure in the query and name it Positive growth days.
2. Specify that you need the descendants of the current member on leaves.
3. Wrap them in the FILTER() function and specify the condition which says that the growth measure should be greater than zero.
4. Apply the COUNT() function to the complete expression to get the count of days.
5. The new calculated member's definition should look as follows; verify that it does:

    WITH MEMBER [Measures].[Positive growth days] AS
        FILTER(
            DESCENDANTS([Date].[Fiscal].CurrentMember, , leaves),
            [Measures].[Growth in Customer Base] > 0
        ).COUNT

6. Add the measure on columns.
7. Run the query and observe whether the results match the following image:

How it works...

The task says we need to count days for each time period and use only the positive ones. Therefore, it might seem appropriate to perform iteration, which, in this case, can be done using the FILTER() function. But there's a potential problem. We cannot expect to have days on rows, so we must use the DESCENDANTS() function to get all dates in the current context. Finally, in order to get the number of items that came up upon filtering, we use the COUNT function.

There's more...

The FILTER() function is an iterative function which doesn't run in block mode, hence it will slow down the query. In the introduction, we said that it's always wise to search for an alternative if one is available. Let's see if something can be done here.

A keen eye will notice a "count of filtered items" pattern in this expression. That pattern suggests the use of a set-based approach in the form of a SUM-IF combination. The trick is to provide 1 for the True part of the condition taken from the FILTER() statement and null for the False part. The sum of 1s will be equivalent to the count of filtered items.
In other words, once rewritten, that same calculated member would look like this:

    MEMBER [Measures].[Positive growth days] AS
        SUM(
            DESCENDANTS([Date].[Fiscal].CurrentMember, , leaves),
            IIF( [Measures].[Growth in Customer Base] > 0, 1, null )
        )

Execute the query using the new definition. Both the SUM() and the IIF() functions are optimized to run in block mode, especially when one of the branches in IIF() is null. In this particular example, the impact on performance was not noticeable because the set of rows was relatively small. Applying this technique on large sets will result in a drastic performance improvement as compared to the FILTER-COUNT approach. Be sure to remember that in the future. More information about this type of optimization can be found in Mosha Pasumansky's blog: http://tinyurl.com/SumIIF

Hints for query improvements

There are several ways you can avoid the FILTER() function in order to improve performance. When you need to filter by non-numeric values (that is, properties or other metadata), you should consider creating an attribute hierarchy for often-searched items and then do one of the following:

- Use a tuple when you need to get a value sliced by that new member
- Use the EXCEPT() function when you need to negate that member on its own hierarchy (NOT or <>), as sketched after this list
- Use the EXISTS() function when you need to limit other hierarchies of the same dimension by that member
- Use the NONEMPTY() function when you need to operate on other dimensions, that is, subcubes created with that new member
- Use the 3-argument EXISTS() function instead of the NONEMPTY() function if you also want to get combinations with nulls in the corresponding measure group (nulls are available only when the NullProcessing property for a measure is set to Preserve)

When you need to filter by values and then count the members in that set, you should consider aggregate functions like SUM() with an IIF() part in its expression, as described earlier.
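As a small illustration of the EXCEPT() hint from the list above, here is a hedged sketch against the Adventure Works cube; the Black color member is just an arbitrary example:

    -- set-based negation of a member on its own hierarchy, with no FILTER() iteration
    SELECT
        { [Measures].[Reseller Sales Amount] } ON 0,
        EXCEPT(
            [Product].[Color].[Color].MEMBERS,
            { [Product].[Color].&[Black] }
        ) ON 1
    FROM [Adventure Works]

An equivalent FILTER() form would test each member's name one by one; EXCEPT() expresses the same intent as a single set operation, which is the point of the hints above.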

Getting Started with Oracle Information Integration

Packt
02 Aug 2011
14 min read
Oracle Information Integration, Migration, and Consolidation: the definitive guide to Oracle information integration and migration in a heterogeneous world.

Why consider information integration?

"The useful life of pre-relational mainframe database management system engines is coming to an end because of a diminishing application and skills base, and increasing costs." —Gartner Group

During the last 30 years, many companies have deployed mission critical applications running various aspects of their business on legacy systems. Most of these environments have been built around a proprietary database management system running on the mainframe. According to Gartner Group, the installed base of mainframe, Sybase, and some open source databases has been shrinking. There is vendor-sponsored market research that shows mainframe database management systems are growing, which, according to Gartner, is due primarily to increased prices from the vendors, currency conversions, and mainframe CPU replacements. Over the last few years, many companies have been migrating mission critical applications off the mainframe onto open standard Relational Database Management Systems (RDBMS) such as Oracle for the following reasons:

- Reducing skills base: Students and new entrants to the job market are being trained on RDBMS like Oracle and not on the legacy database management systems. Legacy personnel are retiring, and those that are not are moving into expensive consulting positions to arbitrage the demand.
- Lack of flexibility to meet business requirements: The world of business is constantly changing, and new business requirements like compliance and outsourcing require application changes. Changing the behavior, structure, access, interface, or size of old databases is very hard and often not possible, limiting the ability of the IT department to meet the needs of the business. Most applications on the aging platforms are 10 to 30 years old and are long past their original usable lifetime.
- Lack of Independent Software Vendor (ISV) applications: With most ISVs focusing on the larger market, it is very difficult to find applications, infrastructure, and tools for legacy platforms. This requires every application to be custom coded on the closed environment by scarce in-house experts or by expensive outside consultants.
- Total Cost of Ownership (TCO): As the user base for proprietary systems decreases, hardware, spare parts, and vendor support costs have been increasing. Adding to this are the high costs of changing legacy applications, paid either as consulting fees to replace the diminishing numbers of mainframe-trained experts or as increased salaries for existing personnel. All of this leads to a very high TCO, which doesn't even take into account the opportunity cost to the business of having inflexible systems.

Business challenges in data integration and migration

Once the decision has been taken to migrate away from a legacy environment, the primary business challenge is business continuity. Since many of these applications are mission critical, running various aspects of the business, the migration strategy has to ensure continuity to the new application—and, in the event of failure, rollback to the mainframe application. This approach requires data in the existing application to be synchronized with data on the new application.
Making the challenge of data migration more complicated is the fact that legacy applications tend to be interdependent, yet the need from a risk mitigation standpoint is to move applications one at a time. A follow-on challenge is prioritizing the order in which applications are to be moved off the mainframe, and ensuring that the order both meets the business needs and minimizes the risk in the migration process.

Once a specific application is being migrated, the next challenge is to decide which business processes will be migrated to the new application. Many companies have business processes that exist simply because that's the way their systems work. When migrating an application off the mainframe, many business processes do not need to migrate. Even among the business processes that need to be migrated, some will be moved as-is and some will have to be changed. Many companies use the opportunity afforded by a migration to redo the business processes they have had to live with for many years.

Data is the foundation of the modernization process. You can move the application, business logic, and workflow, but without a clean migration of the data the business requirements will not be met. A clean data migration involves:

- Data that is organized in a usable format by all modern tools
- Data that is optimized for an Oracle database
- Data that is easy to maintain

Technical challenges of information integration

The technical challenges of any information integration all stem from the fact that the application accesses heterogeneous data (VSAM, IMS, IDMS, ADABAS, DB2, MSSQL, and so on) that can even be in a non-relational hierarchical format. Some of the technical problems include:

- The flexible file definition feature used in COBOL applications in the existing system will have data files with multi-record formats and multi-record types in the same dataset—neither of which exists in an RDBMS.
- Looping data structures and substructures, or relative offset record organizations such as a linked list, are difficult to map into a relational table.
- Data and referential integrity is managed by the Oracle database engine. However, legacy applications already have this integrity built in. One question is whether to use Oracle to handle this integrity and remove the logic from the application.
- Creating an Oracle schema to maximize performance, which includes mapping non-Oracle keys to Oracle primary and secondary keys, especially when legacy data is organized in order of key value, which can affect performance on an Oracle RDBMS.

There are also differences in how some engines process transactions, rollbacks, and record locking.

General approaches to information integration and migration

There are several technical approaches to consider when doing any kind of integration or migration activity. In this section, we will look at a methodology or approach for both data integration and data migration.

Data integration

Clearly, given this range of requirements, there are a variety of different integration strategies, including the following:

- Consolidated: A consolidated data integration solution moves all data into a single database and manages it in a central location. There are some considerations that need to be known regarding the differences between non-Oracle and Oracle mechanics. Transaction processing is an example: some engines use implicit commits, and some manage character sets differently than Oracle does, which has an impact on sort order.
- Federated: A federated data integration solution leaves data in the individual data source where it is normally maintained and updated, and simply consolidates it on the fly as needed. In this case, multiple data sources will appear to be integrated into a single virtual database, masking the number and different kinds of databases behind the consolidated view. These solutions can work bidirectionally.
- Shared: A shared data integration solution actually moves data and events from one or more source databases to a consolidated resource, or queue, created to serve one or more new applications. Data can be maintained and exchanged using technologies such as replication, message queuing, transportable tablespaces, and FTP.

Oracle has extensive support for consolidated data integration, and while there are many obvious benefits to the consolidated solution, it is not practical for any organization that must deal with legacy systems or integrate with data it does not own. Therefore, we will not discuss this type any further, but instead concentrate on federated and shared solutions.

Data migration

Over 80 percent of migration projects fail or overrun their original budgets/timelines, according to a study by the Standish Group. In most cases, this is because of a lack of understanding of some of the unique challenges of a migration project. The top five challenges of a migration project are:

- Little migration expertise to draw from: Migration is not an industry-recognized area of expertise with an established body of knowledge and practices, nor have most companies built up any internal competency to draw from.
- Insufficient understanding of data and source systems: The required data is spread across multiple source systems, not in the right format, of poor quality, only accessible through vaguely understood interfaces, and sometimes missing altogether.
- Continuously evolving target system: The target system is often under development at the time of data migration, and the requirements often change during the project.
- Complex target data validations: Many target systems have restrictions, constraints, and thresholds on the validity, integrity, and quality of the data to be loaded.
- Repeated synchronization after the initial migration: Migration is not a one-time effort. Old systems are usually kept alive after new systems launch, and synchronization is required between the old and new systems during this handoff period. Also, long after the migration is completed, companies often have to prove to various government, judicial, and regulatory bodies that the migration was complete and accurate.

Most migration projects fail because of an inappropriate migration methodology, where the migration problem is thought of as a four stage process:

1. Analyze the source data
2. Extract/transform the data into the target formats
3. Validate and cleanse the data
4. Load the data into the target

However, because of the migration challenges discussed previously, this four stage project methodology often fails miserably. The challenge begins during the initial analysis of the source data, when most of the assumptions about the data are proved wrong. Since there is never enough time planned for analysis, any mapping specification from the mainframe to Oracle is effectively an intelligent guess. Based on the initial mapping specification, the extractions and transformations developed run into changing target data requirements, requiring additional analysis and changes to the mapping specification.
Validating the data according to various integrity and quality constraints will typically pose a challenge. If the validation fails, the project goes back to further analysis and then further rounds of extractions and transformations. When the data is finally ready to be loaded into Oracle, unexpected data scenarios will often break the loading process and send the project back for more analysis, more extractions and transformations, and more validations. Approaching migration as a four stage process means continually going back to earlier stages due to the five challenges of data migration.

The biggest problem with this migration project methodology is that it does not support the iterative nature of migrations. Further complicating the issue is that the technology used for data migration often consists of general-purpose tools repurposed for each of the four project stages. These tools are usually non-integrated and only serve to make difficult processes more difficult on top of a poor methodology.

The ideal model for successfully managing a data migration project is not based on multiple independent tools. A cohesive method enables you to cycle or spiral your way through the migration process—analyzing the data, extracting and transforming the data, validating the data, and loading it into targets, and repeating the same process until the migration is successfully completed. This approach enables target-driven analysis, validating assumptions, refining designs, and applying best practices as the project progresses. This agile methodology uses the same four stages of analyze, extract/transform, validate, and load; however, the four stages are not only iterated but also interconnected with one another.

An iterative approach is best achieved through a unified toolset, or platform, that leverages automation and provides functionality spanning all four stages. In an iterative process, there is a big difference between using a different tool for each stage and using one unified toolset across all four stages. With one unified toolset, the results of one stage can be easily carried into the next, enabling faster, more frequent, and ultimately fewer iterations, which is the key to success in a migration project. A single platform not only unifies the development team across the project phases, but also unifies the separate teams that may be handling each different source system in a multi-source migration project.

Architectures: federated versus shared

Federated data integration can be very complicated. This is especially the case for distributed environments where several heterogeneous remote databases are to be synchronized using two-phase commit. Solutions that provide federated data integration access and maintain the data in the place where it normally resides, such as in a mainframe data store associated with legacy applications. Data access is done 'transparently'; for example, the user (or application) interacts with a single virtual or federated relational database under the control of the primary RDBMS, such as Oracle. This data integration software works with the primary RDBMS 'under the covers' to transform and translate schemas, data dictionaries, and dialects of SQL; ensure transactional consistency across remote foreign databases (using two-phase commit); and make the collection of disparate, heterogeneous, distributed data sources appear as one unified database.
The integration software carrying out these complex tasks needs to be tightly integrated with the primary RDBMS in order to benefit from built-in functions and effective query optimization. The RDBMS must also provide all the other important RDBMS functions, including effective query optimization. Data sharing integration Data sharing-based integration involves the sharing of data, transactions, and events among various applications in an organization. It can be accomplished within seconds or overnight, depending on the requirement. It may be done in incremental steps, over time, as individual one-off implementations are required. If one-off tools are used to implement data sharing, eventually the variety of data-sharing approaches employed begin to conflict, and the IT department becomes overwhelmed with an unmanageable maintenance, which increases the total cost of ownership. What is needed is a comprehensive, unified approach that relies on a standard set of services to capture, stage, and consume the information being shared. Such an environment needs to include a rules-based engine, support for popular development languages, and comply with open standards. GUI-based tools should be available for ease of development and the inherent capabilities should be modular enough to satisfy a wide variety of possible implementation scenarios. The data-sharing form of data integration can be applied to achieve near real-time data sharing. While it does not guarantee the level of synchronization inherent with a federated data integration approach (for example, if updates are performed using two-phase commit), it also doesn't incur the corresponding performance overhead. Availability is improved because there are multiple copies of the data. Considerations when choosing an integration approach There is a range in the complexity of data integration projects from relatively straightforward (for example, integrating data from two merging companies that used the same Oracle applications) to extremely complex projects such as long-range geographical data replication and multiple database platforms. For each project, the following factors can be assessed to estimate the complexity level. Pretend you are a systems integrator such as EDS trying to size a data integration effort as you prepare a project proposal. Potential for conflicts: Is the data source updated by more than one application? If so, the potential exists for each application to simultaneously update the same data. Latency: What is the required synchronization level for the data integration process? Can it be an overnight batch operation like a typical data warehouse? Must it be synchronous, and with two-phase commit? Or, can it be quasi-real-time, where a two or three second lag is tolerable, permitting an asynchronous solution? Transaction volumes and data growth trajectory: What are the expected average and peak transaction rates and data processing throughput that will be required? Access patterns: How frequently is the data accessed and from where? Data source size: Some data sources of such volume that back up, and unavailability becomes extremely important. Application and data source variety: Are we trying to integrate two ostensibly similar databases following the merger of two companies that both use the same application, or did they each have different applications? Are there multiple data sources that are all relational databases? 
Or are we integrating data from legacy system files with relational databases and real-time external data feeds? Data quality: The probability that data quality adds to overall project complexity increases as the variety of data sources increases.

One point of this discussion is that the requirements of data integration projects will vary widely. Therefore, the platform used to address these issues must be a rich superset of the features and functions that will be applied to any one project.

Oracle Tools and Products

Packt
01 Aug 2011
8 min read
Readers in a DBA or database development role will most likely be familiar with SQL Loader, Oracle database external tables, Oracle GoldenGate, and Oracle Warehouse Builder. Application developers and architects will most likely be familiar with Oracle BPEL and the Oracle Service Bus.

Database migration products and tools

Data migration is the first step when moving your mission-critical data to an Oracle database. The initial data loading is traditionally done using Oracle SQL Loader. As data volumes have increased and data quality has become an issue, Oracle Warehouse Builder and Oracle Data Integrator have become more important because of their capabilities to connect directly to source data stores, provide data cleansing and profiling support, and offer graphical drag-and-drop development. Now that the base edition of Oracle Warehouse Builder is a free, built-in feature of the Oracle 11g database, price is no longer an issue. Oracle Warehouse Builder and Oracle Data Integrator have gained adoption as they are repository based, have built-in transformation functions, are multi-user, and avoid a proliferation of scripts throughout the enterprise that do the same or simpler data movement activity. These platforms provide a more repeatable, scalable, reusable, and model-based enterprise data migration architecture.

SQL Loader

SQL Loader is the primary method for quickly populating Oracle tables with data from external files. It has a powerful data parsing engine that puts little limitation on the format of the data in the data file. The tool is invoked when you specify the sqlldr command or use the Oracle Enterprise Manager interface. SQL Loader has been around as long as the Oracle Database logon "scott/tiger" and is an integral feature of the Oracle database. It works the same on any hardware or software platform that Oracle supports. Therefore, it has become the de facto data migration and information integration tool for most Oracle partners and customers. This also makes it an Oracle legacy data migration and integration solution, with all the issues associated with legacy tools: it is difficult to move away from because the solution is embedded in the enterprise; the current solution has a lot of duplicated code, because it was written by many different developers before the use of structured programming and shared modules; the current solution is not built to support object-oriented development, Service-Oriented Architecture products, or other new technologies such as web services and XML; and the current solution is difficult and costly to maintain because the code is not structured, the application is not well documented, the original developers are no longer with the company, and any changes to the code cause other pieces of the application to either stop working or fail.

SQL Loader is typically used in 'flat file' mode. This means the data is exported into a comma-delimited flat file from the source database or arrives in an ASCII flat file. With the growth of data volumes, using SQL Loader with named pipes has become common practice. Named pipes eliminate the need for temporary data storage mechanisms; instead, data is moved in memory. It is interesting that Oracle does not have an SQL unload facility in the way that Sybase and SQL Server have the Bulk Copy Program (BCP). There are C, Perl, PL/SQL, and other SQL-based scripts to do this, but nothing official from Oracle.
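As an illustration of the kind of home-grown, SQL-based unload script mentioned above, the following SQL*Plus sketch spools a table out to a comma-delimited flat file that SQL Loader (or an external table) could then load elsewhere. The table and column names are hypothetical.

-- unload_customers.sql: spool a table to a comma-delimited flat file.
SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET LINESIZE 1000
SET TRIMSPOOL ON

SPOOL customers.dat
SELECT customer_id || ',' || customer_name || ',' || city
FROM   customers
ORDER  BY customer_id;
SPOOL OFF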
The SQL Loader source and target data sources along with development languages and tools supported are as follows: Data source - Any data source that can produce flat files. XML files can also be loaded using the Oracle XMLtype data type Data target - Oracle Development languages and tools - Proprietary SQL Loader control files and SQL Loader Command Line Interface (CLI) The most likely instances or use cases when Oracle SQL Loader would be the Oracle product or tool selected are: Bulk loading data into Oracle from any data source from mainframe to distributed systems. Quick, easy, one-time data migration using a free tool. Oracle external tables The external tables feature is a complement to the existing SQL Loader functionality. It enables you to access data in external sources as if it were in a table in the database. Therefore, standard SQL or Oracle PL/SQL can be used to load the external file (defined as an external table) into an Oracle database table. Customer benchmarks and performance tests have determined that in some cases the external tables are faster than the SQL Loader direct path load. In addition, if you know SQL well, then it is easier to code the external table load SQL than SQL Loader control files and load scripts. The external table source and target data sources along with development languages and tools supported are: Data source - Any data source that can produce flat files Data target - Oracle Development languages and tools -SQL, PL/SQL, Command Line Interface (CLI) The most likely instances or use cases when Oracle external tables would be the Oracle product or tool selected are: Migration of data from non-Oracle databases to the Oracle database. Fast loading of data into Oracle using SQL. Oracle Warehouse Builder Oracle Warehouse Builder (OWB) allows users to extract data from both Oracle and non-Oracle data sources and transform/load into a Data Warehouse, Operational Data Store (ODS) or simply to be used to migrate data to an Oracle database. It is part of the Oracle Business Intelligence suite and is the embedded Oracle Extract- Load-Transform (ELT) tool in this BI suite. With the usage of platform/product specific adapters it can extract data from mainframe/legacy data sources as well. Starting with Oracle Database 11g, the core OWB product is a free feature of the database. In a way, this is an attempt to address the free Microsoft entry level ELT tools like Microsoft Data Transformation Services (DTS) and SQL Server Integration Services (SSIS) from becoming de facto ELT standards, because they are easy to use and are cheap (free). The Oracle Warehouse Builder source and target data sources along with development languages and tools supported are: Data source - Can be used with the Oracle Gateways, so any data source that the Gateway supports Data target - Oracle, ODBC compliant data stores, and any data source accessible through Oracle Gateways, flat files, XML Development languages and tools -OWB GUI development tool, PL/SQL, SQL, CLI The most likely instances or use cases when OWB would be the Oracle product or tool selected are: Bulk loading data on a continuous, daily, monthly or yearly basis. Direct connection to ODBC compliant databases for data migration, consolidation and physical federation, including data warehouses and operational data stores. Low cost (free) data migration that offers a graphical interface, scheduled data movement, data quality, and cleansing. 
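Returning to the external tables feature described above, here is a minimal sketch of loading a comma-delimited flat file with plain SQL instead of a SQL Loader control file. The directory path, file name, and column definitions are illustrative only, and the final INSERT assumes a target table named customers with matching columns already exists.

-- Directory object pointing at the folder that holds the flat file.
CREATE DIRECTORY data_dir AS '/u01/app/oracle/load';

-- External table definition over the comma-delimited flat file.
CREATE TABLE customers_ext (
  customer_id   NUMBER,
  customer_name VARCHAR2(100),
  city          VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('customers.dat')
);

-- Standard SQL (or PL/SQL) can now move the data into a regular table.
INSERT INTO customers
  SELECT customer_id, customer_name, city FROM customers_ext;

Because the load is just an INSERT ... SELECT, the usual SQL functions, joins, and WHERE clauses are available during the load, which is the ease-of-use advantage the text refers to.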
SQL Developer Migration Workbench

Oracle SQL Developer Migration Workbench is a tool that enables you to migrate a database, including the schema objects, data, triggers, and stored procedures, to an Oracle Database 11g using a simple point-and-click process. It also generates the scripts necessary to perform the migration in batch mode. Its tight integration into SQL Developer (an Oracle database development tool) provides the user with a single-stop tool to explore third-party databases, carry out migrations, and manipulate the generated schema objects and migrated data. Oracle SQL Developer is provided free of charge and is the first tool used by Oracle employees to migrate Sybase, DB2, MySQL, and SQL Server databases to Oracle.

SQL Developer Migration Workbench 3.0 was released in 2011 and includes support for C application code migration from Sybase and SQL Server DB-Library and CT-Library, a Command Line Interface (CLI), a host of reports that can be used for fixing items that did not migrate, for estimating and scoping, and for database analysis, and a pluggable framework to support identification and changes to SQL in Java, PowerBuilder, Visual Basic, Perl, or any other programming language. SQL Developer Migration Workbench actually started off as a set of Unix scripts and a crude database procedural language parser based on SED and AWK. This solution was first made an official Oracle product in 1996. Since then, the parser has been totally rewritten in Java and the user interface integrated with SQL Developer.

The SQL Developer Migration Workbench source and target data sources, along with the development languages and tools supported, are:

Data source - DB2 LUW, MySQL, Informix, SQL Server, Sybase
Data target - Oracle
Development languages and tools - SQL Developer GUI development tool, Command Line Interface (CLI)

The most likely instances or use cases when SQL Developer Migration Workbench would be the Oracle product or tool selected are: Data migration from popular LUW RDBMS systems to Oracle using flat files or JDBC connectivity. RDBMS object (stored procedures, triggers, views) translation from popular LUW RDBMS to Oracle.

Integrating Kettle and the Pentaho Suite

Packt
14 Jul 2011
13 min read
  Pentaho Data Integration 4 Cookbook Over 70 recipes to solve ETL problems using Pentaho Kettle       Introduction Kettle, also known as PDI, is mostly used as a stand-alone application. However, it is not an isolated tool, but part of the Pentaho Business Intelligence Suite. As such, it can also interact with other components of the suite; for example, as the datasource for a report, or as part of a bigger process. This chapter shows you how to run Kettle jobs and transformations in that context. The article assumes a basic knowledge of the Pentaho BI platform and the tools that made up the Pentaho Suite. If you are not familiar with these tools, it is recommended that you visit the wiki page (wiki.pentaho.com) or the Pentaho BI Suite Community Edition (CE) site: http://community.pentaho.com/. As another option, you can get the Pentaho Solutions book (Wiley) by Roland Bouman and Jos van Dongen that gives you a good introduction to the whole suite. A sample transformation The different recipes in this article show you how to run Kettle transformations and jobs integrated with several components of the Pentaho BI suite. In order to focus on the integration itself rather than on Kettle development, we have created a sample transformation named weather.ktr that will be used through the different recipes. The transformation receives the name of a city as the first parameter from the command line, for example Madrid, Spain. Then, it consumes a web service to get the current weather conditions and the forecast for the next five days for that city. The transformation has a couple of named parameters: The following diagram shows what the transformation looks like: It receives the command-line argument and the named parameters, calls the service, and retrieves the information in the desired scales for temperature and wind speed. You can download the transformation from the book's site and test it. Do a preview on the next_days, current_conditions, and current_conditions_normalized steps to see what the results look like. The following is a sample preview of the next_days step: The following is a sample preview of the current_conditions step: Finally, the following screenshot shows you a sample preview of the current_conditions_normalized step: There is also another transformation named weather_np.ktr. This transformation does exactly the same, but it reads the city as a named parameter instead of reading it from the command line. The Getting ready sections of each recipe will tell you which of these transformations will be used. Avoiding consuming the web service It may happen that you do not want to consume the web service (for example, for delay reasons), or you cannot do it (for example, if you do not have Internet access). Besides, if you call a free web service like this too often, then your IP might be banned from the service. Don't worry. Along with the sample transformations on the book's site, you will find another version of the transformations that instead of using the web service, reads sample fictional data from a file containing the forecast for over 250 cities. The transformations are weather (file version).ktr and weather_np (file version).ktr. Feel free to use these transformations instead. You should not have any trouble as the parameters and the metadata of the data retrieved are exactly the same as in the transformations explained earlier. 
If you use transformations that do not call the web service, remember that they rely on the file with the fictional data (weatheroffline.txt). Wherever you copy the transformations, do not forget to copy that file as well.

Creating a Pentaho report with data coming from PDI

The Pentaho Reporting Engine allows designing, creating, and distributing reports in various popular formats (HTML, PDF, and so on) from different kinds of sources (JDBC, OLAP, XML, and so on). There are occasions where you need other kinds of sources, such as text files or Excel files, or situations where you must process the information before using it in a report. In those cases, you can use the output of a Kettle transformation as the source of your report. This recipe shows you this capability of the Pentaho Reporting Engine. For this recipe, you will develop a very simple report: The report will ask for a city and a temperature scale and will report the current conditions in that city. The temperature will be expressed in the selected scale.

Getting ready

A basic understanding of the Pentaho Report Designer tool is required in order to follow this recipe. You should be able to create a report, add parameters, build a simple report, and preview the final result. Regarding the software, you will need the Pentaho Report Designer. You can download the latest version from the following URL: http://sourceforge.net/projects/pentaho/files/Report%20Designer/ You will also need the sample transformation weather.ktr. The sample transformation has a couple of UDJE steps. These steps rely on the Janino library. In order to be able to run the transformation from Report Designer, you will have to copy the janino.jar file from the Kettle libext directory into the Report Designer lib directory.

How to do it...

In the first part of the recipe, you will create the report and define the parameters for the report: the city and the temperature scale. Launch Pentaho Report Designer and create a new blank report. Add two mandatory parameters: a parameter named city_param, with Lisbon, Portugal as Default Value, and a parameter named scale_param which accepts two possible values: C meaning Celsius or F meaning Fahrenheit. Now, you will define the data source for the report: In the Data menu, select Add Data Source and then Pentaho Data Integration. Click on the Add a new query button. A new query named Query 1 will be added. Give the query a proper name, for example, forecast. Click on the Browse button. Browse to the sample transformation and select it. The Steps listbox will be populated with the names of the steps in the transformation. Select the step current_conditions. So far, you have the following: The specification of the transformation file name with the complete path will work only inside Report Designer. Before publishing the report, you should edit the file name (C:\Pentaho\reporting\weather.ktr in the preceding example) and leave just a path relative to the directory where the report is to be published (for example, reports\weather.ktr). Click on Preview; you will see an empty resultset. The important thing here is that the headers should be the same as the output fields of the current_conditions step: city, observation_time, weatherDesc, and so on. Now, close that window and click on Edit Parameters. You will see two grids: Transformation Parameter and Transformation Arguments. Fill in the grids as shown in the following screenshot.
You can type the values or select them from the available drop-down lists: Close the Pentaho Data Integration Data Source window. You should have the following: The data coming from Kettle is ready to be used in your report. Build the report layout: Drag and drop some fields into the canvas and arrange them as you please. Provide a title as well. The following screenshot is a sample report you can design: Now, you can do a Print Preview. The sample report above will look like the one shown in the following screenshot: Note that the output of the current_condition step has just one row. If for data source you choose the next_days or the current_condition_normalized step instead, then the result will have several rows. In that case, you could design a report by columns: one column for each field. How it works... Using the output of a Kettle transformation as the data source of a report is very useful because you can take advantage of all the functionality of the PDI tool. For instance, in this case you built a report based on the result of consuming a web service. You could not have done this with Pentaho Report Designer alone. In order to use the output of your Kettle transformation, you just added a Pentaho Data Integration datasource. You selected the transformation to run and the step that would deliver your data. In order to be executed, your transformation needs a command-line parameter: the name of the city. The transformation also defines two named parameters: the temperature scale and the wind scale. From the Pentaho Report Designer you provided both—a value for the city and a value for the temperature scale. You did it by filling in the Edit Parameter setting window inside the Pentaho Data Integration Data Source window. Note that you did not supply a value for the SPEED named parameter, but that is not necessary because Kettle uses the default value. As you can see in the recipe, the data source created by the report engine has the same structure as the data coming from the selected step: the same fields with the same names, same data types, and in the same order. Once you configured this data source, you were able to design your report as you would have done with any other kind of data source. Finally, when you are done and want to publish your report on the server, do not forget to fix the path as explained in the recipe—the File should be specified with a path relative to the solution folder. For example, suppose that your report will be published in my_solution/reports, and you put the transformation file in my_solution/reports/resources. In that case, for File, you should type resources/ plus the name of the transformation. There's more... Pentaho Reporting is a suite of Java projects built for report generation. The suite is made up of the Pentaho Reporting Engine and a set of tools such as the Report Designer (the tool used in this recipe), Report Design Wizard, and Pentaho's web-based Ad Hoc Reporting user interface. In order to be able to run transformations, the Pentaho Reporting software includes the Kettle libraries. To avoid any inconvenience, be sure that the versions of the libraries included are the same or newer than the version of Kettle you are using. For instance, Pentaho Reporting 3.8 includes Kettle 4.1.2 libraries. If you are using a different version of Pentaho Reporting, then you can verify the Kettle version by looking in the lib folder inside the reporting installation folder. 
You should look for files named kettle-core-<version>.jar, kettle-db-<version>.jar, and kettle-engine-<version>.jar. Besides, if the transformations you want to use as data sources rely on external libraries, then you have to copy the proper jar files from the Kettle libext directory into the Report Designer lib folder, just as you did with the janino.jar file in the recipe. For more information about Pentaho Reporting, just visit the following wiki website: http://wiki.pentaho.com/display/Reporting/Pentaho+Reporting+Community+Documentation Alternatively, you can get the book Pentaho Reporting 3.5 for Java Developers (Packt Publishing) by Will Gorman.

Configuring the Pentaho BI Server for running PDI jobs and transformations

The Pentaho BI Server is a collection of software components that provide the architecture and infrastructure required to build business intelligence solutions. With the Pentaho BI Server, you are able to run reports, visualize dashboards, schedule tasks, and more. Among these tasks, there is the ability to run Kettle jobs and transformations. This recipe shows you the minor changes you might have to make in order to be able to run Kettle jobs and transformations.

Getting ready

In order to follow this recipe, you will need some experience with the Pentaho BI Server. For configuring the Pentaho BI Server, you obviously need the software. You can download the latest version of the Pentaho BI Server from the following URL: http://sourceforge.net/projects/pentaho/files/Business%20Intelligence%20Server/ Make sure you download the distribution that matches your platform. If you intend to run jobs and transformations from a Kettle repository, then make sure you have the name of the repository and proper credentials (user and password).

How to do it...

Carry out the following steps: If you intend to run a transformation or a job from a file, skip to the How it works section. Edit the settings.xml file located in the biserver-ce\pentaho-solutions\system\kettle folder inside the Pentaho BI Server installation folder. In the repository.type tag, replace the default value files with rdbms. Provide the name of your Kettle repository and the user and password, as shown in the following example:

<kettle-repository>
  <!-- The values within <properties> are passed directly to the Kettle Pentaho components. -->
  <!-- This is the location of the Kettle repositories.xml file, leave empty if the default is used: $HOME/.kettle/repositories.xml -->
  <repositories.xml.file></repositories.xml.file>
  <repository.type>rdbms</repository.type>
  <!-- The name of the repository to use -->
  <repository.name>pdirepo</repository.name>
  <!-- The name of the repository user -->
  <repository.userid>dev</repository.userid>
  <!-- The password -->
  <repository.password>1234</repository.password>
</kettle-repository>

Start the server. It will be ready to run jobs and transformations from your Kettle repository.

How it works...

The Pentaho BI Server already includes the Kettle libraries needed to run Kettle transformations and jobs, and it is ready to run both jobs and transformations from files. If you intend to use a repository, then you have to provide the repository settings. In order to do this, you just have to edit the settings.xml file, as you did in the recipe.

There's more...

To avoid any inconvenience, be sure that the versions of the libraries included are the same as, or newer than, the version of Kettle you are using.
For instance, Pentaho BI Server 3.7 includes Kettle 4.1 libraries. If you are using a different version of the server, then you can verify the Kettle version by looking in the biserver-ce\tomcat\webapps\pentaho\WEB-INF\lib folder inside the server installation folder. You should look for files named kettle-core-TRUNK-SNAPSHOT.jar, kettle-db-TRUNK-SNAPSHOT.jar, and kettle-engine-TRUNK-SNAPSHOT.jar. Unzip any of them and look for the META-INF\MANIFEST.MF file. There, you will find the Kettle version in a line like this: Implementation-Version: 4.1.0. There is an even easier way: in the Pentaho User Console (PUC), look for the option 2. Get Environment Information inside the Data Integration with Kettle folder of the BI Developer Examples solution; run it and you will get detailed information about the Kettle environment. For your information, the transformation that is run behind the scenes is GetPDIEnvironment.ktr, located in the biserver-ce\pentaho-solutions\bi-developers\etl folder.

Excel 2010 Financials: Identifying the Profitability of an Investment

Packt
12 Jul 2011
5 min read
Excel 2010 Financials Cookbook Powerful techniques for financial organization, analysis, and presentation in Microsoft Excel Read more about this book (For more resources on this subject, see here.)

Calculating the depreciation of assets

Assets within a business are extremely important for a number of reasons. Assets can become investments for growth or investments in another line of business. Assets can also take on many forms, such as computer equipment, vehicles, furniture, buildings, land, and so on. Assets are not only important within the business for which they are used, but they are also used as a method of reducing the tax burden on a business. As a financial manager, you are tasked with calculating the depreciation expense for a laptop computer with a useful life of five years. In this recipe, you will learn to calculate the depreciation of an asset over the life of the asset.

Getting ready

There are several different methods of depreciation. A business may use straight-line depreciation, declining depreciation, double-declining depreciation, or a variation of these methods. Excel has the functionality to calculate each of the methods with a slight variation to the function; however, in this recipe, we will use straight-line depreciation. Straight-line depreciation provides equal reduction of an asset over its life.

How to do it...

We will first need to set up the Excel worksheet to hold the depreciation values for the laptop computer: In cell A5 list Year and in cell B5 list 1. This will account for the depreciation for the year that the asset was purchased. Continue this list until all five years are listed: In cell B2, list the purchase price of the laptop computer; the purchase price is $2500: In cell B3, we will need to enter the salvage value of the asset. The salvage value is the estimated resale value of the asset when its useful life, as determined by generally accepted accounting principles, has elapsed. Enter $500 in cell B3: In cell C5 enter the formula =SLN($B$2,$B$3,5) and press Enter: Copy the formula from cell C5 and paste it through cell C9: Excel has now listed the straight-line depreciation expense for each of the five years. As you can see in this schedule, the depreciation expense remains consistent through each year of the asset's useful life.

How it works...

Straight-line depreciation takes the purchase price minus the salvage value, and divides the remainder evenly across the useful life. The function accepts inputs as follows: =SLN(purchase price, salvage value, useful life).

There's more...

Other depreciation methods are as follows:

Calculating the future versus current value of your money

When working within finance, accounting, or general business, it is important to know how much money you have. However, knowing how much money you have now is only a portion of the whole financial picture. You must also know how much your money will be worth in the future. Knowing future value allows you to know truly how much your money is worth, and with this knowledge, you can decide what you need to do with it. As a financial manager, you must provide feedback on whether to introduce a new product line. As with any new venture, there will be several related costs including start-up costs, operational costs, and more. Initially, you must spend $20,000 to account for most start-up costs and you will potentially, for the sake of the example, earn a profit of $5500 in each of five years.
You also know due to expenditures, you expect your cost of capital to be 10%. In this recipe, you will learn to use Excel functions to calculate the future value of the venture and whether this proves to be profitable. How to do it... We will first need to enter all known values and variables into the worksheet: In cell B2, enter the initial cost of the business venture: In cell B3, enter the discount rate, or the cost of capital of 10%: In cells B4 through B8, enter the five years' worth of expected net profit from the business venture: In cell B10, we will calculate the net present value: Enter the formula =NPV(B3,B4:B8) and press Enter We now see that accounting for future inflows, the net present value of the business venture is $20,849.33. Our last step is to account for the initial start-up costs and determine the overall profitability. In cell B11, enter the formula =B10/B2 and press Enter: As a financial manager, we now see that for every $1 invested in this venture, you will receive $1.04 in present value inflows. How it works... NPV or net present value is calculated in Excel using all of the inflow information that was entered across the estimated period. For the five years used in this recipe, the venture shows a profit of $5500 for each year. This number cannot be used directly, because there is a cost of making money. The cost in this instance pertains to taxes and other expenditures. In the NPV formula, we did not include the initial cost of start-up because this cost is exempt from the cost of capital; however, it must be used at the end of the formula to account for the outflow compared to the inflows. There's more... The $1.04 value calculated at the end of this recipe is also known as the profitability index. When this index is greater than one, the venture is said to be a positive investment.
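For reference, the arithmetic behind the SLN and NPV functions used in these two recipes can be written out as follows, using the recipes' own figures (a $2500 laptop with a $500 salvage value over five years, and a $20,000 outlay followed by five annual inflows of $5500 at a 10% cost of capital):

\text{SLN} = \frac{\text{cost} - \text{salvage}}{\text{life}} = \frac{2500 - 500}{5} = 400 \text{ per year}

\text{NPV} = \sum_{t=1}^{5} \frac{5500}{(1 + 0.10)^{t}} \approx 20{,}849.33

\text{Profitability index} = \frac{\text{NPV}}{\text{initial cost}} = \frac{20{,}849.33}{20{,}000} \approx 1.04

Note that, as in the recipe, Excel's NPV function discounts only the future inflows; the initial $20,000 outlay is handled separately when the profitability index is computed.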

Getting Started with Oracle GoldenGate

Packt
12 Jul 2011
8 min read
What is GoldenGate? Oracle GoldenGate is Oracle's strategic solution for real-time data integration. GoldenGate software enables mission-critical systems to have continuous availability and access to real-time data. It offers a fast and robust solution for replicating transactional data between operational and analytical systems. Oracle GoldenGate captures, filters, routes, verifies, transforms, and delivers transactional data in real-time, across Oracle and heterogeneous environments with very low impact and preserved transaction integrity. The transaction data management provides read consistency, maintaining referential integrity between source and target systems. As a competitor to Oracle GoldenGate, data replication products, and solutions exist from other software companies and vendors. These are mainly storage replication solutions that provide a fast point in time data restoration. The following is a list of the most common solutions available today: EMC SRDF and EMC RecoverPoint IBM PPRC and Global Mirror (known together as IBM Copy Services) Hitachi TrueCopy Hewlett-Packard Continuous Access (HP CA) Symantec Veritas Volume Replicator (VVR) DataCore SANsymphony and SANmelody FalconStor Replication and Mirroring Compellent Remote Instant Replay Data replication techniques have improved enormously over the past 10 years and have always been a requirement in nearly every IT project in every industry. Whether for Disaster Recovery (DR), High Availability (HA), Business Intelligence (BI), or even regulatory reasons, the requirements and expected performance have also increased, making the implementation of efficient and scalable data replication solutions a welcome challenge. Oracle GoldenGate evolution GoldenGate Software Inc was founded in 1995. Originating in San Francisco, the company was named after the famous Golden Gate Bridge by its founders, Eric Fish and Todd Davidson. The tried and tested product that emerged quickly became very popular within the financial industry. Originally designed for the fault tolerant Tandem computers, the resilient and fast data replication solution was in demand. The banks initially used GoldenGate software in their ATM networks for sending transactional data from high street machines to mainframe central computers. The data integrity and guaranteed zero data loss is obviously paramount and plays a key factor. The key architectural properties of the product are as follows: Data is sent in "real time" with sub-second speed. Supports heterogeneous environments across different database and hardware types. "Transaction aware" —maintaining its read-consistent and referential integrity between source and target systems. High performance with low impact; able to move large volumes of data very efficiently while maintaining very low lag times and latency. Flexible modular architecture. Reliable and extremely resilient to failure and data loss. No single point of failure or dependencies, and easy to recover. Oracle Corporation acquired GoldenGate Software in September 2009. Today there are more than 500 customers around the world using GoldenGate technology for over 4000 solutions, realizing over $100 million in revenue for Oracle. Oracle GoldenGate solutions Oracle GoldenGate provides five data replication solutions: High Availability Live Standby for an immediate fail-over solution that can later re-synchronize with your primary source. 
Active-Active solutions for continuous availability and transaction load distribution between two or more active systems. Zero-Downtime Upgrades and Migrations Eliminates downtime for upgrades and migrations. Live Reporting Feeding a reporting database so as not to burden the source production systems with BI users or tools. Operational Business Intelligence (BI) Real-time data feeds to operational data stores or data warehouses, directly or via Extract Transform and Load (ETL) tools. Transactional Data Integration Real-time data feeds to messaging systems for business activity monitoring, business process monitoring, and complex event processing. Uses event-driven architecture and service-oriented architecture (SOA). The following diagram shows the basic architecture for the various solutions available from GoldenGate software: We have discovered there are many solutions where GoldenGate can be applied. Now we can dive into how GoldenGate works, the individual processes, and the data flow that is adopted for all. Oracle GoldenGate technology overview Let's take a look at GoldenGate's fundamental building blocks; the Capture process, Trail files, Data pump, Server collector, and Apply processes. In fact, the order in which the processes are listed depicts the sequence of events for GoldenGate data replication across distributed systems. A Manager process runs on both the source and the target systems that "oversee" the processing and transmission of data. All the individual processes are modular and can be easily decoupled or combined to provide the best solution to meet the business requirements. It is normal practice to configure multiple Capture and Apply processes to balance the load and enhance performance. Filtering and transformation of the data can be done at either the source by the Capture or at the target by the Apply processes. This is achieved through parameter files. The capture process (Extract) Oracle GoldenGate's capture process, known as Extract, obtains the necessary data from the databases' transaction logs. For Oracle, these are the online redo logs that contain all the data changes made in the database. GoldenGate does not require access to the source database and only extracts the committed transactions from the online redo logs. It can, however, read archived redo logs to extract the data from long-running transactions. The Extract process will regularly checkpoint its read and write position, typically to a file. The checkpoint data insures GoldenGate can recover its processes without data loss in the case of failure. The Extract process can have one of the following statuses: STOPPED STARTING RUNNING ABENDED The ABENDED status stems back to the Tandem computer, where processes either stop (end normally) or abend (end abnormally). Abend is short for an abnormal end. Trail files To replicate transactional data efficiently from one database to another, Oracle GoldenGate converts the captured data into a Canonical Format which is written to trail files, both on the source and the target system. The provision of the source and target trail files in the GoldenGates architecture eliminates any single point of failure and ensures data integrity is maintained. A dedicated checkpoint process keeps track of the data being written to the trails on both the source and target for fault tolerance. It is possible to configure GoldenGate not to use trail files on the source system and write data directly from the database's redo logs to the target server data collector. 
In this case, the Extract process sends data in large blocks across a TCP/IP network to the target system. However, this configuration is not recommended due to the possibility of data loss occurring during unplanned system or network outages. Best practice states, the use of local trail files would provide a history of transactions and support the recovery of data for retransmission via a Data Pump. Data pump When using trail files on the source system, known as a local trail, GoldenGate requires an additional Extract process called Data pump that sends data in large blocks across a TCP/IP network to the target system. As previously stated, this is best practice and should be adopted for all Extract configurations. Server collector The server collector process runs on the target system and accepts data from the source (Extract/Data Pump). Its job is to reassemble the data and write it to a GoldenGate trail file, known as a remote trail. The Apply process (Replicat) The Apply process, known in GoldenGate as Replicat, is the final step in the data delivery. It reads the trail file and applies it to the target database in the form of DML (deletes, updates, and inserts) or DDL*. (database structural changes). This can be concurrent with the data capture or performed later. The Replicat process will regularly checkpoint its read and write position, typically to a file. The checkpoint data ensures that GoldenGate can recover its processes without data loss in the case of failure. The Replicat process can have one of the following statuses: STOPPED STARTING RUNNING ABENDED * DDL is only supported in unidirectional configurations and non-heterogeneous (Oracle to Oracle) environments. The Manager process The Manager process runs on both source and target systems. Its job is to control activities such as starting, monitoring, and restarting processes; allocating data storage; and reporting errors and events. The Manager process must exist in any GoldenGate implementation. However, there can be only one Manager process per Changed Data Capture configuration on the source and target. The Manager process can have either of the following statuses: STOPPED RUNNING GGSCI In addition to the processes previously described, Oracle GoldenGate 10.4 ships with its own command line interface known as GoldenGate Software Command Interface (GGSCI). This tool provides the administrator with a comprehensive set of commands to create, configure, and monitor all GoldenGate processes. Oracle GoldenGate 10.4 is command-line driven. However, there is a product called Oracle GoldenGate Director that provides a GUI for configuration and management of your GoldenGate environment. Process data flow The following diagram illustrates the GoldenGate processes and their dependencies. The arrows largely depict replicated data flow (committed transactions), apart from checkpoint data and configuration data. The Extract and Replicat processes periodically checkpoint to a file for persistence. The parameter file provides the configuration data. As described in the previous paragraphs, two options exist for sending data from source to target; these are shown as broken arrows: Having discovered all the processes required for GoldenGate to replicate data, let's now dive a little deeper into the architecture and configurations.
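As a complement to this architectural overview, the following is a minimal sketch of the source-side preparation an Oracle database typically needs so that the Extract process can capture complete change data from the redo logs. The user, schema, and table names are hypothetical, and the exact privileges required depend on the GoldenGate and database versions.

-- Make sure all changes are written to the redo logs and add the
-- minimal supplemental logging GoldenGate relies on.
ALTER DATABASE FORCE LOGGING;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Table-level supplemental logging for a table to be replicated
-- (the GGSCI command ADD TRANDATA has a similar effect).
ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- A dedicated GoldenGate administration user on the source database.
CREATE USER ggs_admin IDENTIFIED BY ggs_admin;
GRANT CONNECT, RESOURCE TO ggs_admin;
GRANT SELECT ANY DICTIONARY, SELECT ANY TABLE TO ggs_admin;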

Building Financial Functions into Excel 2010

Packt
07 Jul 2011
5 min read
Excel 2010 Financials Cookbook Powerful techniques for financial organization, analysis, and presentation in Microsoft Excel

Till now, in the previous articles, we have focused on manipulating data within and outside of Excel in order to prepare to make financial decisions. Now that the data has been prepared, re-arranged, or otherwise adjusted, we are able to leverage the functions within Excel to make actual decisions. Utilizing these functions and the individual scenarios, we will be able to effectively eliminate the uncertainty due to poor analysis. Since this article utilizes financial scenarios for demonstrating the use of the various functions, it is important to note that these scenarios take certain "unknowns" for granted, and make a number of assumptions in order to minimize the complexity of the calculation. Real-world scenarios will require a greater focus on calculating and accounting for all variables.

Determining standard deviation for assessing risk

In the recipes mentioned so far, we have shown the importance of monitoring and analyzing frequency to determine the likelihood that an event will occur. Standard deviation will now allow for an analysis of the frequency in a different manner, or more specifically, through variance. With standard deviation, we will be able to determine the basic top and bottom thresholds of the data, and plot general movement within that threshold to determine the variance within the data range. This variance will allow the calculation of risk within investments. As a financial manager, you must determine the risk associated with investing capital in order to gain a return. In this particular instance, you will invest in stock. In order to minimize the loss of investment capital, you must determine the risk associated with investing in two different stocks, Stock A and Stock B. In this recipe, we will utilize standard deviation to determine which stock, either A or B, presents a higher risk, and hence a greater risk of loss.

How to do it...

We will begin by entering the selling prices of Stock A and Stock B in columns A and B, respectively: Within this list of selling prices, at first glance we can see that Stock B has a higher selling price. The stock opening price and selling price over the course of 52 weeks almost always remain above those of Stock A. As an investor looking to gain a higher return, we may wish to choose Stock B based on this cursory review; however, a high selling price does not negate the need for consistency. In cell C2, enter the formula =STDEV(A2:A53) and press Enter: In cell C3, enter the formula =STDEV(B2:B53) and press Enter: We can see from the calculation of standard deviation that Stock B has a deviation of over $20, whereas Stock A's deviation is just over $9: Given this information, we can determine that Stock A presents a lower risk than Stock B. If we invest in Stock A, at any given time, utilizing past performance, our average risk of loss is $9, whereas in Stock B we have an average risk of $20.

How it works...

The STDEV function in Excel treats the given numbers as a sample of a larger population and measures how widely the values are spread around their average; this spread is your standard deviation. Excel works only from the numbers you supply; it does not account for any other changes or unknowns. Excel also includes the function STDEVP, which instead treats the data as the complete population. STDEV should therefore be used when your data is a subset of a larger set (for example, six months out of an entire year), while STDEVP is appropriate when the data set covers every observation of interest. If we translate these numbers into a line graph with standard deviation bars, as shown in the following screenshot for Stock A, you can see the selling prices of the stock and how they travel within the deviation range: If we translate these numbers into a line graph with standard deviation bars, as shown in the following screenshot for Stock B, you can see the selling prices of the stock and understand how they travel within the deviation range: The bars shown on the graphs represent the standard deviation as calculated by Excel. We can see visually that not only does Stock B represent a greater risk with the larger deviation, but also that many of the stock prices fall below our deviation, representing further risk to the investor. For a finance manager with funds to invest, Stock A represents the lower-risk investment.

There's more...

Standard deviation can be calculated for almost any data set. For this recipe, we calculated deviation over the course of one year; however, if we expand the data to include multiple years, we can further determine long-term risk. While Stock B represents high short-term risk, in a long-term analysis Stock B may prove to be a less risky investment. Combining standard deviation with a five-number summary analysis, we can gain further risk and performance information.
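For reference, the two dispersion measures behind these Excel functions can be written as follows, where x_1, ..., x_n are the 52 weekly prices, \bar{x} is their mean, and n = 52:

\text{STDEV (sample):}\quad s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}}

\text{STDEVP (population):}\quad \sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}}

The only difference is the divisor: n - 1 when the data is treated as a sample, and n when it is treated as the complete population, which is why the two functions give slightly different results on the same 52 prices.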

Designing User Security for Oracle PeopleSoft Applications

Packt
04 Jul 2011
12 min read
Understanding User security Before we get into discussing the PeopleSoft security, let's spend some time trying to set the context for user security. Whenever we think of a complex system like PeopleSoft Financial applications with potentially hundreds of users, the following are but a few questions that face us: Should a user working in billing group have access to transactions, such as vouchers and payments, in Accounts Payable? Should a user who is a part of North America business unit have access to the data belonging to the Europe business unit? Should a user whose job involves entering vouchers be able to approve and pay them as well? Should a data entry clerk be able to view departmental budgets for the organization? These questions barely scratch the surface of the complex security considerations of an organization. Of course, there is no right or wrong answer for such questions, as every organization has its own unique security policies. What is more important is the fact that we need a mechanism that can segregate the access to system features. We need to enforce appropriate controls to ensure users can access only the features they need. Implementation challenge Global Vehicles' Accounts Payable department has three different types of users – Mary, who is the AP manager; Amy, who is the AP Supervisor; and Anton, who is the AP Clerk. These three types of users need to have the following access to PeopleSoft features: User typeAccess to featureDescriptionAP ClerkVoucher entryA clerk should be able to access system pages to enter vouchers from various vendors.AP SupervisorVoucher entry Voucher approval Voucher postingA supervisor also needs to have access to enter vouchers. He/she should review and approve each voucher entered by the clerk. Also, the supervisor should be able to execute the Voucher Post process that creates accounting entries for vouchers.AP ManagerPay cycle Voucher approvalAP manager should be the only one who can execute the Pay Cycle (a process that prints checks to issue payments to vendors). Manager (in addition to the Supervisor) should also have the authority to approve vouchers. Note that this is an extremely simplified scenario that does not really include all the required features in Accounts Payable. Solution We will design a security matrix that uses distinct security roles. We'll configure permission lists, roles, and user profiles to limit user access to required system features. PeopleSoft security matrix is a three-level structure consisting of Permission lists (at the bottom), Roles (in the middle) and User profiles (at the top). The following illustration shows how it is structured: We need to create a User Profile for each user of the system. This user profile can have as many Roles as needed. For example, a user can have roles of Billing Supervisor and Accounts Receivable Payment Processor, if he/she approves customer invoices as well as processes customer payments. Thus, the number of roles that a user should be granted depends entirely on his/her job responsibilities. Each role can have multiple permission lists. A Permission list determines which features can be accessed by a Role. We can specify which pages can be accessed, the mode in which they can be accessed (read only/add/update) in a permission list. In a nutshell, we can think of a Role as a grouping of system features that a user needs to access, while a Permission list defines the nature of access to those system features. 
Expert tip Deciding how many and which roles should be created needs quite a bit of thinking about. How easy will it be to maintain them in future? Think of the scenarios where a function needs to be removed from a role and added to another. How easy would it be to do so? As a rule of thumb, you should map system features to roles in such a way that they are non-overlapping. Similarly, map system pages to permission lists so that they are mutually exclusive. This simplifies user security maintenance to a large extent. Note that although it is advisable, it may not always be possible. Organizational requirements are sometimes too complicated to achieve this. However, a PeopleSoft professional should try to build a modular security design to the extent possible. Now, let's try to design our security matrix for the hypothetical scenario presented previously and test our thumb rule of mutually exclusive roles and permission lists. What do you observe as far as required system features for a role are concerned? You are absolutely right if you thought that some of system features (such as Voucher entry) are common across roles. Which roles should we design for this situation? Going by the principle of mutually exclusive roles, we can map system features to the permission lists (and in turn roles) without overlapping them. We'll denote our roles by the prefix 'RL' and permission lists by the prefix 'PL'. Thus, the mapping may look something like this: RolePermission listSystem featureRL_Voucher_EntryPL_Voucher_EntryVoucher EntryRL_Voucher_ApprovalPL_Voucher_ApprovalVoucher ApprovalRL_Voucher_PostingPL_Voucher_PostingVoucher PostingRL_Pay_CyclePL_Pay_CyclePay Cycle So, now we have created the required roles and permission lists, and attached appropriate permission lists to each of the roles. In this example, due to the simple scenario, each role has only a single permission list assigned to it. Now as the final step, we'll assign appropriate roles to each user's User profile. UserRoleSystem feature accessedMaryRL_Voucher_ApprovalVoucher ApprovalRL_Pay_CyclePay CycleAmyRL_Voucher_EntryVoucher EntryRL_Voucher_ApprovalVoucher ApprovalRL_Voucher_PostingVoucher PostingAntonRL_Voucher_EntryVoucher Entry Now, as you can see, each user has access to the appropriate system feature through the roles and in turn, permission lists attached to their user profiles. Can you think of the advantage of our approach? Let's say that a few months down the line, it is decided that Mary (AP Manager) should be the only one approving vouchers, while Amy (AP Supervisor) should also have the ability to execute pay cycles and issue payments to vendors. How can we accommodate this change? It's quite simple really – we'll remove the role RL_Voucher_Approval from Amy's user profile and add the role RL_Pay_Cycle to her profile. So now the security matrix will look like this: UserRoleSystem feature accessedMaryRL_Voucher_ApprovalVoucher ApprovalRL_Pay_CyclePay CycleAmyRL_Voucher_EntryVoucher EntryRL_Pay_CyclePay CycleRL_Voucher_PostingVoucher PostingAntonRL_Voucher_EntryVoucher Entry Thus, security maintenance becomes less cumbersome when we design roles and permission lists with non-overlapping system features. Of course, this comes with a downside as well. This approach results in a large number of roles and permission lists, thereby increasing the initial configuration effort. The solution that we actually design for an organization needs to balance these two objectives. 
Expert tip

Having too many permission lists assigned to a User Profile can adversely affect system performance. PeopleSoft recommends 10-20 permission lists per user.

Configuring permission lists

Follow this navigation to configure permission lists:

PeopleTools | Security | Permissions & Roles | Permission Lists

The following screenshot shows the General tab of the Permission List set up page. We can specify the permission list description on this tab. We'll go through some of the important configuration activities for the Voucher Entry permission list discussed previously.

Can Start Application Server?: Selecting this field enables a user with this permission list to start PeopleSoft application servers (all batch processes are executed on this server). A typical system user does not need this option.

Allow Password to be Emailed?: Selecting this field enables users to receive forgotten passwords through e-mail. Leave the field unchecked to prevent unencrypted passwords from being sent in e-mails.

Time-out Minutes – Never Time-out and Specific Time-out: These fields determine the number of minutes of inactivity after which the system automatically logs out a user with this permission list.

The following screenshot shows the Pages tab of the permission list page. This is the most important place, where we specify the menus, components, and ultimately the pages that a user can access. In PeopleSoft parlance, a component is a collection of pages that are related to each other. In the previous screenshot, you are looking at a page. You can also see the other related pages used to configure permission lists – General, PeopleTools, Process, and so on. All these pages constitute a component. A menu is a collection of various components. It is typically related to a system feature, such as 'Enter Voucher Information', as you can see in the screenshot. Thus, to grant access to a page, we need to enter its component and menu details.

Expert tip

In order to find out the component and menu for a page, press Ctrl+J when it is open in the internet browser.

Menu Name: This is where we need to specify all the menus to which a user needs to have access. Note that a permission list can grant access to multiple menus, but our simple example includes only one system feature (Voucher entry) and, in turn, only one menu. Click the + button to add and select more menus.

Edit Components hyperlink: Once we select a menu, we need to specify which components under it should be accessible through the permission list. Click the Edit Components hyperlink to proceed. The system opens the Component Permissions page, as shown in the following screenshot:

On this page, the system shows all the components under the selected menu. The previous screenshot shows only a part of the component list under the 'Enter Voucher Information' menu. The voucher entry pages for which we need to grant access exist under a component named VCHR_EXPRESS. Click the Edit Pages hyperlink to grant access to specific pages under a given component. The system opens the Page Permissions page, as shown in the following screenshot:

The given screenshot shows a partial list of pages under the Voucher Entry component.

Panel Item Name: This field shows the page name to which access is to be granted.

Authorized?: In order to enable a user to access a page, select this checkbox. As you can see, we have authorized this permission list to access six pages in this component.

Display Only: Selecting this checkbox gives the user read-only access to the given page.
He/she cannot make any changes to the data on this page.

Add: Selecting this checkbox enables the user to add a new transaction (in this case, new vouchers).

Update/Display: Selecting this checkbox enables the user to retrieve the current effective dated row. He/she can also add future effective dated rows, in addition to modifying them.

Update/Display All: This option gives the user all of the privileges of the Update/Display option. In addition, he/she can retrieve past effective dated rows as well.

Correction: This option enables the user to perform all the possible operations; that is, to retrieve, modify, or add past, current, and future effective dated rows.

Effective dates are usually relevant for master data setups. An effective date determines when a particular value comes into effect. A vendor may be set up with an effective date of 1/1/2001, so that it comes into effect from that date. Now assume that its name is slated to change on 1/1/2012. In that case, we can add a new vendor row with the new name and the appropriate effective date, and the system automatically starts using the new name from 1/1/2012. Note that there can be multiple future effective dated rows, but only one current row (a short sketch at the end of this section illustrates how the current row is selected).

The next tab, PeopleTools, contains configuration options that are more relevant for technical developers. As we are concentrating on business users of the system, we'll not discuss them.

Click the Process tab. As shown in the following screenshot, this tab is used to configure options for Process groups. A process group is a collection of batch processes belonging to a specific internal department or business process. For example, PeopleSoft delivers a process group ARALL that includes all Accounts Receivable batch processes. PeopleSoft delivers various pre-configured process groups; however, we can create our own process groups depending on the organization's requirements. Click the Process Group Permissions hyperlink. The system opens a new page where we can select as many process groups as needed. When a process group is selected for a permission list, it enables the users to execute the batch and online processes that are part of it.

The following screenshot shows the Sign-on Times tab. This tab controls the time spans during which a user with this permission list can sign on to the system. We can enforce specific days or specific time spans within a day when users can sign on. In the case of our permission list, there are no such limits, and users with this permission list will be able to sign on at any time on all days of the week.

The next tab on this page is Component Interface. A component interface is a PeopleSoft utility that automates bulk data entry into PeopleSoft pages. We can select component interface values on this tab, so that users with this permission list have access rights to use them.

Due to the highly technical nature of the activities involved, we will not discuss the Web Libraries, Web Services, Mass Change, Links, and Audit tabs. Oracle offers exhaustive documentation on PeopleSoft applications on its website.

The next important tab on the permission list page is Query. On this tab, the system shows two hyperlinks: Access Group Permissions and Query Profile. Click the Access Group Permissions hyperlink. The system opens the Permission List Access Groups page, as shown in the next screenshot. This page controls which database tables can be accessed by users to create database queries using a PeopleSoft tool called Query Manager.

Tree Name: A tree is a hierarchical structure of database tables.
As shown in the screenshot, the tree QUERY_TREE_AP groups all AP tables.

Access Group: Each tree has multiple nodes called access groups. These access groups are simply logical groups of tables within a tree. In the screenshot, VOUCHERS is a group of voucher-related tables within the AP table tree. With this configuration, users will be able to create database queries on voucher tables in AP.

Using the Query Profile hyperlink, we can set various options that control how users can create queries, such as whether the user can use joins, unions, and so on in queries, how many database rows can be fetched by a query, and whether the user can copy the query output to Microsoft Excel.
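As promised above, the following is a small sketch of how a current effective dated row is selected. It is an assumed simplification written in VBA, not PeopleSoft code or its actual row-selection logic; the vendor names, dates, and procedure names are purely illustrative.

Sub DemoEffectiveDating()
    ' Illustrative model (not PeopleSoft code) of effective dated rows for a vendor.
    ' Column 1 = effective date, column 2 = vendor name as of that date.
    Dim effRows(1 To 2, 1 To 2) As Variant
    effRows(1, 1) = DateSerial(2001, 1, 1): effRows(1, 2) = "Acme Supplies"
    effRows(2, 1) = DateSerial(2012, 1, 1): effRows(2, 2) = "Acme Global Supplies"

    ' Update/Display works with the current row: the latest effective date
    ' that is not in the future relative to the as-of date.
    Debug.Print CurrentName(effRows, DateSerial(2011, 6, 15)) ' Acme Supplies
    Debug.Print CurrentName(effRows, DateSerial(2012, 3, 1))  ' Acme Global Supplies
End Sub

Function CurrentName(effRows As Variant, asOf As Date) As String
    ' Picks the row with the greatest effective date that is <= the as-of date.
    Dim i As Long, bestDate As Date
    bestDate = DateSerial(1900, 1, 1)
    CurrentName = ""
    For i = LBound(effRows, 1) To UBound(effRows, 1)
        If effRows(i, 1) <= asOf And effRows(i, 1) >= bestDate Then
            bestDate = effRows(i, 1)
            CurrentName = effRows(i, 2)
        End If
    Next i
End Function

Update/Display works against the row this function would return, Update/Display All also exposes the earlier rows, and Correction allows any row, past, current, or future, to be changed.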
An Overview of Oracle PeopleSoft Commitment Control

Packt
29 Jun 2011
9 min read
Oracle PeopleSoft Enterprise Financial Management 9.1 Implementation
An exhaustive resource for PeopleSoft Financials application practitioners to understand core concepts, configurations, and business processes

Understanding commitment control

Before we proceed further, let's take some time to understand the basic concepts of commitment control. Commitment control can be used to track expenses against pre-defined control budgets, as well as to track recognized revenue against revenue estimate budgets. In this article, we'll concentrate more on the expense side of commitment control, as it is more widely used.

Defining control budgets

An organization may draw up budgets for the different countries in which it operates or for its various departments. Going further, it may then define budget amounts for different areas of spending, such as IT hardware, construction of buildings, and so on. Finally, it will also need to specify the time periods for which the budget applies, such as a month, a quarter, six months, or a year. In other words, a budget needs the following components:

Account, to specify expense areas such as hardware expenses, construction expenses, and so on
One or more chartfields, to specify the level of the budget, such as Operating unit, Department, Product, and so on
Time period, to specify whether the budgeted amount applies to a month, a quarter, a year, and so on

Let's take a simple example to understand how control budgets are defined. An organization defines budgets for each of its departments. Budgets are defined for different expense categories, such as the purchase of computers and the purchase of office supplies such as notepads, pens, and so on. It sets up budgets for each month of the year. Assume that the following chartfield values are used by the organization:

Department | Description | Account | Description
700 | Sales | 135000 | Hardware expenses
900 | Manufacturing | 335000 | Stationery expenses

Now, the budgets are defined for each period and combination of chartfields as follows:

Period | Department | Account | Budget amount
January 2012 | 700 | 135000 | $100,000
January 2012 | 700 | 335000 | $10,000
January 2012 | 900 | 135000 | $120,000
January 2012 | 900 | 335000 | $15,000
February 2012 | 700 | 135000 | $200,000
February 2012 | 700 | 335000 | $40,000
February 2012 | 900 | 135000 | $150,000
February 2012 | 900 | 335000 | $30,000

Thus, $100,000 has been allocated for hardware purchases for the Sales department for January 2012. Purchases will be allowed until this budget is exhausted. If a purchase exceeds the available amount, it will be prevented from taking place.

Tracking expense transactions

Commitment control spending transactions are classified into Pre-encumbrance, Encumbrance, and Expenditure categories. To understand this, we'll consider a simple procurement example that involves the PeopleSoft Purchasing and Accounts Payable modules. In an organization, a department manager may decide that he/she needs three new computers for newly recruited staff. A purchase requisition may be created by the manager to request the purchase of these computers. Once it is approved by the appropriate authority, it is passed on to the procurement group. This group may refer to the procurement policies, inquire with various vendors about prices, and decide to buy these computers from a particular vendor. The procurement group then creates a purchase order containing the quantity, configuration, price, and so on, and sends it to the vendor. Once the vendor delivers the computers, the organization receives the invoice and creates a voucher to process the vendor payment.
Voucher creation takes place in the Accounts Payable module, while the creation of requisitions and purchase orders takes place in the Purchasing module.

In commitment control terms, Pre-encumbrance is an amount that may be spent in the future, but for which there is no obligation to spend. In the previous example, the requisition constitutes the pre-encumbrance amount. Note that the requisition is an internal document which may or may not get approved; thus there is no obligation to spend the money to purchase the computers. Encumbrance is an amount that there is a legal obligation to spend in the future. In the previous example, the purchase order sent to the vendor constitutes the encumbrance amount, as we have asked the vendor to deliver the goods. Finally, when a voucher is created, it indicates the Expenditure amount that is actually being spent. A voucher indicates that we have already received the goods and, in accounting terms, the expense has been recognized.

To understand how PeopleSoft handles this, think of four different buckets: Budget amount, Pre-encumbrance amount, Encumbrance amount, and Expenditure amount.

Step 1

Budget definition is the first step in commitment control. Let's say that an organization has budgeted $50,000 for the purchase of IT hardware at the beginning of the year 2011. At that time, the buckets will show the amounts as follows:

Budget | Pre-encumbrance | Encumbrance | Expenditure | Available budget amount
$50,000 | 0 | 0 | 0 | $50,000

Available budget amount is calculated using the following formula:

Available budget amount = Budget amount – (Pre-encumbrance + Encumbrance + Expenditure)

Step 2

Now, when the requisition for three computers (costing a total of $3,000) is created, it is checked against the available budget. It will be approved, as the entire $50,000 budget amount is available. After getting approved, the requisition amount of $3,000 is recorded as pre-encumbrance and the available budget is reduced accordingly. Thus, the budget amounts are updated as shown next:

Budget | Pre-encumbrance | Encumbrance | Expenditure | Available budget amount
$50,000 | $3,000 | 0 | 0 | $47,000

Step 3

A purchase order can be created only after a requisition is successfully budget checked. When the purchase order is created (again for $3,000), it is once again checked against the available budget and will pass due to the sufficient available budget. Once approved, the purchase order amount of $3,000 is recorded as encumbrance, while the pre-encumbrance is eliminated. In other words, the pre-encumbrance amount is converted into an encumbrance amount, as now there is a legal obligation to spend it. A purchase order can be sent to the vendor only after it is successfully budget checked. Now, the amounts are updated as shown next:

Budget | Pre-encumbrance | Encumbrance | Expenditure | Available budget amount
$50,000 | 0 | $3,000 | 0 | $47,000

Step 4

When the voucher is created (for $3,000), it is once again checked against the available budget and will pass, as the available budget is sufficient. Once approved, the voucher amount of $3,000 is recorded as expenditure, while the encumbrance is eliminated. In other words, the encumbrance amount is converted into the actual expenditure amount. Now, the amounts are updated as shown next:

Budget | Pre-encumbrance | Encumbrance | Expenditure | Available budget amount
$50,000 | 0 | 0 | $3,000 | $47,000

The process of eliminating the previous encumbrance or pre-encumbrance amount is known as liquidation. Thus, whenever a document (purchase order or voucher) is budget checked, the amount for the previous document is liquidated.
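The bucket arithmetic from the four steps above can be summarized in a short sketch. This is only an illustrative model under the assumptions of the example (a $50,000 budget and a $3,000 purchase), written in VBA; it is not how the PeopleSoft Budget Processor is actually implemented.

Sub DemoCommitmentControlBuckets()
    ' Illustrative model of the four commitment control buckets from the example:
    ' budget, pre-encumbrance (requisition), encumbrance (PO), expenditure (voucher).
    Dim budget As Currency, preEnc As Currency, enc As Currency, expend As Currency

    budget = 50000                     ' Step 1: budget definition
    PrintBuckets "After budget", budget, preEnc, enc, expend

    preEnc = preEnc + 3000             ' Step 2: requisition passes budget check
    PrintBuckets "After requisition", budget, preEnc, enc, expend

    preEnc = preEnc - 3000             ' Step 3: PO budget check liquidates the
    enc = enc + 3000                   ' pre-encumbrance and records an encumbrance
    PrintBuckets "After purchase order", budget, preEnc, enc, expend

    enc = enc - 3000                   ' Step 4: voucher budget check liquidates the
    expend = expend + 3000             ' encumbrance and records the expenditure
    PrintBuckets "After voucher", budget, preEnc, enc, expend
End Sub

Sub PrintBuckets(stage As String, budget As Currency, preEnc As Currency, enc As Currency, expend As Currency)
    ' Available budget = Budget - (Pre-encumbrance + Encumbrance + Expenditure)
    Debug.Print stage & ": available budget = " & Format(budget - (preEnc + enc + expend), "$#,##0")
End Sub

Every step after the first leaves the available amount at $47,000, which matches the tables above: the $3,000 simply moves from one bucket to the next as each document is budget checked and the previous one is liquidated.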
Thus, a transaction moves successively through these three stages, with the system checking whether the available budget is sufficient to process it. Whenever a transaction encounters insufficient budget, it is flagged as an exception. So, now the obvious question is: how do we implement this in PeopleSoft? To put it simply, we need, at a minimum, the following building blocks:

Ledgers to record budget, pre-encumbrance, encumbrance, and expenditure amounts
Batch processes to budget check various transactions

Of course, there are other configurations involved as well. We'll discuss them in the upcoming section.

Commitment control configurations

In this section, we'll go through the following important configurations needed to use the commitment control feature:

Enabling commitment control
Setting up the system-level commitment control options
Defining the commitment control ledgers and ledger groups
Defining the budget period calendar
Configuring the control budget definition
Linking the commitment control ledger group with the actual transaction ledger group
Defining commitment control transactions

Enabling commitment control

Before using the commitment control feature for a PeopleSoft module, it needs to be enabled on the Installation Options page. Follow this navigation to enable or disable commitment control for an individual module:

Setup Financials / Supply Chain | Install | Installation Options | Products

The following screenshot shows the Installation Options – Products page:

This page allows a system administrator to activate any installed PeopleSoft modules, as well as to enable the commitment control feature for a PeopleSoft module. The PeopleSoft Products section lists all PeopleSoft modules. Select the checkbox next to each module that is implemented by the organization. The Enable Commitment Control section shows the PeopleSoft modules for which commitment control can be enabled. Select the checkbox next to a module to activate commitment control and validate its transactions against defined budgets.

Setting up system-level commitment control options

After enabling commitment control for the desired modules, we need to set up some processing options at the system level. Follow this navigation to set up these system-level options:

Setup Financials / Supply Chain | Install | Installation Options | Commitment Control

The following screenshot shows the Installation Options – Commitment Control page:

Default Budget Date: This field specifies the default budget date that is populated on requisitions, purchase orders, and vouchers. The available options are Accounting Date Default (to use the document accounting date as the budget date) and Predecessor Doc Date Default (to use the budget date from the predecessor document). For example, the purchase order inherits the requisition's budget date, and the voucher inherits the purchase order's budget date.

Reversal Date Option: There are situations when changes are made to an already budget-checked transaction. Whenever this happens, the document needs to be budget checked again. This field determines how the changed transactions are posted. Consider a case where a requisition for $1,000 is created and successfully budget checked in January. In February, the requisition has to be increased to $1,200. It will need to be budget checked again.
The available options are Prior Date (this option completely reverses the January pre-encumbrance entries for $1,000 and creates new entries for $1,200 in February) and Current Date (this option creates additional entries for $200 in February, while leaving the $1,000 entries for January unchanged). A short sketch at the end of this section compares the two options.

BP Liquidation Option: We have already seen that the system liquidates the pre-encumbrance and encumbrance amounts while budget checking purchase orders and vouchers. This field determines the period in which the previous amount is liquidated, if the transactions occur in different periods. The available options are Current Document Budget Period (liquidate the amounts in the current period) and Prior Document Budget Period (liquidate the amounts in the period when the source document was created).

Enable Budget Pre-Check: This is a useful option to test expense transactions without actually committing the amounts (pre-encumbrance, encumbrance, and expenditure) to the respective ledgers. We may budget check a transaction and find that it fails the validation. Rather than budget checking and then handling the exception, it is much more efficient to simply do a budget pre-check. The system shows all the potential error messages, which can help us correct the transaction data appropriately. Select the checkbox next to a module to enable this feature.
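To make the Reversal Date Option concrete, here is a small sketch of the two posting behaviors for the $1,000 to $1,200 requisition change described above. It is an assumed simplification written in VBA, not the actual Budget Processor logic; the period labels and amounts simply follow the example.

Sub DemoReversalDateOption()
    ' Illustrative comparison (not actual PeopleSoft logic) of the two
    ' Reversal Date Option settings for a requisition changed from $1,000
    ' (budget checked in January) to $1,200 (re-checked in February).
    Dim oldAmt As Currency, newAmt As Currency
    oldAmt = 1000
    newAmt = 1200

    ' Prior Date: fully reverse the January entries and re-post the new amount in February.
    Debug.Print "Prior Date   : January -" & Format(oldAmt, "$#,##0") & _
                ", February " & Format(newAmt, "$#,##0")

    ' Current Date: leave January untouched and post only the incremental change in February.
    Debug.Print "Current Date : January unchanged, February " & Format(newAmt - oldAmt, "$#,##0")
End Sub

Running the sketch prints a January reversal of $1,000 plus a new February entry of $1,200 for Prior Date, versus a single incremental February entry of $200 for Current Date.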
Excel 2010 Financials: Adding Animations to Excel Graphs

Packt
28 Jun 2011
3 min read
Excel 2010 Financials Cookbook
Powerful techniques for financial organization, analysis, and presentation in Microsoft Excel

Getting ready

We will start this recipe with a single-column graph demonstrating profitability. The dataset for this graph is cell A1. Cell A1 has the formula =A2/100. Cell A2 contains the number 1000:

How to do it...

Press Alt + F11 on the keyboard to open the Excel Visual Basic Editor (VBE).
Once in the VBE, choose Insert | Module from the file menu.
Enter the following code into the new module, formatted as shown:

Sub Animate()
    Dim x As Integer
    x = 0
    Range("A2").Value = x
    While x < 1000
        x = x + 1
        Range("A2").Value = Range("A2").Value + 1
        For y = 1 To 500000
        Next
        DoEvents
    Wend
End Sub

Save your work and close the VBE.
Once back at the worksheet, from the Ribbon, choose View | Macros | View Macros.
Select the Animate macro and choose Run.

The graph for profitability will now drop to 0 and slowly rise to account for the full profitability.

How it works...

The graph was set to use the data within cell A1 as the dataset to graph. While there is a value within the cell, the graph shows the column and its corresponding value. If you were to manually change the value of cell A1 and press Enter, the graph would also change to the new value. It is this update event that we harness within the VBA macro that was added.

Sub Animate()
    Dim x As Integer
    x = 0

Here we first declare and set any variables that we will need. The value of x will hold the overall profit:

    Range("A2").Value = x
    While x < 1000
        x = x + 1
        Range("A2").Value = Range("A2").Value + 1
        For y = 1 To 500000
        Next
        DoEvents
    Wend

Next, we choose cell A2 from the worksheet and set its value to x (which begins as 0). Since cell A1 is set to be A2/100, A1 (which is the graph dataset) is now zero. Using a While loop, we add 1 to x and to cell A2, then pause for a moment (the empty For loop acts as a short delay) and call DoEvents to allow Excel to update the graph; then we repeat, adding another 1. This continues until x is equal to 1000, which, when divided by 100 as in the formula for A1, becomes 10. So, the profitability we end with in our animated graph is 10.

End Sub

There's more...

We can change the height of the profitability graph by simply changing the 1000 in the While x < 1000 line. By setting this number to 900, the graph will only grow to a profit level of nine.
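Building on the There's more note above, here is a hedged sketch of a parameterized version of the macro; it is not part of the original recipe, and the procedure names AnimateTo and RunAnimation, the target argument, and the delay constant are illustrative choices.

Sub AnimateTo(ByVal target As Long)
    ' Illustrative variant of the Animate macro above: the final value of
    ' cell A2 (and therefore the bar height, via the =A2/100 formula in A1)
    ' is passed in as an argument instead of being hard-coded as 1000.
    Dim x As Long
    Dim y As Long
    Range("A2").Value = 0
    For x = 1 To target
        Range("A2").Value = x          ' write the current step to A2
        For y = 1 To 500000            ' crude delay loop, as in the original
        Next y
        DoEvents                       ' let Excel redraw the chart
    Next x
End Sub

' Example: animate the column up to a profit level of 9 (A2 = 900)
Sub RunAnimation()
    AnimateTo 900
End Sub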
Excel 2010 Financials: Using Graphs for Analysis

Packt
28 Jun 2011
6 min read
Introduction

Graphing is one of the most effective ways to display datasets, financial scenarios, and statistical functions in a way that can be easily understood by users. When you give an individual a list of 40 different numbers and ask him or her to draw a conclusion, it is not only difficult, it may be impossible without the use of extra functions. However, if you provide the same individual with a graph of the numbers, they will most likely be able to notice trends, dataset size, frequency, and so on. Despite the effectiveness of graphing and visual modeling, financial and statistical graphing is often overlooked in Excel 2010 due to difficulty, or a lack of native functions. In this article, you will learn not only how to add reusable methods to automate graph production, but also how to create graphs and graphing sets that are not native to Excel. You will learn to use box and whisker plots, histograms to demonstrate frequency, stem and leaf plots, and other methods to graph financial ratios and scenarios.

Charting financial frequency trending with a histogram

Frequency calculations are used throughout financial analysis, statistics, and other mathematical representations to determine how often an event has occurred. Determining the frequency of financial events in a transaction list can assist in determining the popularity of an item or product, the future likelihood of an event reoccurring, or the frequency of profitability of an organization. Excel, however, does not create histograms by default. In this recipe, you will learn how to use several features, including bar charts and the FREQUENCY function, to create a histogram frequency chart within Excel to determine the profitability of an entity.

Getting ready

When plotting histogram frequency, we are using frequency and charting to visually determine the continued likelihood of an event from past data. The past data can be flexible in terms of what we are trying to determine; in this instance, we will use the daily net profit (sales income versus gross expenses) for a retail store. The daily net profit numbers for one month are as follows:

$150, $237, -$94.75, $1,231, $876, $455, $349, -$173, -$34, -$234, $110, $83, -$97, -$129, $34, $456, $1010, $878, $211, -$34, -$142, -$87, $312

How to do it...

Utilizing the profit numbers from above, we will begin by adding the data to the Excel worksheet:

Within Excel, enter the daily net profit numbers into column A, starting on row 2, until all the data has been entered.
We must now create the boundaries to be used within the histogram. The boundary numbers will be the highest and lowest number thresholds that will be included within the graph. The boundaries to be used in this instance will be $1500 and -$1500. These boundaries will encompass our entire dataset, and they will allow padding on the higher and lower ends of the data to allow for variation when plotting larger datasets encompassing multiple months' or years' worth of profit.
We must now create the bins that we will chart the frequency against. The bins are the individual data points that we want to determine the frequency against. For instance, one bin will be $1500, and we will want to know how often the net profit of the retail location falls within the range of that bin. The smaller the bins chosen, the larger the chart area. You will want to choose a bin size that will accurately reflect the frequency of your dataset without creating a large blank space. Bin size will change depending on the range of data to be graphed.
Enter the chosen bin numbers into the worksheet in column C. Each bin will differ from the previous bin by $150. The bin sizes need to cover an appropriate range in order to illustrate the full extent of the dataset. In order to encompass all appropriate bin sizes, it is necessary to begin with the largest negative number and increment up to the largest positive number.
The last component for creating the frequency histogram actually determines the frequency of net profit against the designated bins. For this, we will use the Excel function FREQUENCY. Select cells D2 through D22, and enter the following formula:

=FREQUENCY(A:A,C2:C22)

After entering the formula, press Shift + Ctrl + Enter to finalize the formula as an array formula. Excel has now displayed the frequency of the net profit for each of the designated bins.
The information for the histogram is now ready, and we are able to create the actual histogram graph. Select cells C2 through D22.
With the cells selected, from the Excel Ribbon, choose Insert | Column.
From the Column drop-down, choose the Clustered Column chart option.
Excel will now attempt to determine the plot area and will present you with a bar chart. The chart that Excel creates does not accurately display the plot areas, because Excel is unable to determine the correct data range.
Select the chart. From the Ribbon, choose Select Data.
From the Select Data Source window that appears, change the Horizontal Axis to include the Bins column (column C), and the Legend Series to include the Frequency (column D). Excel will also create two series entries within the Legend Series panel; remove Series2 and select OK.
Excel now presents the histogram graph of net profit frequency.
Using the Format graph options on the Excel Ribbon, reduce the bar spacing and adjust the horizontal label to reduce the chart area to your specifications.

Using the histogram for financial analysis, we can now quickly and easily determine the major trend of the retail location: it most frequently maintains a net profit within $0 - $150, while maintaining a positive overall net profit for the month.

How it works...

While a histogram chart is not native to Excel, we were able to use a bar/column chart to plot the frequency of net profit within specific bin ranges. The FREQUENCY function of Excel uses the following format:

=FREQUENCY(Data Range to plot, Bins to plot within)

It is important to note that for the data range, we chose the range A:A. This range includes all of the data within column A. If arbitrary or unnecessary data unrelated to the histogram were added to column A, this range would include it. Do not allow unnecessary data to be added to column A, or use a limited range such as A1:A5 if your data is only included in the first five cells of column A. We entered the formula by pressing Shift + Ctrl + Enter in order to submit the formula as an array formula. This allows Excel to calculate the individual frequencies across the whole array of data. The graph modifications allowed the vertical axis to show the frequency, that is, how many times a value fell within a specific bin range, while the horizontal axis displayed the actual bins.

There's more...

The amount of data for this histogram was limited; however, its usefulness is already evident. When this same charting recipe is used to chart a large dataset (for example, multiple years of data), the histogram becomes even more useful in displaying trends.
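If you prefer to compute the bin counts in VBA rather than with the FREQUENCY array formula, the following sketch reproduces the same counting rule: each bin counts the values greater than the previous bin and less than or equal to the current bin. The ranges A2:A24 and C2:C22 are assumptions matching the layout described above; adjust them to your own worksheet.

Sub CountProfitFrequency()
    ' Sketch of the counting rule behind =FREQUENCY(A:A,C2:C22):
    ' each bin counts the values greater than the previous bin and
    ' less than or equal to the current bin. Results are written to column D.
    Dim dataRng As Range, binRng As Range
    Dim cell As Range, binCell As Range
    Dim prevBin As Double, cnt As Long

    Set dataRng = Range("A2:A24")   ' assumed location of the daily net profit figures
    Set binRng = Range("C2:C22")    ' assumed location of the bins (-$1500 to $1500 in $150 steps)

    prevBin = -1E+30                ' effectively "no lower limit" for the first bin
    For Each binCell In binRng
        cnt = 0
        For Each cell In dataRng
            If IsNumeric(cell.Value) And cell.Value <> "" Then
                If cell.Value > prevBin And cell.Value <= binCell.Value Then cnt = cnt + 1
            End If
        Next cell
        binCell.Offset(0, 1).Value = cnt   ' write the frequency next to its bin
        prevBin = binCell.Value
    Next binCell
End Sub

Running this macro fills column D with the same counts the array formula produces, which can be handy when the bin layout changes often or when the counts need to be refreshed as part of a larger reporting macro.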