
How-To Tutorials - Data

PL/SQL: Using Collections

Packt
17 May 2012
18 min read
Collections—an overview

A collection is a homogeneous, single-dimensional structure that constitutes an ordered set of elements of a similar type. Being a homogeneous structure, all elements are of the same data type. The structure contains cells identified by a subscript; the elements reside in these cells, so the index serves as their location information. The subscript (cell index) identifies an element and is used to access it. The structure of a collection type, SPORT, is shown in the following diagram. Note the subscripts and the elements in them. A new element, GOLF, enters at the last empty location and is represented as SPORT[6].

A collection element can be of any valid SQL data type or a user-defined type. An element of a SQL primitive data type is a scalar value, while an element of a user-defined type is an object type instance. A collection can be used within a PL/SQL program by declaring a PL/SQL variable of the collection type. The local PL/SQL variable can hold instances of its collection type. Besides, a database column in a table can also be of a schema-level collection type.

Collections in Oracle are strictly one dimensional; they cannot be realized on two-dimensional coordinates. However, multidimensional arrays can be emulated when the collection has an object type or collection type attribute. A collection can be bounded or unbounded: bounded collections accommodate a limited number of elements, while unbounded collections have no upper limit for subscripts.

Collections provide an efficient way to organize data in an array or set format while making use of object-oriented features. An instance of a nested table or varray collection type is accessed as an object while the data is still stored in database columns. Collections can be used to cache data in programs and boost the performance of SQL operations. On dedicated server connections, a session always uses the User Global Area (UGA), a component of the PGA, for collection operations. In shared server mode, collection operations are still carried out in the UGA; but the UGA is now part of the System Global Area (SGA), and thus indirectly in the SGA. This is because, in shared server connections, multiple server processes can serve a session, so the UGA must be allocated out of the SGA.

Categorization

Collections are of two types—persistent and non-persistent. A collection is persistent if it stores the collection structure and elements physically in the database. Conversely, a non-persistent collection lives only for the duration of a program, at most for a session. Apart from this classification, a collection can be realized in three formats: associative array, nested table, or varray. This categorization is purely based on their objective and behavioral properties in a PL/SQL program. The following diagram combines the abstract and physical classification of collections.

We will take a quick tour of these collection types now and discuss them in detail in the coming sections:

Associative array (index-by table): This is the simplest form of non-persistent, unbounded collection. As a non-persistent collection, it cannot be stored in the database; it is available within a PL/SQL block only. The collection structure and data of an associative array cannot be retained once the program has completed. Initially, in the days of Oracle 7, it was known as a PL/SQL table. Later, Oracle 8 released it as the index-by table, as it uses an index to identify an element.
Nested table: This is a persistent form of unbounded collection which can be created in the database as well as in a PL/SQL block.

Varray (variable-size array): This is a persistent but bounded form of collection which can be created in the database as well as in PL/SQL. Similar to a nested table, a varray is also a unidimensional, homogeneous collection. The collection size and storage scheme are the factors which differentiate varrays from nested tables. Unlike a nested table, a varray can accommodate only a defined (fixed) number of elements.

Selecting an appropriate collection type

Here are a few guidelines for deciding upon the appropriate collection type in programs.

Use of associative arrays is required when:

You have to temporarily cache program data in an array format for lookup purposes.
You need string subscripts for the collection elements. Note that negative subscripts are supported, too.
You need to map hash tables from the client to the database.

Use of nested tables is preferred when:

You have to store data as sets in the database. Database columns of a nested table type can be declared to hold the data persistently.
You need to perform major array operations, such as insertion and deletion, on a large volume of data.

Use of varrays is preferred when:

You have to store a calculated or predefined volume of data in the database. A varray offers limited and defined storage of rows in a collection.
The order of the elements has to be preserved.

Associative arrays

Associative arrays are analogous to conventional arrays or lists and can be defined within a PL/SQL program only. Neither the array structure nor the data can be stored in the database. An associative array holds elements of a similar type in a key-value structure without any upper bound on the array. Each cell of the array is distinguished by its subscript, index, or cell number. The index can be a number or a string.

Associative arrays were first introduced in the Oracle 7 release as PL/SQL tables, to signify their usage within the scope of a PL/SQL block. The Oracle 8 release renamed the PL/SQL table as the index-by table, due to its structure as an index-value pair. The Oracle 10g release recognized the behavior of index-by tables as arrays and renamed them associative arrays, due to the association of an index with an array. The following diagram explains the physical lookup structure of an associative array.

Associative arrays follow this syntax for declaration in a PL/SQL declaration section:

TYPE [COLL NAME] IS TABLE OF [ELEMENT DATA TYPE] NOT NULL INDEX BY [INDEX DATA TYPE]

In the preceding syntax, the index type signifies the data type of the array subscript. RAW, NUMBER, LONG RAW, ROWID, and CHAR are unsupported index data types. The suited index types are BINARY_INTEGER, PLS_INTEGER, POSITIVE, NATURAL, SIGNTYPE, or VARCHAR2.
The element's data type can be one of the following:

PL/SQL scalar data type: NUMBER (along with its subtypes), VARCHAR2 (and its subtypes), DATE, BLOB, CLOB, or BOOLEAN
Inferred data: The data type inherited from a table column, cursor expression, or predefined package variable
User-defined type: A user-defined object type or collection type

For illustration, the following are valid declarations of associative arrays in a PL/SQL block:

/*Array of CLOB data*/
TYPE clob_t IS TABLE OF CLOB INDEX BY PLS_INTEGER;

/*Array of employee ids indexed by the employee names*/
TYPE empno_t IS TABLE OF employees.empno%TYPE NOT NULL INDEX BY employees.ename%TYPE;

The following PL/SQL program declares an associative array type in a PL/SQL block. Note that the subscript of the array is of a string type and it stores the number of days in each quarter. The code demonstrates the declaration of an array, the assignment of an element to each cell, and the printing of the elements. Note that the program uses the FIRST and NEXT collection methods to display the array elements. The collection methods are covered in detail in the PL/SQL collection methods section:

/*Enable SERVEROUTPUT to display the output*/
SET SERVEROUTPUT ON

/*Start the PL/SQL block*/
DECLARE
  /*Declare a collection type associative array and its variable*/
  TYPE string_asc_arr_t IS TABLE OF NUMBER INDEX BY VARCHAR2(10);
  l_str string_asc_arr_t;
  l_idx VARCHAR2(50);
BEGIN
  /*Assign the total count of days in each quarter against each cell*/
  l_str('JAN-MAR') := 90;
  l_str('APR-JUN') := 91;
  l_str('JUL-SEP') := 92;
  l_str('OCT-DEC') := 93;
  l_idx := l_str.FIRST;
  WHILE (l_idx IS NOT NULL)
  LOOP
    DBMS_OUTPUT.PUT_LINE('Value at index '||l_idx||' is '||l_str(l_idx));
    l_idx := l_str.NEXT(l_idx);
  END LOOP;
END;
/

Value at index APR-JUN is 91
Value at index JAN-MAR is 90
Value at index JUL-SEP is 92
Value at index OCT-DEC is 93

PL/SQL procedure successfully completed.

In the preceding block, note the string-indexed array. A string-indexed array considerably improves performance by using indexed organization of the array values.

In the last block, we noticed the explicit assignment of data. In the following program, we will populate the array automatically. The following PL/SQL block declares an associative array to hold the ASCII values of the numbers 1 to 100:

/*Enable SERVEROUTPUT to display the output*/
SET SERVEROUTPUT ON

/*Start the PL/SQL block*/
DECLARE
  /*Declare an array of strings indexed by numeric subscripts*/
  TYPE ASCII_VALUE_T IS TABLE OF VARCHAR2(12) INDEX BY PLS_INTEGER;
  L_GET_ASCII ASCII_VALUE_T;
BEGIN
  /*Insert the values through a FOR loop*/
  FOR I IN 1..100
  LOOP
    L_GET_ASCII(I) := ASCII(I);
  END LOOP;
  /*Display some of the values*/
  DBMS_OUTPUT.PUT_LINE(L_GET_ASCII(5));
  DBMS_OUTPUT.PUT_LINE(L_GET_ASCII(15));
  DBMS_OUTPUT.PUT_LINE(L_GET_ASCII(75));
END;
/

53
49
55

PL/SQL procedure successfully completed.

The salient features of associative arrays are as follows:

An associative array can exist as a sparse or empty collection
Being a non-persistent collection, it cannot participate in DML transactions
It can be passed as an argument to other local subprograms within the same block
Sorting of an associative array depends on the NLS_SORT parameter
An associative array declared in a package specification behaves as a session-persistent array

Nested tables

Nested tables are a persistent form of collection which can be created in the database as well as in PL/SQL.
A nested table is an unbounded collection where the index or subscript is implicitly maintained by the Oracle server during data retrieval. Oracle automatically marks the minimum subscript as 1 and handles the others relative to it. As there is no upper limit defined for a nested table, its size can grow dynamically. Though not an index-value pair structure, a nested table can be accessed like an array in a PL/SQL block.

A nested table is initially a dense collection, but it might become sparse due to delete operations on the collection cells. A dense collection is one which is tightly populated; that is, there are no empty cells between the lower and upper indexes of the collection. Sparse collections can have empty cells between the first and the last cell of the collection. A dense collection may become sparse as a result of delete operations.

When a nested table is declared in a PL/SQL program, it behaves as a one-dimensional array without any index type or upper limit specification. A nested table defined in a database exists as a valid schema object type. It can either be used in a PL/SQL block to declare a PL/SQL variable for temporarily holding program data, or a database column of a particular nested table type can be included in a table, which can persistently store the data in the database. A nested table type column in a table resembles a table within a table, but Oracle uses an out-of-line storage table to hold the nested table data. This scenario is illustrated in the following diagram.

Whenever a database column of a nested table type is created in a table (referred to as the parent table), Oracle creates a storage table with the same storage options as the parent table. The storage table created by Oracle in the same segment carries the name specified in the NESTED TABLE STORE AS clause during creation of the parent table. Whenever a row is created in the parent table, the following actions are performed by the Oracle server:

A unique identifier is generated for the parent row, to distinguish the nested table instances belonging to different parent rows
The instance of the nested table is created in the storage table alongside the unique identifier of the parent row

The Oracle server takes care of these nested table operations. For the programmer or user, the whole process is hidden and appears as a normal insert operation.

A nested table definition in PL/SQL follows this syntax:

DECLARE
  TYPE type_name IS TABLE OF element_type [NOT NULL];

In the preceding syntax, element_type is a primitive data type or a user-defined type, but not a REF CURSOR type.

In a database, a nested table can be defined using the following syntax:

CREATE [OR REPLACE] TYPE type_name IS TABLE OF [element_type] [NOT NULL];
/

In the preceding syntax, [element_type] can be a SQL-supported scalar data type, a database object type, or a REF object type. Unsupported element types are BOOLEAN, LONG, LONG RAW, NATURAL, NATURALN, POSITIVE, POSITIVEN, REF CURSOR, SIGNTYPE, STRING, PLS_INTEGER, SIMPLE_INTEGER, BINARY_INTEGER, and all other non-SQL-supported data types.

If the size of the element type of a database collection type has to be increased, follow this syntax:

ALTER TYPE [type name] MODIFY ELEMENT TYPE [modified element type] [CASCADE | INVALIDATE];

The keywords CASCADE and INVALIDATE decide whether the collection modification invalidates the dependents or cascades the changes across the dependents.
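As a minimal sketch of that syntax (using a hypothetical type name, name_list_t, which is not part of the original example), widening the element size of a nested table of strings might look like this:

/*Hypothetical collection type of short strings*/
CREATE OR REPLACE TYPE name_list_t AS TABLE OF VARCHAR2(30);
/

/*Widen the element type from VARCHAR2(30) to VARCHAR2(100);
  CASCADE propagates the change to dependent objects and data*/
ALTER TYPE name_list_t MODIFY ELEMENT TYPE VARCHAR2(100) CASCADE;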
The nested table can be dropped from the database using the DROP command, as shown in the following syntax (note that the FORCE keyword drops the type irrespective of its dependents):

DROP TYPE [collection name] [FORCE]

Nested table collection type as the database object

We will go through the following illustration to understand the behavior of a nested table when it is created as a database collection type:

/*Create the nested table in the database*/
SQL> CREATE TYPE NUM_NEST_T AS TABLE OF NUMBER;
/

Type created.

The nested table type, NUM_NEST_T, is now created in the database. Its metadata information can be queried from the USER_TYPES and USER_COLL_TYPES dictionary views:

SELECT type_name, typecode, type_oid
FROM USER_TYPES
WHERE type_name = 'NUM_NEST_T';

TYPE_NAME       TYPECODE        TYPE_OID
--------------- --------------- --------------------------------
NUM_NEST_T      COLLECTION      96DE421E47114638A9F5617CE735731A

Note that the TYPECODE value shows the type of the object in the database and differentiates collection types from user-defined object types:

SELECT type_name, coll_type, elem_type_name
FROM user_coll_types
WHERE type_name = 'NUM_NEST_T';

TYPE_NAME       COLL_TYPE  ELEM_TYPE_NAME
--------------- ---------- --------------------
NUM_NEST_T      TABLE      NUMBER

Once the collection type has been successfully created in the database, it can be used to specify the type of a database column in a table. The CREATE TABLE statement in the following code snippet declares a column of the NUM_NEST_T nested table type in the parent table, TAB_USE_NT_COL. The NESTED TABLE [Column] STORE AS [Storage table] clause specifies the storage table for the nested table type column. A separate table for the nested table column, NUM, ensures its out-of-line storage.

SQL> CREATE TABLE TAB_USE_NT_COL
       (ID NUMBER,
        NUM NUM_NEST_T)
     NESTED TABLE NUM STORE AS NESTED_NUM_ID;

Table created.

A nested table collection type in PL/SQL

In PL/SQL, a nested table can be declared and defined in the declaration section of the block as a local collection type. As a nested table follows object orientation, the PL/SQL variable of the nested table type has to be initialized. The Oracle server raises the exception ORA-06531: Reference to uninitialized collection if an uninitialized nested table type variable is encountered during block execution. As the nested table collection type has been declared within the PL/SQL block, its scope, visibility, and life span are limited to the execution of the PL/SQL block.

The following PL/SQL block declares a nested table. Observe the scope and visibility of the collection variable. Note that the COUNT method has been used to display the array elements.

/*Enable SERVEROUTPUT to display the results*/
SET SERVEROUTPUT ON

/*Start the PL/SQL block*/
DECLARE
  /*Declare a local nested table collection type*/
  TYPE LOC_NUM_NEST_T IS TABLE OF NUMBER;
  L_LOCAL_NT LOC_NUM_NEST_T := LOC_NUM_NEST_T (10,20,30);
BEGIN
  /*Use a FOR loop to parse the array and print the elements*/
  FOR I IN 1..L_LOCAL_NT.COUNT
  LOOP
    DBMS_OUTPUT.PUT_LINE('Printing '||I||' element: '||L_LOCAL_NT(I));
  END LOOP;
END;
/

Printing 1 element: 10
Printing 2 element: 20
Printing 3 element: 30

PL/SQL procedure successfully completed.

Additional features of a nested table

In the earlier sections, we saw the operational methodology of a nested table. We will now focus on the nested table's metadata. Furthermore, we will demonstrate a peculiar behavior of the nested table with respect to delete operations.
Oracle's USER_NESTED_TABLES and USER_NESTED_TABLE_COLS data dictionary views maintain the relationship information between parent and nested tables. These dictionary views are populated only when a database column of a nested table collection type is included in a table.

The USER_NESTED_TABLES static view maintains the information about the mapping of a nested table collection type with its parent table. The structure of the dictionary view is as follows:

SQL> desc USER_NESTED_TABLES
Name                    Null?    Type
----------------------- -------- ---------------
TABLE_NAME                       VARCHAR2(30)
TABLE_TYPE_OWNER                 VARCHAR2(30)
TABLE_TYPE_NAME                  VARCHAR2(30)
PARENT_TABLE_NAME                VARCHAR2(30)
PARENT_TABLE_COLUMN              VARCHAR2(4000)
STORAGE_SPEC                     VARCHAR2(30)
RETURN_TYPE                      VARCHAR2(20)
ELEMENT_SUBSTITUTABLE            VARCHAR2(25)

Let us query the nested table relationship properties for the TAB_USE_NT_COL table from the preceding view:

SELECT parent_table_column, table_name, return_type, storage_spec
FROM user_nested_tables
WHERE parent_table_name='TAB_USE_NT_COL'
/

PARENT_TAB TABLE_NAME     RETURN_TYPE  STORAGE_SPEC
----------------------------------------------------------------------
NUM        NESTED_NUM_ID  VALUE        DEFAULT

In the preceding query, RETURN_TYPE specifies the return type of the collection. It can be VALUE (in this case) or LOCATOR. Another column, STORAGE_SPEC, signifies the storage scheme used for the storage of the nested table, which can be either USER_SPECIFIED or DEFAULT (in this case).

The USER_NESTED_TABLE_COLS view maintains the information about the collection attributes contained in the nested tables:

SQL> desc USER_NESTED_TABLE_COLS
Name                    Null?    Type
----------------------- -------- ---------------
TABLE_NAME              NOT NULL VARCHAR2(30)
COLUMN_NAME             NOT NULL VARCHAR2(30)
DATA_TYPE                        VARCHAR2(106)
DATA_TYPE_MOD                    VARCHAR2(3)
DATA_TYPE_OWNER                  VARCHAR2(30)
DATA_LENGTH             NOT NULL NUMBER
DATA_PRECISION                   NUMBER
DATA_SCALE                       NUMBER
NULLABLE                         VARCHAR2(1)
COLUMN_ID                        NUMBER
DEFAULT_LENGTH                   NUMBER
DATA_DEFAULT                     LONG
NUM_DISTINCT                     NUMBER
LOW_VALUE                        RAW(32)
HIGH_VALUE                       RAW(32)
DENSITY                          NUMBER
NUM_NULLS                        NUMBER
NUM_BUCKETS                      NUMBER
LAST_ANALYZED                    DATE
SAMPLE_SIZE                      NUMBER
CHARACTER_SET_NAME               VARCHAR2(44)
CHAR_COL_DECL_LENGTH             NUMBER
GLOBAL_STATS                     VARCHAR2(3)
USER_STATS                       VARCHAR2(3)
AVG_COL_LEN                      NUMBER
CHAR_LENGTH                      NUMBER
CHAR_USED                        VARCHAR2(1)
V80_FMT_IMAGE                    VARCHAR2(3)
DATA_UPGRADED                    VARCHAR2(3)
HIDDEN_COLUMN                    VARCHAR2(3)
VIRTUAL_COLUMN                   VARCHAR2(3)
SEGMENT_COLUMN_ID                NUMBER
INTERNAL_COLUMN_ID      NOT NULL NUMBER
HISTOGRAM                        VARCHAR2(15)
QUALIFIED_COL_NAME               VARCHAR2(4000)

We will now query the nested storage table in the preceding dictionary view to list all its attributes:

SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH, HIDDEN_COLUMN
FROM user_nested_table_cols
WHERE table_name='NESTED_NUM_ID'
/

COLUMN_NAME                    DATA_TYP   DATA_LENGTH HID
------------------------------ ---------- ----------- -----
NESTED_TABLE_ID                RAW                 16  YES
COLUMN_VALUE                   NUMBER              22  NO

We observe that although the nested table holds only number elements, the view reports two columns. The COLUMN_VALUE attribute is the default pseudo column of the nested table, as there are no "named" attributes in the collection structure. The other attribute, NESTED_TABLE_ID, is a hidden, unique, 16-byte, system-generated raw hash code which latently stores the parent row identifier alongside the nested table instance to distinguish the parent row association.

If an element is deleted from the nested table, the nested table is rendered sparse.
This implies that once an index is deleted from the collection structure, the collection does not restructure itself by shifting the cells forward. Let us check out the sparse behavior in the following example. The following PL/SQL block declares a local nested table and initializes it with a constructor. We will delete the first element and then print the collection again. The system raises the NO_DATA_FOUND exception when we query the element at index 1 in the collection:

/*Enable SERVEROUTPUT to display the block messages*/
SQL> SET SERVEROUTPUT ON

/*Start the PL/SQL block*/
SQL> DECLARE
  /*Declare the local nested table collection*/
  TYPE coll_method_demo_t IS TABLE OF NUMBER;
  /*Declare a collection variable and initialize it*/
  L_ARRAY coll_method_demo_t := coll_method_demo_t (10,20,30,40,50);
BEGIN
  /*Display element at index 1*/
  DBMS_OUTPUT.PUT_LINE('Element at index 1 before deletion:'||L_ARRAY(1));
  /*Delete the 1st element from the collection*/
  L_ARRAY.DELETE(1);
  /*Display element at index 1*/
  DBMS_OUTPUT.PUT_LINE('Element at index 1 after deletion:'||L_ARRAY(1));
END;
/

Element at index 1 before deletion:10
DECLARE
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 15
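As a small follow-up sketch (not part of the original excerpt), a sparse collection such as the one above is usually traversed with the FIRST and NEXT collection methods, or each access is guarded with EXISTS, instead of looping blindly from 1 to COUNT:

DECLARE
  TYPE coll_method_demo_t IS TABLE OF NUMBER;
  L_ARRAY coll_method_demo_t := coll_method_demo_t (10,20,30,40,50);
  l_idx PLS_INTEGER;
BEGIN
  L_ARRAY.DELETE(1);                 /*collection is now sparse*/
  l_idx := L_ARRAY.FIRST;            /*first populated index, here 2*/
  WHILE l_idx IS NOT NULL
  LOOP
    DBMS_OUTPUT.PUT_LINE('Element at index '||l_idx||' is '||L_ARRAY(l_idx));
    l_idx := L_ARRAY.NEXT(l_idx);    /*NEXT skips the deleted cell*/
  END LOOP;
END;
/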

Troubleshooting with Intuit Quickbooks

Packt
08 May 2012
12 min read
The following table is the Recipe Reference Card for the keyboard shortcuts included in this article:

Find                            Ctrl+F
Delete the line                 Ctrl+Del
Save and close                  Alt+A
Advance to the next field       Tab
Regress to the previous field   Shift+Tab
Customize the report            Alt+M

Clearing stale undeposited funds

When the Undeposited Funds window includes customer payments which you know have already been deposited, recorded, and reconciled, the Income (or Unearned Income) and Undeposited Funds accounts are overstated. You can use this recipe to efficiently combine the cleared deposit with the undeposited funds.

Getting ready

Verify that the appropriate bank account is reconciled for the period containing the stale undeposited funds. If not, this can be resolved simply by deleting the recorded deposit and recording the deposit of the undeposited funds.

How to do it...

With the Find tool (Edit | Find or Ctrl+F), use the Amount filter, along with the Date filter if necessary, to bring up both the deposit and the customer payment already recorded. Open up the deposit, and click on the Payments button. Check off the appropriate transaction, and click on OK to add this to the deposit. Click on the line item for the deposit originally on the screen, that is, the duplicate of the payment that you just added to this screen. Click on Edit | Delete Line (Ctrl+Del), and then Save and Close (Alt+A).

How it works...

The only way to directly delete an item added to Undeposited Funds is to delete the underlying customer payment or sales receipt. However, this is not advisable, because these transactions are typically accurate representations of a real-world activity. Additionally, when the deposit was recorded, the related account duplicated the income or customer deposit from the original invoice or sales receipt. Therefore, the deposit itself needs to be modified to simultaneously remove the duplicate offset account and resolve the outstanding Undeposited Funds item.

There's more...

For a printable and memorizable list of all outstanding items in the Undeposited Funds account, open the Undeposited Funds ledger. Click on Customize Report | Filters | Choose Filter | Cleared | No. On the Header/Footer tab, in the Report Title field, enter Undeposited Funds, and click on OK.

Adjusting cash basis receivables or payables balances

Does your cash basis balance sheet show a balance in your receivables or payables accounts? This recipe will take you through the two-step process of resolving these items: locate them, then adjust them.

Getting ready

To find out which customers and vendors are responsible for your cash basis accounts receivable or accounts payable balances, respectively, run the following report: Go to Reports | Custom Reports | Summary. Set Dates to All. If you desire a cut-off date, leave the From field blank, and enter your cut-off date in the To field. Set Report Basis to Cash. Set Display rows by to Customer or Vendor. Go to Advanced, and set Display Rows to Non-zero. Go to the Filters tab, and set Account to Accounts Receivable (or Accounts Payable). Go to the Header/Footer tab, and set Report Title to Cash Basis A/R by Customer or Cash Basis A/P by Vendor. The report total matches your balance sheet account total for the same cut-off date.

How to do it...

Double-click one of the account balances to reveal the detail, and remove the columns irrelevant to this effort. Scan the activity for patterns, unusual items, or clues about the cash basis balance.
Gain an understanding of the transaction, and resolve the items by changing the accounts, changing a date, making a journal entry, noting that no adjustment is needed, or taking other action. Refresh the report, and confirm either a zero balance or an appropriate cash basis balance.

How it works...

Some of the most likely patterns to scan for include the following: a Balance column which keeps returning to 0.00 and then stops returning to 0.00; an unusual transaction Type; a recurring figure in the Balance column (this lets you know that at least one culprit occurred before the recurrence began); and a zero balance right before a transaction, which is also the aggregate balance sheet account balance.

There's more...

The most common reasons for a cash basis receivables or payables balance are: the payment date precedes the bill or invoice date, and the report cut-off falls between the two dates; or the offset account is a balance sheet account, and the bill or the invoice is unpaid.

Writing off stale receivables

Making a journal entry to write off stale A/R in bulk is easy, but this makes it difficult to trace through the accounting records. The possible uses for more precise information include producing a trail for taxing authorities, internal or independent auditors, or banks. A separate spreadsheet may suffice, but it may be difficult to coordinate. This recipe focuses on straightforward ways to write off these balances in a detailed but efficient fashion.

Getting ready

Have your criteria ready for which invoices are to be written off. The A/R Aging Summary report may help (Reports | Customers & Receivables | A/R Aging Summary). To further analyze your oldest receivables: Go to Customize Report | Age through how many days?, and type 360. Go to Filters | Choose Filter | Aging | >=, and type 90. Go to Header/Footer, add the text Older Than 90 days to the Report Title, and click on OK.

How to do it...

To write off a stale receivable: Go to the Customers page or Home Page | Receive Payments. From the Received From drop-down box, select the appropriate customer. If applicable, select the particular job instead. In the Date field, enter the effective date of the writeoff. Click on the Discount & Credits button. In the Discount and Credits pop-up window, fill in the amount to be written off, the writeoff account (generally Bad Debt), and the same class from the original invoice, if class tracking is used in the file. The completed screen should have 0.00 in the Amount and Payment fields. Include the amount written off in the Discount field.

If an allowance for doubtful accounts is used against bad debt for writeoffs, then set up the Allowance account as an Accounts Receivable account type, and select the Allowance account from the drop-down box at the top of the Customer Payments screen. A set of journal entries can be used later to remove the amounts from both Accounts Receivable and the Allowance account.

There's more...

This is the same procedure that can be used to record discounts, but the key is that an income or expense account must always be selected. It is not appropriate to select a balance sheet account, such as debiting a liability account while crediting A/R, or debiting the Allowance account while crediting A/R; this will cause a cash basis balance sheet report to be out of balance. If that combination of debits and credits is essential, then use a journal entry instead. Then, apply the journal entry to the original invoice, by opening the invoice and clicking the Apply Credits button.
When this recipe is used to write off receivables, Act. Revenue is reduced in the Job Profitability Summary report, and there is no effect on the Item Profitability Summary report. The same reporting results are attained if a journal entry is used to debit Bad Debt Expense and credit Accounts Receivable. If a Credit Memo is used instead, Act. Revenue is reduced in the Job Profitability Summary report as well as the Item Profitability Summary report.

In order to increase the Act. Cost column in the Job Profitability Summary report instead, use the Write Checks screen in an unusual fashion: on the Items tab, use an Other Charge item called Bad Debt or Writeoffs. When you create this item, link it to the Bad Debt Expense account. On the Write Checks screen, be sure to enter the Customer:Job name as well as the writeoff amount. On the Expenses tab, select Accounts Receivable, and enter the writeoff amount as a negative number, so that the total amount of the check equals 0. Be sure that the check bears no check number, and clear it in the next bank reconciliation. This technique causes both the Job Profitability and Item Profitability reports to show the transaction as an expense, rather than as a reduction of revenue. It works because QuickBooks includes Write Check transactions in the Act. Cost column of these reports.

Writing off stale payables

Making a journal entry to write off stale A/P in bulk is easy, but it makes it difficult to trace through the accounting records. Possible uses for more precise information include producing a trail for taxing authorities, internal or independent auditors, or banks. A separate spreadsheet may suffice, but may be difficult to coordinate. This recipe focuses on straightforward ways to write off these balances in a detailed but efficient fashion.

Getting ready

Have your criteria ready for which bills are to be written off. The A/P Aging Summary report may help (Reports | Vendors & Payables | A/P Aging Summary). To further analyze your oldest payables: Go to Customize Report | Age through how many days?, and type 360. Go to Filters | Choose Filter | Aging | >=, and type 90. Go to Header/Footer, add the text Older Than 90 days to the Report Title, and click on OK.

How to do it...

Go to Vendors or Home Page | Pay Bills. Consider using the Filter by drop-down list to show only bills from a particular vendor, and consider using the Sort by drop-down list to organize the payables list by Due Date. For a single vendor, check off the first bill to be written off. Click on the Set Discount button. In the Discount and Credits pop-up window, fill in the amount to be written off, the writeoff account (generally the same expense account as the original bill), and the same class from the original bill, if class tracking is used in the file. Click on Done, and proceed to the next bill for the same vendor. Make sure the Payment Date field is the effective date of the writeoff. The completed screen should have 0.00 in the Amt. to Pay field. Include the amount written off in the Disc. Used field. When the writeoffs for that vendor are complete, click on Pay Selected Bills, followed by Pay More Bills for additional writeoffs.

There's more...

The advantage of this recipe is that the transaction is created and applied to the bill in a single step. However, the drawback is that it does not appear in the Job Profitability Summary or the Item Profitability Summary reports.
For that to occur, create a vendor credit instead, by using the Enter Bills screen and clicking on the Credit button. Then, use the Items tab to record the credit, using the same item that was used in the original bill. Additionally, use the Customer:Job field to apply the credit to a particular job. For a partial writeoff, after the Discount and Credits window is closed, be sure to manually input 0.00 into the Amt. to Pay field. The default is to include the remaining balance in that field, and this recipe assumes that the current action is only to record writeoffs, not payments to vendors.

Balancing the balance sheet

How can a balance sheet get out of balance in a software program? If you're reading this recipe, you may have already seen for yourself that the impossible can happen. The following is a procedure to root out the transaction which is causing this phenomenon.

Getting ready

A balance sheet prepared on the cash basis can be out of balance if certain transactions were saved, for example if the Discount feature was used with a balance sheet account.

How to do it...

Open a Balance Sheet Summary report. Click on the Customize Report button and, in the Dates drop-down box, select All. If the report is on the accrual basis, change the Report Basis to Cash. In the Display columns by drop-down box, change the selection to Year, and click on OK. Look at the balance sheet, and identify the earliest year in which the balance sheet is out of balance. Click on the Customize Report button. In the From and To fields, enter the beginning and ending dates of the year identified in the previous step. In the Display columns by drop-down box, change the selection to Month, and click on OK. Look at the balance sheet, and identify the earliest month in which the balance sheet is out of balance. Click on the Customize Report button. In the From and To fields, enter the beginning and ending dates of the month identified in the previous step. In the Display columns by drop-down box, change the selection to Week, and click on OK. Look at the balance sheet, and identify the earliest week in which the balance sheet is out of balance. Click on the Customize Report button. In the From and To fields, enter the beginning and ending dates of the week identified in the previous step. In the Display columns by drop-down box, change the selection to Day, and click on OK. Look at the balance sheet, and identify the earliest day on which the balance sheet is out of balance. Run a transaction journal (Reports | Accountant & Taxes | Journal), limit the transactions to that day, and scan the report for the transaction responsible. Delete the transaction which caused the imbalance, which is usually a Customer Payment or other A/R or A/P data entry screen, and make a journal entry instead, to cover the appropriate debit and credit.

There's more...

If the Discount feature was used to reclassify an Accounts Receivable balance to Retainage Receivable, make a journal entry to achieve the same General Ledger effect instead, and apply the transaction to the original invoice, by opening the invoice and using the Apply Credits button.

Visualizations made easy with gnuplot

Packt
01 Mar 2012
12 min read
This article, written by Lee Philips, the author of gnuplot Cookbook, covers the following topics:

Making a surface plot
Using coordinate mappings
Coloring the surface
Making a contour plot
Making a vector plot
Making an image plot or heat map
Combining contours and images
Combining surfaces with images
Plotting a path in 3D
Drawing parametric surfaces

Making a surface plot

A surface plot represents the dependent quantity z, which depends on the two independent variables x and y, as a surface whose height indicates the value of z.

The previous figure is a perspective drawing of a surface representing the Bessel function J0(r), where r is the distance from (x=0, y=0). The height of the surface shows the value of J0, given on the vertical axis (unlabeled in this figure, but usually called z). The other two (unlabeled) axes defining the plane above which the surface is drawn are the x and y axes.

How to do it...

The following code listing is the script that coaxed gnuplot into making the previous figure:

set isosamples 40
unset key
set title "J_0(r^2)"
set xrange [-4:4]
set yrange [-4:4]
set ztics 1
splot besj0(x**2+y**2)
set view 29,53 #Done implicitly by mousing.
set term pngcairo mono enhanced
set out 'bessel.png'
replot

How it works...

There are several new commands in this recipe. The set isosamples command sets the isoline density. This is analogous to the set samples command when making 2D plots, but it sets the number of lines used in forming the surface. The number of isosamples can be set independently in each direction; if one number is specified, it is used for both directions. The default of 10 usually creates surfaces that are far too coarse to be useful.

Turning to the second of the highlighted commands, the splot command is the 3D version of our old friend the plot command (it probably initially stood for "surface plot", but it can now do several things besides plot surfaces, as we shall see in the rest of this article). It expects a function of x and y rather than x alone. Although we are interested in plotting something that has the type of symmetry that would be most conveniently expressed in polar (spherical or cylindrical) coordinates, these geometries are not available for function plots in 3D in gnuplot. (They are available through the set mapping command for data plots, as we shall see later in this article.) Therefore, in such cases, we are required to convert our expressions to the rectangular coordinate system. Instead of what we would call r in a cylindrical coordinate system, here we use the equivalent x**2 + y**2.

In this recipe, we would like to illustrate, as far as possible, the interactive approach to creating a final 3D plot. The next highlighted line, beginning with set view, can be entered on the command line or included in a script. The view is the orientation, in degrees, of the perspective drawing of the 3D plot; naturally, it does not arise in 2D. It is difficult to determine the most useful view for a particular plot without looking at it and experimenting with it; therefore, even if our final product is intended to be a file, a common workflow is to first create the plot using an interactive terminal (x11 or wxt). Then we rotate the plot with the mouse, and possibly scale and zoom it using the middle mouse button, until we arrive at the desired appearance. This is what we mean by the comment in the set view command. Now we can set the terminal to the final output device that we need, specify the output file, and simply say replot.
The view and scaling at which we left the interactive plot are retained as a set of global settings and will be reflected in our final output file. These settings are also displayed at the bottom of the interactive plot window, so we can record them if we are going to make similar plots in the future, or want a set of plots to be drawn with the same settings. Note that we also redefined the ztics value. This is because, when the plot is tilted to the final view angle that we chose, the perspective causes the tic labels on the vertical axis to be crowded together; this is a common problem with 3D plots, and taking manual control of the tics on the z-axis is the solution.

There's more...

Following is the same plot with one setting changed (aside from a slight adjustment in the view angle). While the first plot was essentially a wireframe that we could see through, this version has the appearance of a solid, opaque surface. All we need to do is say set hidden3d. This setting, which only works when lines or linespoints are being used, makes the surface appear opaque by removing from the plot any part of the surface, other surfaces, and other plot elements such as the axes and tic labels, that are behind the surface from our point of view. The underside of the surface is shown in a contrasting color on a color output device, but the two sides of the surface are not distinguished in monochrome. The name of the setting refers to the technique of hidden line removal; gnuplot is justly famed for the quality of its hidden line removal algorithm, which is one reason this program is so well regarded for its 3D plotting ability.

Using coordinate mappings

It is possible, when making 3D plots from data files, for the data to be interpreted in spherical or cylindrical coordinates rather than the default Cartesian system. For details, type help set mapping. We will give an example of using the cylindrical mapping to conveniently draw a shape with cylindrical symmetry.

The previous figure is a perspective view of a surface that somewhat resembles a Christmas tree ornament. The relevant feature of this surface is that it has rotational symmetry around the z (vertical) axis, which means that it is most naturally expressed in cylindrical coordinates.

How to do it...

Try the following script:

set mapping cylindrical
unset tics
unset border
set hidden
set xrange [-pi : pi]
set yrange [-pi : pi]
set zrange [0 : pi]
set iso 60
unset key
splot '++' using 1:2:(sin($2)) with lines

How it works...

There are several new ideas used in this recipe. Breaking it down, these are:

The set mapping command

The first, highlighted line contains the new command that is the subject of this recipe. When the default Cartesian (x-y-z) coordinate system is changed to cylindrical, the columns of data read in during a data plot are interpreted as θ-z-r, where θ is the angular coordinate, z is the vertical coordinate, and r is the radius. A spherical mapping is also available and explained in the gnuplot online help (help set mapping). If the data file only has two columns, then the plot is drawn with r = 1.

In our example we don't want to plot from a data file, however; we want to plot a function given directly in the script. This presents us with a problem, as gnuplot does not support cylindrical or spherical plots of functions in 3D. The solution is to use one of gnuplot's pseudofiles.
The ++ pseudofile

The "++" pseudofile creates rows of imaginary data with three columns x-y-z, unless we change the coordinate mapping, which of course in this example we have. Setting the mapping to cylindrical means that the fictitious data columns will be interpreted as θ-z-r.

Now, to plot a function, we use the using notation applied to the imaginary columns of data. We've done this in the final line of the script, where we plot the sine of the second column (z).

To clarify the use of "++" when plotting surfaces, note that, in Cartesian coordinates, the two commands "splot sin(x)+cos(y)" and "splot '++' using 1:2:(sin($1)+cos($2)) with lines" produce exactly the same plot.

Coordinate ranges

We have also established ranges for all variables in the set xrange command and the two commands following it. The ranges for the polar coordinates are taken from the corresponding Cartesian coordinates; that is, when we set the xrange, we are setting both the range of the x-axis displayed on the plot and the range of the variable θ in the cylindrical coordinate system. It is mandatory to set xrange and yrange when using the "++" filename.

This mixing of the coordinate system in which the function is calculated and the Cartesian system in which it is displayed can be confusing, but the example shows a strategy which should make it possible to get predictable results. Setting the xrange and yrange as we've done puts the r = 0 point in the middle of the graph and prevents part of the plot from being cut off. It also sets up a full rotation of the angular coordinate over a range of 2π.

If we wanted to plot, say, our shape with half of it sliced off by a vertical plane, the easiest way to do this is not to fiddle with the coordinate ranges, but to apply a transformation to one of the fictitious data columns: splot '++' using ($1/2):2:(sin($2)) with lines will do the trick without any surprising side effects. In this example the underlying angular coordinate (column 1) still passes through a full rotation, but we've divided it in half without changing the way the figure is projected onto the Cartesian display. Note that the 60 isolines will still be used in our reduced angular range, so we might want to set iso to a smaller value.

Completing the picture

We've eliminated all of the graph adornments (unset tics, unset border, unset key), so we will be left with only the surface. The isosamples are set to give a sufficiently smooth surface drawing that is nevertheless not too crowded with isosurface lines (see the previous recipe). set hidden ensures that we shall see only the outer surface of the shape.

Coloring the surface

The wireframe splot with hidden line removal that we covered in the first recipe of this article, Making a surface plot, gives the visual impression of a solid surface. The numerical value encoded into the surface's height can be visually estimated, roughly, by the perspective provided by the isolines in conjunction with the tics on the vertical axis. But gnuplot also has a way to draw real solid surfaces whose height is indicated by color or shade.

The previous figure shows the same mathematical function plotted in the first recipe in this article (Making a surface plot). Now the numerical value of the function at any point is indicated by both the height of the surface and its shade; the surface is now drawn as an opaque membrane rather than as a network of curves.
How to do it...

To produce the previous figure, run the following in gnuplot:

set isosamples 100
set samples 100
unset key
set title "J_0(r^2)"
set xrange [-4:4]
set yrange [-4:4]
set ztics 1
unset surface
set pm3d
splot besj0(x**2+y**2)

The surface will be drawn with a palette of colors when a color output device is being used, and with a grayscale palette when using a monochrome terminal.

How it works...

If you compare the previous script with the one in the Making a surface plot recipe at the beginning of this article, you will see that the only significant difference is the highlighted line. The pm3d mode colors the imaginary surface being plotted according to its height or z-value at every point, with the mapping between the height and the color or shade determined by the palette, which we shall discuss in some more detail shortly.

The other modifications are to increase the number of isolines, in order to get a smoother surface, and to turn off the drawing of the individual isolines themselves with the command unset surface. We also need to set the sample frequency; generally we want this to be equal to the isosample frequency. In pm3d mode, the two orthogonal sets of isolines are drawn with the two different spacings given by the two parameters. Although the gnuplot manual claims that the global hidden3d setting does not affect pm3d surface plots, it in fact seems to, and should not be turned on, as it appears to slightly degrade the drawing quality.

There's more...

Sometimes we want both a colored surface and a set of isolines; in fact, this can often be the clearest type of quantitative 3D plot. The way to achieve the highest quality in this type of graph is to use the hidden3d linestyle option to pm3d, as we do in the following script:

set iso 30
set samp 30
unset key
set title "J_0(r^2)"
set xrange [-4:4]
set yrange [-4:4]
set ztics 1
unset surf
set style line 1 lt 4 lw .5
set pm3d at s hidden3d 1
splot besj0(x**2+y**2)

This requires us to define a user linestyle. The linestyle is then referred to in an option to the set pm3d command. This causes the isolines to be drawn using lines in this style, which allows us to have them in any color, thickness, or pattern supported by our terminal. Further, the isolines will be drawn with hidden line removal, so they will appear to be embedded in the opaque surface. As before, the global hidden3d option should not be turned on.

Note that we've also reduced the sample and isoline frequency, to keep our plot from being too crowded with isolines. (The at s component of the set pm3d command means at surface.)

Administrating the MySQL Server

Packt
09 Feb 2012
10 min read
(For more resources on MySQL, see here.)

Managing users and their privileges

The Privileges sub-page (visible only if we are logged in as a privileged user) contains dialogs to manage MySQL user accounts. It also contains dialogs to manage privileges at the global, database, and table levels. This sub-page is hierarchical. When editing a user's privileges, we can see the global privileges as well as the database-specific privileges. Then, when viewing database-specific privileges for a user, we can view and edit this user's privileges for any table within this database.

The user overview

The first page displayed when we enter the Privileges sub-page is called User overview. This shows all user accounts and a summary of their global privileges, as shown in the following screenshot.

From this page, we can:

Edit a user's privileges, via the Edit Privileges link for this user
Export a user's privileges definition, via the Export link for this user
Use the checkboxes to remove users, via the Remove selected users dialog
Access the page where the Add a new User dialog is available

The displayed users' list has columns with the following characteristics:

User: The user account we are defining.
Host: The machine name or IP address from which this user account will be connecting to the MySQL server. A % value here indicates all hosts.
Password: Contains Yes if a password is defined and No if it isn't. The password itself cannot be seen from phpMyAdmin's interface or by directly looking at the mysql.user table, as it is encrypted with a one-way hashing algorithm.
Global privileges: A list of the user's global privileges.
Grant: Contains Yes if the user can grant his/her privileges to others.
Action: Contains a link to edit this user's privileges or export them.

Exporting privileges

This feature can be useful when we need to create a user with the same password and privileges on another MySQL server. Clicking on Export for user marc produces a panel of GRANT statements. Then it's only a matter of selecting these GRANT statements and pasting them in the SQL box of another phpMyAdmin window, where we have logged in to another MySQL server.

Privileges reload

At the bottom of the User overview page, this message is displayed: "Note: phpMyAdmin gets the users' privileges directly from MySQL's privilege tables. The content of these tables may differ from the privileges the server uses, if they have been changed manually. In this case, you should reload the privileges before you continue." Here, the text "reload the privileges" is clickable. The effective privileges (the ones against which the server bases its access decisions) are the privileges that are located in the server's memory. Privilege modifications that are made from the User overview page are made both in memory and on disk in the mysql database. Modifications made directly to the mysql database do not have immediate effect. The "reload the privileges" operation reads the privileges from the database and makes them effective in memory.

Adding a user

The Add a new User link opens a dialog for user account creation.
First, we see the panel where we describe the account itself, as shown in the following screenshot. The second part of the Add a new User dialog is where we specify the user's global privileges, which apply to the server as a whole (see the Assigning global privileges section of this article).

Entering the username

The User name menu offers two choices. We can choose Use text field: and enter a username in the box, or we can choose Any user to create an anonymous user (the blank user). More details about the anonymous user are available at http://dev.mysql.com/doc/refman/5.5/en/connection-access.html. Let us choose Use text field: and enter bill.

Assigning a host value

By default, this menu is set to Any host, with % as the host value. The Local choice means localhost. The Use host table choice (which creates a blank value in the host field) means to look in the mysql.host table for database-specific privileges. Choosing Use text field: allows us to enter the exact host value we want. Let us choose Local.

Setting passwords

Even though it's possible to create a user without a password (by selecting the No password option), it's best to have a password. We have to enter it twice (as we cannot see what is entered) to confirm the intended password. A secure password should have more than eight characters, and should contain a mixture of uppercase and lowercase characters, digits, and special characters. Therefore, it's recommended to have phpMyAdmin generate a password—this is possible in JavaScript-enabled browsers. In the Generate password dialog, clicking on the Generate button enters a random password (in clear text) on the screen and fills the Password and Re-type input fields with the generated password. At this point, we should note the password so that we can pass it on to the user.

Understanding rights for database creation

A frequent convention is to assign a user the rights to a database having the same name as the user. To accomplish this, the Database for user section offers the Create database with same name and grant all privileges radio button. Selecting this option automates the process by creating both the database (if it does not already exist) and assigning the corresponding rights. Please note that, with this method, each user would be limited to one database (user bill, database bill). Another possibility is to allow users to create databases that have the same prefix as their usernames. The other choice, Grant all privileges on wildcard name (username_%), performs this function by assigning a wildcard privilege. With this in place, user bill could create the databases bill_test, bill_2, bill_payroll, and so on; phpMyAdmin does not pre-create the databases in this case.

Assigning global privileges

Global privileges determine the user's access to all databases. Hence, these are sometimes known as superuser privileges. A normal user should not have any of these privileges unless there is a good reason for it. Moreover, should a user account that has global privileges become compromised, the damage could be far greater. If we are really creating a superuser, we will select every global privilege that he or she needs. These privileges are further divided into Data, Structure, and Administration groups. In our example, bill will not have any global privileges.

Limiting the resources used

We can limit the resources used by this user on this server (for example, the maximum number of queries per hour). Zero means no limit.
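For readers who prefer the command line, the dialog described above boils down to a handful of SQL statements. The following is a rough sketch of what phpMyAdmin issues behind the scenes for a user like bill; the password shown is hypothetical, and the exact statements and password handling vary by MySQL version, so treat this as an assumption rather than a transcript:

-- Create the account (hypothetical password)
CREATE USER 'bill'@'localhost' IDENTIFIED BY 'S3cret!Pass';

-- Equivalent of "Grant all privileges on wildcard name (username_%)"
GRANT ALL PRIVILEGES ON `bill\_%`.* TO 'bill'@'localhost';

-- Equivalent of the resource-limit section (0 means no limit)
GRANT USAGE ON *.* TO 'bill'@'localhost'
  WITH MAX_QUERIES_PER_HOUR 0
       MAX_UPDATES_PER_HOUR 0
       MAX_CONNECTIONS_PER_HOUR 0;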
We will not impose any resource limits on bill. The following screenshot shows the status of the screen just before hitting Create user to create this user's definition (with the remaining fields left at their defaults).

Editing a user profile

The page used to edit a user's profile appears whenever we click on Edit Privileges for a user in the User overview page. Let us try it for our newly created user bill. There are four sections on this page, each with its own Go button. Hence, each section is operated independently and has a distinct purpose.

Editing global privileges

The section for editing the user's privileges has the same look as the Add a new User dialog, and is used to view and change global privileges.

Assigning database-specific privileges

In this section, we define the databases to which our user has access, and his or her exact privileges on these databases. As shown in the previous screenshot, we see None because we haven't defined any privileges yet. There are two ways of defining database privileges. First, we can choose one of the existing databases from the drop-down menu; this assigns privileges only for the chosen database. Secondly, we can choose Use text field: and enter a database name. We could enter a non-existent database name, so that the user can create it later (provided we give him/her the CREATE privilege in the next panel). We can also use special characters, such as the underscore and the percent sign, for wildcards. For example, entering bill here would enable him to create a bill database, and entering bill% would enable him to create a database with any name that starts with bill.

For our example, we will enter bill and click on Go. The next screen is used to set bill's privileges on the bill database, and to create table-specific privileges. To learn more about the meaning of a specific privilege, we can hover the mouse over a privilege name (which is always in English), and an explanation of this privilege appears in the current language. We give the SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, and DROP privileges to bill on this database. We then click on Go. After the privileges have been assigned, the interface stays in the same place, so that we can refine these privileges further. We cannot assign table-specific privileges for the moment, as the database does not yet exist. To go back to the general privileges page of bill, click on the 'bill'@'localhost' title. This brings us back to the familiar page, except for a change in one section: we now see the existing privileges on the bill database for user bill (we could click on the Edit Privileges link to edit them or on the Revoke link to revoke them), and we can add privileges for bill on another database. We can also see that bill has no table-specific privileges on the bill database.

Changing the password

The Change password dialog is part of the Edit user page, and we can use it either to change bill's password or to remove it. Removing the password will enable bill to log in without a password. The dialog offers a choice of password hashing options, and it's recommended to keep the default of MySQL 4.1+ hashing. For more details about hashing, please visit http://dev.mysql.com/doc/refman/5.1/en/password-hashing.html.

Changing login information or copying a user

This dialog can be used to change the user's login information, or to copy his or her login information to a new user.
For example, suppose that Bill calls and tells us that he prefers the login name billy instead of bill. We just have to add a y to the username, and then select delete the old one from the user tables radio button, as shown in the following screenshot: After clicking on Go, bill no longer exists in the mysql database. Also, all of his privileges, including the privileges on the bill database, will have been transferred to the new user—billy. However, the user definition of bill will still exist in memory, and hence it's still effective. If we had chosen the delete the old one from the user tables and reload the privileges afterwards option instead, the user definition of bill would immediately have ceased to be valid. Alternatively, we could have created another user based on bill, by making use of the keep the old one choice. We can transfer the password to the new user by choosing Do not change the password option, or change it by entering a new password twice. The revoke all active privileges… option immediately terminates the effective current privileges for this user, even if he or she is currently logged in. Removing a user Removing a user is done from the User overview section of the Privileges page. We select the user to be removed. Then (in Remove selected users) we can select the Drop the databases that have the same names as the users option to remove any databases that are named after the users we are deleting. A click on Go effectively removes the selected users.
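For readers who want to see what phpMyAdmin does behind the scenes, the following MySQL statements are roughly equivalent to the operations described above. They are illustrative only: phpMyAdmin may generate slightly different statements, and the password and names are placeholders.

    -- Create the account and a same-named database with full rights on it
    CREATE USER 'bill'@'localhost' IDENTIFIED BY 'secret';
    CREATE DATABASE IF NOT EXISTS `bill`;
    GRANT ALL PRIVILEGES ON `bill`.* TO 'bill'@'localhost';

    -- Wildcard variant: rights on every database whose name starts with bill_
    GRANT ALL PRIVILEGES ON `bill\_%`.* TO 'bill'@'localhost';

    -- Database-specific privileges, as assigned on the bill database above
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, DROP
        ON `bill`.* TO 'bill'@'localhost';

    -- Rename the account, transferring its privileges (bill becomes billy)
    RENAME USER 'bill'@'localhost' TO 'billy'@'localhost';
    FLUSH PRIVILEGES;

    -- Remove the user and, if desired, the same-named database
    DROP USER 'billy'@'localhost';
    DROP DATABASE IF EXISTS `bill`;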
Integrating IBM Cognos TM1 with IBM Cognos 8 BI

Packt
16 Dec 2011
4 min read
(For more resources on IBM, see here.) Before proceeding with the actual steps of the recipe, we will take note of the following integration considerations: The measures Dimension in the TM1 Cube needs to be explicitly identified. A Data Source that points to the TM1 Cube needs to be created in IBM Cognos Connection. A new Data Source can also be created from IBM Cognos Framework Manager, but for the sake of simplicity we will be creating it from IBM Cognos Connection itself. The created Data Source is used in an IBM Cognos Framework Manager model to create a Metadata Package and publish it to IBM Cognos Connection. The Metadata Package can be used to create reports, generate queries, slice and dice, or perform event management using one of the designer studios available in IBM Cognos BI. We will focus on each of the above steps in this recipe, where we will be using one of the Cubes created as part of the demodata TM1 Server application, and we will be using the Cube as a Data Source in the IBM Cognos BI layer. Getting ready Ensure that the TM1 Admin Server service is started and the demodata TM1 Server is running. We should have the IBM Cognos 8 BI Server running and IBM Cognos 8 Framework Manager installed. How to do it... Open the TM1 Architect and right-click on the Sales_Plan Cube. Click on Properties. In the Measures Dimension box, click on Sales_Plan_Measures and then for Time Dimension click on Months. Note that the preceding step is compulsory if we want to use the Cube as a Data Source for the BI layer. We need to explicitly define a measures dimension and a time dimension. Click on OK and minimize the TM1 Architect, keeping the server running. Now from the Start menu, open IBM Cognos Framework Manager, which is a desktop-based tool used to create metadata models. Create a new project from IBM Cognos 8 Framework Manager. Enter the Project name as Demodata and provide the Location where the model file will be located. Note that each project generates a .cpf file which can be opened in IBM Cognos Framework Manager. Provide valid user credentials so that IBM Cognos Framework Manager can link to a running IBM Cognos BI Server setup. Users and roles are defined by the IBM Cognos BI admin user. Choose English as the authoring language when the Select Language list comes up. This will open the Metadata Wizard - Select Metadata Source. We use the Metadata Wizard to create a new Data Source or point to an existing Data Source. In the Metadata Wizard, make sure that Data Sources is selected and click on the Next button. In the next screen, click on the New button to create a new Data Source by the name of TM1_Demodata_Sales_Plan. This will open a New data source wizard, where we need to specify the name of the Data Source. On the next screen, it will ask for the Data Source Type, for which we will specify TM1 from the drop-down, as we want to create a new Data Source based on the TM1 Cube Sales_Plan. On the next screen, specify the connection parameters. For Administration Host we can specify a name or localhost, depending on the name of the server. In our case, we have specified the name of the server as ankitgar, hence we are using an actual name instead of localhost. In the case of TM1 sitting on another server within the network, we will provide the IP address or name of the host in UNC format. Test the connection to check whether the connection to the TM1 Cube is successful. Click on Close and proceed. Click on the Finish button to complete the creation of the Data Source. 
The new Data Source is created on the Cognos 8 Server and can now be used by anyone with valid privileges granted by the admin user. It is simply a connection to the Sales_Plan TM1 Cube, which can now be used to create metadata models and, hence, reports and queries that perform the various functions described in the preceding sections. The wizard then returns to the Metadata Wizard screen, with the new Data Source appearing in the list of already created Data Sources. Click on the newly created Data Source and then on the Next button. It will display all available Cubes on the Demodata TM1 Server, the machine name being the server name (localhost/ankitgar). Click on the Sales_Plan cube and then on Next.
Getting Started with Apache Solr

Packt
02 Dec 2011
8 min read
  (For more resources on Apache, see here.) We're going to get started by downloading Solr, examine its directory structure, and then finally run it. This sets you up for the next section, which tours a running Solr server. Get Solr: You can download Solr from its website: http://lucene.apache.org/ solr/. The last Solr release this article was written for is version 3.4. Solr has had several relatively minor point-releases since 3.1 and it will continue. In general I recommend using the latest release since Solr and Lucene's code are extensively tested. Lucid Imagination also provides a Solr distribution called "LucidWorks for Solr". As of this writing it is Solr 3.2 with some choice patches that came after to ensure its stability and performance. It's completely open source; previous LucidWorks releases were not as they included some extras with use limitations. LucidWorks for Solr is a good choice if maximum stability is your chief concern over newer features. Get Java: The only prerequisite software needed to run Solr is Java 5 (a.k.a. java version 1.5) or later—ideally Java 6. Typing java –version at a command line will tell you exactly which version of Java you are using, if any. Use latest version of Java! The initial release of Java 7 included some serious bugs that were discovered shortly before its release that affect Lucene and Solr. The release of Java 7u1 on October 19th, 2011 resolves these issues. These same bugs occurred with Java 6 under certain JVM switches, and Java 6u29 resolves them. Therefore, I advise you to use the latest Java release. Java is available on all major platforms including Windows, Solaris, Linux, and Apple. Visit http://www.java.com to download the distribution for your platform. Java always comes with the Java Runtime Environment (JRE) and that's all Solr requires. The Java Development Kit (JDK) includes the JRE plus the Java compiler and various diagnostic utility programs. One such useful program is jconsole, and so the JDK distribution is recommended. Solr is a Java-based web application, but you don't need to be particularly familiar with Java in order to use it. Solr's installation directory structure When you unzip Solr after downloading it, you should find a relatively straightforward directory structure: client: Convenient language-specific client APIs for talking to Solr. Ignore the client directory Most client libraries are maintained by other organizations, except for the Java client SolrJ which lies in the dist/ directory. client/ only contains solr-ruby , which has fallen out of favor compared to rsolr —both of which are Ruby Solr clients. contrib: Solr contrib modules. These are extensions to Solr. The final JAR file for each of these contrib modules is actually in dist/; so the actual files here are mainly the dependent JAR files. analysis-extras: A few text analysis components that have large dependencies. There are some "ICU" Unicode classes for multilingual support, a Chinese stemmer, and a Polish stemmer. clustering: A engine for clustering search results. dataimporthandler: The DataImportHandler (DIH) —a very popular contrib module that imports data into Solr from a database and some other sources. extraction: Integration with Apache Tika– a framework for extracting text from common file formats. This module is also called SolrCell and Tika is also used by the DIH's TikaEntityProcessor. uima: Integration with Apache UIMA—a framework for extracting metadata out of text. 
There are modules that identify proper names in text and identify the language, for example. To learn more, see Solr's wiki: http://wiki.apache.org/solr/SolrUIMA. velocity: Simple Search UI framework based on the Velocity templating language. dist: Solr's WAR and contrib JAR files. The Solr WAR file is the main artifact that embodies Solr as a standalone file deployable to a Java web server. The WAR does not include any contrib JARs. You'll also find the core of Solr as a JAR file, which you might use if you are embedding Solr within an application, and Solr's test framework as a JAR file, which is to assist in testing Solr extensions. You'll also see SolrJ's dependent JAR files here. docs: Documentation—the HTML files and related assets for the public Solr website, to be precise. It includes a good quick tutorial, and of course Solr's API. Even if you don't plan on extending the API, some parts of it are useful as a reference to certain pluggable Solr configuration elements—see the listing for the Java package org.apache.solr.analysis in particular. example: A complete Solr server, serving as an example. It includes the Jetty servlet engine (a Java web server), Solr, some sample data and sample Solr configurations. The interesting child directories are: example/etc: Jetty's configuration. Among other things, here you can change the web port used from the pre-supplied 8983 to 80 (HTTP default). exampledocs: Sample documents to be indexed into the default Solr configuration, along with the post.jar program for sending the documents to Solr. example/solr: The default, sample Solr configuration. This should serve as a good starting point for new Solr applications. It is used in Solr's tutorial. example/webapps: Where Jetty expects to deploy Solr from. A copy of Solr's WAR file is here, which contains Solr's compiled code. Solr's home directory and Solr cores When Solr starts, the very first thing it does is determine where the Solr home directory is. There are various ways to tell Solr where it is, but by default it's the directory named simply solr relative to the current working directory where Solr is started. You will usually see a solr.xml file in the home directory, which is optional but recommended. It mainly lists Solr cores. For simpler configurations like example/solr, there is just one Solr core, which uses Solr's home directory as its core instance directory . A Solr core holds one Lucene index and the supporting Solr configuration for that index. Nearly all interactions with Solr are targeted at a specific core. If you want to index different types of data separately or shard a large index into multiple ones then Solr can host multiple Solr cores on the same Java server. A Solr core's instance directory is laid out like this: conf: Configuration files. The two I mention below are very important, but it will also contain some other .txt and .xml files which are referenced by these two. conf/schema.xml: The schema for the index including field type definitions with associated analyzer chains. conf/solrconfig.xml: The primary Solr configuration file. conf/xslt: Various XSLT files that can be used to transform Solr's XML query responses into formats such as Atom and RSS. conf/velocity: HTML templates and related web assets for rapid UI prototyping using Solritas. The soon to be discussed "browse" UI is implemented with these templates. data: Where Lucene's index data lives. It's binary data, so you won't be doing anything with it except perhaps deleting it occasionally to start anew. 
lib: Where extra Java JAR files can be placed that Solr will load on startup. This is a good place to put contrib JAR files and their dependencies. Running Solr Now we're going to start up Jetty and finally see Solr running, albeit without any data to query yet. We're about to run Solr directly from the unzipped installation. This is great for exploring Solr and doing local development, but it's not what you would seriously do in a production scenario. In a production scenario you would have a script or other mechanism to start and stop the servlet engine with the operating system—Solr does not include this. And to keep your system organized, you should keep the example directory as exactly what its name implies—an example. So if you want to use the provided Jetty servlet engine in production (a fine choice), then copy the example directory elsewhere and name it something else. First go to the example directory, and then run Jetty's start.jar file by typing the following commands:
>>cd example
>>java -jar start.jar
The >> notation is the command prompt. These commands will work across *nix and DOS shells. You'll see about a page of output, including references to Solr. When it is finished, you should see this output at the very end of the command prompt:
2008-08-07 14:10:50.516::INFO: Started SocketConnector @ 0.0.0.0:8983
The 0.0.0.0 means it's listening for connections from any host (not just localhost, notwithstanding potential firewalls) and 8983 is the port. If Jetty reports this, then it doesn't necessarily mean that Solr was deployed successfully. You might see an error such as a stack trace in the output if something went wrong. Even if it did go wrong, you should be able to access the web server: http://localhost:8983. Jetty will give you a 404 page, but it will include a list of links to deployed web applications, which will just be Solr for this setup. Solr is accessible at http://localhost:8983/solr, and if you browse to that page, then you should either see details about an error if Solr wasn't loaded correctly, or a simple page with a link to Solr's admin page, which should be http://localhost:8983/solr/admin/. You'll be visiting that link often. To quit Jetty (and many other command line programs for that matter), press Ctrl+C on the keyboard.
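Once Jetty reports the SocketConnector line, a quick way to confirm that Solr is really working is to index the bundled sample documents and run a first query. The commands below assume the default example configuration and the post.jar tool shipped in example/exampledocs; the exact output may vary with your Solr version.
>>cd exampledocs
>>java -jar post.jar *.xml
Then browse to a simple query URL such as http://localhost:8983/solr/select?q=solr, which should return an XML response listing the sample documents that match.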
Oracle BI Publisher 11g: Learning the new XPT format

Packt
31 Oct 2011
4 min read
(For more resources on Oracle BI Publisher, see here.) We will cover the following topics in this article: The Layout Editor presentation, Designing a Layout, and Export options. The Layout Editor First, you have to choose a predefined layout from the Create Report Interface. As you can see in the following screenshot, this interface displays a list of predefined layouts. You can add your own predefined layouts to this list and make them available for your later use or even for all the users. After choosing a layout from the Basic Templates or the Shared Templates group, the Layout Editor Interface is displayed. Designing a Layout In the Layout Editor Interface, as shown in the following screenshot, you have tools to perform activities such as: Insert a component: Select the desired component from the Components pane on the left or from the Insert tab of the toolbar and drag-and-drop it into the design area. Set component properties: Set the component properties from the Properties pane on the left or from the component-specific tab of the toolbar (only for the most commonly used components). Insert a data element: Drag the element from the Data Source pane to the design area. A precise dropping area is marked in the design view; for example, in a chart you have the following marked areas: Drop Value Here, Drop Label Here, and Drop Series Here. Set page layout options: In order to set the page layout options, use the Page Layout tab and the Properties pane. Save the Layout: Use the activity icons from the toolbar on the right side. In the following sections, a few elements will be inserted to complete our report design. You will also see the steps that you need to follow when inserting and setting the properties of these components. Text elements In order to make changes in the settings of Text elements, follow the steps given here: Click on the Insert tab and choose the Text Item component from the toolbar, as shown in the following screenshot. Click on the Text tab and set a font color for your text using the Font Color icon from the toolbar, as shown in the following screenshot. Set the text margins using the Properties panel, as shown in the following screenshot. In this way, we obtain the desired report title, as you can see in the following screenshot. In order to insert data elements in our report's components, we will use the following Data Model. Charts In order to create Charts, follow the steps given here: Click on the Insert tab and choose the Chart component from the toolbar, as shown in the following screenshot. Select the newly inserted chart and go to the Chart tab on the toolbar to set the chart type (Vertical Bar in this example) and the chart style (Project in this example). Drag the LOAN_PERIOD and the PRICE fields from the Data Source (from the left pane) over the Drop Value Here area of the design view. Drop the TITLE field from the Data Source over the Drop Label Here area. Data tables In order to create Data tables, follow the steps given here: Click on the Insert tab and choose the Data Table component from the toolbar, as shown in the following screenshot. Drag the fields LOAN_DATE, LOAN_PERIOD, TITLE, and YEAR from the Data Source over the area marked as Drop a Data Item Here. Select the LOAN_DATE column and in the Properties pane set the Formatting Mask to yyyy-mm-dd.
For each column of the table, enter a suitable value for Width in the Properties pane. For example, the first column has a width of 1.00 inch.
Oracle BI Publisher 11g: Working with Multiple Data Sources

Packt
25 Oct 2011
5 min read
(For more resources on Oracle 11g, see here.) The Data Model Editor's interface deals with all the components and functionalities needed for the data model to achieve the structure you need. However, the main component is the Data Set. In order to create a data model structure in BIP, you can choose from a variety of data set types, such as: SQL Query, MDX Query, Oracle BI Analysis, View Object, Web Service, LDAP Query, XML file, Microsoft Excel file, Oracle BI Discoverer, and HTTP. Taking advantage of this variety requires multiple Data Sources of different types to be defined in BIP. In this article, we will see: how data sources are configured, how the data is retrieved from different data sets, and how data set type characteristics and the links between elements influence the data model structure. Administration Let's first see how you can verify or configure your data sources. You must choose the Administration link found in the upper-right corner of any of the BIP interface pages, as shown in the following screenshot: The connection to your database can be chosen from the following connection types: Java Database Connectivity (JDBC), Java Naming and Directory Interface (JNDI), Lightweight Directory Access Protocol (LDAP), and Online Analytical Processing (OLAP). Available Data Sources To get to your data source, BIP offers two possibilities: You can use a connection. In order to use a connection, these are the available connection types: JDBC, JNDI, LDAP, and OLAP. You can also use a file. In the following sections, the Data Source types (JDBC, JNDI, OLAP Connections, and File) will be explained in detail. JDBC Connection Let's take the first example. To configure a Data Source to use JDBC, from the Administration page, choose JDBC Connection from the Data Sources types list, as shown in the following screenshot: You can see the requested parameters for configuring a JDBC connection in the following screenshot: Data Source Name: Enter a name of your choice. Driver Type: Choose a type from the list. The related parameters are: Database Driver Class: A driver matching your database type. Connection String: Information containing the computer name on which your database server is running, for example, port, database name, and so on. Username: Enter a database username. Password: Provide the database user's password. The Use System User option allows you to use the operating system's credentials as your credentials. For example, in this case, your MS SQL Database Server uses Windows authentication as the only authentication method. When you have a system administrator in charge of these configurations, all you have to do is find out which Data Sources are available, and you can then check whether the connection works. Click on the Test Connection button at the bottom of the page to test the connection. JNDI Connection A JNDI Connection pool is in fact another way to access your JDBC Data Sources. Using a connection pool increases efficiency by maintaining a cache of physical connections that can be reused, allowing multiple clients to share a small number of physical connections. In order to configure a Data Source to use JNDI, from the Administration page, choose JNDI Connection from the Data Sources types list. 
The following screen will appear: As you can see in the preceding screenshot, on the Add Data Source page you must enter the following parameters: Data Source Name: Enter a name of your choice. JNDI Name: This is the JNDI location for the pool set up in your application server, for example, jdbc/BIP10gSource. Only the users having roles included in the Allowed Roles list will be able to create reports using this Data Source. OLAP Connection Use the OLAP Connection to connect to OLAP databases. BI Publisher supports the following OLAP types: Oracle Hyperion Essbase, Microsoft SQL Server 2000 Analysis Services, Microsoft SQL Server 2005 Analysis Services, and SAP BW. In order to configure a connection to an OLAP database, from the Administration page, choose OLAP Connection from the Data Sources types list. The following screen will appear: On the Add Data Source page, the following parameters must be entered: Data Source Name: Enter a name of your choice. OLAP Type: Choose a type from the list. Connection String: Depending on the OLAP database, the connection string format is as follows:
Oracle Hyperion Essbase: [server name]
Microsoft SQL Server 2000 Analysis Services: Data Source=[server];Provider=msolap;Initial Catalog=[catalog]
Microsoft SQL Server 2005 Analysis Services: Data Source=[server];Provider=msolap.3;Initial Catalog=[catalog]
SAP BW: ASHOST=[server] SYSNR=[system number] CLIENT=[client] LANG=[language]
Username and Password: Used for OLAP database authentication. File Another example of a data source type is File. In order to gain access to XML or Excel files, you need a File Data Source. In order to set up this kind of Data Source, only one step is required: enter the path to the Directory in which your files reside. You can see in the following screenshot that the demo files Data Source points to the default BIP files directory. The file needs to be accessible from the BI Server (not on your local machine).
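To make the JDBC parameters described earlier more concrete, here is an illustrative set of values for a connection to a local Oracle database. Every value below is a placeholder rather than something taken from the article; substitute your own server, SID, and credentials.
Data Source Name:       BIP_Demo_Oracle
Driver Type:            Oracle 11g
Database Driver Class:  oracle.jdbc.OracleDriver
Connection String:      jdbc:oracle:thin:@localhost:1521:orcl
Username:               bip_user
Password:               (the database user's password)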
Tips and Tricks: Report Page in IBM Cognos 8 Report Studio

Packt
17 Oct 2011
10 min read
(Read more interesting articles on IBM Cognos here.) Showing images dynamically (Traffic Light report) Let us suppose that we have a report which shows month-on-month difference in sales quantity Getting ready Please note that you will need administrator rights on the Cognos server to complete this recipe. If the server is installed on your personal machine, you will have these rights by default. For this recipe, we need to first create three icons or images for red, yellow, and green. They should be already available on the Cognos server under <Cognos Installation>webcontentsamplesimages folder. If not, then create them using any image editor software or use the images supplied with this book. Once you have the three images which you need to conditionally show on the report, place them on the Cognos server under <Cognos Installation>webcontentsamplesimages folder. (If the folder is not there, create one). Change the IIS security to allow 'Anonymous Read and Browse' accesses. Now open the report that shows the month-on-month running differences. Insert a new 'image' from the insertable objects pane on the list report, as a new column. Now go to Condition Explorer and create a new string variable. Define the expression as: if ([Query1].[Running Difference] > 0)then ('green')else if ([Query1].[Running Difference] < 0)then ('red')else ('yellow') Call the variable Traffic and define three possible values for the same (red, yellow, and green). Now go back to the report page. Select the image. Open its URL Source Variable dialog. Choose the variable Traffic and click OK. From Condition Explorer, choose 'red' condition. Now click on the image again. It will allow you to define the image URL for this condition. Set the URL to: ../samples/images/Red.jpg Similarly, define the URL for 'yellow' and 'green' conditions as ../samples/images/yellow.jpg and ../samples/images/green.jpg respectively. Run the report to test it. How it works... Cognos Report Studio allows you to put the images in the report by specifying the URL of the image. The images can be anywhere on the intranet or internet. They will be displayed properly as long as the URL is accessible from Cognos application server and gateway. In this recipe, we are using a report which already calculates the Running Difference. Hence, we just had to define conditional variable to trap different possible conditions. The Image component allows us to define the URL for different conditions by attaching it to the Traffic variable in step 8. There's more... In this case, though the URL of the image changes dynamically, it is not truly 100% dynamic. There are three static URLs already defined in the report and one is picked up depending on the condition. We can also use a data item or report expression as source of the URL value. In that case, it will be totally dynamic, and based on the values coming from database; Cognos will work out the URL of the image and display it correctly. This is useful when the image filenames and locations are stored in the database. For example, Product Catalogue kind of reports. More info This recipe works fine in HTML, PDF, and Excel formats.We have used relative URLs for the images, so that report can be easily deployed to other environments where Cognos installation might be in a different location. However, we need to ensure that the images are copied in all environments in the folder mentioned in step 2. Handling missing image issue In the previous recipe, we saw how to add images to the report. 
You will be using that technique in many cases, some involving hundreds of images (For example, Product Catalogue).There will often be a case in which database has a URL or image name, whereas the corresponding image is either missing or inaccessible. In such a case, the web browser shows an error symbol. This looks quite ugly and needs to be handled properly. In this recipe, we will see how to handle this problem gracefully. Getting ready We will use the report prepared in previous recipe. We need to delete the Green.jpg file (or rename it to something else) from the server, in order to create the missing image scenario. How to do it... In the previous recipe, we added an image object and defined its conditional URLs . We need to replace that image with an HTML Item. For that, unlock the report objects and delete the image component. Add an HTML Item in the same column. Select this HTML item and from the Properties pane, set its HTML Source Variable to 'Traffic'. (Please note that we already have this conditional variable in the last recipe). Now define the HTML for different conditions. Start with 'red'. Choose 'red' from conditional explorer and define the HTML as: For 'yellow', define the HTML as: <img src="../samples/images/yellow.jpg" alt="No Change" onError="img2txt(this)"/> For 'green', define HTML as: <img src="../samples/images/green.jpg" alt="Upsell" onError="img2txt(this)"/> Now go back to the No Variable state by double clicking on the green bar, and add another HTML item on the report. Put it just before the list. Define this HTML as: <script>function img2txt(img) {txt = img.alt;img.parentNode.innerHTML=txt;}</script> Now run the report to test it. As you can see, if the image is missing, the report will now handle it gracefully and show some text instead of an error image. How it works... Here we are using our custom code to display the image, instead of using CRS's in-built Image component.We have pulled an HTML item onto the report and defined it to display different images depending on the condition using the <img border="0" /> tag. This tag allows us to define an alternative text and onError event as well. We are using the onError event to call our custom made JavaScript function called img2txt .This function replaces the HTML item with a text which was originally defined as 'alternative text'. Hence, if green.jpg is missing, this function will replace it with a text item Upsell. There's more... As we are using HTML code and JavaScript in this technique, it works in HTML format only. This technique will be useful for a lot of graphical reports (dashboards, scorecards, online product catalogues, and so on). Dynamic links to external website (Google Map example) In this recipe, we will introduce you to the 'Hyperlink' component. A report shows retailers information by products. It shows various fields like Retailer name, Contact information, City, and Postal zone. Business wants to have a link to Google maps that will show retailer's place on the map using the Postal zone information. As the addresses might change in the backend, the technique needs to be dynamic to pick up the latest postal zone. Getting ready Create a simple list report that shows Retailers information by Product lines. How to do it... From the 'Insertable Objects' toolbox, drag a hyperlink object onto the report as a new column. Change its Text property to Map. 
Set the URL Source Type to Report Expression and define the report expression as follows: 'http://maps.google.com/maps?q=' + [Query1].[City] Run the report to test it. As you can see, there is a link for each retailer record. If you Shift+click on the link, it will open the Google map for corresponding postal zone in a new window. How it works... Here we are using the 'Hyperlink' component of CRS. We can define the URL as any static link. However, for our requirements, we have defined a report expression. This allows us to provide a dynamic link which picks up the latest postal zone from the database. We are passing the postal zone to Google Maps as part of a URL. The hyperlink component works in HTML as well as Excel and PDF formats of report. This object currently does not have the property to define whether the link target should open in a new window or the same window. Just clicking on the link, opens the target in same window; whereas Shift+click opens in a new window. There's more... You can use this technique to call any external website that accepts parameters within a URL. You can pass multiple parameters too.   Alternating drill link In this recipe, we will learn about a limitation of drill link and overcoming it using Render Variable. There is a crosstab report which shows sales quantity by month and order method. We need to provide drill-through facility from the intersection. However, the drill-through target needs to be different, depending on the order method. If order method is e-mail, the drill-through from intersection should go to a report called 'Alternating Drill Link—Drill Report 2'. For all other order methods, it should go to 'Alternating Drill Link—Drill Report 1'. Getting ready Create a crosstab report to serve as the main report. Drag Month (shipment) on rows, Order method on columns and Sales Quantity on the intersection. Create two list reports to serve as drill reports. In the sample provided with this book, we have used two list reports for this. One accepts the Order method and Month. The other accepts only month and is designed to work for the order method 'E-mail'. How to do it... Create a drill through to first drill the report from the crosstab intersection. Now make sure that the report objects are unlocked. Select the intersection text item (which now looks like hyperlink as there is already a drill-through defined). Hold the Ctrl key down and drag the text to its right within a cell. This should create a copy of the text item within that cell and will look like the following: Now select this copy of the text item. Hit the drill-through button to open definitions. Delete the existing drill-through to first report. Create a new drill to a second report. So, now we have two text items in the cell, each going to different drill reports. Create a string type of Conditional Variable. Define it as: if ([Query1].[Order method] = 'E-mail') then ('E-mail')else ('Other')Call it OrderMethod and define the two values to be E-mail and Other. Now go back to the report page. Select the first text item from intersection. Open its Render Variable property . Choose the OrderMethod variable and select to Render for: Other. Similarly, define Render Variable for the second text item, but choose to Render for: E-mail. Run the report to test it. You will see that clicking on the intersection numbers opens first drill report for any order method other than E-mail. Whereas for the numbers under E-mail, the second drill report opens.
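Returning to the Google Map recipe above, the "pass multiple parameters" idea could look like the following report expression; the Address and Postal zone data item names are assumptions for illustration and should be replaced with whatever exists in your query:
'http://maps.google.com/maps?q=' + [Query1].[Address] + ', ' + [Query1].[City] + ', ' + [Query1].[Postal zone]
Concatenating several columns into the query string this way simply builds a more precise search term for the external site, at the cost of depending on all of those columns being populated.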
SAP Netweaver: Accessing the MDM System

Packt
28 Sep 2011
4 min read
  (For more resources on SAP, see here.) Accessing an MDM server involves mounting and unmounting operations which we discuss in the following section. Mounting and unmounting an MDM server MDM server installations are accessible on the console only after they have been mounted. Multiple servers can be mounted within a single console session. We have a choice of mounting only those servers which need to be accessed. The server may or may not be in a running state when mounted in your console session. No password is required to mount a server in your console session even if it is password protected. The MDM Console provides the option of saving the list of currently mounted servers to an MDM Console Settings file. We can load this settings file in the console session and automatically get the previously saved server(s) list mounted as a group. An MDM server can be mounted by multiple MDM Consoles. Once an MDM server is started from any console, it runs on the machine where it is installed and is seen as running by all MDM Consoles that have mounted it. We can mount an MDM server as follows: Right-click on the root node (SAP MDM Servers) in the hierarchy pane tree and choose Mount MDM Server… from the context menu: Alternatively you many select the root node (SAP MDM Servers) and choose MDM Servers | Mount MDM Server… from the main menu: MDM opens the Mount MDM Server dialog prompting for the MDM Server input as displayed next: In the drop-down list input the region displaying the text Enter or select an MDM Server, type the name of the MDM server (typically the name of the machine on which the server is running) you want mounted or select it from the drop-down list. Alternatively (for non-Windows installations), type the name or IP address of any remote machine into the edit box in the Mount MDM Server dialog. Click on the OK button: The drop-down list of MDM Servers shows only those servers that you have previously mounted. If a specific server is not in the list, click on … (Browse) button to open the Select MDM Server dialog (see next) and select the machine on which the MDM Server has been installed from the list of Windows machines visible on the local network.   On successful mounting of the MDM server, you will see a new server node added in the tree structure of the hierarchy pane. Depending on the state of the MDM server, the corresponding icon is displayed in the tree node. The different states and the respective icons of the server node are listed in the following table: Status icon State of MDM server   MDM server is stopped   MDM server is running   MDM server is in one of the following states*: Server Inaccessible Communication Error Start Server Failed Invalid   If the MDM server is inaccessible via the console even after the server has been started, you can try unmounting and remounting the MDM server in the console to restore connectivity. Next we see how to unmount an already mounted MDM server: In the hierarchy tree, right-click on the MDM server that you want to unmount and choose Unmount MDM Server from the context menu. Alternatively, you may unmount the server by first selecting its node in the tree and then clicking on MDM Servers | Unmount MDM Server from the main menu. Unmounting an MDM server is also possible by using the MDM Servers pane (top-right) when the root node (SAP MDM Servers) is selected in the hierarchy tree. Then you can right-click on the MDM Server in the objects pane and select Unmount MDM Server from the context menu. 
The MDM server node disappears from the tree in the hierarchy pane. Unmounting an MDM server while it is still running keeps the MDM repositories mounted and loaded, even though the unmounted server is now disconnected from the console session. Unmounting and then re-mounting an MDM server within the same MDM Console session requires the MDM server's password to be re-entered before any server-level operations (such as starting and stopping the server) can be performed.
SAP NetWeaver: MDM Scenarios and Fundamentals

Packt
16 Sep 2011
11 min read
  (For more resources on SAP, see here.) Master Data Management Master Data Management seeks to ensure consistent and high-quality master data in a heterogeneous system environment. It includes determination and future avoidance of duplicate and inconsistent data, thus allowing reliable global reporting and operational efficiency of business execution. Benefits of Master Data Management MDM enables an organization to link all of its critical reference or master data shared by several disparate IT systems and groups under one single version of truth. This ensures that multiple inconsistent versions of the same master data are not used in different parts of an organization's operations. By providing front-line employees with more accurate and complete data, instead of inconsistent, incomplete, and often inaccurate data, organizations can realize many added benefits. The business analytical capability of an organization can be increased by utilizing MDM to provide consistent master data across all its operational applications. By achieving this, the master data that flows into a data warehousing system would also be consistent thus allowing the organization to leverage company-wide analytics and reporting. The benefits of MDM increase as the number and diversity of organizational departments, worker roles, and computing applications expand. The implementation of MDM is especially useful when companies merge as it can minimize confusion and optimize the efficiency of the new, larger organization. In addition, companies with a global footprint having an independent region-wise ERP implementation tend to consolidate with one ERP solution for all countries. In this scenario, MDM proves to be the necessary solution by unifying master data from each ERP system into a single unified master data such as Supplier, Material master, or Customer. MDM scenarios SAP NetWeaver MDM scenarios can be easily implemented by the customers to utilize the core functionality of SAP NetWeaver in a phase-wise manner. It includes streamlined IT scenarios as well as product-content management and global data synchronization capabilities. SAP NetWeaver MDM scenarios are broadly classified into the following two categories: IT scenarios Business scenarios IT scenarios IT scenarios are based on the lines of viewing the system comprising of various IT components and the flow of data between these entities. These scenarios can be applied to specific master data objects based on a model-driven approach. The following IT scenarios are listed within SAP NetWeaver MDM: Master Data Consolidation Master Data Harmonization Central Master Data Management Master Data Consolidation Consolidation involves matching, normalizing, cleansing, and storage of master data imported from heterogeneous client systems. SAP NetWeaver MDM offers out-of-the-box models covering globally relevant attributes for the following: Material Product Retail article Supplier Customer Business partner Employee This allows customers to also model additional content on an ad-hoc basis. Organizations can cleanse and consolidate master data on materials, retail articles, suppliers, customers, and employees in an interactive manner within heterogeneous environments. The cleansed and consolidated master data can then be consumed to perform company-wide analytics, for example, global spend analysis. 
The key capabilities of Master Data Consolidation include: Identification of identical or similar objects spread across the local systems Cleansing of master data objects on a need basis Providing ID mapping for unified, company-wide analytics, and reporting Components required for implementing Master Data Consolidation Masterdata consolidation utilizes the following SAP NetWeaver components: Process Integration (PI) Components of MDM such as: MDM Import Manager (required for map creation and for manual data import) MDM Import Server (required for automated master data import) MDM Data Manager (required for updating master data) MDM Syndication Server (required for automated master data export) MDM Syndicator (for manual master data export) Business Intelligence (BI) (only if data needs to be consumed for consolidated analytics such as global spend analysis) In the following diagram, we illustrate Master Data Consolidation: How this scenario integrates with other scenarios? MasterData Consolidation is the prerequisite for subsequent phases lying within the incremental approach followed by SAP NetWeaver MDM. Subsequent scenarios that follow Master Data Consolidation are Master Data Harmonization and Central Master Data Management. Master Data Harmonization Harmonization involves distribution of cleansed and consolidated high-quality master data within heterogeneous system landscapes. Organizations can make use of the out-of-the-box models offered by SAP NetWeaver MDM to cover globally relevant attributes for the following: Material Product Retail article Supplier Customer Business partner Employee Additional content can also be modeled by the customers on an ad-hoc basis. This scenario includes Master Data Consolidation to ensure high-quality master data within connected business systems in an interactive manner. An added benefit in this scenario is that it allows client-specific control on master data. Organizations can utilize the consolidated and harmonized master data to perform company-wide analytics, for example, global spend analysis. The key capabilities of Master Data Harmonization include: Streamlined processes for data load, consolidation, and distribution High-quality cleansed and de-duplicated master data within a heterogeneous system landscape Components required for implementing Master Data Harmonization MasterData Harmonization utilizes the following SAP NetWeaver components: Process Integration (PI) Components of MDM such as: MDM Import Manager (required for map creation and for manual data import) MDM Import Server (required for automated master data import) MDM Data Manager (required for updating master data) MDM Syndication Server (required for automated master data export) MDM Syndicator (for manual master data export) Business Intelligence (BI) (only if data needs to be consumed for consolidated analytics such as global spend analysis) In the following diagram, we illustrate Master Data Harmonization: How this scenario integrates with other scenarios In SAP NetWeaver's incremental approach, Master Data Harmonization is preceded by the Master Data Consolidation scenario. You can also leverage the consolidation and harmonization capabilities of Business Objects Data Services. Central Master Data Management Allows centralized maintenance and storage of master data with distribution mechanisms that ensure master data is delivered to remote systems that need it. 
Central Master Data Management puts into place corporate master data governance policies that ensures the overall master data quality of an organization. The differentiating aspect in this scenario with reference to Master Data Harmonization is that master data is created centrally using a rich client. Information is then delivered to target remote systems in an interactive manner. The key capabilities of Central Master Data Management include: Achieving Central Data ownership resulting in dramatic quality improvements Empowers companies to set their own standards for master data management Guarantees client-specific control on master data via local completion SAP NetWeaver MDM offers out-of-the-box models covering globally relevant attributes for the following: Material Product Retail article Supplier Customer Business partner Employee This allows customers to also model additional content on an ad-hoc basis. Components required for implementing Central Master Data Management Central Master Data Management utilizes the following SAP NetWeaver components: Process Integration (PI) Components of MDM such as: MDM Data Manager (required for updating master data) MDM Syndication Server (required for automated master data export) Business Intelligence (BI) (only if data needs to be consumed for consolidated analytics such as global spend analysis) In the following diagram, we illustrate Central Master Data Management: How this scenario integrates with other scenarios In SAP NetWeaver's incremental approach, Master Data Consolidation is a prerequisite for subsequent Central Master Data Management. Business scenarios In addition to IT scenario variants, SAP NetWeaver MDM also features business scenarios. This allows flexibility in adapting SAP NetWeaver Master Data Management to whatever business process flow the customer wants. The following business scenarios are described: Rich Product-Content Management Global Data Synchronization Customer Data Integration Rich Product-Content Management This scenario targets requirements of a centralized product-content management and multi-channel catalog publishing. It allows for importing and exporting product data, centrally managing content, and publishing disparate product data across the enterprise and between trading partners. Organizations can create custom print catalogs, web catalogs, or expose an MDM product repository to a business application (for example SAP SRM) through the Open Catalog Interface (OCI). Consequently, the capabilities of MDM are extended with business processes such as product introduction, cataloging, and publishing. The key capabilities of Rich Product-Content Management are as follows: High-performing load, aggregation, and search of product data Multidimensional search Flexible taxonomy Intelligent imaging and Web/print publishing APIs for seamless, multiplatform integration Scalability (up to millions of products) Organizations can utilize the following key benefits of implementing Rich Product-Content Management: Manage or exchange product data locally and globally Manage internal content Search electronic catalogs Print customized catalogs Syndicate product catalog content through multiple channels such as OCI, Web, and Print Presents role-based interfaces through a portal Process flow This business scenario includes the following processes: The following section discusses each of these processes in detail. 
Importing product data Start the upload of product master data (flat files) from the specified remote systems, or product information from suppliers (in Excel or TXT format) to MDM. This process has the following prerequisites: The Repository has been set up using the MDM Console and import maps have been created using the MDM Import Manager The inbound port has been defined using the MDM Console The MDM Import Server is running The inbound source data is staged in the inbound port Once the data is delivered to a specific inbound port, it is automatically picked up within a configurable time interval and queued up for import processing. The MDM Import Server maps and matches the imported data to the repository structure as per the import maps defined in the MDM Import Manager. Re-categorizing and enriching product data In this process, you search and merge identical records interactively using the MDM Data Manager. It provides different search patterns such as tree search, keyword search, free search, and so on. After de-duplication you can check if new data has been attached to the correct category and re-categorize it, if necessary. You can also enrich additional information in the MDM Data Manager and custom validations can be applied to check master data updates. Workflows can also be configured which are triggered to support the change processes. Support for adding images as additional information for repository items is available in the MDM Image Manager. Images can be imported into the repository and image variants (example thumbnails) can be created (using the MDM Console) for each image in addition to the original copy. These images are linked to the corresponding product items in the repository using the MDM Data Manager. Providing catalog content Using this process, you can choose to syndicate the product data, apart from print publishing such as Web publishing or exposing the MDM product repository, to a business application (such as, SAP SRM) through the Open Catalog Interface (OCI). The SRM-MDM web catalog provided by SAP contains the web interfaces developed by SAP to access the MDM catalog. The implementation would require a deployment into an additional NetWeaver component called SAP Enterprise Portal. In the case of web publishing, a custom Web Catalog can be developed using the APIs. As a prerequisite, a web application should have been created and deployed on a web server with an open connection to the MDM catalog. An MDM API can be used to perform search, read, and maintain the repository content. On the other hand, if the MDM product repository needs to be exposed to a business application, we can provide the content via the OCI. Using the OCI you can search for products and add the required items to a selection list. The list is then transferred to the shopping cart of the business application and the order is completed. Enabling print publishing Using this process, you can compose and set up a printed product catalog using the MDM Publisher. In order to do this you need to first create a family table using the MDM Console to enable the initial partitioning. As catalog printing is based on category-dependent pages and different product groups in a category have different layouts, further category partitioning can be defined in the MDM Data Manager. We can partition such categories using the field or attribute values to create product families. With the help of the MDM Publisher, you can assign default settings to create a common layout structure for the publication. 
We can then arrange a specific layout for the given product family such as eliminate redundancies, apply printed version display name, and structure tables. In order to start the publishing activities, a collection of families or non-family based records can be defined as a publication. The publication hierarchy, thus created, is not limited to the repository's taxonomy unlike the family hierarchy. You can freely add, delete, move, and split nodes to create your own structure for the catalog. Spread editor will enable you to concentrate specifically on page layout and design such as creating layout templates for publication. The next step involves using the DTP plug-in to send the publication data from MDM to a Desktop Publishing (DTP) application such as Adobe InDesign. Using the DTP application, some specialized format changes can be done and saved with the publication in MDM. This can be re-used with the next publishing run. Finally, an index for the complete publication is generated using the MDM Indexer.
NHibernate 3: Creating a Sample Application

Packt
06 Sep 2011
8 min read
  (For more resources on NHibernate, see here.)   Prepare our development environment We assume that you have a computer at hand which has Windows Vista, Windows 7, Windows Server 2003 or Windows Server 2008 installed. If you are using an Apple computer, then you can install, for example, Windows 7 as a virtual machine. First, install Microsoft Visual Studio 2010 Professional, Microsoft Visual C# 2010 Express or Microsoft Visual Basic 2010 Express on your system. The Express editions of Visual Studio can be downloaded from http://www.microsoft.com/express/windows. Note that NHibernate 3.x can also be used with the 2008 editions of Microsoft Visual Studio, but not with any older versions. NHibernate 3.x is based on the .NET framework version 3.5, and thus only works with IDEs that support this or a higher version of the .NET framework. Additionally, note that if you don't want to use Visual Studio, then there are at least two other free OSS options available to you: MonoDevelop is an IDE primarily designed for C# and other .NET languages. MonoDevelop makes it easy for developers to port .NET applications created with Visual Studio to Linux and to maintain a single code base for all platforms. MonoDevelop 2.4 or higher can be downloaded from http://monodevelop.com/download. SharpDevelop is a free IDE for C#, VB.NET, and Boo projects on Microsoft's .NET platform. It is open source. SharpDevelop 3.2 or higher can be downloaded from http://sharpdevelop.net/OpenSource/SD/Default.aspx. Furthermore, note that NHibernate also works on Mono: http://www.mono-project.com. Next, we need a relational database to play with. NHibernate supports all major relational databases like Oracle, MS SQL Server, MySQL, and so on. We will use MS SQL Server as our Relational Database Management System (RDBMS). Microsoft SQL Server is the most used RDBMS in conjunction with NHibernate and, in general, with .NET projects. The SQL Server driver for NHibernate is one of the most tested drivers in NHibernate's suite of unit tests, and when specific new features come out, it is likely that they will be first supported by this driver. Install the free Microsoft SQL Server 2008 R2 Express on your system if you have not already done so during the install of Visual Studio. You can download the express edition of MS SQL Server from here http://www.microsoft.com/express/Database/. For our samples, it really doesn't matter which version you download: the 32-bit or the 64-bit version. Just take the one that matches best with the bitness of your operating system. Make sure that you install SQL Server with the default instance name of SQL Express. Make sure you also download and install the free SQL Server Management Studio Express (SSMS) from the following link: http://www.microsoft.com/download/en/details.aspx?id=22985 Now, we are ready to tackle NHibernate. We can download NHibernate 3.1.0 GA from Source Forge http://sourceforge.net/projects/nhibernate/. The download consists of a single ZIP file containing the following content, as shown in the screenshot: The binaries that are always needed when developing an NHibernate based application can be found in the Required_Bins folder. Opening this folder, we find the files as shown in the following screenshot: Note that if you are downloading version 3.1 or newer of NHibernate, you will no longer find the two DLLs, Antlr3.Runtime.dll and Remotion.Data.Linq.dll, in the ZIP file that were present in version 3.0. The reason is that they have been IL merged into the NHibernate.dll. 
If we want to use lazy loading with NHibernate (and we surely will), we also have to use some additional files, which can be found in the Required_For_LazyLoading folder. Lazy loading is a technique that is used to load certain parts of the data only when really needed, that is, when the code accesses it. There are three different options at hand; we want to choose Castle, and the corresponding folder contains the required files.

As we are also using Fluent NHibernate, we want to download the corresponding binaries too. There is no installer available or needed; we just have to copy a bunch of files to a folder we define. Please download Fluent NHibernate, which also contains the binaries for NHibernate, from http://fluentnhibernate.org/downloads. Make sure you download the binaries for NHibernate 3.1 and not an earlier version. Save the ZIP file you just downloaded to a location where you can easily find it for later use.

The only files added compared to the direct NHibernate download are the FluentNHibernate.* files. On the other hand, the XSD schema files (nhibernate-configuration.xsd and nhibernate-mapping.xsd) are not included in this package, so we'll want to copy those from the NHibernate package when implementing our sample.

Defining a model

After we have successfully downloaded the necessary NHibernate and Fluent NHibernate files, we are ready to start implementing our first application using NHibernate.

Let's first model the problem domain we want to create the application for. The domain for which we want to build our application is a product inventory system. With the application, we want to be able to manage a list of products for a small grocery store. The products shall be grouped by category. A category consists of a name and a short description. The product, on the other hand, has a name, a short description, a category, a unit price, a reorder level, and a flag to determine whether it is discontinued or not. To uniquely identify each category and product, they each have an ID.

A class diagram of the model just described would show the two classes Category and Product, with Product referencing Category. Note that the class designer used to create such a diagram is only available in the professional version of Visual Studio and not in the free Express editions.

Time for action – Creating the product inventory model

Let's implement the model for our simple product inventory system.

1. First, we want to define a location on our system where we will put all the code that we create. Create a folder called NH3BeginnersGuide on your file system.
2. Inside this new folder, create another folder called lib. This is the place where we will put all the assemblies needed to develop an application using NHibernate and Fluent NHibernate.
3. Locate the ZIP file containing the Fluent NHibernate files that you downloaded in the first section of this article. Extract all files to the lib folder created in the preceding step.
4. Open Visual Studio and create a new project. Choose WPF Application as the project template. Call the project Chapter2. Make sure that the solution you create will be saved in the folder NH3BeginnersGuide you created in the preceding step.
When using VS 2008 Pro, you can do this when creating the new project. If, on the other hand, you use the Express edition of Visual Studio, you choose the location when you first save your project.

5. Add a new class to the project and call it Category. To this class, add a virtual (auto-) property called Id, which is of the int type. Also, add two other virtual properties of the string type, called Name and Description. The code should look similar to the following code snippet:

namespace Chapter2
{
    public class Category
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public virtual string Description { get; set; }
    }
}

Downloading the example code: You can download the example code files here.

6. Add another class to the project and call it Product. To this class, add the properties shown in the following code snippet. The type of the respective property is given in parentheses: Id (int), Name (string), Description (string), Category (Category), UnitPrice (decimal), ReorderLevel (int), and Discontinued (bool). The resulting code should look similar to the following code snippet:

namespace Chapter2
{
    public class Product
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public virtual string Description { get; set; }
        public virtual Category Category { get; set; }
        public virtual decimal UnitPrice { get; set; }
        public virtual int ReorderLevel { get; set; }
        public virtual bool Discontinued { get; set; }
    }
}

What just happened?

We have implemented the two classes Category and Product, which define our simple domain model. Each attribute of the entity is implemented as a virtual property of the class. To limit the amount of code necessary to define the entities, we use auto properties. Note that the properties are all declared as virtual. This is needed because NHibernate uses lazy loading by default.
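To give a sense of where these classes are heading, the following is a minimal sketch of Fluent NHibernate class maps for the two entities, together with a helper that builds a session factory against our SQL Server Express instance. This is a preview under stated assumptions rather than the article's own mapping code: table and column names are left to NHibernate's conventions, and the connection string with the database name NH3BeginnersGuide is purely illustrative.

using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using FluentNHibernate.Mapping;
using NHibernate;

namespace Chapter2
{
    // Maps the Category entity; Id marks the identifier, Map the simple properties.
    public class CategoryMap : ClassMap<Category>
    {
        public CategoryMap()
        {
            Id(x => x.Id);
            Map(x => x.Name);
            Map(x => x.Description);
        }
    }

    // Maps the Product entity; References creates the many-to-one
    // association from Product to Category.
    public class ProductMap : ClassMap<Product>
    {
        public ProductMap()
        {
            Id(x => x.Id);
            Map(x => x.Name);
            Map(x => x.Description);
            Map(x => x.UnitPrice);
            Map(x => x.ReorderLevel);
            Map(x => x.Discontinued);
            References(x => x.Category);
        }
    }

    public static class SessionFactoryBuilder
    {
        // Builds an ISessionFactory from the fluent mappings above.
        // The connection string and database name are assumptions for illustration.
        public static ISessionFactory Build()
        {
            return Fluently.Configure()
                .Database(MsSqlConfiguration.MsSql2008
                    .ConnectionString(@"Server=.\SQLEXPRESS;Database=NH3BeginnersGuide;Integrated Security=SSPI;"))
                .Mappings(m => m.FluentMappings.AddFromAssemblyOf<ProductMap>())
                .BuildSessionFactory();
        }
    }
}

In a real application, the session factory is built once at startup and sessions are opened from it as needed; we will see the book's own configuration and mapping code later.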


Oracle E-Business Suite: Entering and Reconciling Bank Statements

Packt
23 Aug 2011
4 min read
Oracle E-Business Suite 12 Financials Cookbook

Take the hard work out of your daily interactions with E-Business Suite financials by using the 50+ recipes from this cookbook.

Entering bank statements

Bank statements are downloaded from the bank to a local directory. Once the file is received, the bank account balance and statement information can be loaded into the bank statement open interface tables, using the bank statement loader program or a custom loader program. The files can also be loaded automatically using an interface program or using the XML Gateway. Bank statements can also be entered manually. In this recipe, we will look at how to enter bank statements.

Getting ready

The bank statement shown next has been loaded into the open interface table. Let's review the transactions in the open interface:

1. Select the Cash Management responsibility.
2. Navigate to Bank Statements | Bank Statement Interface Lines.
3. Select 95-6891-3074 in the Account field.
4. Click on the Lines button to view the transactions in the interface tables.

How to do it...

Let's list the steps required to automatically enter the bank statements using the import and AutoReconciliation program:

1. Select the Cash Management responsibility.
2. Navigate to Other | Programs | Run, or select View | Requests from the menu.
3. Click on the Submit a New Request button.
4. Select Single Request from the Options.
5. Click on the OK button.
6. In the Submit Request form, select Bank Statement Import & AutoReconciliation from the list of values. Please note that we could run the Bank Statement Import program to run only the import.
7. Select the Parameters field, and select Kings Cross as the Bank Branch Name, select 95-6891-3074 as the Bank Account Number, and select 20110314-0001 as the parameter for the Statement Number From and the Statement Number To fields. Accept the default values for the remaining fields.
8. Click on the OK button. We can schedule the program to run periodically, for example, every day.
9. Click on the Submit button to submit the request.

Let's review the imported bank statements:

1. Navigate to Bank Statement | Bank Statements and Reconciliation. The imported statement is displayed.
2. Click on the Review button.
3. In the Bank Statement window, select the Lines button. The imported lines are displayed.

How it works...

Bank statements can be imported automatically, using a SQL*Loader script against the bank file to populate the bank statement open interface. The bank statement information is then imported into the Bank Statement windows using the Bank Statement Import program.

There's more...

Now, let's look at how to enter statements manually.

Entering bank statements manually

Let's enter the bank statement for the 15th of March manually. The lines on the statement are as follows:

Payment of 213.80.
Receipt of 3,389.89 from A.C. Networks.
Credit of 7,500.00 for Non Sufficient Funds for the receipt from Advantage Corp.
Bank Transfer payment of 1,000.00.

1. Select the Cash Management responsibility.
2. Navigate to Bank Statement | Bank Statements and Reconciliation.
3. In the Reconcile Bank Statements window, click on the New button.
4. In the Account Number field, enter 95-6891-3074; the other details are automatically entered.
5. In the Date field, enter 15-MAR-2011.
6. In the Statement Number field, enter 20110314-0002.
7. In the Control Totals region, let's enter control totals based on our bank statement.
8. The Opening Balance of 125,727.21 is entered based on the previous opening balance.
9. In the Receipts field, enter 3,389.89 and 1 in the Lines field.
10. In the Payments field, enter 8,713.80 and 3 in the Lines field.
11. The Closing Balance of 98,495.56 is entered automatically.

Let's enter the bank statement lines:

1. Click on the Lines button.
2. In the Bank Statements Lines form, enter 1 in the Line field. Select Payment as the Type. Enter 100 as the code. In the Transaction Date field, enter 15-MAR-2011. In the Amount field, enter 213.80.
3. Select the next line, and enter 2 in the Line field. Select Receipt as the Type. Enter 200 as the code. In the Transaction Date field, enter 15-MAR-2011. In the Amount field, enter 3,389.89. Select the Reference tab, and enter A.C. Networks.
4. Select the next line, and enter 3 in the Line field. Select NSF as the Type. Enter 500 as the code. In the Transaction Date field, enter 15-MAR-2011. In the Amount field, enter 7,500.00. Select the Reference tab, and enter Advantage Corp.
5. Select the next line, and enter 4 in the Line field. Select Payment as the Type. Enter 140 as the code. In the Transaction Date field, enter 15-MAR-2011. In the Amount field, enter 1,000.00.
6. Save the record.

Oracle E-Business Suite: Adjusting Items in Inventory and Classifying Items

Packt
19 Aug 2011
4 min read
Oracle E-Business Suite 12 Financials Cookbook

Adjusting Items in Inventory

Item quantities can be adjusted in Inventory. In this recipe, we will use miscellaneous transactions to adjust Items. Let's try to enter transactions on some of the controls we have set up. We will try to enter a Miscellaneous Transaction for five paper widgets into Inventory.

How to do it...

1. Navigate to Transactions | Miscellaneous Transactions.
2. In the Type field, select Miscellaneous receipt from the list of values.
3. Click on the Transaction Lines button to enter the receipt.
4. Search for the PRD20001 Item in the Item field.
5. Select the Subinventory list of values; the field should be automatically populated with ACME-FG.
6. In the Locator field, enter A1.1.2. The system should display an Error message to indicate that an invalid locator has been entered.
7. Click on OK and enter A1.1.1; the system should accept this value.
8. Enter a value of 5 in the Quantity field.
9. In the account field, enter 01-000-1410-0000-000. This is the account that will be charged for the Inventory transaction.
10. Select the Lot / Serial button.
11. Enter the Lot number, LN10001. The expiration date is generated based on the setting in the Item definition.
12. Enter the quantity of 5.
13. Click on the Serial button.
14. Enter SN10001 in the Start Serial Number field and press Tab on the keyboard. SN10005 should be automatically populated in the End Serial Number field.
15. Click on the Done button.
16. Click on the Done button again in the next screen.
17. Save the record.

There's more...

Let's search for the Items in Inventory.

Searching for Items

We will use the material workbench to search for the Items:

1. Navigate to On-hand | Availability | On-hand Quantity.
2. Enter PRD20001 in the Item / Revision field.
3. Click on the Find button.
4. Expand the Organizations tree to show LN10001.
5. Review the Item details.
6. Close the form.

Classifying Items

Items are grouped into logical classifications through categories. Categories can be further grouped into category sets. A default category set can be assigned to a functional area. When an Item is created, it is automatically added to the default category set. The groupings are mainly used for reporting purposes. Let's look at how to classify Items using categories.

How to do it...

Let's list the steps required to create category codes:

1. Navigate to Setup | Items | Categories | Category Codes.
2. Click on the New button to enter the Category.
3. In the Structure Name, select Item Categories.
4. In the Category field, enter BOOKS.MISC.
5. In the Description field, enter Other Books.
6. Save the record.

Let's now create the Category Set, add the Category Codes to a new set called ACME Books, and assign it to the PRD20001 Item:

1. Navigate to Setup | Items | Categories | Category Sets.
2. Click on the New button to enter the category set.
3. In the Name field, enter ACME Books.
4. Enter a description, for example, ACME Paper Books.
5. Select Item Categories for Flex Structure.
6. Select Controlled At as the Org level.
7. Enter BOOKS.MISC as the Default Category.
8. Select the checkbox Allow Multiple Item Category Assignments.
9. Add the following Category Codes to the list: BOOKS.MISC, BOOKS.NEW, and BOOKS.USED.
10. Save the record.

Let's now assign the categories to the PRD20001 Item:

1. Navigate to Items | Organization Items.
2. From the Menu, select Tools and then Categories.
3. Select ACME Books in the Category Set.
4. Enter BOOKS.NEW in the Category field.
5. Save the record.

How it works...
The structure of the Item category is defined in the Item Flexfield structure and the values are held in the individual Value Sets. The combination of the individual values forms the category code. For example, the structure we used previously is made of two segments, defined in the Flexfield structure. The segments are Family and Class. BOOKS is a value in Family and MISC, NEW, and USED are individual values in Class.


Oracle E-Business Suite: Creating Bank Accounts and Cash Forecasts

Packt
19 Aug 2011
3 min read
Oracle E-Business Suite 12 Financials Cookbook

Take the hard work out of your daily interactions with E-Business Suite financials by using the 50+ recipes from this cookbook.

Introduction

The liquidity of an organization is managed in Oracle Cash Management; this includes the reconciliation of the cashbook to the bank statements, and forecasting future cash requirements. In this article, we will look at how to create bank accounts and cash forecasts. Cash Management integrates with Payables, Receivables, Payroll, Treasury, and General Ledger.

Let's start by looking at the cash management process:

1. The bank generates statements.
2. The statements are sent to the organization electronically or by post.
3. The Treasury Administrator loads and verifies the bank statement into Cash Management. The statements can also be manually entered into Cash Management.
4. The loaded statements are reconciled to the cash book transactions.
5. The results are reviewed, and amended if required.
6. The Treasury Administrator creates the journals for transactions in the General Ledger.

Creating bank accounts

Oracle Cash Management provides us with the functionality to create bank accounts. In this recipe, we will create a bank account for a bank called Shepherds Bank, for one of their branches called the Kings Cross branch.

Getting ready

Log in to Oracle E-Business Suite R12 with the username and password assigned to you by the system administrator. If you are working on the Vision demonstration database, you can use OPERATIONS/WELCOME as the USERNAME/PASSWORD.

We also need to create a bank before we can create the bank account. Let's look at how to create the bank and the branch:

1. Select the Cash Management responsibility.
2. Navigate to Setup | Banks | Banks.
3. In the Banks tab, click on the Create button.
4. Select the Create new bank option.
5. In the Country field, enter United States.
6. In the Bank Name field, enter Shepherds Bank.
7. In the Bank Number field, enter JN316.
8. Click on the Finish button.

Let's create the branch and the address:

1. Click the Create Branch icon. The Country and the Bank Name are automatically entered.
2. Click on the Continue button.
3. In the Branch Name field, enter Kings Cross.
4. Select ABA as the Branch Type.
5. Click on the Save and Next button to create the Branch address.
6. In the Branch Address form, click on the Create button.
7. In the Country field, enter United States.
8. In the Address Line 1 field, enter 4234 Red Eagle Road.
9. In the City field, enter Sacred Heart.
10. In the County field, enter Renville.
11. In the State field, enter MN.
12. In the Postal Code field, enter 56285.
13. Ensure that the Status field is Active.
14. Click on the Apply button.
15. Click on the Finish button.