
How-To Tutorials - Data

1204 Articles

Unveil the Power of Your Business Data with Oracle Discoverer

Packt
08 Apr 2010
4 min read
A quick guide to Oracle Discoverer packaging

Before proceeding to get the Oracle Discoverer software, it is important to understand what you actually need and what the Oracle Discoverer packaging provides. At the moment, there are two options when it comes to the current Oracle Discoverer software:

- Oracle Business Intelligence suite, part of Oracle Application Server 10g Release 2
- Portal, Forms, Reports and Discoverer suite, part of Oracle Fusion Middleware 11g Release 1

The first option – the Oracle Business Intelligence suite, part of Oracle Application Server 10g Release 2 – includes the following components:

- Oracle Business Intelligence Discoverer
- Oracle HTTP Server
- Oracle Application Server Containers for J2EE (OC4J)
- Oracle Enterprise Manager 10g Application Server Control
- Oracle Application Server Web Cache
- Oracle Application Server Reports Services

The first component in the above list, Oracle Business Intelligence Discoverer, actually represents a group of components whose names start with Discoverer. The package includes:

- Discoverer Plus
- Discoverer Viewer
- Discoverer Services
- Discoverer Portlet Provider

Note, however, that the above list does not include all the Discoverer components. For example, you won't find the following Discoverer components there:

- Discoverer Administrator
- Discoverer Desktop

These components are included in a complementary package called Oracle Business Intelligence Tools.

As mentioned at the beginning of this section, another way to take advantage of the Discoverer components is to install the Portal, Forms, Reports and Discoverer suite, which is part of Oracle Fusion Middleware 11g Release 1. This package includes the following components:

- HTTP Server
- WebCache
- Portal
- Forms Services
- Forms Builder
- Reports Services
- Report Builder/Compiler
- Discoverer Administrator
- Discoverer Plus
- Discoverer Viewer
- Discoverer Services
- Discoverer Desktop
- Enterprise Manager Fusion Middleware Control

As you can see, the Portal, Forms, Reports and Discoverer suite, unlike the Oracle Business Intelligence suite, does include Discoverer Administrator and Discoverer Desktop, so you won't need to install another package to obtain these components. A major downside to choosing the Portal, Forms, Reports and Discoverer suite, though, is that it requires some additional software to be installed on your system:

- WebLogic Server
- Repository Creation Utility
- Identity Management
- SSO Metadata Repository Creation Assistant
- Patch Scripts
- Identity Management 10gR3
- Oracle Database

For this reason – to save you the trouble of installing a lot of software – the "Installation process" section later in this article covers the installation of the Oracle Business Intelligence suite, part of Oracle Application Server 10g Release 2, rather than the Portal, Forms, Reports and Discoverer suite of Oracle Fusion Middleware 11g Release 1.

Getting the software

Once you have decided on the package you want to install, you can get it from OTN's Software Downloads page at http://www.oracle.com/technology/software/index.html. It is important to remember that each software component available from this page comes with a Development License, which allows for free download and unlimited evaluation time. You can look at the license at http://www.oracle.com/technology/software/popup-license/standard-license.html. Later, if you so desire, you can always buy products with full-use licenses.
Now, on OTN's Software Downloads page, go to the Middleware section and, assuming you want to download the Oracle Business Intelligence suite, click the Business Intelligence SE link to proceed to the Oracle Application Server 10g Release 2 (10.1.2.0.2) page at http://www.oracle.com/technology/software/products/ias/htdocs/101202.html. On this page, go down to the Business Intelligence section and find the links to the packages provided for your operating system. Each package is supposed to be copied to a separate CD; the number of CDs and the size of the packages to be copied on them may vary depending on the operating system. What you need to do is download the installation packages and then copy each to a CD. Looking through the links to the installation packages, you may notice that Tools CD – the link to the package containing the Oracle Business Intelligence Tools suite – is available only for the Microsoft Windows operating system. This is because the components included in the Oracle Business Intelligence Tools suite are Windows-only applications. If, instead of the Oracle Business Intelligence suite, you decided on the Portal, Forms, Reports and Discoverer suite, you have to follow the Oracle Fusion Middleware 11g R1 link in the Middleware section of OTN's Software Downloads page. Following that link, you'll be directed to the Oracle Fusion Middleware 11gR1 Software Downloads page at http://www.oracle.com/technology/software/products/middleware/htdocs/fmw_11_download.html. On this page, go down to the Portal, Forms, Reports and Discoverer section and pick the distribution, which is divided into several packages. Again, the number of packages within a distribution and their size may vary depending on the operating system.


Installing Coherence 3.5 and Accessing the Data Grid: Part 2

Packt
31 Mar 2010
10 min read
Using the Coherence API

One of the great things about Coherence is that it has a very simple and intuitive API that hides most of the complexity happening behind the scenes to distribute your objects. If you know how to use a standard Map interface in Java, you already know how to perform basic tasks with Coherence. In this section, we will first cover the basics by looking at some of the foundational interfaces and classes in Coherence. We will then proceed to do something more interesting by implementing a simple tool that allows us to load data into Coherence from CSV files, which will become very useful during testing.

The basics: NamedCache and CacheFactory

As I have briefly mentioned earlier, Coherence revolves around the concept of named caches. Each named cache can be configured differently, and it will typically be used to store objects of a particular type. For example, if you need to store employees, trade orders, portfolio positions, or shopping carts in the grid, each of those types will likely map to a separate named cache.

The first thing you need to do in your code when working with Coherence is to obtain a reference to the named cache you want to work with. To do this, you use the CacheFactory class, which exposes the getCache method as one of its public members. For example, if you wanted to get a reference to the countries cache that we created and used in the console example, you would do the following:

```java
NamedCache countries = CacheFactory.getCache("countries");
```

Once you have a reference to a named cache, you can use it to put data into that cache or to retrieve data from it. Doing so is as simple as doing gets and puts on a standard Java Map:

```java
countries.put("SRB", "Serbia");
String countryName = (String) countries.get("SRB");
```

As a matter of fact, NamedCache is an interface that extends Java's Map interface, so you will be immediately familiar not only with the get and put methods, but also with other methods from the Map interface, such as clear, remove, putAll, size, and so on.

The nicest thing about the Coherence API is that it works in exactly the same way regardless of the cache topology you use. For now, let's just say that you can configure Coherence to replicate or partition your data across the grid. The difference between the two is that in the former case all of your data exists on each node in the grid, while in the latter only 1/n of the data exists on each individual node, where n is the number of nodes in the grid. Regardless of how your data is stored physically within the grid, the NamedCache interface provides a standard API that allows you to access it. This makes it very simple to change the cache topology during development if you realize that a different topology would be a better fit, without having to modify a single line in your code.

In addition to the Map interface, NamedCache extends a number of lower-level Coherence interfaces that provide additional functionality: in Coherence 3.5 these include CacheMap, QueryMap, InvocableMap, ObservableMap, and ConcurrentMap, which add cache expiry control, querying, in-place entry processing, event notification, and explicit locking, respectively.

The "Hello World" example

In this section we will implement a complete example that achieves programmatically what we did earlier using the Coherence console—we'll put a few countries in the cache, list the cache contents, remove items, and so on. To make things more interesting, instead of using country names as cache values, we will use proper objects this time.
That means that we need a class to represent a country, so let's start there:

```java
public class Country implements Serializable, Comparable {
    private String code;
    private String name;
    private String capital;
    private String currencySymbol;
    private String currencyName;

    public Country() {
    }

    public Country(String code, String name, String capital,
                   String currencySymbol, String currencyName) {
        this.code = code;
        this.name = name;
        this.capital = capital;
        this.currencySymbol = currencySymbol;
        this.currencyName = currencyName;
    }

    public String getCode() { return code; }
    public void setCode(String code) { this.code = code; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getCapital() { return capital; }
    public void setCapital(String capital) { this.capital = capital; }

    public String getCurrencySymbol() { return currencySymbol; }
    public void setCurrencySymbol(String currencySymbol) { this.currencySymbol = currencySymbol; }

    public String getCurrencyName() { return currencyName; }
    public void setCurrencyName(String currencyName) { this.currencyName = currencyName; }

    public String toString() {
        return "Country(" +
               "Code = " + code + ", " +
               "Name = " + name + ", " +
               "Capital = " + capital + ", " +
               "CurrencySymbol = " + currencySymbol + ", " +
               "CurrencyName = " + currencyName + ")";
    }

    public int compareTo(Object o) {
        Country other = (Country) o;
        return name.compareTo(other.name);
    }
}
```

There are several things to note about the Country class, which also apply to other classes that you want to store in Coherence:

- Because the objects need to be moved across the network, classes that are stored within the data grid need to be serializable. In this case we have opted for the simplest solution and made the class implement the java.io.Serializable interface. This is not optimal, from both a performance and a memory utilization perspective, and Coherence provides several more suitable approaches to serialization.
- We have implemented the toString method, which prints out an object's state in a friendly format. While this is not a Coherence requirement, implementing toString properly for both the keys and the values that you put into the cache will help a lot when debugging, so you should get into the habit of implementing it for your own classes.
- Finally, we have also implemented the Comparable interface. This is also not a requirement, but it will come in handy in a moment to allow us to print out a list of countries sorted by name.
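As an aside, one of those more suitable serialization approaches is Coherence's ExternalizableLite interface, which serializes an object field by field instead of relying on default Java serialization. The following is a minimal sketch of how Country could adopt it; it is not part of the original example, and the constructors and accessors are elided for brevity:

```java
import com.tangosol.io.ExternalizableLite;
import com.tangosol.util.ExternalizableHelper;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Sketch only: same fields as the Country class above, serialized explicitly.
public class Country implements ExternalizableLite, Comparable {
    private String code;
    private String name;
    private String capital;
    private String currencySymbol;
    private String currencyName;

    // constructors, accessors, and toString as before ...

    // Reads the state field by field, in the same order it was written.
    public void readExternal(DataInput in) throws IOException {
        code           = ExternalizableHelper.readSafeUTF(in);
        name           = ExternalizableHelper.readSafeUTF(in);
        capital        = ExternalizableHelper.readSafeUTF(in);
        currencySymbol = ExternalizableHelper.readSafeUTF(in);
        currencyName   = ExternalizableHelper.readSafeUTF(in);
    }

    // Writes the state field by field.
    public void writeExternal(DataOutput out) throws IOException {
        ExternalizableHelper.writeSafeUTF(out, code);
        ExternalizableHelper.writeSafeUTF(out, name);
        ExternalizableHelper.writeSafeUTF(out, capital);
        ExternalizableHelper.writeSafeUTF(out, currencySymbol);
        ExternalizableHelper.writeSafeUTF(out, currencyName);
    }

    public int compareTo(Object o) {
        return name.compareTo(((Country) o).name);
    }
}
```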
Now that we have the class that represents the values we want to cache, it is time to write an example that uses it:

```java
import com.tangosol.net.NamedCache;
import com.tangosol.net.CacheFactory;
import ch02.Country;
import java.util.Set;
import java.util.Map;

public class CoherenceHelloWorld {
    public static void main(String[] args) {
        NamedCache countries = CacheFactory.getCache("countries");

        // first, we need to put some countries into the cache
        countries.put("USA", new Country("USA", "United States", "Washington", "USD", "Dollar"));
        countries.put("GBR", new Country("GBR", "United Kingdom", "London", "GBP", "Pound"));
        countries.put("RUS", new Country("RUS", "Russia", "Moscow", "RUB", "Ruble"));
        countries.put("CHN", new Country("CHN", "China", "Beijing", "CNY", "Yuan"));
        countries.put("JPN", new Country("JPN", "Japan", "Tokyo", "JPY", "Yen"));
        countries.put("DEU", new Country("DEU", "Germany", "Berlin", "EUR", "Euro"));
        countries.put("FRA", new Country("FRA", "France", "Paris", "EUR", "Euro"));
        countries.put("ITA", new Country("ITA", "Italy", "Rome", "EUR", "Euro"));
        countries.put("SRB", new Country("SRB", "Serbia", "Belgrade", "RSD", "Dinar"));

        assert countries.containsKey("JPN") : "Japan is not in the cache";

        // get and print a single country
        System.out.println("get(SRB) = " + countries.get("SRB"));

        // remove Italy from the cache
        int size = countries.size();
        System.out.println("remove(ITA) = " + countries.remove("ITA"));
        assert countries.size() == size - 1 : "Italy was not removed";

        // list all cache entries
        Set<Map.Entry> entries = countries.entrySet(null, null);
        for (Map.Entry entry : entries) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}
```

Let's go through this code section by section. At the very top, you can see import statements for NamedCache and CacheFactory, which are the only Coherence classes we need for this simple example. We have also imported our Country class, as well as Java's standard Map and Set interfaces.

The first thing we need to do within the main method is obtain a reference to the countries cache using the CacheFactory.getCache method. Once we have the cache reference, we can add some countries to it using the same old Map.put method you are familiar with. We then proceed to get a single object from the cache using the Map.get method, and to remove one using Map.remove. Notice that the NamedCache implementation fully complies with the Map.remove contract and returns the removed object. Finally, we list all the countries by iterating over the set returned by the entrySet method. Notice that Coherence cache entries implement the standard Map.Entry interface.

Overall, if it wasn't for a few minor differences, it would be impossible to tell whether the preceding code uses Coherence or any of the standard Map implementations. The first telltale sign is the call to CacheFactory.getCache at the very beginning, and the second one is the call to the entrySet method with two null arguments. We have already discussed the former, but where did the latter come from? The answer is that the Coherence QueryMap interface extends Java's Map by adding methods that allow you to filter and sort the entry set. The first argument in our example is an instance of the Coherence Filter interface. In this case, we want all the entries, so we simply pass null as the filter. The second argument, however, is more interesting in this particular example. It represents the java.util.Comparator that should be used to sort the results.
If the values stored in the cache implement the Comparable interface, you can pass null instead of an actual Comparator instance as this argument, in which case the results will be sorted using their natural ordering (as defined by the Comparable.compareTo implementation). That means that when you run the previous example, you should see the following output:

```
get(SRB) = Country(Code = SRB, Name = Serbia, Capital = Belgrade, CurrencySymbol = RSD, CurrencyName = Dinar)
remove(ITA) = Country(Code = ITA, Name = Italy, Capital = Rome, CurrencySymbol = EUR, CurrencyName = Euro)
CHN = Country(Code = CHN, Name = China, Capital = Beijing, CurrencySymbol = CNY, CurrencyName = Yuan)
FRA = Country(Code = FRA, Name = France, Capital = Paris, CurrencySymbol = EUR, CurrencyName = Euro)
DEU = Country(Code = DEU, Name = Germany, Capital = Berlin, CurrencySymbol = EUR, CurrencyName = Euro)
JPN = Country(Code = JPN, Name = Japan, Capital = Tokyo, CurrencySymbol = JPY, CurrencyName = Yen)
RUS = Country(Code = RUS, Name = Russia, Capital = Moscow, CurrencySymbol = RUB, CurrencyName = Ruble)
SRB = Country(Code = SRB, Name = Serbia, Capital = Belgrade, CurrencySymbol = RSD, CurrencyName = Dinar)
GBR = Country(Code = GBR, Name = United Kingdom, Capital = London, CurrencySymbol = GBP, CurrencyName = Pound)
USA = Country(Code = USA, Name = United States, Capital = Washington, CurrencySymbol = USD, CurrencyName = Dollar)
```

As you can see, the countries in the list are sorted by name, as defined by our Country.compareTo implementation. Feel free to experiment by passing a custom Comparator as the second argument to the entrySet method, or by removing both arguments, and see how that affects the result ordering. If you are feeling really adventurous and can't wait to learn about Coherence queries, take a sneak peek by changing the line that returns the entry set to:

```java
Set<Map.Entry> entries = countries.entrySet(new LikeFilter("getName", "United%"), null);
```

As a final note, you might have also noticed that I used Java assertions in the previous example to check that reality matches my expectations (well, more to demonstrate a few other methods in the API, but that's beside the point). Make sure that you specify the -ea JVM argument when running the example if you want the assertions to be enabled, or use the run-helloworld target in the included Ant build file, which configures everything properly for you.

That concludes the implementation of our first Coherence application. One thing you might notice is that the CoherenceHelloWorld application will run just fine even if you don't have any Coherence nodes started, and you might be wondering how that is possible. The truth is that there is one Coherence node—the CoherenceHelloWorld application itself. As soon as the CacheFactory.getCache method gets invoked, Coherence services will start within the application's JVM, and it will either join the existing cluster or create a new one, if there are no other nodes on the network. If you don't believe me, look at the log messages printed by the application and you will see that this is indeed the case.

Now that you know the basics, let's move on and build something slightly more exciting, and much more useful.
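As a preview, here is a minimal sketch of what such a CSV-to-cache loader could look like. This is not the implementation developed in the book; the one-country-per-line file layout and the class name are assumptions made purely for illustration:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Sketch only: loads countries from a CSV file with lines such as
//   SRB,Serbia,Belgrade,RSD,Dinar
// into the "countries" cache, keyed by country code.
public class CsvCountryLoader {
    public static void main(String[] args) throws IOException {
        NamedCache countries = CacheFactory.getCache("countries");
        BufferedReader reader = new BufferedReader(new FileReader(args[0]));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] f = line.split(",");
                if (f.length == 5) {
                    countries.put(f[0], new Country(f[0], f[1], f[2], f[3], f[4]));
                }
            }
        }
        finally {
            reader.close();
        }
    }
}
```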


Installing Coherence 3.5 and Accessing the Data Grid: Part 1

Packt
31 Mar 2010
10 min read
When I first started evaluating Coherence, one of my biggest concerns was how easy it would be to set up and use, especially in a development environment. The whole idea of having to set up a cluster scared me quite a bit, as any other solution I had encountered up to that point that had the word "cluster" in it was extremely difficult and time consuming to configure. My fear was completely unfounded—getting the Coherence cluster up and running is as easy as starting Tomcat. You can start multiple Coherence nodes on a single physical machine, and they will seamlessly form a cluster. Actually, it is easier than starting Tomcat.

Installing Coherence

In order to install Coherence you need to download the latest release from the Oracle Technology Network (OTN) website. The easiest way to do so is by following the link from the main Coherence page on OTN. At the time of this writing, this page was located at http://www.oracle.com/technology/products/coherence/index.html, but that might change. If it does, you can find its new location by searching for 'Oracle Coherence' using your favorite search engine.

In order to download Coherence for evaluation, you will need to have an Oracle Technology Network (OTN) account. If you don't have one, registration is easy and completely free. Once you are logged in, you will be able to access the Coherence download page, where you will find the download links for all available Coherence releases: one for Java, one for .NET, and one for each of the supported C++ platforms. You can download any of the Coherence releases you are interested in while you are there, but for the remainder of this article you will only need the first one. The latter two (.NET and C++) are client libraries that allow .NET and C++ applications to access the Coherence data grid.

Coherence ships as a single ZIP archive. Once you unpack it, you should see the README.txt file containing the full product name and version number, and a single directory named coherence. Copy the contents of the coherence directory to a location of your choice on your hard drive. The common location on Windows is c:\coherence and on Unix/Linux /opt/coherence, but you are free to put it wherever you want. The last thing you need to do is configure the COHERENCE_HOME environment variable to point to the top-level Coherence directory created in the previous step, and you are done.

Coherence is a Java application, so you also need to ensure that you have Java SDK 1.4.2 or later installed and that the JAVA_HOME environment variable is properly set to point to the Java SDK installation directory. If you are using a JVM other than Sun's, you might need to edit the scripts used in the following section. For example, not all JVMs support the -server option that is used while starting the Coherence nodes, so you might need to remove it.

What's in the box?

The first thing you should do after installing Coherence is become familiar with the structure of the Coherence installation directory.
There are four subdirectories within the Coherence home directory:

- bin: Contains a number of useful batch files for Windows and shell scripts for Unix/Linux that can be used to start Coherence nodes or to perform various network tests
- doc: Contains the Coherence API documentation, as well as links to online copies of the Release Notes, User Guide, and Frequently Asked Questions documents
- examples: Contains several basic examples of Coherence functionality
- lib: Contains the JAR files that implement Coherence functionality

Shell scripts on Unix: If you are on a Unix-based system, you will need to add execute permission to the shell scripts in the bin directory by executing the following command:

```
$ chmod u+x *.sh
```

Starting up the Coherence cluster

In order to get the Coherence cluster up and running, you need to start one or more Coherence nodes. The Coherence nodes can run on a single physical machine, or on many physical machines that are on the same network. The latter will definitely be the case for a production deployment, but for development purposes you will likely want to limit the cluster to a single desktop or laptop.

The easiest way to start a Coherence node is to run the cache-server.cmd batch file on Windows or the cache-server.sh shell script on Unix. The end result in either case should be similar to the following screenshot:

There is quite a bit of information on this screen, and over time you will become familiar with each section. For now, notice two things:

- At the very top of the screen, you can see the information about the Coherence version that you are using, as well as the specific edition and the mode that the node is running in. Notice that by default you are using the most powerful Grid Edition, in development mode.
- The MasterMemberSet section towards the bottom lists all members of the cluster and provides some useful information about the current and the oldest member of the cluster.

Now that we have a single Coherence node running, let's start another one by running the cache-server script in a different terminal window. For the most part, the output should be very similar to the previous screen, but if everything has gone according to plan, the MasterMemberSet section should reflect the fact that the second node has joined the cluster:

```
MasterMemberSet
  (
  ThisMember=Member(Id=2, ...)
  OldestMember=Member(Id=1, ...)
  ActualMemberSet=MemberSet(Size=2, BitSetCount=2
    Member(Id=1, ...)
    Member(Id=2, ...)
    )
  RecycleMillis=120000
  RecycleSet=MemberSet(Size=0, BitSetCount=0)
  )
```

You should also see several log messages on the first node's console, letting you know that another node has joined the cluster and that some of the distributed cache partitions were transferred to it. If you can see these log messages on the first node, as well as two members within the ActualMemberSet on the second node, congratulations—you have a working Coherence cluster.

Troubleshooting cluster start-up

In some cases, a Coherence node will not be able to start or to join the cluster. In general, the reason for this could be all kinds of networking-related issues, but in practice a few issues are responsible for the vast majority of problems.

Multicast issues

By far the most common issue is that multicast is disabled on the machine. By default, Coherence uses multicast for its cluster join protocol, and it will not be able to form the cluster unless multicast is enabled. You can easily check if multicast is enabled and working properly by running the multicast-test shell script within the bin directory.
If you are unable to start the cluster on a single machine, you can execute the following command from your Coherence home directory:

```
$ . bin/multicast-test.sh -ttl 0
```

This will limit the time-to-live of multicast packets to the local machine and allow you to test multicast in isolation. If everything is working properly, you should see a result similar to the following:

```
Starting test on ip=Aleks-Mac-Pro.home/192.168.1.7, group=/237.0.0.1:9000, ttl=0
Configuring multicast socket...
Starting listener...
Fri Aug 07 13:44:44 EDT 2009: Sent packet 1.
Fri Aug 07 13:44:44 EDT 2009: Received test packet 1 from self
Fri Aug 07 13:44:46 EDT 2009: Sent packet 2.
Fri Aug 07 13:44:46 EDT 2009: Received test packet 2 from self
Fri Aug 07 13:44:48 EDT 2009: Sent packet 3.
Fri Aug 07 13:44:48 EDT 2009: Received test packet 3 from self
```

If the output is different from the above, it is likely that multicast is not working properly or is disabled on your machine. This is frequently the result of firewall or VPN software running, so the first troubleshooting step would be to disable such software and retry.

If you determine that was indeed the cause of the problem, you have two options. The first, and obvious, one is to turn the offending software off while using Coherence. However, for various reasons that might not be an acceptable solution, in which case you will need to change the default Coherence behavior and tell it to use the Well-Known Addresses (WKA) feature instead of multicast for the cluster join protocol. Doing so on a development machine is very simple—all you need to do is add the following argument to the JAVA_OPTS variable within the cache-server shell script:

```
-Dtangosol.coherence.wka=localhost
```

With that in place, you should be able to start Coherence nodes even if multicast is disabled.

Localhost and loopback address: On some systems, localhost maps to a loopback address, 127.0.0.1. If that's the case, you will have to specify the actual IP address or host name for the tangosol.coherence.wka configuration parameter. The host name should be preferred, as the IP address can change as you move from network to network, or if your machine leases an IP address from a DHCP server.

As a side note, you can tell whether WKA or multicast is being used for the cluster join protocol by looking at the section above the MasterMemberSet section when the Coherence node starts. If multicast is used, you will see something similar to the following:

```
Group{Address=224.3.5.1, Port=35461, TTL=4}
```

The actual multicast group address and port depend on the Coherence version being used. As a matter of fact, you can even tell the exact version and build number from the preceding information: in this particular case, I am using the Coherence 3.5.1 release, build 461. This is done in order to prevent accidental joins of cluster members into an existing cluster. For example, you wouldn't want a node in the development environment using a newer version of Coherence that you are evaluating to join the existing production cluster, which could easily happen if the multicast group address remained the same.
On the other hand, if you are using WKA, you should see output similar to the following instead:

```
WellKnownAddressList(Size=1,
  WKA{Address=192.168.1.7, Port=8088}
  )
```

Using the WKA feature completely disables multicast in a Coherence cluster and is recommended for most production deployments, primarily due to the fact that many production environments prohibit multicast traffic altogether and that some network switches do not route multicast traffic properly. That said, configuring WKA for production clusters is outside the scope of this article, and you should refer to the Coherence product manuals for details.

Binding issues

Another issue that sometimes comes up is that one of the ports that Coherence attempts to bind to is already in use, and you see a bind exception when attempting to start the node. By default, Coherence starts the first node on port 8088 and increments the port number by one for each subsequent node on the same machine. If for some reason that doesn't work for you, you need to identify a range of available ports for as many nodes as you are planning to start (both UDP and TCP ports with the same numbers must be available), and tell Coherence which port to use for the first node by specifying the tangosol.coherence.localport system property. For example, if you want Coherence to use port 9100 for the first node, you will need to add the following argument to the JAVA_OPTS variable in the cache-server shell script:

```
-Dtangosol.coherence.localport=9100
```
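Putting the two overrides together, the relevant line of the cache-server script might end up looking something like the following. This is an illustrative sketch only; the exact contents of the JAVA_OPTS line vary between Coherence releases:

```sh
# Illustrative sketch: force WKA-based clustering and a custom first port
# by appending both system properties to JAVA_OPTS in bin/cache-server.sh.
JAVA_OPTS="$JAVA_OPTS -Dtangosol.coherence.wka=localhost -Dtangosol.coherence.localport=9100"
```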


Slowly Changing Dimension (SCD) Type 6

Packt
30 Mar 2010
6 min read
The Example

We will apply SCDs to maintain the history of the Product dimension, specifically the history of changes to a product's Product Group. The PRODUCT_SK column is the surrogate key of the Product dimension table.

| PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME |
|---|---|---|---|---|
| 1 | 11 | PENCIL | 1 | WRITING SUPPLY |
| 2 | 22 | PEN | 1 | WRITING SUPPLY |
| 3 | 33 | TONER | 2 | PRINTING SUPPLY |
| 4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY |

SCD Type 1

We will apply SCD Type 1 to the PENCIL product in the Product dimension table. Let's say PENCIL changes its product group to 4. Effecting this change by applying SCD Type 1 just updates the existing PENCIL row with its new product group. We keep no record of its previous product group; in other words, we do not maintain its product group history. The PENCIL row now shows the updated product group:

| PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME |
|---|---|---|---|---|
| 1 | 11 | PENCIL | 4 | NON ELECTRONIC SUPPLY |
| 2 | 22 | PEN | 1 | WRITING SUPPLY |
| 3 | 33 | TONER | 2 | PRINTING SUPPLY |
| 4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY |

SCD Type 2

SCD Type 2 is essentially the opposite of Type 1: when we apply SCD Type 2, we never update or delete any existing product group. To apply SCD Type 2 we need an effective date and an expiry date. An expiry date of 31-Dec-99 means the row is not expired; it is the most current version of the product.

| PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME | EFFECTIVE_DATE | EXPIRY_DATE |
|---|---|---|---|---|---|---|
| 1 | 11 | PENCIL | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99 |
| 2 | 22 | PEN | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99 |
| 3 | 33 | TONER | 2 | PRINTING SUPPLY | 1-Jan-09 | 31-Dec-99 |
| 4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY | 1-Jan-09 | 31-Dec-99 |

Assuming the product group change of PENCIL is effective 1 April 2010, we update the expiry date of its existing row to 31 March 2010, one day before the effective date of the change, and insert a new row that represents its new, current version.

| PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME | EFFECTIVE_DATE | EXPIRY_DATE |
|---|---|---|---|---|---|---|
| 1 | 11 | PENCIL | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Mar-10 |
| 2 | 22 | PEN | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99 |
| 3 | 33 | TONER | 2 | PRINTING SUPPLY | 1-Jan-09 | 31-Dec-99 |
| 4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY | 1-Jan-09 | 31-Dec-99 |
| 5 | 11 | PENCIL | 4 | NON ELECTRONIC SUPPLY | 1-Apr-10 | 31-Dec-99 |

SCD Type 3

With SCD Type 3 we maintain history, but in one record only: we have one column for each version of the product group, so you need as many columns as the number of versions you want to keep. One of the most common SCD Type 3 applications is to maintain two versions of the product group, the original version and the current version. When there has been no product group change yet, the current product group is the same as the original product group.

| PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME | EFFECTIVE_DATE | EXPIRY_DATE | CURRENT_PRODUCT_GROUP_CODE | CURRENT_PRODUCT_GROUP_NAME |
|---|---|---|---|---|---|---|---|---|
| 1 | 11 | PENCIL | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99 | 1 | WRITING SUPPLY |
| 2 | 22 | PEN | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99 | 1 | WRITING SUPPLY |
| 3 | 33 | TONER | 2 | PRINTING SUPPLY | 1-Jan-09 | 31-Dec-99 | 2 | PRINTING SUPPLY |
| 4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY | 1-Jan-09 | 31-Dec-99 | 4 | NON ELECTRONIC SUPPLY |

When the pencil's product group changes, let's say on 1 April 2010, we expire its original product group by changing the expiry date to a day earlier (31 March 2010), and replace its current product group with the new product group.

| PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME | EFFECTIVE_DATE | EXPIRY_DATE | CURRENT_PRODUCT_GROUP_CODE | CURRENT_PRODUCT_GROUP_NAME |
|---|---|---|---|---|---|---|---|---|
| 1 | 11 | PENCIL | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Mar-10 | 4 | NON ELECTRONIC SUPPLY |
| 2 | 22 | PEN | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99 | 1 | WRITING SUPPLY |
| 3 | 33 | TONER | 2 | PRINTING SUPPLY | 1-Jan-09 | 31-Dec-99 | 2 | PRINTING SUPPLY |
| 4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY | 1-Jan-09 | 31-Dec-99 | 4 | NON ELECTRONIC SUPPLY |

When its product group changes again in the future, we will replace just the current product group with the new product group. The expiry date does not change; it is updated only once, the first time the product group changes.

SCD Type 6

SCD Type 6 combines the three basic types. Type 6 is particularly applicable if you want to maintain complete history and would also like an easy way to work with the current version. Let's apply Type 6 instead of Type 3 only. We have applied Type 3 by having two versions of the product group. When the pencil's product group changes, we update its existing current product group (that is the Type 1 update). We also apply Type 2 by adding a new row.

| PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME | EFFECTIVE_DATE | EXPIRY_DATE | CURRENT_PRODUCT_GROUP_CODE | CURRENT_PRODUCT_GROUP_NAME |
|---|---|---|---|---|---|---|---|---|
| 1 | 11 | PENCIL | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Mar-10 | 4 | NON ELECTRONIC SUPPLY |
| 2 | 22 | PEN | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99 | 1 | WRITING SUPPLY |
| 3 | 33 | TONER | 2 | PRINTING SUPPLY | 1-Jan-09 | 31-Dec-99 | 2 | PRINTING SUPPLY |
| 4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY | 1-Jan-09 | 31-Dec-99 | 4 | NON ELECTRONIC SUPPLY |
| 5 | 11 | PENCIL | 4 | NON ELECTRONIC SUPPLY | 1-Apr-10 | 31-Dec-99 | 4 | NON ELECTRONIC SUPPLY |

On the pencil's next product group change (1 July 2010, to product group 5, LEGACY SUPPLY), we will again apply all three SCD types. (A sketch of the DML behind these steps appears at the end of this article.)

| PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME | EFFECTIVE_DATE | EXPIRY_DATE | CURRENT_PRODUCT_GROUP_CODE | CURRENT_PRODUCT_GROUP_NAME |
|---|---|---|---|---|---|---|---|---|
| 1 | 11 | PENCIL | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Mar-10 | 5 | LEGACY SUPPLY |
| 2 | 22 | PEN | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99 | 1 | WRITING SUPPLY |
| 3 | 33 | TONER | 2 | PRINTING SUPPLY | 1-Jan-09 | 31-Dec-99 | 2 | PRINTING SUPPLY |
| 4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY | 1-Jan-09 | 31-Dec-99 | 4 | NON ELECTRONIC SUPPLY |
| 5 | 11 | PENCIL | 4 | NON ELECTRONIC SUPPLY | 1-Apr-10 | 30-Jun-10 | 5 | LEGACY SUPPLY |
| 6 | 11 | PENCIL | 5 | LEGACY SUPPLY | 1-Jul-10 | 31-Dec-99 | 5 | LEGACY SUPPLY |

QUERY

Let's next see how our Type 6 Product dimension works against a sales fact. (In real sales fact data you will have some other dimensions, meaning the fact table will have more surrogate key columns than just the product surrogate key.) If our interest is in the current version, our SQL query uses the current product group column. An example SQL query looks like:

```sql
SELECT current_product_group_name, SUM(sales_amt)
FROM   sales_fact s, product_dim p
WHERE  s.product_sk = p.product_sk
AND    product_name = 'PENCIL'
GROUP BY current_product_group_name
```

This query reports all PENCIL sales under the single, current product group.

The reason for applying SCD Type 2 is to have a complete history that tracks changes. SQL queries that take dimension history into account use the historical product group column:

```sql
SELECT product_group_name, SUM(sales_amt)
FROM   sales_fact s, product_dim p
WHERE  s.product_sk = p.product_sk
AND    product_name = 'PENCIL'
GROUP BY product_group_name
```

This query breaks PENCIL sales down by the product group that was in effect when each sale occurred.

SUMMARY

This article discussed what SCD Type 6 is, when to apply it, and how it works. The name Type 6 comes from the 'sum' of the three basic SCD types (6 = 1 + 2 + 3).
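As referenced earlier, here is a sketch of the DML a Type 6 load step might execute for the 1 July 2010 change. The table and column names come from the example above; the hard-coded surrogate key and the use of 9999-12-31 as the "unexpired" marker (shown as 31-Dec-99 in the tables) are simplifying assumptions, since a real ETL job would generate keys from a sequence:

```sql
-- Type 1 part: overwrite the current-group columns on ALL of the product's rows
UPDATE product_dim
SET    current_product_group_code = 5,
       current_product_group_name = 'LEGACY SUPPLY'
WHERE  product_code = 11;

-- Type 2 part: expire the previously current row as of the day before the change...
UPDATE product_dim
SET    expiry_date = DATE '2010-06-30'
WHERE  product_code = 11
AND    expiry_date = DATE '9999-12-31';

-- ...and insert a new row for the new current version (surrogate key hard-coded here)
INSERT INTO product_dim
       (product_sk, product_code, product_name,
        product_group_code, product_group_name,
        effective_date, expiry_date,
        current_product_group_code, current_product_group_name)
VALUES (6, 11, 'PENCIL',
        5, 'LEGACY SUPPLY',
        DATE '2010-07-01', DATE '9999-12-31',
        5, 'LEGACY SUPPLY');
```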


Configuring and Formatting iReport Elements

Packt
29 Mar 2010
7 min read
A complete report is structured by composing a set of sections called bands. Each band has its own configurable height and a particular position in the structure, and is used for a particular purpose. The available bands are: Title, Page Header, Column Header, Detail 1, Column Footer, Page Footer, Last Page Footer, and Summary. A report structured with bands is shown in the following screenshot:

Besides the bands mentioned, there are two special bands: Background and No Data.

- Title: The first band of the report, printed only once. The Title can be shown on a new page; you can configure this from the report properties discussed in the previous section of this chapter. Just to review, go to report Properties | More... and check the Title on a new page checkbox.
- Page Header: Printed on each page of the report; used for setting up the page header.
- Column Header: Printed on each page, if there is a detail band on that page; used for the column headings.
- Detail: Repeatedly printed for each row in the data source. In the List of Products report, it is printed for each product record.
- Column Footer: Printed on each page, if there is a detail band on that page; used for the column footer. If Floating column footer is checked in the report Properties, the column footer is shown just below the last row of data in the column; otherwise it is shown at the bottom of the page (above the page footer).
- Page Footer: Printed on each page except the last page, if a Last Page Footer is set. If a Last Page Footer is not set, it is printed on the last page as well. This band is a good place to insert page numbers.
- Last Page Footer: Printed only on the last page, as its page footer.
- Summary: Printed only once, at the end of the report. It can be printed on a separate page if it is so configured in the report Properties. In the following chapters, we will produce some reports where you will learn about the suitability of this band.
- Background: Used for setting a page background; for example, we may want a watermark image for the report pages.
- No Data: Printed when no data is available for the report, if it is set as the When no data option in the report Properties.

Showing/hiding bands and inserting elements

Now, we are going to configure the report bands (setting height, visibility, and so on) and format the report elements:

1. Select Column Footer from the Report Inspector. You will see the Column Footer - Properties window on the right of the designer.
2. Type 25 in the Band height field and press Enter. Now you can see the Column Footer band in your report, which was invisible before you set the band height. A band becomes invisible in the report if its height is set to zero.

We have already learned how to change the height of a band. We can also make a band invisible using the Print When Expression option. If we write new Boolean(false) in a band's Print When Expression, the band becomes invisible even though its height is set to greater than zero. If we write new Boolean(true), the band is visible; this is the default.

3. Drag a Static Text element from the Palette window and drop it on the Column Footer band.
4. Double-click on the element and type End of Record, replacing the text Static Text.
5. Select the static text element (End of Record), go to Format | Position, and choose Center. The element is now positioned in the center of the Column Footer band.
6. In the same way, insert two Line elements.
7. Place one line at the left and the other at the right of the static text.
8. Select both lines, go to Format | Position, and choose Center Vertically. The lines are now positioned vertically in the center of the Column Footer.
9. Select both lines, go to Format | Size, and choose Same Width. Both lines are now equal in width.
10. Select the static text element (End of Record) and the left line, go to Format | Position, and choose Join Sides Right. This moves the line to the right so that it is connected to the static text element.
11. Repeat the previous step for the right line, finally choosing Join Sides Left. The line moves to the left and connects with the static text element.
12. In the same way, change the column headers as you want by double-clicking the labels on the Column Header band. The columns may now be Product Code, Name, and Description.

Now your report design should look like the following screenshot:

Preview the report, and you will see the lines and the static text (End of Record) at the bottom of the column. By default, the Column Footer is placed at the bottom of the page. To show the Column Footer just below the table of data, the Float column footer option must be enabled in the report Properties window.

Sizing elements

We can increase or decrease the size of an element by dragging the mouse accordingly. Sometimes, we need to set the size of an element automatically based on other elements' sizes. The various options for setting the automatic size of an element are available in the Format menu (Format | Size):

- Same Width: Makes the selected elements the same width. The width of the element that you selected first is used as the new width of the selected elements.
- Same Width (max): The width of the largest of the selected elements is set as the width of all the selected elements.
- Same Width (min): The width of the smallest of the selected elements is set as the width of all the selected elements.
- Same Height: Makes the selected elements the same height. The height of the element that you selected first is used as the new height of the selected elements.
- Same Height (max): The height of the largest of the selected elements is set as the height of all the selected elements.
- Same Height (min): The height of the smallest of the selected elements is set as the height of all the selected elements.
- Same Size: Both the width and the height of the selected elements become the same.

The positioning options (Format | Position) are:

- Center Horizontally (band/cell based): The selected element is placed in the center of the band horizontally.
- Center Vertically (band/cell based): The selected element is placed in the center of the band vertically.
- Center (in band/cell): The selected element is placed in the center of the band both horizontally and vertically.
- Center (in background): If the Background band is visible and the element is on the Background band, it will be placed in the center both horizontally and vertically.
- Join Left: Joins two elements; for joining, one element will be moved to the left.
- Join Right: Joins two elements; for joining, one element will be moved to the right.
- Align to Left Margin: The selected element will be joined with the left margin of the report.
- Align to Right Margin: The selected element will be joined with the right margin of the report.


Setting Up the iReport Pages

Packt
29 Mar 2010
2 min read
Configuring the page format

We can follow the listed steps for setting up report pages:

1. Open the report List of Products.
2. Go to menu Window | Report Inspector. The following window will appear on the left side of the report designer:
3. Select the report List of Products, right-click on it, and choose Page Format…. The Page format… dialog box will appear. Select A4 from the Format drop-down list, and select Portrait from the Page orientation section.
4. You can modify the page margins if you need to, or leave them as they are to keep the default margins. For our report, you need not change the margins.
5. Press OK.

Page size

You have seen that there are many preset sizes/formats for the report, such as Custom, Letter, Note, Legal, A0 to A10, B0 to B5, and so on. You will choose the appropriate one based on your requirements; we have chosen A4. If the number of columns is too high to fit in Portrait, then choose the Landscape orientation.

If you change the preset size, the report elements (title, column headings, fields, or other elements) will not be positioned automatically according to the new page size. You have to position each element manually, so be careful if you decide to change the page size.

Configuring properties

We can modify the default settings of report properties in the following way: right-click on List of Products and choose Properties. We can configure many important report properties from the Properties window. You can see that there are many options here; you can change the Report name, Page size, Margins, Columns, and more. We have already learnt about setting up pages, so now our concern is to learn about some of the other (More…) options.

iReport in NetBeans

Packt
19 Mar 2010
3 min read
Creating different types of reports inside the NetBeans IDE

The first step is to download the NetBeans IDE and the iReport plugin for it. The iReport plugin for NetBeans is available for free download at the following locations: https://sourceforge.net/projects/ireport/files or http://plugins.netbeans.org/PluginPortal/faces/PluginDetailPage.jsp?pluginid=4425

After downloading the plugin, follow the listed steps to install the plugin in NetBeans:

1. Start the NetBeans IDE.
2. Go to Tools | Plugins.
3. Select the Downloaded tab.
4. Press Add Plugins….
5. Select the plugin files. For iReport 3.7.0 the plugins are: iReport-3.7.0.nbm, jasperreports-components-plugin-3.7.0.nbm, jasperreports-extensions-plugin-3.7.0.nbm, and jasperserver-plugin-3.7.0.nbm. After opening the plugin files you will see the following screen:
6. Check the Install checkbox of ireport-designer, and press the Install button at the bottom of the window. The following screen will appear:
7. Press Next >, and accept the terms of the License Agreement.
8. If the Verify Certificate dialog box appears, click Continue.
9. Press Install, and wait for the installer to complete the installation.
10. After the installation is done, press Finish and close the Plugins dialog. If the IDE requests a restart, do it.

Now the IDE is ready for creating reports.

Creating reports

We have already learnt about creating various types of reports, such as reports without parameters, reports with parameters, reports with variables, subreports, crosstab reports, reports with charts and images, and so on. We have also attained the knowledge associated with these types of reports. Now, we will learn quickly how to create these reports using NetBeans with the help of the installed iReport plugins.

Creating a NetBeans database JDBC connection

The first step is to create a database connection, which will be used by the report data sources. Follow the listed steps:

1. Select the Services tab on the left side of the project window.
2. Select Databases.
3. Right-click on Databases, and press New Connection….
4. In the New Database Connection dialog, set the following under Basic setting, and check the Remember password checkbox:

| Option | Value |
|---|---|
| Driver Name | MySQL (Connector/J Driver) |
| Host | localhost |
| Port | 3306 |
| Database | inventory |
| User Name | root |
| Password | packt |

5. Press OK.

Now the connection is created, and you can see it under the Services | Databases section, as shown in the following screenshot:

Creating a report data source

The NetBeans database JDBC connection created previously will be used by a report data source, which in turn will be used by the report. Follow the listed steps to create the data source:

1. From the NetBeans toolbar, press the Report Datasources button. You will see the following dialog box:
2. Press New.
3. Select NetBeans Database JDBC connection, and press Next >.
4. Enter inventory in the Name field, and from the Connection drop-down list, select jdbc:mysql://localhost:3306/inventory [root on Default schema].
5. Press Test, and if the connection is successful, press Save and close the Connections / Datasources dialog box.


MS Access Queries with Oracle SQL Developer 1.2 Tool

Packt
17 Mar 2010
12 min read
In my previous article on Oracle SQL Developer 1.1, I discussed the installation and features of this stand-alone GUI product, which can be used to query several database products; connecting to an Oracle 10g XE database was also described. The many things you do in Oracle 10g XE can also be carried out with Oracle SQL Developer, and it is expected to enhance productivity in your Oracle applications. You can use Oracle SQL Developer to connect, run, and debug SQL, SQL*Plus, and PL/SQL. It can run on at least three different operating systems, and you can use it to connect to Oracle, SQL Server, MySQL, and MS Access databases. In this article, you will learn how to install Oracle SQL Developer 1.2 and connect to an MS Access database. The 1.2 version has several features that were not present in version 1.1, especially regarding migration from other products.

Downloading and installing the Oracle SQL Developer

Go to the Oracle site (you need to be registered to download) and, after accepting the license agreement, you will be able to download sqldeveloper-1.2.2998.zip, a 77 MB download if you do not have JDK 1.5 already installed. You may place this in any directory. From the unzipped contents, double-click on SQLDeveloper.exe.

The User Interface

On a Windows machine, you may get a security warning, which you may safely override by clicking Run. This opens up the splash window shown in the next picture, followed by the Oracle SQL Developer interface shown in the picture that follows.

Figure 1

The main window

The main window of this tool is shown in the next picture.

Figure 2

It has a main menu at the very top, where you can access the File, Edit, View, Navigate, Run, Debug, Source, Migration, Tools, and Help menus. The menu item Migration has been added in this new version. Immediately below the main menu on the left, you have a tabbed window with two tabs, Connections and Reports. This will be the item you have to contend with, since most things start only after establishing a connection. The connection brings with it the various related objects in the databases.

View Menu

The next picture shows the drop-down of the View main menu, where you can see other details such as links to the debugger, reports, connections, and snippets. In this new version, many more items have been added, such as Captured Objects, Converted Objects, and Find DB Object.

Figure 3

Snippets are often-used SQL statements or clauses that you may want to insert. You may also save your own snippets by clicking on the bright green plus sign in the window shown, which opens up the superposed Save Snippet window.

Figure 4

In the Run menu item, you can run files as well as look at the Execution Profile.

Debug Menu

The debug menu item has all the necessary hooks to toggle breakpoints: step into, step over, step out, step to End of Method, and so on, including garbage collection and clean up, as shown in the next picture.

Figure 5

Tools Menu

Tools gives access to External Tools that can be launched, exports of both DDL and data, schema diff, and so on, as shown in the next picture.

Figure 6

Help gives you both full-text search and indexed search. This is an important area which you must visit; you can also update the help.

Figure 7

About Menu

The About drop-down menu item in the above figure opens up the following window, where you have complete information about this product, including version information, properties, and extensions.
Figure 8

Migration Menu

As mentioned earlier, Migration is a new item in the main menu, and its drop-down menu elements are shown in the next picture. It even has a menu item to make a quick migration of all recent versions of MS Access (97, 2000, 2002, and 2003). The Repository Management item is another very useful feature. The MySQL and SQL Server Offline Capture menu item can capture database create scripts from several versions of MySQL and MS SQL Server by locating them on the machine.

Figure 9

Connecting to a Microsoft Access Database

If you are interested in Oracle 10g XE, it will be helpful to refresh your Oracle 10g XE knowledge or read the several Oracle 10g XE articles whose links are shown on the author's blog. This is a good place for you to look at new developments, scripts, UI descriptions, and so on. This section, however, deals with connecting to an MS Access database. Click on the "Connections" icon with the bright green plus sign, as shown in the next figure.

Figure 10

This opens up the New/Select Database Connection window. This is where you must supply all the information. As you can see, it has identified a resident Oracle 10g XE server (that is, a local one) on the machine. Of course, you need to configure it further. In addition to Oracle, it can connect to MySQL, MS Access, and SQL Server as well. This interface has not changed much from version 1.1; you have the same control elements.

Figure 11

On the left-hand side of this window, you will generate the Connection Name and Connection Details once you fill in the appropriate information on the right. The connection name is what you supply; to get connected you need to have a username and password as well. If you want, you can save the password to avoid providing it again and again. At the bottom of the screen, you can save the connection, test it, and connect to it. There is also access to online help.

In the above window, click on the Access tab in the middle of the page. The following window opens, in which all you need to do is use the Browse button to locate the Microsoft Access database on your machine (the Windows default location for MDB files is My Documents).

Figure 12

Hitting the Browse button opens the Open window with the default location, My Documents—the default directory for MDB files.

Figure 13

Choosing the database Charts.mdb and clicking the Open button brings the file path into the box to the left of the Browse button in the New/Select Database Connection window. When you click on the Test button, if the connection is OK you should get an appropriate message. However, for the Charts.mdb file you get the following error.

Figure 14

The software is complaining about the lack of read access to the system tables.

Providing read access to System tables

There are a couple of System tables in MS Access which are usually hidden, but they can be displayed using the Tools options in MS Access.

Figure 15

In the View tab, if you place a check mark against System objects, you will see the following tables. The System tables are as shown in the red rectangle.

Figure 16

If you need to modify the security settings for these tables, you can do so as shown in the next figure by following the trail Tools | Security | User and Group Permissions.

Figure 17

Click on the User and Group Permissions menu item, which opens the Users and Group Permissions window shown here.

Figure 18

For the user Admin, scroll through each of the system tables and place a check mark in the Read Design and Read Data check boxes.
Click on the OK button and close the application. Now, again use the Browse button to locate the Charts.mdb file, after providing a name for the connection at the top of the New/Select Database Connection page. For this tutorial, MyCharts was chosen as the name of the connection. Once the file name appears in the box to the left of the Browse button, click on the Test button. The success message screen is very fast (it appears and disappears); if there is a problem, it will bring up an error message as before. Now click on the Connect button at the bottom of the New/Select Database Connection page. This immediately adds the connection MyCharts to the Connections folder shown on the left. The + sign can be clicked to expand all the objects in the Charts.mdb database, as shown in the next figure.

Figure 19

You can further expand the table nodes to show the data that a table contains, as shown in the next figure for the Portfolio table.

Figure 20

The Relationships tab in the above figure shows related and referenced objects. This is just a lone table with no relationships established, and therefore none are showing.

Figure 21

It may be noted that Oracle SQL Developer can only connect to MDB files. It cannot connect to Microsoft Access projects (ADP files) or the new MS Access 2007 file types.

Using the SQL Interface

SQL statements are run from the SQL Worksheet, which can be displayed by right-clicking the connection and choosing the Open SQL Worksheet item from the drop-down list as shown.

Figure 22

You will type the SQL queries in the area below the Enter SQL Statement label in the above figure (now hidden behind the drop-down menu).

Making a new connection with more tables and a relationship

In order to run a few simple queries on the connected database, three more tables were imported into Charts.mdb, after establishing a relationship between the new tables in the Access database as shown in the following figure.

Figure 23

Another connection, named NewCharts, was created in Oracle SQL Developer. The connection string that SQL Developer uses for NewCharts is of the following format:

```
jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};DBQ=C:\Documents and Settings\Jay\My Documents\Charts.mdb;DriverID=22;READONLY=false
```

This string can be reviewed after a connection is established in the New/Select Database Connection window, as shown in the next figure.

Figure 24

A simple query

Let us look at a very simple query using the PrincetonTemp table. After entering the query, you can execute the statement by clicking on the right-pointing green arrowhead as shown. The result of running this query will appear directly below the Enter SQL Statement window, as shown.

Figure 25

Just above the Enter SQL Statement label is the SQL toolbar, displaying several icons (left to right), which are as follows, with their associated keyboard shortcuts:

- Execute SQL Statement (F9) - the green arrowhead
- Run Script (F5)
- Commit (F11)
- Rollback (F12)
- Cancel (Ctrl+Q)
- SQL History (F8)
- Execute Explain Plan (F6)
- Autotrace (F10)
- Clear (Ctrl+D)

It also displays the time taken to execute the query: 0.04255061 seconds in this case. The bottom pane shows the result of the query in a table format. If you were to run the same query with the Run Script (F5) button, you would see the result in the Script Output tab of the bottom pane.
In addition to SQL, you can also use the SQL Worksheet to run SQL*Plus and PL/SQL statements, with some exceptions, as long as they are supported by the provider used in making the connection.

Viewing relationships between tables in SQL Developer

Three tables, Orders, Order Details, and Products, were imported into Charts.mdb after enforcing referential integrity relationships in the Access database, as seen earlier. Will the new connection NewCharts be able to see these relationships? This question is answered in the following figure.

Click on any one of these tables and you will see the Data as well as Relationships tabs in the bottom pane, as shown.

Figure 26

Now, if you click on the Relationships tab for Products, you will see the display just showing three empty columns, as seen in an earlier figure. However, if you click on Order Details, which links the three tables, you will see the following displayed.

Figure 27

Query joining three tables

The Orders, Order Details, and Products tables are related by referential integrity, as seen above. The following query, which chooses one or two columns from each table, can be run in a new SQL Worksheet. Note that the syntax must match what the ODBC provider requires: the date literal has to be #12-31-1997# rather than '12-31-1997'.

SELECT Products.ProductName, Orders.ShipName, Orders.OrderDate, [Order Details].Quantity
FROM Products, Orders, [Order Details]
WHERE Orders.OrderID = [Order Details].OrderID
AND Products.ProductID = [Order Details].ProductID
AND Orders.OrderDate > #12-31-1997#

The result of running this query (only four rows of data shown) can be seen in the next figure.

Figure 28

Summary

This article described the new version of Oracle's stand-alone GUI tool, SQL Developer. It can connect to several databases, such as MS Access, SQL Server, Oracle, and MySQL. Its utility could have been far greater had it provided generic connectivity through ODBC and OLE DB; I am disappointed that this version, like the last, does not. The connection to MS Access appears to bring in not only tables but the other objects except Data Access Pages, although the external applications you can use are limited to Word, Notepad, IE, and so on, but not a Report Viewer. These objects and their utility remain to be explored. Only a limited number of features were explored in this article, excluding new features like Migration and the Translation Scratch Editor, which translates MS Access, SQL Server, and MySQL syntaxes to PL/SQL. These will be considered in a future article.

Building Your First iReport

Packt
02 Mar 2010
4 min read
So let's get on with it!

Creating a connection/data source

Before creating the connection, a database should be set up. The SQL script for the database used for creating the reports can be downloaded from the Packt website. Now we are going to create a connection/data source in iReport and build our first report in some easy-to-follow steps. You need to create the connection/data source just once before developing the first report; this connection will be reused for the following reports.

1. Start iReport.
2. Press the Report Datasources button in the toolbar. You will see a dialog box similar to the following screenshot:
3. Press the New button. Another dialog box will appear for selecting the data source type. There are several types to choose from, according to your requirement. For now, choose Database JDBC connection, and press Next >.
4. Another dialog box will appear to set up the Database JDBC connection properties. Give a sensible name to the connection. In this case, it is inventory.
5. Choose the JDBC Driver from the list, according to your connection type and/or your database. In this case, it is MySQL (com.mysql.jdbc.Driver).
6. Write the JDBC URL, according to the driver you have chosen. For this tutorial, it is jdbc:mysql://localhost/inventory. In this URL (the same form you would use when connecting to a database from a Java program using JDBC), jdbc is the connection protocol, mysql is the subprotocol, localhost is the MySQL server if it runs on the same computer, and inventory is the database name.
7. Enter the Username and Password. Generally, for a MySQL server, the username is root, and you set a customized password during the installation of the MySQL server. The screenshot is as follows:
8. Press Test to confirm that you have set all the properties correctly. If all the settings are correct, you will see a message that says Connection test successful!. You can save the password by checking the Save Password checkbox, but be warned that iReport stores passwords in clear text. Storing passwords in clear text is a bad thing for us, isn't it? If you do not specify a password now, iReport will ask you for one only when required and will not save it.
9. Now save the connection. You will see that the newly created connection is listed in the Connections/Datasources window. If you have more than one connection, you can set one as the default. To do this, select the connection and press Set as Default.

When we execute the report with an active connection, the report is filled with data from the database or other data sources. We can also see the report output with an empty data source, which has, by default, a single record with all fields set to null. An empty data source is used to print a static report. However, in order to choose the tables and columns from a database automatically using the Report Wizard, we need to connect to a database/data source first, which is why we created the connection above.

Building your first report

Having set up a connection, we are ready to build our first report. We will keep it very simple, just to become familiar with the steps required for building a report. We will create a report that lists all the products; that is, we will show all the rows of the product table of our database.
Follow the steps listed to build your first report:

1. Go to the File menu and click New…. You will see a dialog box like the following screenshot:
2. From the list of Report templates, select Simple Blue and press Launch Report Wizard.
3. Enter the Report name as List of Products and press Next >.
4. Now you will specify the query that retrieves the report fields. Select your connection from the Connections / Data Sources drop-down list.
5. Write the SQL query for the report you want to develop. In our case, it is SELECT ProductCode, Name, Description FROM Product.
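The wizard accepts any valid SELECT statement for the connection, so you can refine the listing if you wish. For example, a sorted variant using only the columns already named above might look like this (a sketch; no other columns are assumed):

SELECT ProductCode, Name, Description
FROM Product
ORDER BY Name;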

Working With ASP.NET DataList Control

Packt
19 Feb 2010
8 min read
In this article by Joydip Kanjilal, we will discuss the ASP.NET DataList control, which can be used to display a list of repeated data items. We will learn about the following:

Using the DataList control
Binding images to a DataList control dynamically
Displaying data using the DataList control
Selecting, editing, and deleting data using this control
Handling the DataList control events

The ASP.NET DataList Control

The DataList control, like the Repeater control, is a template-driven, lightweight control that acts as a container of repeated data items. The templates in this control are used to define the data that it will contain. It is flexible in the sense that you can easily customize the display of one or more records that are displayed in the control. You have a property in the DataList control called RepeatDirection that can be used to customize the layout of the control. The RepeatDirection property can accept one of two values, Vertical or Horizontal; it is Vertical by default. However, if you change it to Horizontal, rather than displaying the data as rows and columns, the DataList control will display it as a list of records with the columns in the rendered data displayed as rows. This comes in handy, especially in situations where you have too many columns in your database table, or columns with larger widths of data. As an example, imagine what would happen if there were a field called Address in our Employee table holding data of large size and you were displaying the data using a Repeater, a DataGrid, or a GridView control. You would not be able to display columns of such large data sizes with any of these controls, as the display would look awkward. This is where the DataList control fits in. In a sense, you can think of the DataList control as a combination of the DataGrid and the Repeater controls. You can use templates with it much as you did with a Repeater control, and you can also edit the records displayed in the control, much like the DataGrid control of ASP.NET. The next section compares the features of the three controls that we have mentioned so far: the Repeater, the DataList, and the DataGrid.

When the web page is in execution, with the data bound to it in the Page_Load event, the data in the DataList control is rendered as DataListItem objects; that is, each item displayed is actually a DataListItem. Similar to the Repeater control, the DataList control does not have paging and sorting functionality built into it.

Using the DataList Control

To use this control, drag and drop it onto a web form from the toolbox in the design view. Refer to the following screenshot, which displays a DataList control on a web form:

The following list outlines the steps that you can follow to add a DataList control to a web page and make it work:

Drag and drop a DataList control onto the web form from the toolbox.
Set the DataSourceID property of the control to the data source that you will use to bind data to the control; for example, you can set this to an SQL Data Source control.
Open the .aspx file, declare the <ItemTemplate> element, and define the fields as per your requirements.
Use data binding syntax through the Eval() method to display data in these defined fields of the control.

You can bind data to the DataList control in two different ways, that is, using the DataSourceID and the DataSource properties.
You can use the inbuilt features, like selecting and updating data, when using the DataSourceID property. Note that you need to write custom code for selecting and updating data against any data source that implements the ICollection and IEnumerable interfaces. We will discuss more on this later. The next section discusses how you can handle the events of the DataList control.

Displaying Data

Similar to the Repeater control, the DataList control contains a template that is used to display the data items within the control. Since there are no data columns associated with this control, you use templates to display data. Every column in a DataList control is rendered as a <span> element. A DataList control is useless without templates. Let us now learn what templates are, the types of templates, and how to work with them. A template is a combination of HTML elements, controls, and embedded server controls, and can be used to customize and manipulate the layout of a control. A template comprises HTML tags and controls that can be used to customize the look and feel of controls like Repeater, DataGrid, or DataList. There are seven templates and seven styles in all. You can use templates for the DataList control in the same way you did when using the Repeater control. The templates in the DataList control are as follows:

ItemTemplate
AlternatingItemTemplate
EditItemTemplate
FooterTemplate
HeaderTemplate
SelectedItemTemplate
SeparatorTemplate

The following screenshot illustrates the different templates of this control. As you can see from this figure, the templates are grouped under three broad categories:

Item Templates
Header and Footer Templates
Separator Template

Note that of the templates given above, the ItemTemplate is the one and only mandatory template that you have to use when working with a DataList control. Here is a sample of how your DataList control's templates are arranged:

<asp:DataList id="dlEmployee" runat="server">
  <HeaderTemplate>...</HeaderTemplate>
  <ItemTemplate>...</ItemTemplate>
  <AlternatingItemTemplate>...</AlternatingItemTemplate>
  <FooterTemplate>...</FooterTemplate>
</asp:DataList>

The following screenshot displays a DataList control populated with data and with its templates indicated.

Customizing a DataList control at run time

You can customize the DataList control at run time using the ListItemType property in the ItemCreated event of this control, as follows:

private void DataList1_ItemCreated(object sender,
    System.Web.UI.WebControls.DataListItemEventArgs e)
{
    switch (e.Item.ItemType)
    {
        case System.Web.UI.WebControls.ListItemType.Item:
            e.Item.BackColor = Color.Red;
            break;
        case System.Web.UI.WebControls.ListItemType.AlternatingItem:
            e.Item.BackColor = Color.Blue;
            break;
        case System.Web.UI.WebControls.ListItemType.SelectedItem:
            e.Item.BackColor = Color.Green;
            break;
        default:
            break;
    }
}

The styles that you can use with the DataList control to customize its look and feel are:

AlternatingItemStyle
EditItemStyle
FooterStyle
HeaderStyle
ItemStyle
SelectedItemStyle
SeparatorStyle

You can use any of these styles to format the control, that is, format the HTML code that is rendered. You can also use the layouts of the DataList control for formatting, that is, further customization of your user interface.
The available layouts are as follows:

FlowLayout
TableLayout
VerticalLayout
HorizontalLayout

You can specify your desired flow or table format at design time by specifying the following in the .aspx file:

RepeatLayout = "Flow"

You can also do the same at run time by setting the RepeatLayout property of the DataList control, as shown in the following code snippet:

DataList1.RepeatLayout = RepeatLayout.Flow;

In the code snippet, it is assumed that the name of the DataList control is DataList1. Let us now understand how we can display data using the DataList control. For this, we first drag and drop a DataList control onto our web form and specify the templates for displaying data. The code in the .aspx file is as follows:

<asp:DataList ID="DataList1" runat="server">
  <HeaderTemplate>
    <table border="1">
      <tr>
        <th>Employee Code</th>
        <th>Employee Name</th>
        <th>Basic</th>
        <th>Dept Code</th>
      </tr>
  </HeaderTemplate>
  <ItemTemplate>
    <tr bgcolor="#0xbbbb">
      <td><%# DataBinder.Eval(Container.DataItem, "EmpCode")%></td>
      <td><%# DataBinder.Eval(Container.DataItem, "EmpName")%></td>
      <td><%# DataBinder.Eval(Container.DataItem, "Basic")%></td>
      <td><%# DataBinder.Eval(Container.DataItem, "DeptCode")%></td>
    </tr>
  </ItemTemplate>
  <FooterTemplate>
  </FooterTemplate>
</asp:DataList>

The DataList control is populated with data in the Page_Load event of the web form using the DataManager class, as usual:

protected void Page_Load(object sender, EventArgs e)
{
    DataManager dataManager = new DataManager();
    DataList1.DataSource = dataManager.GetEmployees();
    DataList1.DataBind();
}

Note that the DataBinder.Eval() method has been used, as usual, to display the values of the corresponding fields from the data container in the DataList control. The data container in our case is the DataSet instance that is returned by the GetEmployees() method of the DataManager class. When you execute the application, the output is as follows:
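The DataManager class itself is not shown in this excerpt; presumably its GetEmployees() method fills the DataSet with a query along the following lines. Treat the table name as an assumption; the column names are inferred from the template bindings above:

-- Hypothetical query behind DataManager.GetEmployees(); the Employee
-- table name is assumed, the columns come from the Eval() bindings.
SELECT EmpCode, EmpName, Basic, DeptCode
FROM Employee;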

Author Podcast - Aleksander Seovic Talks About Oracle Coherence 3.5

Packt
17 Feb 2010
1 min read
Aleksander Seovic is the author of Oracle Coherence 3.5, which will help you to design and build scalable, reliable, high-performance applications using software of the same name. The book is due out in March, but you can get a flavour of it in his interview with Cameron Purdy, below. For more information on Aleksander's book, visit: http://www.packtpub.com/oracle-coherence-3-5/book. Listen Here      

N-Way Replication in Oracle 11g Streams: Part 1

Packt
05 Feb 2010
4 min read
N-way replication refers to a Streams environment where there are multiple sources. In this article, we will still use the STRM1 and STRM2 databases, but with a little twist: making both databases the source. By making both STRM1 and STRM2 sources, we need to first consider a couple of unique situations and do a little more pre-planning, specifically for N-way replication. The concepts and techniques used to configure a 2-way replication can then be used to scale to N-way replication. We all need to crawl before we run; the better you crawl through (understand) this article, the easier it will be to scale up to N-way replication. Pay close attention and learn the technique so that you can implement it well.

We need to repeat this—Streams is not Failover. We need to repeat this—Streams is not Failover. No, that is not a typo. The authors are passionate about Streams and want to see you successfully implement it. To successfully implement Streams, you need to know not to step into the trap of using it for failover. Both authors have done some work where failover was the requirement. Streams is not a failover solution. Failover is handled by Oracle Data Guard, NOT Oracle Streams. Streams is about distributing data to multiple locations. On more than one occasion, Streams was used as a failover technology because it can distribute data to multiple locations. Do not fall into the trap of using the wrong tool for the wrong job. Streams distributes (replicates) data. As such, there will always be some difference between the databases in a Streams environment. All replication technology has this problem. The only time all of the databases are in sync is when there is no activity and all replication has been applied to all target locations. If you need failover, then use the proper tool. Oracle Data Guard is for failover; it has the necessary processes to guarantee a different level of failover from a primary site to a secondary site, whereas Streams is a replication tool that distributes data. Just remember the following when a discussion of replication and failover comes up:

Streams distributes data; it is built for replication
Data Guard is built for failover

Pre-planning for N-way replication

When we set up N-way replication, we must consider the possibility of a collision of data. Since we have multiple sources of data, it is possible for the exact same data to be inputted on any or all of the sources at the exact same time. When this happens, it is a conflict. That example is just one type of conflict that can happen in N-way replication environments. The types of conflict that can occur are as follows:

Update conflict: When transactions from different databases try to update the same row at nearly the same time.
Delete conflict: When one transaction deletes a row and the next transaction tries to update or delete the row. The transactions originate from different databases.
Unique conflict: When transactions from different databases violate a primary or unique constraint; the first transaction is accepted, and the second transaction obtains the conflict.
Foreign key conflict: This happens when a transaction from a source tries to insert a child record before the parent record exists.

The good news is that Oracle has provided built-in conflict resolution in Streams that solves the most common situations. The built-in handlers are as follows:

OVERWRITE
DISCARD
MAXIMUM
MINIMUM

We will provide an example of conflict resolution after we build our N-way replication.
In our case, we will use MAXIMUM. As part of the pre-planning for N-way replication, we highly suggest creating a simple table such as the Setup Table.

Avoiding conflict

As conflict requires additional pre-planning and configuration, one begins to wonder, "Are there techniques by which we can configure N-way replication without the possibility of conflict?" The simple answer to the question is "Yes". The not-so-simple answer is that there is some configuration magic that needs to be done, and the devil is in the details. Limiting who and what can be updated is one method of avoiding conflict. Think of it this way—there is no conflict if we agree on who and what can update the specific data. User 1 can only update his specific data and no one else can do that. Similarly, User 2 can only update his specific data. So, User 1 and User 2 can never cause a conflict. Now, this may be a little difficult depending on the application. This can be implemented with the use of offset sequences, as sketched below: one sequence produces only odd values, and another produces only even values. We could also use a combination of a sequence and some unique characteristic of the database.
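A minimal sketch of such offset sequences follows; the sequence name is hypothetical, while the LEARNING.EMPLOYEES table is the one used throughout these articles:

-- On STRM1: this site hands out odd key values.
CREATE SEQUENCE employees_seq START WITH 1 INCREMENT BY 2;

-- On STRM2: this site hands out even key values.
CREATE SEQUENCE employees_seq START WITH 2 INCREMENT BY 2;

-- Each site inserts using its local sequence, so the two sites can
-- never generate the same EMPLOYEE_ID and never collide on the key.
INSERT INTO LEARNING.EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, TIME)
VALUES (employees_seq.NEXTVAL, 'Sample', 'User', SYSDATE);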

Oracle 11g Streams: RULES (Part 2)

Packt
05 Feb 2010
10 min read
Rule based transformation—eat your heart out, Transformers! As with all good rules, some are made to be broken, or maybe changed. In some circumstances we need to have rules that govern change. In Advanced Replication, the number one rule is that a replicated table must have the same structure at all master sites: the column names and data types have to be identical or the "apply" of a deferred transaction will fail. With Streams, we can now break this rule by adding a new rule that allows the LCR to "morph" to a new structure. We call this ability Rule Based Transformation, and it is done via complex rules and action context. When you plan your Rule Based Transformation design, remember that Rule Based Transformations are only evaluated with positive Rule Sets. If the Rule Set is negative, the Rule Based Transformation is ignored.

Declarative versus User Created

In the real world, there are many ways to accomplish the same thing, just as there are many ways to model data. You may run into a situation where the table structure in one master database is different from the structure of the table in another master database, but data must be replicated between them. It could be that a table column at one master is a VARCHAR2, but is a DATE at another master site. Or perhaps the column does not exist at all. Rule Based Transformation provides the ability to capture the LCR and convert it to the structure needed to apply it at the destination site. This is not to be confused with transformations accomplished via the DBMS_TRANSFORMATION package. That is a different fish (and doesn't swim in this stream).

A special note concerning SUBSET Rules and transformations: a SUBSET Rule has an internal row_migration transformation assigned to it when it is created. This internal transformation will always be the first one executed, before any other transformations. Another thing to keep in mind is the amount of "transformation" that will be applied to the LCR. If extensive transformations need to be made to the LCR, you may wish to consider using a custom DML handler instead, to take advantage of the apply parallel capabilities.

The remainder of this section uses the premise that we have an LCR whose column name we need to change before we send it out from the source site. The LCR is generated on a table which has a different column name than the corresponding table at all the other sites. This being the case, we are going to create the transformation at the Capture process. There are two ways to accomplish this: either by using a declarative transformation or a user created transformation. We will review each, and then apply the method to our LCR that needs a column name change. Depending on the transformation type, you can use one of the following views to find information concerning the transformation:

Declarative: DBA_STREAMS_TRANSFORMATIONS
User Created: DBA_STREAMS_TRANSFORM_FUNCTION

Declarative Transformation

As of 10g, Oracle provides commonly used transformations in the DBMS_STREAMS_ADM package. These transformations are referred to as declarative transformations. Declarative transformations only work with row LCRs (a.k.a. DML LCRs). The row LCR can be a Streams captured LCR (basic or synchronous), or a user created message.
The procedures allow you to add transformation rules to do the following:

Add a column (DBMS_STREAMS_ADM.ADD_COLUMN)
Delete a column (DBMS_STREAMS_ADM.DELETE_COLUMN)
Rename a column (DBMS_STREAMS_ADM.RENAME_COLUMN)
Rename a table (DBMS_STREAMS_ADM.RENAME_TABLE)
Rename a schema (DBMS_STREAMS_ADM.RENAME_SCHEMA)

Special considerations when using DBMS_STREAMS_ADM.ADD_COLUMN

Be aware that the DBMS_STREAMS_ADM.ADD_COLUMN procedure does not support a number of data types. These include:

LOBs (BLOB, CLOB, NCLOB, BFILE, and so on)
LONG, LONG RAW, and so on
ROWID
User-defined types (including object types, REFs, varrays, nested tables, and so on)
Oracle-supplied types (including ANY types, XML types, spatial types, and media types)

For more information on the DBMS_STREAMS_ADM declarative transformation subprograms, please refer to the Oracle Database PL/SQL Packages and Types Reference.

For our purposes, we want to use DBMS_STREAMS_ADM.RENAME_COLUMN to create a declarative transformation. In our example, we will work with the JOB_HISTORY table from the Oracle example HR schema. We will assume that at our source database the HR.JOB_HISTORY table has a column named DEPARTMENT_ID, and at the destination database the corresponding column in HR.JOB_HISTORY is DEPT_ID.

Declarative transformations can only be added to an existing rule. If the rules specified do not exist, an error is raised. Also, the transformation will be owned by STRM_ADMIN, so make sure you have explicitly granted all privileges on HR.JOB_HISTORY to STRM_ADMIN.

First, we find the rule to which we wish to add the declarative transformation. Logged in as STRM_ADMIN, we can look at the USER_RULES view:

SQL> select * from user_rules;

RULE_NAME
------------------------------
RULE_CONDITION
-------------------------------------
RULE_EVALUATION_CONTEXT_OWNER  RULE_EVALUATION_CONTEXT_NAME
------------------------------ ------------------------------
RULE_ACTION_CONTEXT(ACTX_LIST(NVN_NAME, NVN_VALUE()))
--------------------------------------------------------------
RULE_COMMENT
--------------------------------------------------------------
HR1
((:dml.get_object_owner() = 'HR') and :dml.get_source_database_name() = 'STRM1')
SYS                            STREAMS$_EVALUATION_CONTEXT

HR2
((:ddl.get_object_owner() = 'HR' or :ddl.get_base_table_owner() = 'HR') and :ddl.get_source_database_name() = 'STRM1')
SYS                            STREAMS$_EVALUATION_CONTEXT

HR1 is our row LCR (:dml) rule, so that is the rule to which we will add the transformation. To create our declarative transformation rule, we issue the following command:

begin
  DBMS_STREAMS_ADM.RENAME_COLUMN(
    rule_name        => 'strm_admin.HR1',
    table_name       => 'HR.JOB_HISTORY',
    from_column_name => 'DEPARTMENT_ID',
    to_column_name   => 'DEPT_ID',
    value_type       => '*',   -- default
    step_number      => 0,     -- default
    operation        => 'ADD'  -- default
  );
end;
/
commit;

We can now check the rule in the USER_RULES view:

SQL> select * from user_rules where rule_name = 'HR1';

RULE_NAME
------------------------------
RULE_CONDITION
-------------------------------------------------------------
RULE_EVALUATION_CONTEXT_OWNER  RULE_EVALUATION_CONTEXT_NAME
------------------------------ ------------------------------
RULE_ACTION_CONTEXT(ACTX_LIST(NVN_NAME, NVN_VALUE()))
-------------------------------------------------------------
RULE_COMMENT
-------------------------------------------------------------
HR1
((:dml.get_object_owner() = 'HR') and :dml.get_source_database_name() = 'STRM1')
SYS                            STREAMS$_EVALUATION_CONTEXT
RE$NV_LIST(RE$NV_ARRAY(RE$NV_NODE('STREAMS$_INTERNAL_TRANS', ANYDATA())))

Notice that the RULE_COMMENT now has an entry indicating the inclusion of the transformation rule.
We can also look at the DBA_STREAMS_TRANSFORMATIONS view:

SQL> select rule_owner, rule_name, transform_type,
  2  from_column_name, to_column_name, value_type,
  3  declarative_type, precedence, step_number
  4  from dba_streams_transformations;

RULE_OWNER
------------------------------
RULE_NAME                      TRANSFORM_TYPE
------------------------------ --------------------------
FROM_COLUMN_NAME     TO_COLUMN_NAME       VAL
-------------------- -------------------- ---
DECLARATIVE_TYPE     PRECEDENCE STEP_NUMBER
-------------------- ---------- -----------
STRM_ADMIN
HR1                            DECLARATIVE TRANSFORMATION
DEPARTMENT_ID        DEPT_ID              *
RENAME COLUMN                 2           0

To remove the declarative transformation from the rule, we use the same procedure we used to create the transformation, but set the operation parameter to REMOVE:

begin
  DBMS_STREAMS_ADM.RENAME_COLUMN(
    rule_name        => 'strm_admin.HR1',
    table_name       => 'HR.JOB_HISTORY',
    from_column_name => 'DEPARTMENT_ID',
    to_column_name   => 'DEPT_ID',
    operation        => 'REMOVE'
  );
end;
/

Note: Removing the declarative transformation does not clear the RULE_COMMENT we see in the USER_RULES view. However, it does clear the entry from the DBA_STREAMS_TRANSFORMATIONS view.

For more detailed information on using DBMS_STREAMS_ADM.RENAME_COLUMN and the other declarative transformation procedures, please refer to the Oracle PL/SQL Packages and Types Reference and the Oracle Streams Concepts and Administration Guide.

User Created Rule Based Transformations (UCRBT)

You can also create your own rule based transformations. These transformations are referred to as user-created transformations (imagine that). The steps for creating a UCRBT are pretty basic:

Create the PL/SQL function that performs the transformation. The function should receive the LCR as a SYS.ANYDATA IN parameter, and should return either an LCR as SYS.ANYDATA or a STREAMS$_ANYDATA_ARRAY. If the function returns a STREAMS$_ANYDATA_ARRAY, it can only be associated with a capture rule.
Grant the EXECUTE privilege on the function to the appropriate user as necessary.
Create or locate the rules for which the transformation will be used.
Set the custom rule-based transformation for each rule by running the SET_RULE_TRANSFORM_FUNCTION procedure.

In this example, we will set up a UCRBT that makes the same transformation as the previous declarative transformation. The UCRBT is going to be owned by STRM_ADMIN, so make sure you have explicitly granted all privileges on HR.JOB_HISTORY to STRM_ADMIN. The code for this example can be found in the UCRBT.sql code file. First we create the PL/SQL function to accomplish the transformation; STRM_ADMIN will be the function owner, so make sure you are logged in as STRM_ADMIN in this example:

CREATE OR REPLACE FUNCTION DEPT_COLNAME_CHANGE (evt IN SYS.AnyData)
RETURN SYS.AnyData IS
  lcr      SYS.LCR$_ROW_RECORD;
  obj_name VARCHAR2(30);
  rc       NUMBER;
BEGIN
  IF evt.GetTypeName = 'SYS.LCR$_ROW_RECORD' THEN
    rc := evt.getObject(lcr);
    obj_name := lcr.GET_OBJECT_NAME();
    IF obj_name = 'JOB_HISTORY' THEN
      lcr.RENAME_COLUMN('DEPARTMENT_ID', 'DEPT_ID', '*');
      RETURN SYS.ANYDATA.ConvertObject(lcr);
    END IF;
  END IF;
  RETURN evt;
END;
/

Because STRM_ADMIN is the function owner, we do not need to grant EXECUTE on the function. If the function were created in a different schema, we would want to explicitly grant EXECUTE on the function to STRM_ADMIN. Next, we determine the rule to which to add the transformation function. You can either create a new rule at this point, or use an existing rule.
We will use our HR1 rule from above (we can do this because we removed the declarative RENAME_COLUMN transformation from the rule in the last step of the declarative transformation example):

select * from dba_rules;

Then, we use the DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION procedure to add the transformation function to the desired rule:

BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name          => 'HR1',
    transform_function => 'strm_admin.DEPT_COLNAME_CHANGE');
END;
/

We will now see the transformation in the DBA/ALL_STREAMS_TRANSFORM_FUNCTION view:

SQL> select * from all_streams_transform_function;

RULE_OWNER
------------------------------
RULE_NAME                      VALUE_TYPE
------------------------------ --------------------
TRANSFORM_FUNCTION_NAME             CUSTOM_TYPE
----------------------------------- -----------
STRM_ADMIN
HR1                            SYS.VARCHAR2
"STRM_ADMIN"."DEPT_COLNAME_CHANGE"  ONE TO ONE

For more detailed information on UCRBTs, please refer to the usage notes for the DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION procedure in the Oracle PL/SQL Packages and Types Reference, and the Oracle Streams Concepts and Administration Guide.

Transformation order of execution

It is possible to have a combination of declarative and user defined transformations assigned to a single rule. This being the case, how do you know which ones get executed when, especially if you have not assigned step numbers? There is a default order of execution for transformations that helps keep the rule from running amok:

If the rule is a Subset rule, then row migration is always executed first.
Next come declarative rule based transformations. These are further ordered by the step number specified for each transformation, if step numbers have been assigned. If the step numbers are not assigned, the transformations are executed in the following order: Delete_Column, Rename_Column, Add_Column, Rename_Table, Rename_Schema.
Last (but not least), the user created rule-based transformation is executed.
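If you later need to detach a user-created transformation from a rule, the package documentation indicates that calling the same procedure with a NULL function does so; verify the behavior against your version's reference before relying on it:

BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name          => 'HR1',
    transform_function => NULL); -- removes the custom transformation (verify in your version)
END;
/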

Oracle 11g Streams: RULES (Part 1)

Packt
05 Feb 2010
7 min read
Streams is all about the rules; literally. The action that a Streams process takes is governed by the rule conditions. When you create a rule, Oracle generates system conditions and evaluation contexts that are used to evaluate each LCR to determine if the action context for the process should be accomplished. We have already addressed a number of these system conditions during our TAG discussion; for instance, INCLUDE_TAGGED_LCR=FALSE generates a system evaluation for the LCR$_ROW_RECORD_TYPE :dml.is_null_tag='Y' subprogram. For more information on LCR types, reference the Oracle Database PL/SQL Packages and Types Reference manual. You can control which system evaluations are included in the rule by the parameter values you specify, as well as add user-defined evaluations with the AND_CONDITION parameter. There is a lot going on under the calm surface water of rules. Understanding how this activity flows together will help you become more advanced in creating rules to manipulate your Streams throughout your environment. So, let's grab our snorkels and masks, stick our heads under the surface, and take a look.

Rule components

Rules have three components: conditions, evaluation context, and action context. These components coordinate the "when", "what", and "how" of the LCR being processed. The conditions tell the Streams process "when" the LCR should be processed, the evaluation context defines "what" data/information the Streams process uses to process the LCR, and the action context tells the Streams process "how" to handle the LCR.

Rule conditions

The rule condition is essentially the "where clause". The conditions are evaluated against the properties of the LCR and return either TRUE or FALSE. The conditions can contain compound expressions and operators (AND, OR, NOT, and so on). The final evaluation returned from the condition (TRUE or FALSE) is the net result of all the compound expressions. An example of a system-generated condition would be that of our good friend :dml.is_null_tag = 'Y' (generated by the INCLUDE_TAGGED_LCR=FALSE parameter of the DBMS_STREAMS_ADM.ADD_*_RULE procedures). On rule creation, the condition is passed in as a string (so make sure to escape any single quotes within the string):

':dml.get_object_owner() = ''OE'' and :dml.get_tag() = HEXTORAW(''22'')'

It is important to remember that you want to keep your rule conditions as simple as possible. Complex rule conditions can have a significant impact on performance. The rule condition created by our sub-setting example is an example of a complex rule, as it includes a PL/SQL call to a function. Also, rule conditions that contain NOT or != can impact performance.

Rule Evaluation Context

The rule evaluation context defines data external to the LCR properties that can be referenced in the rule conditions. This is comparable to the SQL statement FROM clause. This reference is a database object that contains the external data. The evaluation context provides the rule conditions with the necessary information for interpreting and evaluating the conditions that reference external data. If the evaluation context references objects, the rule owner must have the proper privileges to reference the object (SELECT and EXECUTE), as the rule condition is evaluated in the schema of the evaluation context owner.
Information contained in an evaluation context might include table aliases used in the condition, variable names and types, and/or a function to use to evaluate the rules to which the evaluation context is assigned. The evaluation context structure can get a bit confusing. To get a better feel for it, you may want to start by looking at the following database views:

DBA/ALL/USER_EVALUATION_CONTEXT_TABLES: table aliases used
DBA/ALL/USER_EVALUATION_CONTEXT_VARS: variable types used
DBA/ALL/USER_EVALUATION_CONTEXTS: functions used

Streams system-created rules (created using DBMS_STREAMS_ADM) will create rules using the standard Oracle-supplied SYS.STREAMS$_EVALUATION_CONTEXT rule evaluation context. This evaluation context is composed of a variable_types list for the :dml and :ddl variables, and the evaluation function SYS.DBMS_STREAMS_INTERNAL.EVALUATION_CONTEXT_FUNCTION, as seen in the previous DBA views. You can create your own evaluation context using the DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT procedure:

DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT(
  evaluation_context_name    IN VARCHAR2,
  table_aliases              IN SYS.RE$TABLE_ALIAS_LIST DEFAULT NULL,
  variable_types             IN SYS.RE$VARIABLE_TYPE_LIST DEFAULT NULL,
  evaluation_function        IN VARCHAR2 DEFAULT NULL,
  evaluation_context_comment IN VARCHAR2 DEFAULT NULL);

If you create a custom evaluation context that uses SYS.DBMS_STREAMS_INTERNAL.EVALUATION_CONTEXT_FUNCTION, it must include the same variables and types as SYS.STREAMS$_EVALUATION_CONTEXT (a.k.a. :dml and :ddl). The variable_types list can be defined using SYS.RE$VARIABLE_TYPE_LIST, which in turn accepts individual variable types defined using SYS.RE$VARIABLE_TYPE. Similarly, if you create a custom function to use as the evaluation function, it must have the following signature:

FUNCTION evaluation_function_name(
  rule_set_name      IN VARCHAR2,
  evaluation_context IN VARCHAR2,
  event_context      IN SYS.RE$NV_LIST DEFAULT NULL,
  table_values       IN SYS.RE$TABLE_VALUE_LIST DEFAULT NULL,
  column_values      IN SYS.RE$COLUMN_VALUE_LIST DEFAULT NULL,
  variable_values    IN SYS.RE$VARIABLE_VALUE_LIST DEFAULT NULL,
  attribute_values   IN SYS.RE$ATTRIBUTE_VALUE_LIST DEFAULT NULL,
  stop_on_first_hit  IN BOOLEAN DEFAULT FALSE,
  simple_rules_only  IN BOOLEAN DEFAULT FALSE,
  true_rules         OUT SYS.RE$RULE_HIT_LIST,
  maybe_rules        OUT SYS.RE$RULE_HIT_LIST)
RETURN BINARY_INTEGER;

The returned BINARY_INTEGER value must be one of the following:

DBMS_RULE_ADM.EVALUATION_SUCCESS
DBMS_RULE_ADM.EVALUATION_CONTINUE
DBMS_RULE_ADM.EVALUATION_FAILURE

For more information on creating custom evaluation contexts, evaluation functions, and rule types, refer to the Oracle Database PL/SQL Packages and Types Reference manual and the Oracle Streams Extended Examples manual. Once an evaluation context is created, it can be assigned to a rule or a rule set using the evaluation_context parameter of the appropriate DBMS_RULE_ADM procedure. The evaluation context for a rule can be different from the evaluation context for a rule set to which the rule might be assigned. The bottom line is that a rule must be able to associate itself with an evaluation context at some level. We will revisit this concept when we discuss rule creation a little later in this section.
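To make this concrete, here is a minimal sketch of creating a custom evaluation context with a single variable. The context name, variable name, and comment are hypothetical, and the RE$VARIABLE_TYPE constructor arguments should be checked against your version's type reference:

DECLARE
  vt SYS.RE$VARIABLE_TYPE_LIST;
BEGIN
  -- One variable, 'current_user', with no value/method evaluation functions.
  vt := SYS.RE$VARIABLE_TYPE_LIST(
          SYS.RE$VARIABLE_TYPE('current_user', 'VARCHAR2(30)', NULL, NULL));
  DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT(
    evaluation_context_name    => 'strm_admin.my_eval_ctx',
    variable_types             => vt,
    evaluation_context_comment => 'Hypothetical custom context');
END;
/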
Action Context

The rule action context is just that: the action information that the rule evaluation engine returns to the client application, to be acted upon by the client application when the rule evaluates to TRUE. This is not the action itself, but values to be used by the action code that are specific to the rule.

The action context is of the SYS.RE$NV_LIST type, which contains an array of name-value pairs, and is associated with a rule condition. A rule condition can have only one action context. The action context itself is optional, and can contain zero to many name-value pairs. The SYS.RE$NV_LIST type has the following construct:

TYPE SYS.RE$NV_LIST AS OBJECT(
  actx_list SYS.RE$NV_ARRAY);

Its subprograms are:

ADD_PAIR(name IN VARCHAR2, value IN ANYDATA);
GET_ALL_NAMES() RETURN SYS.RE$NAME_ARRAY;
GET_VALUE(name IN VARCHAR2) RETURN ANYDATA;
REMOVE_PAIR(name IN VARCHAR2);

For more information on creating and populating action context types, refer to the Oracle Database PL/SQL Packages and Types Reference manual. For more information on rule components, refer to the Oracle Streams Concepts and Administration manual.
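To make the name-value mechanics concrete, here is a minimal sketch of building an action context in PL/SQL. The pair name and value are hypothetical, and the empty-array constructor style should be checked against your version's type reference:

DECLARE
  actx SYS.RE$NV_LIST;
BEGIN
  -- Start from an empty name-value array, then add one pair.
  actx := SYS.RE$NV_LIST(SYS.RE$NV_ARRAY());
  actx.ADD_PAIR('DEPT_PREFIX', ANYDATA.ConvertVarchar2('DEPT_'));
  -- GET_VALUE('DEPT_PREFIX') would now return the ANYDATA just stored.
END;
/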

N-Way Replication in Oracle 11g Streams: Part 2

Packt
05 Feb 2010
7 min read
Streaming STRM2 to STRM1

Now, the plan for setting up Streams for STRM2: it is the mirror image of what we have done above, except for the test part.

On STRM2, log in as STRM_ADMIN:

-- ADD THE QUEUE; a good queue name is STREAMS_CAPTURE_Q
-- ADD THE CAPTURE RULE
-- ADD THE PROPAGATION RULE
-- INSTANTIATE TABLE ACROSS DBLINK
-- DBLINK TO DESTINATION is STRM1.US.APGTECH.COM
-- SOURCE is STRM2.US.APGTECH.COM

On STRM1, log in as STRM_ADMIN:

-- ADD THE QUEUE; a good queue name is STREAMS_APPLY_Q
-- ADD THE APPLY RULE

Start everything up and test the Stream on STRM2, then check to see if the record is Streamed to STRM1.

-- On STRM2, log in as STRM_ADMIN
-- ADD THE QUEUE; a good queue name is STREAMS_CAPTURE_Q
strm_admin@STRM2> BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => '"STREAMS_CAPTURE_QT"',
    queue_name  => '"STREAMS_CAPTURE_Q"',
    queue_user  => '"STRM_ADMIN"');
END;
/
commit;

-- ADD THE CAPTURE RULE
strm_admin@STRM2> BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => '"LEARNING.EMPLOYEES"',
    streams_type       => 'capture',
    streams_name       => '"STREAMS_CAPTURE"',
    queue_name         => '"STRM_ADMIN"."STREAMS_CAPTURE_Q"',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    inclusion_rule     => true);
END;
/
commit;

-- ADD THE PROPAGATION RULE
strm_admin@STRM2> BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => '"LEARNING.EMPLOYEES"',
    streams_name           => '"STREAMS_PROPAGATION"',
    source_queue_name      => '"STRM_ADMIN"."STREAMS_CAPTURE_Q"',
    destination_queue_name => '"STRM_ADMIN"."STREAMS_APPLY_Q"@STRM1.US.APGTECH.COM',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'STRM2.US.APGTECH.COM',
    inclusion_rule         => true);
END;
/
COMMIT;

Because the table was instantiated from STRM1 already, you can skip this step:

-- INSTANTIATE TABLE ACROSS DBLINK
strm_admin@STRM2> DECLARE
  iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@STRM1.US.APGTECH.COM(
    source_object_name   => 'LEARNING.EMPLOYEES',
    source_database_name => 'STRM2.US.APGTECH.COM',
    instantiation_scn    => iscn);
END;
/
COMMIT;

-- On STRM1, log in as STRM_ADMIN
-- ADD THE QUEUE; a good queue name is STREAMS_APPLY_Q
strm_admin@STRM1> BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => '"STREAMS_APPLY_QT"',
    queue_name  => '"STREAMS_APPLY_Q"',
    queue_user  => '"STRM_ADMIN"');
END;
/
COMMIT;

-- ADD THE APPLY RULE
strm_admin@STRM1> BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => '"LEARNING.EMPLOYEES"',
    streams_type       => 'apply',
    streams_name       => '"STREAMS_APPLY"',
    queue_name         => '"STRM_ADMIN"."STREAMS_APPLY_Q"',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    inclusion_rule     => true);
END;
/
commit;

Start everything up and test.
strm_admin@STRM1> BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STREAMS_APPLY',
    parameter  => 'disable_on_error',
    value      => 'n');
END;
/
COMMIT;

strm_admin@STRM1> DECLARE
  v_started NUMBER;
BEGIN
  SELECT DECODE(status, 'ENABLED', 1, 0) INTO v_started
    FROM DBA_APPLY WHERE apply_name = 'STREAMS_APPLY';
  IF (v_started = 0) THEN
    DBMS_APPLY_ADM.START_APPLY(apply_name => '"STREAMS_APPLY"');
  END IF;
END;
/
COMMIT;

strm_admin@STRM2> DECLARE
  v_started NUMBER;
BEGIN
  SELECT DECODE(status, 'ENABLED', 1, 0) INTO v_started
    FROM DBA_CAPTURE WHERE capture_name = 'STREAMS_CAPTURE';
  IF (v_started = 0) THEN
    DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => '"STREAMS_CAPTURE"');
  END IF;
END;
/

Then on STRM2:

strm_admin@STRM2> ACCEPT fname PROMPT "Enter Your Mom's First Name: "
strm_admin@STRM2> ACCEPT lname PROMPT "Enter Your Mom's Last Name: "
strm_admin@STRM2> INSERT INTO LEARNING.EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, TIME)
                  VALUES (5, '&fname', '&lname', NULL);
strm_admin@STRM2> exec dbms_lock.sleep(10); -- give it time to replicate

Then on STRM1, search for the record:

strm_admin@STRM1> SELECT * FROM LEARNING.EMPLOYEES;

We now have N-way replication. But wait, what about conflict resolution? Good catch; all of this was just to set up N-way replication, in this case a 2-way replication. It will work the majority of the time; that is, until there is a conflict. Conflict resolution needs to be set up, and in this example the supplied, built-in conflict resolution handler MAXIMUM will be used. Now, let us cause some CONFLICT! Then we will be good people and create the conflict resolution, and ask for world peace while we are at it!
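To stage the collision yourself, you can run the same insert on both sites at nearly the same time. This is a sketch; the key value and names are arbitrary, and using SYSDATE for the TIME column is an assumption:

-- On STRM1, as User 1:
INSERT INTO LEARNING.EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, TIME)
VALUES (6, 'Pat', 'Smith', SYSDATE);
COMMIT;

-- On STRM2, as User 2, at nearly the same moment:
INSERT INTO LEARNING.EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, TIME)
VALUES (6, 'Pat', 'Smith', SYSDATE);
COMMIT;

-- Without a conflict handler, whichever LCR arrives second raises a
-- primary key violation at the destination apply process.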
Conflict resolution

Conflict between User 1 and User 2 has happened. Unbeknown to both of them, they have inserted the exact same row of data into the same table at roughly the same time. User 1's insert is to the STRM1 database; User 2's insert is to the STRM2 database. Normally, the transaction that arrives second will raise an error, most likely some sort of primary key violation, and that transaction will fail. We do not want that to happen; we want the transaction that arrives last to "win" and be committed to the database.

At this point, you may be wondering, "How do I choose which conflict resolution to use?" Well, you do not get to choose; the business community that you support will determine the rules most of the time. They will tell you how they want conflict resolution handled. Your responsibility is to know what can be solved with built-in conflict resolution and when you will need to create custom conflict resolution.

Going back to User 1 and User 2: in this particular case, User 2's insert arrives later than User 1's insert. The conflict resolution is added using the DBMS_APPLY_ADM package, specifically the procedure DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER, which instructs the APPLY process on how to handle the conflict. Scripts_5_1_CR.sql shows the conflict resolution used to resolve the conflict between User 1 and User 2. Since it is part of the APPLY process, this script is run by the Streams administrator; in our case, that is STRM_ADMIN. This type of conflict can occur on either the STRM1 or the STRM2 database, so the script is run on both databases. The numbers to the left are for reference only; they are not in the provided code.

-- Scripts_5_1_CR.sql
 1. DECLARE
 2.   cols DBMS_UTILITY.NAME_ARRAY;
 3. BEGIN
 4.   cols(0) := 'employee_id';
 5.   cols(1) := 'first_name';
 6.   cols(2) := 'last_name';
 7.   cols(3) := 'time';
 8.   DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
 9.     object_name       => 'learning.employees',
10.     method_name       => 'MAXIMUM',
11.     resolution_column => 'time',
12.     column_list       => cols);
13. END;
14. /
15. Commit;

So what do these 15 magical lines do to resolve conflict? Let us break it down piece by piece, logically first, and then look at the specific syntax of the code. Oracle needs to know where to look when a conflict happens; in our example, that is the learning.employees table. Furthermore, Oracle needs more than just the table name; it needs to know which columns are involved. Line 9 informs Oracle of the table, and lines 1-7 relate to the columns. Line 8 is the actual procedure name. What Oracle is supposed to do when the conflict happens is answered by line 10: it instructs Oracle to take the MAXIMUM of the resolution_column (named on line 11) and use that to resolve the conflict. Since our resolution column is time, the last transaction to arrive is the "winner" and is applied.
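To confirm that the handler is registered on each site, you can query the DBA_APPLY_CONFLICT_COLUMNS view (a sketch; run it as a suitably privileged user and verify the view's availability in your version):

-- Run on both STRM1 and STRM2 after executing Scripts_5_1_CR.sql.
SELECT object_owner, object_name, method_name, resolution_column, column_name
FROM DBA_APPLY_CONFLICT_COLUMNS
WHERE object_owner = 'LEARNING';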