How-To Tutorials - Programming

1081 Articles

Setting Up Your Profile

Packt
06 Aug 2013
5 min read
(For more resources related to this topic, see here.) Setting up your profile (Simple) While it may seem trivial, setting up your profile is one of the most important steps in starting your Edmodo experience. Your profile gives others insight into the professional you! Remember that the users on Edmodo are fellow teachers, and the ability to connect with these educators around the world is an opportunity that should not be overlooked. Thus, it is important to take care to provide a snapshot of your educational expertise. Getting ready Create your teacher account at http://www.edmodo.com. How to do it... Creating an Edmodo account only takes minutes, but is the most important step as you begin your Edmodo journey. Click on I'm a Teacher. Choose a username and password. Connect to your educational institution to verify your teacher account. Upload a photo of yourself. Join online Edmodo communities (available now, but advisable to skip at this juncture). Find teacher connections. Fill in your About Me section. How it works... The Edmodo website looks like the following: On the Edmodo home page, http://www.edmodo.com, click on I'm a Teacher to begin your Edmodo journey. Set up your teacher account by using a unique username and a password that you can remember. If your first choice of username is not available, please choose another until you are notified of a successful selection. The e-mail address that you attach to your Edmodo account should be your school e-mail. You will also need to choose your school affiliation at this time. If your school is not listed as a choice on Edmodo, you may do a manual search for your school. Selecting your school will ensure that fellow teachers within your district can easily connect with you. This also provides Edmodo with the background to be able to make suggestions on other educators with whom you may want to connect. Additionally, once you become active in the Edmodo community, your school selection will provide better insight into your teaching background, and will provide a greater context for other teachers to potentially partner with you in collaborative endeavors. Once you have created your account, you will be prompted to upload a photo of yourself. This is advisable in order to make you easier to distinguish when you interact in the professional communities, and it literally puts a face to your name. Certainly you have the option of using one of Edmodo's generic pictures. However, you will inevitably be sharing this generic picture with thousands of other users. You will be prompted to create a unique URL. This will provide additional ease to search for you when making professional connections. Your username is probably the easiest option for this. Next, you have the option of joining an array of online professional communities. We will come back to this step later in the section on Edmodo Communities. However, you will notice that Edmodo has automatically enrolled you in their Help community. This community is designated with the question mark symbol and once you have been redirected to your home page, you will notice it located in the left section of your screen, directly below your established Groups. From your profile page, you can find teacher connections. You can choose from the teacher suggestions made by Edmodo. Edmodo makes these connection suggestions based on your school district selection. These suggestions are located on the left-hand side of the profile screen. 
Simply click on a teacher with whom you would like to make a connection. If you would like to connect to other teachers who are not on Edmodo, you may send them an invitation from your profile page. Simply hover over the link How to improve my profile? that is located on the right-hand side of the screen. From here, enter the e-mail addresses of those educators you would like to join Edmodo. Your profile page also provides you with the ability to write an About Me description. In this portion, include the courses you teach and any educational interests you might have that could potentially pique the interest of your fellow educators. Note my personal Edmodo About Me description as seen in the preceding screenshot. There's more... You have created the basics of your profile. However, in order to gain clout in the Edmodo online community, you may want to begin earning badges. Your first chance to do so is in your profile setting. Earning teacher badges You will notice on your Edmodo profile page that you have the opportunity to earn teacher badges. Simply having your teacher account verified as being one that belongs to an educator will earn you the Verified Teacher badge. However, you can collect many others. Joining one of the subject area communities will net you a Community Member badge and following a publisher community will score you the Publisher Collaborator badge. Connect to at least 10 other educators on Edmodo and you will find yourself awarded with the Connected badge. The more educators with whom you connect, the more ways you can earn differentiated levels of this badge. The other badge you may want to covet earning is your Librarian badge. This is possible when you begin sharing resources on Edmodo that other educators find to be useful. (See Sharing Resources for additional information on how to do so.) Summary This article provided details on getting started with a simple yet effective classroom environment based set up, Edmodo. This article also listed the procedure to set up your profile on Edmodo. Resources for Article: Further resources on this subject: Getting to Grips with the Facebook Platform [Article] Introduction to Moodle Modules [Article] Getting Started with Facebook Application Development using ColdFusion/Railo [Article]


Working with local files (Simple)

Packt
05 Aug 2013
8 min read
(For more resources related to this topic, see here.)

Getting ready

Now that you have Mercurial set up on your computer, you are going to use it to track all the changes that can happen in a configuration directory, for example, in the configuration files of Apache2 (a web server). It could of course be used in any other type of directory and files: your home directory configuration files, some personal development projects, or some Mozilla Firefox configuration files such as user.js and prefs.js, and so on. If you do not have Apache2 installed on your computer, you can do the same type of exercises, for instance, in $HOME/src. Also, with the owner of the /etc/apache2 files being root, you will need to change their owner to your account. For example, with your username/group both being mary/mary:

$ sudo chown -R mary:mary /etc/apache2/

How to do it...

The first thing to do is to initialize a new repository by typing these lines:

$ cd /etc/apache2/
$ hg init

You can then show the status of the files in your new repository, that is, which files are tracked and which are unknown to Mercurial, and also which files have changed, which have been deleted, and so on. This is done using the Mercurial status command:

$ hg status
? apache2.conf
? conf.d/charset
[…]
? sites-available/default-ssl
? sites-enabled/000-default

As you can see, everything is untracked right now. The way to tell Mercurial which files you want to be version controlled and added to your newly created repository is by using the add command. For example, if you want to track changes in the apache2.conf file, and all the files of the sites-available directory, you need to type:

$ hg add apache2.conf sites-available/

Showing the status again, Mercurial now tells us that three files have been added (by showing A, which denotes Added, instead of ?, which denotes Unknown, in front of each path):

$ hg status
A apache2.conf
A sites-available/default
A sites-available/default-ssl
? conf.d/charset
[…]

Now if you want to record the initial version of these files (the one provided by your distribution), you need to use the commit command and add a log message using the -m option:

$ hg commit -m"initial version"

Using the log command, you can print the entire history of your repository (here you used commit only once, so only one change, called a changeset, is listed):

$ hg log
changeset:   0:7b3b5fcb16d0
tag:         tip
user:        Mary <[email protected]>
[…date…]
summary:     initial version

Whenever you change a file, you can record the modification in Mercurial with the commit command. Let's type the following sequence using the status, diff, commit, and log commands:

$ vi apache2.conf
..make some changes: add "# new line" at the end of the file
$ hg status -m
M apache2.conf
$ hg diff
diff -r 7b3b5fcb16d0 apache2.conf
[…]
+# new line
$ hg commit -m"added a line at the end"
$ hg log
changeset:   1:02704fcf58b1
user:        Mary <[email protected]>
summary:     added a line at the end
changeset:   0:7b3b5fcb16d0
[…]

You can also reverse modifications; for instance, if you modify a line again in apache2.conf, such as changing # new line to # new linez, the diff command shows your modification:

$ hg diff
diff -r b0d2bfb95d81 apache2.conf
[…]
-# new line
+# new linez

It is then possible to come back to the latest saved version using the revert command:

$ hg revert apache2.conf
$ hg status
? apache2.conf.orig
$ tail -1 apache2.conf
# new line

At this stage, apache2.conf is no longer flagged as M, and the last line of the file is back to the unmodified value, but there is a new file called apache2.conf.orig; this is because the revert command saves your modifications in a backup file. You can either delete it afterward or use the --no-backup option if you know what you are doing. Mercurial has a conservative philosophy to avoid losing data.

Finally, if you realize that you do not need a file any more, you can remove it from version control and from your working directory by using the remove command. For example, if you don't need the SSL configuration file of Apache, you can remove it by typing:

$ hg remove sites-available/default-ssl
$ hg status
R sites-available/default-ssl
$ hg commit -m"removed unused ssl conf file"
$ ls sites-available/
default

Note that the file has disappeared from your working directory, and even if you create a file with the same name, it will not be version controlled anymore (unless you add it again). But, of course, it is still possible to get it back, or to get a version of apache2.conf exactly as it came with the distribution. In order to do that, you need to use the update command and ask to switch to revision 0:

$ hg update -C 0
$ ls sites-available/
default  default-ssl
$ tail -1 apache2.conf
Include sites-enabled/

In order to switch back to the latest revision (called the tip), you can just call update again with no revision passed as an argument:

$ hg update -C
2 files updated, 0 files merged, 1 files removed, 0 files unresolved

How it works...

With this sequence, you already know how to manage versions of your personal projects: save a version, undo a change, retrieve an older version, and so on. Let's take a closer look at what happens under the hood. After you type hg init, a directory called .hg is created:

$ ls -a
.   apache2.conf  envvars  magic           mods-enabled  sites-available
..  conf.d        .hg      mods-available  ports.conf    sites-enabled

This directory, called a repository, contains the history of your project and some Mercurial metadata. This is where Mercurial records all your revisions of tracked files (actually, it stores only the differences between each revision, like RCS/CVS/Subversion, and in compressed form, so the size is usually less than that of the actual data!). This is also where it stores the commit messages, and information about branches, tags, bookmarks, and so on. All the other files and directories beside .hg are your project files; they form what is called a working directory. If you ever want to stop version controlling this zone, you can simply remove the .hg directory.

Another interesting thing to note is that, contrary to Subversion for instance, a local .hg repository contains no less information than a repository on a central server! Your installation of Mercurial actually gives you both a client command (or GUI), with which you can work locally or with distant servers, and a server, with which you can share your work with colleagues. This is because, with a DVCS (Distributed VCS), there is no difference between the data checked out to work with and the data ready to be published. When you either create a local repository or clone an existing one from a server, you have all the necessary information (in the .hg directory) to become a publisher yourself.

There's more...

This section discusses supplementary commands and options.
Other commands First of all, Mercurial has many commands (the list of which you can get with the command hg help); in addition to adding and removing files from being version controlled, you can also use the copy command to copy a file into a new file, and similarly, you have a rename command. The difference between the OS file copy and the one that Mercurial has is that when you receive a change made by somebody else to the original file, it will also be applied by Mercurial to the new file; but for this magic to work, you have to tell Mercurial about the copies/renames so it can track them. There are also convenient commands, such as addremove, which allows you to automatically add all new files and remove (from version control) files that have been deleted. Options Please refer to the built-in help system or to reference documentation, such as Mercurial: The Definitive Guide, available online at o http://hgbook.red-bean.com/, to get the complete list of options for each command. You have seen -m (or --message) in the commit command to specify a log message; if you omit this option, Mercurial will prompt you to get a message using $EDITOR. Also, you have seen -C (or --clean) in the update command; it will tell Mercurial to discard uncommitted changes when switching to another version (otherwise, local changes would automatically be merged into the requested version). Summary This article explained how to work with local files, either personal projects or files that you wanted to be version controlled (for example, source code or configuration files). You also learned how to create a new repository, make changes, and track them (selecting files to track, recording changes, and reversing those changes). Resources for Article : Further resources on this subject: Apache Continuum: Ensuring the Health of your Source Code (Part 1) [Article] Apache Continuum: Ensuring the Health of your Source Code (Part 2) [Article] Negotiation Strategy for Effective Implementation of COTS Software [Article]
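To round off the There's more... section above, here is a minimal sketch of how the copy, rename, and addremove commands might be used in the same /etc/apache2 repository. The file names are purely illustrative, and the exact messages Mercurial prints may differ between versions:

$ hg copy sites-available/default sites-available/mysite
$ hg commit -m"start a new vhost from the default one"
$ hg rename sites-available/mysite sites-available/mysite.conf
$ hg commit -m"give the new vhost a clearer name"
$ hg addremove
$ hg commit -m"record files created or deleted outside Mercurial"

Because the copy and the rename were done through Mercurial rather than through the shell, changes later committed to sites-available/default can be propagated to the copied file, which is the "magic" the preceding paragraph refers to; addremove is simply a convenience that scans the working directory, adds unknown files, and marks missing ones as removed.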


Classification and Regression Trees

Packt
02 Aug 2013
23 min read
(For more resources related to this topic, see here.)

Recursive partitions

The name of the library package rpart, shipped along with R, stands for Recursive Partitioning. The package was first created by Terry M. Therneau and Beth Atkinson, and is currently maintained by Brian Ripley. We will first have a peek at what recursive partitions are.

A complex and contrived relationship is generally not identifiable by linear models. In the previous chapter, we saw the extensions of the linear models in piecewise, polynomial, and spline regression models. It is also well known that if the order of a model is larger than 4, then interpretation and usability of the model become more difficult. We consider a hypothetical dataset, where we have two classes for the output Y and two explanatory variables in X1 and X2. The two classes are indicated by filled-in green circles and red squares. First, we will focus only on the left display of Figure 1: A complex classification dataset with partitions, as it is the actual depiction of the data. At the outset, it is clear that a linear model is not appropriate, as there is quite an overlap of the green and red indicators. Now, there is a clear demarcation of the classification problem according to whether X1 is greater than 6 or not. In the area on the left side of X1=6, the mid-third region contains a majority of green circles and the rest are red squares. The red squares are predominantly found where the X2 values are either less than or equal to 3 or greater than 6. The green circles are the majority in the region where X2 is greater than 3 and less than 6. A similar story can be built for the points on the right side of X1 greater than 6. Here, we first partitioned the data according to X1 values, and then, in each of the partitioned regions, we obtained partitions according to X2 values. This is the act of recursive partitioning.

Figure 1: A complex classification dataset with partitions

Let us obtain the preceding plot in R.

Time for action – partitioning the display plot

We first visualize the CART_Dummy dataset and then look in the next subsection at how CART finds the patterns that are believed to exist in the data.

Obtain the dataset CART_Dummy from the RSADBE package by using data(CART_Dummy).

Convert the binary output Y into a factor variable, and attach the data frame:

CART_Dummy$Y <- as.factor(CART_Dummy$Y)
attach(CART_Dummy)

In Figure 1: A complex classification dataset with partitions, the red squares refer to 0 and the green circles to 1.

Initialize the graphics window for two plots side by side by using par(mfrow=c(1,2)).

Create a blank scatter plot:

plot(c(0,12),c(0,10),type="n",xlab="X1",ylab="X2")

Plot the green circles and red squares:

points(X1[Y==0],X2[Y==0],pch=15,col="red")
points(X1[Y==1],X2[Y==1],pch=19,col="green")
title(main="A Difficult Classification Problem")

Repeat the previous two steps to obtain the identical plot on the right side of the graphics window.

First, partition according to X1 values by using abline(v=6,lwd=2).

Add segments on the graph with the segments function:

segments(x0=c(0,0,6,6),y0=c(3.75,6.25,2.25,5),x1=c(6,6,12,12),y1=c(3.75,6.25,2.25,5),lwd=2)
title(main="Looks a Solvable Problem Under Partitions")

What just happened?

A complex problem is simplified through partitioning! A more generic function, segments, has nicely slipped into our program, which you may use for many other scenarios. Now, this approach of recursive partitioning is not feasible all the time! Why?
We seldom deal with just two or three explanatory variables and with as few data points as in the preceding hypothetical example. The question is how one creates a recursive partitioning of the dataset. Breiman et al. (1984) and Quinlan (1988) invented tree-building algorithms, and we will follow the Breiman et al. approach in the rest of the book. The CART discussion in this book is heavily influenced by Berk (2008).

Splitting the data

In the earlier discussion, we saw that partitioning the dataset can help a lot in reducing the noise in the data. The question is how one begins with it. The explanatory variables can be discrete or continuous. We will begin with the continuous (numeric objects in R) variables. For a continuous variable, the task is a bit simpler. First, identify the unique distinct values of the numeric object. Let us say, for example, that the distinct values of a numeric object, say height in cms, are 160, 165, 170, 175, and 180. The data partitions are then obtained as follows:

data[Height<=160,], data[Height>160,]
data[Height<=165,], data[Height>165,]
data[Height<=170,], data[Height>170,]
data[Height<=175,], data[Height>175,]

The reader should try to understand the rationale behind the code; certainly, this is just an indicative one.

Now, we consider the discrete variables. Here, we have two types of variables, namely categorical and ordinal. In the case of ordinal variables, we have an order among the distinct values. For example, in the case of the economic status variable, the order may be among the classes Very Poor, Poor, Average, Rich, and Very Rich. Here, the splits are similar to the case of a continuous variable, and if there are m distinct orders, we consider m - 1 distinct splits of the overall data. In the case of a categorical variable with m categories, for example the departments A to F of the UCBAdmissions dataset, the number of possible splits becomes 2^(m-1) - 1. However, the benefit of using software like R is that we do not have to worry about these issues.

The first tree

In the CART_Dummy dataset, we can easily visualize the partitions for Y as a function of the inputs X1 and X2. Obviously, we have a classification problem, and hence we will build a classification tree.

Time for action – building our first tree

The rpart function from the library rpart will be used to obtain the first classification tree. The tree will be visualized by using the plot options of rpart, and we will follow this up by extracting the rules of the tree using the asRules function from the rattle package.

Load the rpart package by using library(rpart).

Create the classification tree with CART_Dummy_rpart <- rpart(Y~X1+X2,data=CART_Dummy).

Visualize the tree with appropriate text labels by using plot(CART_Dummy_rpart); text(CART_Dummy_rpart).

Figure 2: A classification tree for the dummy dataset

Now, the classification tree flows as follows. Obviously, the tree built with the rpart function does not partition as simply as we did in Figure 1: A complex classification dataset with partitions; its working will be dealt with in the third section of this chapter. First, we check whether the value of the second variable X2 is less than 4.875. If the answer is yes, we move to the left side of the tree; otherwise, to the right side. Let us move to the right side. A second question asked is whether X1 is less than 4.5 or not; if the answer is yes, the point is identified as a red square, and otherwise as a green circle.
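As a quick aside before interpreting the rest of the tree, here is a minimal sketch of how one might check how well this first tree classifies its own training data; it assumes the CART_Dummy data frame and the CART_Dummy_rpart object exist exactly as created in the preceding steps:

predicted <- predict(CART_Dummy_rpart, newdata=CART_Dummy, type="class")  # class labels from the fitted tree
table(Observed=CART_Dummy$Y, Predicted=predicted)                         # off-diagonal counts are misclassifications
mean(predicted != CART_Dummy$Y)                                           # training misclassification rate

The off-diagonal entries of the table correspond to the misclassified points that are also visible in the data display.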
You are now asked to interpret the left side of the first node. Let us look at the summary of CART_Dummy_rpart. Apply the summary, an S3 method, for the classification tree with summary( CART_Dummy_rpart). That one is a lot of output! Figure 3: Summary of a classification tree Our interests are in the nodes numbered 5 to 9! Why? The terminal nodes, of course! A terminal node is one in which we can't split the data any further, and for the classification problem, we arrive at a class assignment as the class that has a majority count at the node. The summary shows that there are indeed some misclassifications too. Now, wouldn't it be great if R gave the terminal nodes asRules. The function asRules from the rattle package extracts the rules from an rpart object. Let's do it! Invoke the rattle package library(rattle) and using the asRules function, extract the rules from the terminal nodes with asRules(CART_Dummy_rpart). The result is the following set of rules: Figure 4: Extracting "rules" from a tree! We can see that the classification tree is not according to our "eye-bird" partitioning. However, as a final aspect of our initial understanding, let us plot the segments using the naïve way. That is, we will partition the data display according to the terminal nodes of the CART_Dummy_rpart tree. The R code is given right away, though you should make an effort to find the logic behind it. Of course, it is very likely that by now you need to run some of the earlier code that was given previously. abline(h=4.875,lwd=2) segments(x0=4.5,y0=4.875,x1=4.5,y1=10,lwd=2) abline(h=1.75,lwd=2) segments(x0=3.5,y0=1.75,x1=3.5,y1=4.875,lwd=2) title(main="Classification Tree on the Data Display") It can be easily seen from the following that rpart works really well: Figure 5: The terminal nodes on the original display of the data What just happened? We obtained our first classification tree, which is a good thing. Given the actual data display, the classification tree gives satisfactory answers. We have understood the "how" part of a classification tree. The "why" aspect is very vital in science, and the next section explains the science behind the construction of a regression tree, and it will be followed later by a detailed explanation of the working of a classification tree. The construction of a regression tree In the CART_Dummy dataset, the output is a categorical variable, and we built a classification tree for it. The same distinction is required in CART, and we thus build classification trees for binary random variables, where regression trees are for continuous random variables. Recall the rationale behind the estimation of regression coefficients for the linear regression model. The main goal was to find the estimates of the regression coefficients, which minimize the error sum of squares between the actual regressand values and the fitted values. A similar approach is followed here, in the sense that we need to split the data at the points that keep the residual sum of squares to a minimum. That is, for each unique value of a predictor, which is a candidate for the node value, we find the sum of squares of y's within each partition of the data, and then add them up. This step is performed for each unique value of the predictor, and the value, which leads to the least sum of squares among all the candidates, is selected as the best split point for that predictor. 
In the next step, we find the best split points for each of the predictors, and then the best split is selected across the best split points across the predictors. Easy! Now, the data is partitioned into two parts according to the best split. The process of finding the best split within each partition is repeated in the same spirit as for the first split. This process is carried out in a recursive fashion until the data can't be partitioned any further. What is happening here? The residual sum of squares at each child node will be lesser than that in the parent node. At the outset, we record that the rpart function does the exact same thing. However, as a part of cleaner understanding of the regression tree, we will write raw R codes and ensure that there is no ambiguity in the process of understanding CART. We will begin with a simple example of a regression tree, and use the rpart function to plot the regression function. Then, we will first define a function, which will extract the best split given by the covariate and dependent variable. This action will be repeated for all the available covariates, and then we find the best overall split. This will be verified with the regression tree. The data will then be partitioned by using the best overall split, and then the best split will be identified for each of the partitioned data. The process will be repeated until we reach the end of the complete regression tree given by the rpart. First, the experiment! The cpus dataset available in the MASS package contains the relative performance measure of 209 CPUs in the perf variable. It is known that the performance of a CPU depends on factors such as the cycle time in nanoseconds (syct), minimum and maximum main memory in kilobytes (mmin and mmax), cache size in kilobytes (cach), and minimum and maximum number of channels (chmin and chmax). The task in hand is to model the perf as a function of syct, mmin, mmax, cach, chmin, and chmax. The histogram of perf—try hist(cpus$perf)—will show a highly skewed distribution, and hence we will build a regression tree for the logarithm transformation log10(perf). Time for action – the construction of a regression tree A regression tree is first built by using the rpart function. The getNode function is introduced, which helps in identifying the split node at each stage, and using it we build a regression tree and verify that we had the same tree as returned by the rpart function. Load the MASS library by using library(MASS). Create the regression tree for the logarithm (to the base 10) of perf as a function of the covariates explained earlier, and display the regression tree: cpus.ltrpart <- rpart(log10(perf)~syct+mmin+mmax+cach+chmin+chmax, data=cpus) plot(cpus.ltrpart); text(cpus.ltrpart) The regression tree will be indicated as follows: Figure 6: Regression tree for the "perf" of a CPU We will now define the getNode function. Given the regressand and the covariate, we need to find the best split in the sense of the sum of squares criterion. The evaluation needs to be done for every distinct value of the covariate. If there are m distinct points, we need m -1 evaluations. At each distinct point, the regressand needs to be partitioned accordingly, and the sum of squares should be obtained for each partition. The two sums of squares (in each part) are then added to obtain the reduced sum of squares. Thus, we create the required function to meet all these requirements. 
Create the getNode function in R by running the following code:

getNode <- function(x,y) {
  xu <- sort(unique(x),decreasing=TRUE)
  ss <- numeric(length(xu)-1)
  for(i in 1:length(ss)) {
    partR <- y[x>xu[i]]
    partL <- y[x<=xu[i]]
    partRSS <- sum((partR-mean(partR))^2)
    partLSS <- sum((partL-mean(partL))^2)
    ss[i] <- partRSS + partLSS
  }
  return(list(xnode=xu[which(ss==min(ss,na.rm=TRUE))],
              minss=min(ss,na.rm=TRUE),ss,xu))
}

The getNode function gives the best split for a given covariate. It returns a list consisting of four objects:

xnode, which is a datum of the covariate x that gives the minimum residual sum of squares for the regressand y
The value of the minimum residual sum of squares
The vector of the residual sum of squares for the distinct points of the vector x
The vector of the distinct x values

We will run this function for each of the six covariates, and find the best overall split. The argument na.rm=TRUE is required, as at the maximum value of x we won't get a numeric value. We will first execute the getNode function on the syct covariate, and look at the output we get as a result:

> getNode(cpus$syct,log10(cpus$perf))$xnode
[1] 48
> getNode(cpus$syct,log10(cpus$perf))$minss
[1] 24.72
> getNode(cpus$syct,log10(cpus$perf))[[3]]
 [1] 43.12 42.42 41.23 39.93 39.44 37.54 37.23 36.87 36.51 36.52 35.92 34.91
[13] 34.96 35.10 35.03 33.65 33.28 33.49 33.23 32.75 32.96 31.59 31.26 30.86
[25] 30.83 30.62 29.85 30.90 31.15 31.51 31.40 31.50 31.23 30.41 30.55 28.98
[37] 27.68 27.55 27.44 26.80 25.98 27.45 28.05 28.11 28.66 29.11 29.81 30.67
[49] 28.22 28.50 24.72 25.22 26.37 28.28 29.10 33.02 34.39 39.05 39.29
> getNode(cpus$syct,log10(cpus$perf))[[4]]
 [1] 1500 1100  900  810  800  700  600  480  400  350  330  320  300  250  240
[16]  225  220  203  200  185  180  175  167  160  150  143  140  133  125  124
[31]  116  115  112  110  105  100   98   92   90   84   75   72   70   64   60
[46]   59   57   56   52   50   48   40   38   35   30   29   26   25   23   17

The least sum of squares at a split for the best split value of the syct variable is 24.72, and it occurs at a value of syct greater than 48. The third and fourth list objects given by getNode, respectively, contain the details of the sum of squares for the potential candidates and the unique values of syct. The values of interest are highlighted. Thus, we will first look at the second object from the list output for all the six covariates to find the best split among the best split of each of the variables, by the residual sum of squares criteria.

Now, run the getNode function for the remaining five covariates:

getNode(cpus$syct,log10(cpus$perf))[[2]]
getNode(cpus$mmin,log10(cpus$perf))[[2]]
getNode(cpus$mmax,log10(cpus$perf))[[2]]
getNode(cpus$cach,log10(cpus$perf))[[2]]
getNode(cpus$chmin,log10(cpus$perf))[[2]]
getNode(cpus$chmax,log10(cpus$perf))[[2]]
getNode(cpus$cach,log10(cpus$perf))[[1]]
sort(getNode(cpus$cach,log10(cpus$perf))[[4]],decreasing=FALSE)

The output is as follows:

Figure 7: Obtaining the best "first split" of regression tree

The sum of squares for cach is the lowest, and hence we need to find the best split associated with it, which is 24. However, the regression tree shows that the best split is for the cach value of 27. The getNode function says that the best split occurs at a point greater than 24, and hence we take the average of 24 and the next unique point at 30. Having obtained the best overall split, we next obtain the first partition of the dataset.
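Before moving on to the partition, note that, rather than typing six nearly identical getNode calls, the same comparison can be scripted. The following is only a sketch and assumes getNode and the cpus data frame are available exactly as above:

covariates <- c("syct","mmin","mmax","cach","chmin","chmax")
best_ss <- sapply(covariates, function(v) getNode(cpus[[v]], log10(cpus$perf))$minss)
best_ss                      # minimum residual sum of squares achievable with each covariate
names(which.min(best_ss))    # the covariate giving the best overall first split

The covariate with the smallest value is the one used for the first partition, matching the manual comparison above.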
Partition the data by using the best overall split point:

cpus_FS_R <- cpus[cpus$cach>=27,]
cpus_FS_L <- cpus[cpus$cach<27,]

The new names of the data objects are clear, with _FS_R indicating the dataset obtained on the right side of the first split, and _FS_L indicating the left side. In the rest of the section, the nomenclature won't be further explained.

Identify the best split in each of the partitioned datasets:

getNode(cpus_FS_R$syct,log10(cpus_FS_R$perf))[[2]]
getNode(cpus_FS_R$mmin,log10(cpus_FS_R$perf))[[2]]
getNode(cpus_FS_R$mmax,log10(cpus_FS_R$perf))[[2]]
getNode(cpus_FS_R$cach,log10(cpus_FS_R$perf))[[2]]
getNode(cpus_FS_R$chmin,log10(cpus_FS_R$perf))[[2]]
getNode(cpus_FS_R$chmax,log10(cpus_FS_R$perf))[[2]]
getNode(cpus_FS_R$mmax,log10(cpus_FS_R$perf))[[1]]
sort(getNode(cpus_FS_R$mmax,log10(cpus_FS_R$perf))[[4]],decreasing=FALSE)
getNode(cpus_FS_L$syct,log10(cpus_FS_L$perf))[[2]]
getNode(cpus_FS_L$mmin,log10(cpus_FS_L$perf))[[2]]
getNode(cpus_FS_L$mmax,log10(cpus_FS_L$perf))[[2]]
getNode(cpus_FS_L$cach,log10(cpus_FS_L$perf))[[2]]
getNode(cpus_FS_L$chmin,log10(cpus_FS_L$perf))[[2]]
getNode(cpus_FS_L$chmax,log10(cpus_FS_L$perf))[[2]]
getNode(cpus_FS_L$mmax,log10(cpus_FS_L$perf))[[1]]
sort(getNode(cpus_FS_L$mmax,log10(cpus_FS_L$perf))[[4]],decreasing=FALSE)

The following screenshot gives the results of running the preceding R code:

Figure 8: Obtaining the next two splits

Thus, for the first right-partitioned data, the best split is for the mmax value at the mid-point between 24000 and 32000; that is, at mmax = 28000. Similarly, for the first left-partitioned data, the best split is the average of 6000 and 6200, which is 6100, for the same mmax covariate. Note the important step here: even though we used cach as the criterion for the first partition, it is still considered as a candidate within each of the two partitioned datasets. The results are consistent with the display given by the regression tree, Figure 6: Regression tree for the "perf" of a CPU. The next R program takes care of all further partitions on the right side of the first split.

Partition the first right part cpus_FS_R as follows:

cpus_FS_R_SS_R <- cpus_FS_R[cpus_FS_R$mmax>=28000,]
cpus_FS_R_SS_L <- cpus_FS_R[cpus_FS_R$mmax<28000,]

Obtain the best split for cpus_FS_R_SS_R and cpus_FS_R_SS_L by running the following code:

cpus_FS_R_SS_R <- cpus_FS_R[cpus_FS_R$mmax>=28000,]
cpus_FS_R_SS_L <- cpus_FS_R[cpus_FS_R$mmax<28000,]
getNode(cpus_FS_R_SS_R$syct,log10(cpus_FS_R_SS_R$perf))[[2]]
getNode(cpus_FS_R_SS_R$mmin,log10(cpus_FS_R_SS_R$perf))[[2]]
getNode(cpus_FS_R_SS_R$mmax,log10(cpus_FS_R_SS_R$perf))[[2]]
getNode(cpus_FS_R_SS_R$cach,log10(cpus_FS_R_SS_R$perf))[[2]]
getNode(cpus_FS_R_SS_R$chmin,log10(cpus_FS_R_SS_R$perf))[[2]]
getNode(cpus_FS_R_SS_R$chmax,log10(cpus_FS_R_SS_R$perf))[[2]]
getNode(cpus_FS_R_SS_R$cach,log10(cpus_FS_R_SS_R$perf))[[1]]
sort(getNode(cpus_FS_R_SS_R$cach,log10(cpus_FS_R_SS_R$perf))[[4]],decreasing=FALSE)
getNode(cpus_FS_R_SS_L$syct,log10(cpus_FS_R_SS_L$perf))[[2]]
getNode(cpus_FS_R_SS_L$mmin,log10(cpus_FS_R_SS_L$perf))[[2]]
getNode(cpus_FS_R_SS_L$mmax,log10(cpus_FS_R_SS_L$perf))[[2]]
getNode(cpus_FS_R_SS_L$cach,log10(cpus_FS_R_SS_L$perf))[[2]]
getNode(cpus_FS_R_SS_L$chmin,log10(cpus_FS_R_SS_L$perf))[[2]]
getNode(cpus_FS_R_SS_L$chmax,log10(cpus_FS_R_SS_L$perf))[[2]]
getNode(cpus_FS_R_SS_L$cach,log10(cpus_FS_R_SS_L$perf))[[1]]
sort(getNode(cpus_FS_R_SS_L$cach,log10(cpus_FS_R_SS_L$perf))[[4]],decreasing=FALSE)

For the cpus_FS_R_SS_R part, the final division is according to whether cach is greater than 56 or not (the average of 48 and 64). If the cach value in this partition is greater than 56, then perf (actually log10(perf)) ends in the terminal leaf 3, else 2. However, for the region cpus_FS_R_SS_L, we partition the data further by whether the cach value is greater than 96.5 (the average of 65 and 128). On the right side of the region, log10(perf) is found to be 2, and a third-level split is required for cpus_FS_R_SS_L with cpus_FS_R_SS_L_TS_L. Note that although the final terminal leaves of the cpus_FS_R_SS_L_TS_L region show the same value of 2 for log10(perf), the extra split may still deliver a significant reduction in the variability of the difference between the predicted and the actual log10(perf) values. We will now focus on the left side of the first main split.

Figure 9: Partitioning the right partition after the first main split

Partition cpus_FS_L according to whether the mmax value is at least 6100 or not:

cpus_FS_L_SS_R <- cpus_FS_L[cpus_FS_L$mmax>=6100,]
cpus_FS_L_SS_L <- cpus_FS_L[cpus_FS_L$mmax<6100,]

The rest of the partitioning for cpus_FS_L is completely given next. The details will be skipped and the R program is given right away:

cpus_FS_L_SS_R <- cpus_FS_L[cpus_FS_L$mmax>=6100,]
cpus_FS_L_SS_L <- cpus_FS_L[cpus_FS_L$mmax<6100,]
getNode(cpus_FS_L_SS_R$syct,log10(cpus_FS_L_SS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R$mmin,log10(cpus_FS_L_SS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R$mmax,log10(cpus_FS_L_SS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R$cach,log10(cpus_FS_L_SS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R$chmin,log10(cpus_FS_L_SS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R$chmax,log10(cpus_FS_L_SS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R$syct,log10(cpus_FS_L_SS_R$perf))[[1]]
sort(getNode(cpus_FS_L_SS_R$syct,log10(cpus_FS_L_SS_R$perf))[[4]],decreasing=FALSE)
getNode(cpus_FS_L_SS_L$syct,log10(cpus_FS_L_SS_L$perf))[[2]]
getNode(cpus_FS_L_SS_L$mmin,log10(cpus_FS_L_SS_L$perf))[[2]]
getNode(cpus_FS_L_SS_L$mmax,log10(cpus_FS_L_SS_L$perf))[[2]]
getNode(cpus_FS_L_SS_L$cach,log10(cpus_FS_L_SS_L$perf))[[2]]
getNode(cpus_FS_L_SS_L$chmin,log10(cpus_FS_L_SS_L$perf))[[2]]
getNode(cpus_FS_L_SS_L$chmax,log10(cpus_FS_L_SS_L$perf))[[2]]
getNode(cpus_FS_L_SS_L$mmax,log10(cpus_FS_L_SS_L$perf))[[1]]
sort(getNode(cpus_FS_L_SS_L$mmax,log10(cpus_FS_L_SS_L$perf))[[4]],decreasing=FALSE)
cpus_FS_L_SS_R_TS_R <- cpus_FS_L_SS_R[cpus_FS_L_SS_R$syct<360,]
getNode(cpus_FS_L_SS_R_TS_R$syct,log10(cpus_FS_L_SS_R_TS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R_TS_R$mmin,log10(cpus_FS_L_SS_R_TS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R_TS_R$mmax,log10(cpus_FS_L_SS_R_TS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R_TS_R$cach,log10(cpus_FS_L_SS_R_TS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R_TS_R$chmin,log10(cpus_FS_L_SS_R_TS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R_TS_R$chmax,log10(cpus_FS_L_SS_R_TS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R_TS_R$chmin,log10(cpus_FS_L_SS_R_TS_R$perf))[[1]]
sort(getNode(cpus_FS_L_SS_R_TS_R$chmin,log10(cpus_FS_L_SS_R_TS_R$perf))[[4]],decreasing=FALSE)

The output is shown in the following screenshot:

Figure 10: Partitioning the left partition after the first main split

We leave it to you to interpret the output arising from the previous action.

What just happened?

Using the rpart function from the rpart library, we first built the regression tree for log10(perf). Then, we explored the basic definitions underlying the construction of a regression tree and defined the getNode function to obtain the best split for a given regressand and covariate. This function was then applied to all the covariates, and the best overall split was obtained; using this we got our first partition of the data, which is in agreement with the tree given by the rpart function. We then recursively partitioned the data by using the getNode function and verified that all the best splits in each partitioned dataset are in agreement with those provided by the rpart function.

The reader may wonder whether the preceding tedious task was really essential. However, it has been the experience of the author that users/readers seldom remember the rationale behind using direct code/functions for any software after some time. Moreover, CART is a difficult concept and it is imperative that we clearly understand our first tree, and return to the preceding program whenever the understanding of the science behind CART is forgotten.

Summary

We began with the idea of recursive partitioning and gave a legitimate reason as to why such an approach is practical. The CART technique is completely demystified by using the getNode function, which has been defined appropriately depending upon whether we require a regression or a classification tree.
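As a final cross-check of the hand-built splits, the fitted rpart object can be printed directly. This is only a sketch and assumes cpus.ltrpart was created as shown earlier in this section:

print(cpus.ltrpart)     # text form of the tree: every split point and the mean of log10(perf) in each node
printcp(cpus.ltrpart)   # complexity table: variables actually used and the relative error after each split
sum((log10(cpus$perf) - predict(cpus.ltrpart, newdata=cpus))^2)   # residual sum of squares of the full tree

The split points listed by print should agree with the values returned by getNode, after taking the mid-points between neighbouring distinct values as discussed above.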
Resources for Article : Further resources on this subject: Organizing, Clarifying and Communicating the R Data Analyses [Article] Graphical Capabilities of R [Article] Customizing Graphics and Creating a Bar Chart and Scatterplot in R [Article]


Setting up environment for Cucumber BDD Rails

Packt
31 Jul 2013
4 min read
(For more resources related to this topic, see here.)

Getting ready

This article will focus on how to use Cucumber in daily BDD development on the Ruby on Rails platform. Please install the following software to get started:

Ruby Version Manager (RVM)
Version 1.9.3 of Ruby
Version 3.2 of Rails
The latest version of Cucumber
A handy text editor: Vim or Sublime Text

How to do it...

To install RVM, bundler, and Rails we need to complete the following steps:

Install RVM (read the latest installation guide from http://rvm.io):
$ curl -L https://get.rvm.io | bash -s stable --ruby

Install the latest version of Ruby as follows:
$ rvm install ruby-1.9.3

Install bundler as follows:
$ gem install bundler

Install the latest version of Rails as follows:
$ gem install rails

Cucumber is a Ruby gem. To install it, we can run the following command in the terminal:
$ gem install cucumber

If you are using bundler in your project, you need to add the following line to your Gemfile:
gem 'cucumber'

How it works...

Cucumber contains two parts: features and step definitions. They are explained in the following section; we will have to go through the following files to see how this recipe works.

Feature files (their extension is .feature): Each feature is captured as a "story", which defines the scope of the feature along with its acceptance criteria. A feature contains a feature title and a description of one or more scenarios. One scenario contains describing steps.

Feature: A unique feature title within the project scope, with a description. Its format is as follows:

Feature: <feature title>
<feature description>

Scenario: This elaborates how the feature ought to behave. Its format is as follows:

Scenario: <scenario short description>
Given <some initial context>
When <an event occurs>
Then <ensure some outcomes>

Step definition files: A step definition is essentially a block of code associated with one or more steps by a regular expression (or, in simple cases, an exact equivalent string):

Given "I log into system through login page" do
  visit login_page
  fill_in "User name", :with => "wayne"
  fill_in "Password", :with => "123456"
  click_button "Login"
end

When running a Cucumber feature, each step in the feature file is like a method invocation targeting the related step definition. Each step definition is like a Ruby method which takes one or more arguments (the arguments are interpreted and captured by the Cucumber engine and passed to the step method; this is essentially done by regular expression). The engine reads the feature steps and tries to find the matching step definition one by one. If all the steps match and are executed without any exceptions thrown, then the result will be passed; otherwise, if one or more exceptions are thrown during the run, the exception can be one of the following:

Cucumber::Undefined: The step was undefined
Cucumber::Pending: The step was defined but is pending implementation
Ruby runtime exception: Any kind of exception thrown during step execution

As with other unit-testing frameworks, Cucumber runs will either pass or fail depending on whether or not exceptions are thrown; the difference is that, depending on the type of exception, running a Cucumber feature can end in one of the following four states:

Passed
Pending
Undefined
Failed

The following figure demonstrates the flow chart of running a Cucumber feature:

There's more...

Cucumber is not only for Rails, and Cucumber features can be written in many languages other than English.
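Before looking at those options, here is a minimal end-to-end sketch that ties the feature and step definition formats above together. The file paths, the login_page helper, and the page content are illustrative assumptions, and the steps rely on Capybara-style helpers exactly as the login example above does:

# features/login.feature (illustrative path)
Feature: Logging into the system
  As a registered user, I want to log in so that I can reach my dashboard

  Scenario: Successful login
    Given I log into system through login page
    Then I should see "Signed in successfully"

# features/step_definitions/login_steps.rb (illustrative path)
Given "I log into system through login page" do
  visit login_page                          # login_page is an assumed path helper
  fill_in "User name", :with => "wayne"
  fill_in "Password", :with => "123456"
  click_button "Login"
end

Then /^I should see "(.*)"$/ do |text|      # the captured group is passed in as a step argument
  page.should have_content(text)            # assumes Capybara with RSpec expectations
end

Running cucumber from the project root executes the scenario and reports it as passed, pending, undefined, or failed, exactly as described above.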
Cucumber in other languages/platforms

Cucumber is now available on many platforms. The following is a list of a number of popular ones:

JVM: Cucumber-JVM
.NET: SpecFlow
Python: RubyPython, Lettuce
PHP: Behat
Erlang: Cucumberl

Cucumber in your mother language

We can actually write Gherkin in languages other than English too, which is very important because domain experts might not speak English. Cucumber now supports 37 different languages.

There are many great resources online for learning Cucumber:

The Cucumber home page: http://cukes.info/
The Cucumber project on GitHub: https://github.com/cucumber/cucumber
The Cucumber entry on Wikipedia: http://en.wikipedia.org/wiki/Cucumber_(software)
The Cucumber backgrounder: https://github.com/cucumber/cucumber/wiki/Cucumber-Backgrounder

Summary

In this article we saw what Cucumber is, how to use it in daily BDD development on Ruby on Rails, how to install RVM, bundler, and Rails, how to run a Cucumber feature, and how Cucumber is available for other languages and platforms.

Resources for Article:

Further resources on this subject:
Introducing RubyMotion and the Hello World app [Article]
Building tiny Web-applications in Ruby using Sinatra [Article]
Xen Virtualization: Work with MySQL Server, Ruby on Rails, and Subversion [Article]


Performance Testing Fundamentals

Packt
31 Jul 2013
12 min read
(For more resources related to this topic, see here.) The incident Up until recently, traffic on TrainBot had been light as it had only been opened to a handful of clients, since it was still in closed beta. Everything was fully operational and the application as a whole was very responsive. Just a few weeks ago, TrainBot was open to the public and all was still good and dandy. To celebrate the launch and promote its online training courses, Baysoft Training Inc. recently offered 75 percent off for all the training courses. However, that promotional offer caused a sudden influx on TrainBot, far beyond what the company had anticipated. Web traffic shot up by 300 percent and suddenly things took a turn for the worse. Network resources weren't holding up well, server CPUs and memory were at 90-95 percent and database servers weren't far behind due to high I/O and contention. As a result, most web requests began to get slower response times, making TrainBot totally unresponsive for most of its first-time clients. It didn't take too long after that for the servers to crash and for the support lines to get flooded. The aftermath It was a long night at BaySoft Training Inc. corporate office. How did this happen? Could this have been avoided? Why was the application and system not able to handle the load? Why weren't adequate performance and stress tests conducted on the system and application? Was it an application problem, a system resource issue or a combination of both? All of these were questions management demanded answers to from the group of engineers, which comprised software developers, network and system engineers, quality assurance (QA) testers, and database administrators gathered in the WAR room. There sure was a lot of finger pointing and blame to go around the room. After a little brainstorming, it wasn't too long for the group to decide what needed to be done. The application and its system resources will need to undergo extensive and rigorous testing. This will include all facets of the application and all supporting system resources, including, but not limited to, infrastructure, network, database, servers, and load balancers. Such a test will help all the involved parties to discover exactly where the bottlenecks are and address them accordingly. Performance testing Performance testing is a type of testing intended to determine the responsiveness, reliability, throughput, interoperability, and scalability of a system and/or application under a given workload. It could also be defined as a process of determining the speed or effectiveness of a computer, network, software application, or device. Testing can be conducted on software applications, system resources, targeted application components, databases, and a whole lot more. It normally involves an automated test suite as this allows for easy, repeatable simulations of a variety of normal, peak, and exceptional load conditions. Such forms of testing help verify whether a system or application meets the specifications claimed by its vendor. The process can compare applications in terms of parameters such as speed, data transfer rate, throughput, bandwidth, efficiency, or reliability. Performance testing can also aid as a diagnostic tool in determining bottlenecks and single points of failure. It is often conducted in a controlled environment and in conjunction with stress testing; a process of determining the ability of a system or application to maintain a certain level of effectiveness under unfavorable conditions. 
Why bother? Using Baysoft's case study mentioned earlier, it should be obvious why companies bother and go through great lengths to conduct performance testing. Disaster could have been minimized, if not totally eradicated, if effective performance testing had been conducted on TrainBot prior to opening it up to the masses. As we go ahead in this article, we will continue to explore the many benefits of effective performance testing. At a very high level, performance testing is always almost conducted to address one or more risks related to expense, opportunity costs, continuity, and/or corporate reputation. Conducting such tests help give insights to software application release readiness, adequacy of network and system resources, infrastructure stability, and application scalability, just to name a few. Gathering estimated performance characteristics of application and system resources prior to the launch helps to address issues early and provides valuable feedback to stakeholders, helping them make key and strategic decisions. Performance testing covers a whole lot of ground including areas such as: Assessing application and system production readiness Evaluating against performance criteria Comparing performance characteristics of multiple systems or system configurations Identifying source of performance bottlenecks Aiding with performance and system tuning Helping to identify system throughput levels Testing tool Most of these areas are intertwined with each other, each aspect contributing to attaining the overall objectives of stakeholders. However, before jumping right in, let's take a moment to understand the core activities in conducting performance tests: Identify the test environment: Becoming familiar with the physical test and production environments is crucial to a successful test run. Knowing things, such as the hardware, software, and network configurations of the environment help derive an effective test plan and identify testing challenges from the outset. In most cases, these will be revisited and/or revised during the testing cycle. Identify acceptance criteria: What is the acceptable performance of the various modules of the application under load? Specifically, identify the response time, throughput, and resource utilization goals and constraints. How long should the end user wait before rendering a particular page? How long should the user wait to perform an operation? Response time is usually a user concern, throughput a business concern, and resource utilization a system concern. As such, response time, throughput, and resource utilization are key aspects of performance testing. Acceptance criteria is usually driven by stakeholders and it is important to continuously involve them as testing progresses as the criteria may need to be revised. Plan and design tests: Know the usage pattern of the application (if any), and come up with realistic usage scenarios including variability among the various scenarios. For example, if the application in question has a user registration module, how many users typically register for an account in a day? Do those registrations happen all at once, or are they spaced out? How many people frequent the landing page of the application within an hour? Questions such as these help to put things in perspective and design variations in the test plan. Having said that, there may be times where the application under test is new and so no usage pattern has been formed yet. 
At such times, stakeholders should be consulted to understand their business process and come up with as close to a realistic test plan as possible. Prepare the test environment: Configure the test environment, tools, and resources necessary to conduct the planned test scenarios. It is important to ensure that the test environment is instrumented for resource monitoring to help analyze results more efficiently. Depending on the company, a separate team might be responsible for setting up the test tools, while another may be responsible for configuring other aspects such as resource monitoring. In other organizations, a single team is responsible for setting up all aspects. Record the test plan: Using a testing tool, record the planned test scenarios. There are numerous testing tools available, both free and commercial that do the job quite well, each having their pros and cons. Such tools include HP Load Runner, NeoLoad, LoadUI, Gatling, WebLOAD, WAPT, Loadster, LoadImpact, Rational Performance Tester, Testing Anywhere, OpenSTA, Loadstorm, and so on. Some of these are commercial while others are not as mature or as portable or extendable as JMeter is. HP Load Runner, for example, is a bit pricey and limits the number of simulated threads to 250 without purchasing additional licenses. It does offer a much nicer graphical interface and monitoring capability though. Gatling is the new kid on the block, is free and looks rather promising. It is still in its infancy and aims to address some of the shortcomings of JMeter, including easier testing DSL (domain specific language) versus JMeter's verbose XML, nicer and more meaningful HTML reports, among others. Having said that, it still has only a tiny user base when compared with JMeter, and not everyone may be comfortable with building test plans in Scala, its language of choice. Programmers may find it more appealing. In this book, our tool of choice will be Apache JMeter to perform this step. That shouldn't be a surprise considering the title of the book. Run the tests: Once recorded, execute the test plans under light load and verify the correctness of the test scripts and output results. In cases where test or input data is fed into the scripts to simulate more realistic data , also validate the test data. Another aspect to pay careful attention to during test plan execution is the server logs. This can be achieved through the resource monitoring agents set up to monitor the servers. It is paramount to watch for warnings and errors. A high rate of errors, for example, could be indicative that something is wrong with the test scripts, application under test, system resource, or a combination of these. Analyze results, report, and retest: Examine the results of each successive run and identify areas of bottleneck that need addressing. These could be system, database, or application related. System-related bottlenecks may lead to infrastructure changes such as increasing the memory available to the application, reducing CPU consumption, increasing or decreasing thread pool sizes, revising database pool sizes, and reconfiguring network settings. Database-related bottlenecks may lead to analyzing database I/O operations, top queries from the application under test, profiling SQL queries, introducing additional indexes, running statistics gathering, changing table page sizes and locks, and a lot more. 
Finally, application-related changes might lead to activities such as refactoring application components and reducing application memory consumption and database round trips. Once the identified bottlenecks are addressed, the test(s) should be rerun and compared with previous runs. To help track which change or group of changes resolved a particular bottleneck, it is vital that changes are applied in an orderly fashion, preferably one at a time. In other words, once a change is applied, the same test plan is executed and the results are compared with a previous run to see whether the change improved or worsened the results. This process repeats until the performance goals of the project have been met.

Performance testing core activities

Performance testing is usually a collaborative effort between all parties involved. Parties include business stakeholders, enterprise architects, developers, testers, DBAs, system admins, and network admins. Such collaboration is necessary to gather accurate and valuable results when conducting testing. Monitoring network utilization, database I/O and waits, top queries, and invocation counts, for example, helps the team find bottlenecks and areas that need further attention in ongoing tuning efforts.

Performance testing and tuning

There is a strong relationship between performance testing and tuning, in the sense that one often leads to the other. Often, end-to-end testing unveils system or application bottlenecks that are regarded as incompatible with project target goals. Once those bottlenecks are discovered, the next step for most teams is a series of tuning efforts to make the application perform adequately. Such efforts normally include, but are not limited to:

Configuration changes in system resources
Optimizing database queries
Reducing round trips in application calls, sometimes leading to redesigning and re-architecting problematic modules
Scaling out application and database server capacity
Reducing application resource footprint
Optimizing and refactoring code, including eliminating redundancy and reducing execution time

Tuning efforts may also commence if the application has reached acceptable performance but the team wants to reduce the amount of system resources being used, decrease the volume of hardware needed, or further increase system performance. After each change (or series of changes), the test is re-executed to see whether performance has improved or declined as a result. The process continues until the performance results reach acceptable goals. The outcome of these test-tuning cycles normally produces a baseline.

Baselines

Baselining is the process of capturing performance metric data for the sole purpose of evaluating the efficacy of successive changes to the system or application. It is important that all characteristics and configurations, except those specifically being varied for comparison, remain the same in order to make effective comparisons as to which change (or series of changes) is driving results towards the targeted goal. Armed with such baseline results, subsequent changes can be made to the system configuration or application, and testing results can be compared to see whether such changes were relevant or not.
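To make the comparison step concrete, here is a small, illustrative Java sketch of how one run's metrics could be checked against a stored baseline. This is not JMeter code; the class name, the metric names, and the 10 percent tolerance are invented for the example, and in practice the numbers would come from your testing tool's reports.

// BaselineComparison.java - flags a regression when a "lower is better" metric
// (for example, average response time in milliseconds) degrades beyond a tolerance
import java.util.HashMap;
import java.util.Map;

public class BaselineComparison {

    private static final double TOLERANCE = 0.10; // allow up to 10% degradation

    public static boolean withinBaseline(Map<String, Double> baseline, Map<String, Double> currentRun) {
        for (Map.Entry<String, Double> entry : baseline.entrySet()) {
            Double current = currentRun.get(entry.getKey());
            if (current == null) {
                continue; // metric not captured in the current run
            }
            if (current > entry.getValue() * (1 + TOLERANCE)) {
                System.out.println("Regression in " + entry.getKey()
                        + ": baseline=" + entry.getValue() + ", current=" + current);
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Double> baseline = new HashMap<String, Double>();
        baseline.put("login.avgResponseMs", 220.0);
        baseline.put("search.avgResponseMs", 480.0);

        Map<String, Double> currentRun = new HashMap<String, Double>();
        currentRun.put("login.avgResponseMs", 210.0);
        currentRun.put("search.avgResponseMs", 590.0);

        System.out.println("Within baseline? " + withinBaseline(baseline, currentRun));
    }
}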
Some considerations when generating baselines include:

They are application specific
They can be created for a system, application, or module
They are metrics/results
They should not be overgeneralized
They evolve and may need to be redefined from time to time
They act as a shared frame of reference
They are reusable
They help identify changes in performance

Load and stress testing

Load testing is the process of putting demand on a system and measuring its response; that is, determining how much volume the system can handle. Stress testing is the process of subjecting the system to unusually high loads, far beyond its normal usage pattern, to determine its responsiveness. Both differ from performance testing, whose sole purpose is to determine the response and effectiveness of a system; that is, how fast the system is. Since load ultimately affects how a system responds, performance testing is almost always done in conjunction with stress testing.
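Underneath all of these test types sit the same raw measurements: response times and throughput. As a rough illustration only (JMeter calculates and reports these figures for you, so you would not normally hand-roll them), the following Java sketch derives a 90th percentile response time and a throughput figure from a list of sample timings; the sample values are made up.

// SampleMetrics.java - derives two common performance figures from raw sample timings
import java.util.Arrays;

public class SampleMetrics {

    // 90th percentile response time in milliseconds, using the nearest-rank method
    static long percentile90(long[] responseTimesMs) {
        long[] sorted = responseTimesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(0.90 * sorted.length); // 1-based nearest-rank index
        return sorted[rank - 1];
    }

    // throughput in requests per second over the duration of the test window
    static double throughput(int totalRequests, long testDurationMs) {
        return totalRequests / (testDurationMs / 1000.0);
    }

    public static void main(String[] args) {
        long[] samples = {120, 135, 150, 180, 200, 210, 250, 300, 450, 900};
        System.out.println("90th percentile: " + percentile90(samples) + " ms");
        // 10 requests completed within a 5-second window => 2.0 requests/second
        System.out.println("Throughput: " + throughput(samples.length, 5000) + " req/s");
    }
}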

Debugging Sikuli scripts

Packt
30 Jul 2013
3 min read
The last topic in test automation is the debugging of scripts. A portion of your script development time will be spent running the scripts and trying to debug problems to get them to run reliably. Once you have a collection of scripts that you run on a regular basis without supervision, identifying the causes of errors can become much more difficult. There are two main techniques for debugging Sikuli scripts when running them in the test harness presented here.

The first method is to look at the logs. If you look back over the test runner script, you can see that it logs a complete record of the console output to a file. These files end in .final.log. You can open these in your text editor, see what your script did, and get feedback about the errors. The errors in the logs will tell you what happened. For example, you might get something like this:

This one is telling us that Sikuli couldn't find the requested image on the screen. Or, you might see errors in your Python code. In situations like this, it's handy to know that Sikuli scripts are just a collection of files in a directory. You can actually open one up and look at the images and Python code within it.

Another handy technique is to record videos of your test runs. This allows you to review what happened during a test (passing or failing) to see what went wrong, or to analyze the execution for possible improvements to execution speed. For Mac OS X, this can be done using QuickTime Player, which is included with the OS. For Windows or Linux, you will need to investigate a similar solution (the examples prepared for this book contain a working example for Windows), but the general technique should still apply.

Let's see how this would work in practice. Firstly, we need to create two additional scripts, one to start recording and another to stop it. The script is broken into two parts so that they can be executed independently. Here's the startup script (see startcapture.sikuli):

And here's the script to stop recording (see stopcapture.sikuli):

These are then pretty easy to integrate with our test runner scripts (see runtests_withrecording.sikuli):

Depending on your machine, you may also encounter some performance degradation when recording video along with your tests. To compensate, you can adjust the default amount of time that Sikuli will wait to find something from 5 seconds to 10 seconds or more (you may need to experiment) by adding the following line to the end of your library.sikuli script:

Summary

This article helped you debug your Sikuli scripts either by looking at the logs or by recording videos of your test runs.

Useful Links:
Visual Studio 2008 Test Types
Android Application Testing: Getting Started
Python Testing: Installing the Robot Framework

SharePoint 2013 Search

Packt
30 Jul 2013
11 min read
(For more resources related to this topic, see here.) Features of SharePoint 2013 – Result types and design templates Both result types and design templates are new concepts introduced in SharePoint 2013. Kate Dramstad, a program manager from the SharePoint search team at Microsoft, describes both concepts in a single, easy-to-remember formula: result types + design templates = rich search experience . When we perform a search we get back results. Some results are documents, others are pictures, SharePoint items, or just about anything else. Up until SharePoint 2010, all results, no matter which type they were, looked quite the same. Take a look at the following screenshot showing a results page from FAST for SharePoint 2010: The results are dull looking, can't be told apart, and in order to find what you are looking for, you have to scan the results up and down with your eyes and zero in on your desired result. Now let's look at how results are displayed in SharePoint 2013: What a difference! The page looks much more alive and vibrant, with easy distinguishing of different result types and a whole new hover panel, which provides information about the hovered item and is completely customizable. Display templates Search, and its related web parts, makes heavy use of display templates instead of plain old XSLT ( Extensible Stylesheet Language Transformations ). Display templates are basically snippets of HTML and JavaScript, which control the appearance and behavior of search results. SharePoint ships with a bunch of display templates that we can use out of the box, but we can also create our own custom ones. Similar to master pages, it is recommended to copy an existing display template that is close in nature to what we strive to achieve and start our customization from it. Customizing a display template can be done on any HTML editor, or if you choose, even Notepad. Once we upload the HTML template, SharePoint takes care of creating the companion JavaScript file all by itself. If we tear apart the results page, we can distinguish four different layers of display templates: The layers are as follows: Filters layer : In the preceding screenshot they are highlighted with the green border on the left and numbered 1. This layer shows the new refinement panel area that is not limited to text alone, but also enables the use of UX elements such as sliders, sliders with graphs, and so on. Control layer : In the preceding screenshot they are highlighted with the red border in the middle and numbered 2. This layer shows that not only results but also controls can be templated. Item layer : In the preceding screenshot they are highlighted with the orange border in the middle and numbered 3. This layer shows that each result type can be templated to look unique. For example, in the screenshot we see how a site result (the first result), conversation results (next three results), and image result (last one) looks like. Each result type has its own display template. Hover panel layer : In the preceding screenshot, they are highlighted with the blue border on the right and numbered 4. They are introduced in SharePoint 2013, the hover panel shows information on a hovered result. The extra information can be a preview of the document (using Office Web Apps), a bigger version of an image or just about anything we like, as we can template the hover panel just like any other layer. Display templates are stored in a site's master page gallery under the Display templates folder. 
Each one of these layers is controlled by display templates. But if design templates are the beauty, what are the brains? Well, that is result types. Result types Result types are the glue between design templates (UX—user experience) and the type of search result they template. You can think of result types as the brain behind the templating engine. Using result types enables administrators to create display templates to be displayed based upon the type of content that is returned from the search engine. Each result type is defined by a rule and is bound to a result source. In addition, each result type is associated with a single display template. Just like display templates, SharePoint ships with it a set of out of the box result types that match popular content. For example, SharePoint renders Word document results using the Item_Word.html display templates within any result source if the item matches the Microsoft Word type of content. However, if an item matches the PDF type of content, the result will be rendered using the Item_PDF.html display template. Defining a result type is a process much like creating a query rule. We will create our first result type and display template towards the end of the article. Both result types and display templates are used not only for search results, but also for other web parts as well, such as the Content Search Web Part. Styling results in a Content Search Web Part The Content Search Web Part (CSWP) comes in handy when we wish to show search-driven content to users quickly and without any interaction on their side. When adding a CSWP we have two sections to set: Search Criteria and Display Templates . Each section has its unique settings, explained as follows: The search criteria section is equivalent to the result type. Using the Query Builder we tell the web part which result type it should get. The Query Builder enables us to either choose one of the built-in queries (latest documents, items related to current user, and so on) or build our own. In addition, we can set the scope of the search. It can either be the current site, current site collection, or a URL. For our example, we will set the query to be Documents(System) , meaning it searches for the latest documents, and the scope to Current site collection : Next, we set the display template for the control holding the results. This is equivalent to the Control layer we mentioned earlier. The CSWP provides three control templates: List , List with Paging , and Slideshow . The control templates change the way the container of the items looks. To compare the different templates, take a look at how the container looks when the List template is chosen: And the following screenshot displays how the exact same list looks when the Slideshow template is chosen: Since our content is not images, rendering the control as Slideshow makes no sense. Last but not least, we set the Item display template. As usual, SharePoint comes with a set of built-in item templates, each designated for different item types. By default, the Picture on left, 3 lines on right item display template is selected. By looking at the preceding screenshot we can see it's not right for our results. Since we are searching for documents, we don't have a picture representing them so the left area looks quite dull. If we change the Item display template to Two lines we will get a much more suitable result: Display templates allow us to change the look of our results instantly. 
While playing around with the out-of-the-box display templates is fun, extending them is even better. If you look at the Two lines template that we chose for the CSWP, it seems kind of empty. All we have is the document type, represented by an icon, and the name of the document. Let's extend this display template and add the last modified date and the author of the document to the display. Creating a custom display template As we mentioned earlier, the best way to extend a display template is to copy and paste a template that is close in nature to what we wish to achieve, and customize it. So, as we wish to extend the Two lines template, open SharePoint Designer, navigate to Master Page Gallery | Display Templates | Content Web Parts of the site you previously added the CSWP, and copy and paste the Item_TwoLines.html file into the same folder. Rename the newly created file to Item_TwoLinesWithExtraInfo.html. As soon as you save the new filename, refresh the folder. You'll notice that SharePoint automatically created a new file named Item_TwoLinesWithExtraInfo.js. The combination of the HTML and JavaScript file is what makes the magic of display templates come to life. Edit the Item_TwoLinesWithExtraInfo.html file, and change its title to Two Lines with Extra Info. Getting the new properties The first code block we should discuss is the CustomDocumentProperties block. Let's examine what it holds between its tags: <mso:CustomDocumentProperties> <mso:TemplateHidden msdt_dt="string">0</mso:TemplateHidden> <mso:ManagedPropertyMapping msdt_dt="string">'Link URL'{Link URL}:'Path','Line 1'…</mso:ManagedPropertyMapping> <mso:MasterPageDescription msdt_dt="string">This Item Display Template will show a small thumbnail…</mso:MasterPageDescription> <mso:ContentTypeId msdt_dt="string">0x0101002039C03B61C64EC4A04F5361F385106603</ mso:ContentTypeId> <mso:TargetControlType msdt_dt="string">;#Content Web Parts;#</mso:TargetControlType> <mso:HtmlDesignAssociated msdt_dt="string">1</mso:HtmlDesignAssociated> <mso:HtmlDesignConversionSucceeded msdt_dt="string">True</mso:HtmlDesignConversionSucceeded> <mso:HtmlDesignStatusAndPreview msdt_dt="string">https://hippodevssp.sharepoint.com/search/_ catalogs/masterpage/Display%20Templates/Content%20Web%20Parts/Item_ TwoLinesWithExtraInfo.html, Conversion successful.</mso:HtmlDesignStatusAndPreview> </mso:CustomDocumentProperties> The most important properties from this block are: ManagedPropertyMapping : This property holds all the managed properties that our display template will have access to. The properties are organized in the key:value format. For example, if we wish to make use of the Author property, we will declare it as 'Author':'Author'. The value can be a list of managed properties, so if the first one is null, the mapping will be done using the second one, and so on. ContentTypeId : This property sets the content type of the display template. The specific value recognizes the file as a display template. TargetControlType : This property sets the target of the display template. In our example it is set to Content Web Parts , which means the search content web part and any other related search content web part. Other possible values are SearchBox , SearchHoverPanel , SearchResults , and so on. Since we wish to display the author and the last modified date of the document, let's add these two managed properties to the ManagedPropertyMapping property. 
Add the mappings at the beginning of the property, as follows:

<mso:ManagedPropertyMapping msdt_dt="string">'Author':'Author','LastModified':'LastModifiedTime',… </mso:ManagedPropertyMapping>

We mapped the Author managed property to the Author key, and the LastModifiedTime managed property to the LastModified key. Next, we will discuss how to actually use the new properties.

Getting the values of the new properties

Using the newly added properties is done with plain old JavaScript. Scroll down a bit until you see the following opening div statement:

<div id="TwoLines">

The div tag begins with what seems to be comment markup (<!--), but if you look closer you should recognize that it is actually JavaScript. By using built-in methods and client object model code, display templates can get any information out of SharePoint, and from the outside world. The getItemValue method is in charge of getting content back based on a given managed property. That means that if we wish to get the author of a result, and we set the key of the managed property to Author, the following line of code will get it:

var author = $getItemValue(ctx, "Author");

The same goes for the last modified date. We used the key LastModified for the managed property, and hence the following line of code will be used:

var last = $getItemValue(ctx, "LastModified");

Add these two lines just above the closing comment markup (_#-->). Remember that each result is rendered using this display template, so the author and last variables are unique to the one item being rendered.

From arrays to objects

Packt
30 Jul 2013
11 min read
(For more resources related to this topic, see here.) An array is just a list of values. Each value has an index (a numeric key) that starts from zero and increments by one for each value. > var myarr = ['red', 'blue', 'yellow', 'purple']; > myarr; ["red", "blue", "yellow", "purple"]. > myarr[0]; "red" > myarr[3]; "purple" If you put the indexes in one column and the values in another, you'll end up with a table of key/value pairs shown as follows: Key Value 0 red 1 blue 2 yellow 3 purple An object is similar to an array, but with the difference that you define the keys yourself. You're not limited to using only numeric indexes and you can use friendlier keys, such as first_name, age, and so on. Let's take a look at a simple object and examine its parts: var hero = { breed: 'Turtle', occupation: 'Ninja' }; You can see that: The name of the variable that refers to the object is hero Instead of [ and ], which you use to define an array, you use { and } for objects You separate the elements (called properties) contained in the object with commas The key/value pairs are divided by colons, as in key: value The keys (names of the properties) can optionally be placed in quotation marks. For example, these are all the same: var hero = {occupation: 1}; var hero = {"occupation": 1}; var hero = {'occupation': 1}; It's recommended that you don't quote the names of the properties (it's less typing), but there are cases when you must use quotes: If the property name is one of the reserved words in JavaScript  If it contains spaces or special characters (anything other than letters, numbers, and the _ and $ characters) If it starts with a number In other words, if the name you have chosen for a property is not a valid name for a variable in JavaScript, then you need to wrap it in quotes. Have a look at this bizarre-looking object: var o = { $omething: 1, 'yes or no': 'yes', '!@#$%^&*': true }; This is a valid object. The quotes are required for the second and the third properties, otherwise you'll get an error. Later in this chapter, you'll see other ways to define objects and arrays in addition to [] and {}. But first, let's introduce this bit of terminology: defining an array with [] is called array literal notation, and defining an object using the curly braces {} is called object literal notation. Elements, properties, methods, and members When talking about arrays, you say that they contain elements. When talking about objects, you say that they contain properties. There isn't any significant difference in JavaScript; it's just the terminology that people are used to, likely from other programming languages. A property of an object can point to a function, because functions are just data. Properties that point to functions are also called methods. In the following example, talk is a method: var dog = { name: 'Benji', talk: function () { alert('Woof, woof!'); } }; As you have seen in the previous chapter, it's also possible to store functions as array elements and invoke them, but you'll not see much code like this in practice: > var a = []; > a[0] = function (what) { alert(what); }; > a[0]('Boo!'); You can also see people using the word members to refer to properties of an object, most often when it doesn't matter if the property is a function or not. 
Hashes and associative arrays In some programming languages, there is a distinction between: A regular array, also called an indexed or enumerated array (the keys are numbers) An associative array, also called a hash or a dictionary (the keys are strings) JavaScript uses arrays to represent indexed arrays and objects to represent associative arrays. If you want a hash in JavaScript, you use an object. Accessing an object's properties There are two ways to access a property of an object: Using the square bracket notation, for example hero['occupation'] Using the dot notation, for example hero.occupation The dot notation is easier to read and write, but it cannot always be used. The same rules apply as for quoting property names: if the name of the property is not a valid variable name, you cannot use the dot notation. Let's take the hero object again: var hero = { breed: 'Turtle', occupation: 'Ninja' }; Accessing a property with the dot notation: > hero.breed; "Turtle" Accessing a property with the bracket notation: > hero['occupation']; "Ninja" Accessing a non-existing property returns undefined: > 'Hair color is ' + hero.hair_color; "Hair color is undefined" Objects can contain any data, including other objects: var book = { name: 'Catch-22', published: 1961, author: { firstname: 'Joseph', lastname: 'Heller' } }; To get to the firstname property of the object contained in the author property of the book object, you use: > book.author.firstname; "Joseph" Using the square brackets notation: > book['author']['lastname']; "Heller" It works even if you mix both: > book.author['lastname']; "Heller" > book['author'].lastname; "Heller" Another case where you need square brackets is when the name of the property you need to access is not known beforehand. During runtime, it's dynamically stored in a variable: > var key = 'firstname'; > book.author[key]; "Joseph" Calling an object's methods You know a method is just a property that happens to be a function, so you access methods the same way as you would access properties: using the dot notation or using square brackets. Calling (invoking) a method is the same as calling any other function: you just add parentheses after the method name, which effectively says "Execute!". > var hero = { breed: 'Turtle', occupation: 'Ninja', say: function () { return 'I am ' + hero.occupation; } }; > hero.say(); "I am Ninja" If there are any parameters that you want to pass to a method, you proceed as with normal functions: > hero.say('a', 'b', 'c'); Because you can use the array-like square brackets to access a property, this means you can also use brackets to access and invoke methods: > hero['say'](); This is not a common practice unless the method name is not known at the time of writing code, but is instead defined at runtime: var method = 'say'; hero[method](); Best practice tip: no quotes (unless you have to) Use the dot notation to access methods and properties and don't quote properties in your object literals. Altering properties/methods JavaScript allows you to alter the properties and methods of existing objects at any time. This includes adding new properties or deleting them. You can start with a "blank" object and add properties later. Let's see how you can go about doing this. An object without properties is shown as follows: > var hero = {}; A "blank" object In this section, you started with a "blank" object, var hero = {};. Blank is in quotes because this object is not really empty and useless. 
Although at this stage it has no properties of its own, it has already inherited some. You'll learn more about own versus inherited properties later. So, an object in ES3 is never really "blank" or "empty". In ES5 though, there is a way to create a completely blank object that doesn't inherit anything, but let's not get ahead too much. Accessing a non-existing property is shown as follows: > typeof hero.breed; "undefined" Adding two properties and a method: > hero.breed = 'turtle'; > hero.name = 'Leonardo'; > hero.sayName = function () { return hero.name; }; Calling the method: > hero.sayName(); "Leonardo" Deleting a property: > delete hero.name; true Calling the method again will no longer find the deleted name property: > hero.sayName(); "undefined" Malleable objects You can always change any object at any time, such as adding and removing properties and changing their values. But, there are exceptions to this rule. A few properties of some built-in objects are not changeable (for example, Math.PI, as you'll see later). Using the this value In the previous example, the sayName() method used hero.name to access the name property of the hero object. When you're inside a method though, there is another way to access the object the method belongs to: by using the special value this. > var hero = { name: 'Rafaelo', sayName: function () { return this.name; } }; > hero.sayName(); "Rafaelo" So, when you say this, you're actually saying "this object" or "the current object". Constructor functions There is another way to create objects: by using constructor functions. Let's see an example: function Hero() { this.occupation = 'Ninja'; } In order to create an object using this function, you use the new operator, like this: > var hero = new Hero(); > hero.occupation; "Ninja" A benefit of using constructor functions is that they accept parameters, which can be used when creating new objects. Let's modify the constructor to accept one parameter and assign it to the name property: function Hero(name) { this.name = name; this.occupation = 'Ninja'; this.whoAreYou = function () { return "I'm " + this.name + " and I'm a " + this.occupation; }; } Now you can create different objects using the same constructor: > var h1 = new Hero('Michelangelo'); > var h2 = new Hero('Donatello'); > h1.whoAreYou(); "I'm Michelangelo and I'm a Ninja" > h2.whoAreYou(); "I'm Donatello and I'm a Ninja" Best practice By convention, you should capitalize the first letter of your constructor functions so that you have a visual clue that this is not intended to be called as a regular function. If you call a function that is designed to be a constructor but you omit the new operator, this is not an error, but it doesn't give you the expected result. > var h = Hero('Leonardo'); > typeof h; "undefined" What happened here? There is no new operator, so a new object was not created. The function was called like any other function, so h contains the value that the function returns. The function does not return anything (there's no return), so it actually returns undefined , which gets assigned to h. In this case, what does this refer to? It refers to the global object. The global object You have already learned a bit about global variables (and how you should avoid them). You also know that JavaScript programs run inside a host environment (the browser for example). 
Now that you know about objects, it's time for the whole truth: the host environment provides a global object and all global variables are accessible as properties of the global object. If your host environment is the web browser, the global object is called window. Another way to access the global object (and this is also true in most other environments) is to use this outside a constructor function, for example in the global program code outside any function. As an illustration, you can declare a global variable outside any function, such as: > var a = 1; Then, you can access this global variable in various ways: As a variable a As a property of the global object, for example window['a'] or window.a As a property of the global object referred to as this: > var a = 1; > window.a; 1 > this.a; 1 Let's go back to the case where you define a constructor function and call it without the new operator. In such cases, this refers to the global object and all the properties set to this become properties of window. Declaring a constructor function and calling it without new returns "undefined" : > function Hero(name) { this.name = name; } > var h = Hero('Leonardo'); > typeof h; "undefined" > typeof h.name; TypeError: Cannot read property 'name' of undefined Because you had this inside Hero, a global variable (a property of the global object) called name was created: > name; "Leonardo" > window.name; "Leonardo" If you call the same constructor function using new, then a new object is returned and this refers to it: > var h2 = new Hero('Michelangelo'); > typeof h2; "object" > h2.name; "Michelangelo" The built-in global functions can also be invoked as methods of the window object. So, the following two calls have the same result: > parseInt('101 dalmatians'); 101 > window.parseInt('101 dalmatians'); 101 And, when outside a function called as a constructor (with new), also: > this.parseInt('101 dalmatians'); 101

The Command Line

Packt
30 Jul 2013
19 min read
(For more resources related to this topic, see here.) VSTest.Console utility In Visual Studio 2012, the VSTest.Console command line utility is used for running the automated unit test and coded UI test. VSTest.Console is an optimized replacement for MSTest in Visual Studio 2012. There are multiple options for the command line utility that can used in any order with multiple combinations. Running the command VSTest.Console /? at the command prompt shows the summary of available options and the usage message. These options are shown in the following screenshot: Running tests using VSTest.Console Running the test from the command prompt requires the expected parameters to be passed based on the options used along with the command. Some of the options available with VSTest.Console command are explained in the next few sections: The /Tests option This command is used to select particular tests from the list of tests in the test file. Specify the test names as parameters to the command, and separate the tests using commas when multiple tests are to be run. The next screenshot shows a couple of test methods that run from the test file: The output shows the Test Run result for each of the tests along with the messages, if any. The summary of the tests is also shown at the end of the results sections with the time taken for the test execution. The /ListTests option This command is used to list all available tests within the test file. The following screenshot lists the tests from one of the Test Project file: The next one is another command line utility, MSTest, which is used to run any automated tests. MSTest utility To access the MSTest tool, add the Visual Studio install directory to the path or open the Visual Studio Group from the Start menu, and then open the Tools section to access the Visual Studio command prompt. Use the command MSTest from the command prompt. The MSTest command expects the name of the test as parameter to run the test. Just type MSTest /help or MSTest /? at the Visual Studio command prompt to get help and find out more about options. The following table lists the different parameters that can be used with MSTest and the description of each parameter and its usage: Option Description /help This option displays the usage message for all parameters type /? or /h. /nologo This option disables the display of startup banner and the copyright message. /testcontainer:[file name] This option loads a file that contains tests; multiple test files can be specified to load multiple tests from the files, for example: /tescontainer:mytestproject.dll/testcontainer:loadtest1.loadtest /maxpriority:[priority] /minpriority:[priority] This option execute the tests with priority less than or equal to the value: /minpriority:0 /maxpriority:2. /category This filter is used to select tests and run, based on the category of each test. We can use logical operators (& and !) to construct the filters, or we can use the logical operators (| and &!) to filter the tests.   /category:Priority1 - any tests with category as priority1.   /category: "Priority1&MyTests"- any tests with multiple categories as priority1 and Mytests.   /category: "Priority1|Mytests" - Multiple tests with category as either Priority1 or MyTests.   /category:"Priority1&!MyTests" - Priority1 tests that do not have category MyTests /testmetadata:[file name] This option loads a metadata file. For example, /testmetadata:testproject1.vsmdi. /testsettings:[file name] This option uses the specified test settings file. 
For example, /testsettings:mysettings.testsettings. /resultsfile:[file name] This option saves the Test Run results to the specified file. for example, /resultsfile:c:tempmyresults.trx. /testlist:[test list path] The test list to run as specified in the metadata file; you can specify this option multiple times to run more than one test list. For example, /testlist:checkintests/clientteam. /test:[file name] This is the name of a test to be run; you can specify this option multiple times to run more than one test. /unique This option runs a test only if one unique match is found for any given /test. /noisolation This option runs a test within the MSTest.exe process. This choice improves Test Run speed, but increases risk to the MsTest process. /noresults This option does not save the Test Results in a TRX file; the choice improves Test Run speed, but does not save the Test Run results /detail:[property id] This parameter is used for getting value of additional property along with the test outcome. For example, the following command with the property is to get the error message from the Test Result: /detail:errormessage Running a test from the command line MSTest is only for automated tests. Even if the command is applied to a manual test, the tool will remove the non-automated test from the Test Run. The /testcontainer option The /testcontainer option requires the filename as parameter which contains information about tests that must be run. The /testcontainer file is an assembly that contains all the tests under the project, and each of the projects under a solution has its own container for the tests within the projects. For example, the next screenshot shows the list of tests within the container unittestproject1.dll. MSTest executes all the tests within the container and shows the result as well. The summary of the Test Result is as shown in the next screenshot: First, the MSTest will load all the tests within the project, then start executing them one by one. The result of each Test Run is shown but the detailed Test Run information is stored in the test trace file. The trace file can be loaded in Visual Studio to get the details of the Test Result. The /testmetadata option The /testmetadata option is used for running tests in multiple Test Projects under a solution. This is based on the metadata file, which is an XML file that has the list of all the tests created under the solution. The /testcontainer option is specific to a Test Project, whereas /testmetadata is for multiple test containers with the flexibility of choosing tests from each container. The /test option There are instances where running all the tests within a test container is not required. To specify only the required tests, use the /test option with the /testmetadata option or the /testcontainer option. For example, the following command runs only the CodedUITest1 test from the list of all tests: The /test option can be used along with /testmetadata or /testcontainer, but not both. There are different usages for the /test option: Any number of tests can be specified using the /test option multiple times against the /testmetadata or /testcontainer option. The name used against the /test option is the search keyword of the fully qualified test names. 
For example, if there are test names with fully qualified names such as: UnitTestProject1.UnitTest1.CalculateTotalPriceTest UnitTestProject1.UnitTest1.CalculateTotalPricewithTaxTest UnitTestProject1.UnitTest1.GetTotalPriceTest And if the command contains the option /test:UnitTestProject1, then all of the preceding three tests will run as the name contains the UnitTestProject1 string in it. Even though we specify only the name to the /test option, the result will display the fully qualified name of the tests run in the results window. The /unique option The /unique option will make sure that only one test which matches the given name, is run. In the preceding examples, there are different tests with the string UnitTestProject1 in its fully qualified name. Running the following command executes all the preceding tests: mstest /testcontainer:c:SatheeshSharedAppsEmployeeMaintenance UnitTestProject1bindebugunittestproject1.dll /test:Unittestproject1 But if the /unique option is specified along with the preceding command, the MSTest utility will return the message saying that more than one test was found with the same name. It means that the test will be successful only if the test name is unique. The following command will execute successfully as there is only one test with the name GetTotalItemPriceTest. The /noisolation option The /noisolation option runs the tests within the MStest.exe process. This choice improves the Test Run speed, but increases risk to the MSTest.exe process. Usually, the tests are run in a separate process that is allocated with separate memory from the system. By launching the MSTest.exe process with the /noisolation option, we avoid having a separate process created for the test. The /testsettings option The /testsettings option is used to specify the Test Run to use a specific test settings file. If the settings file is not specified, MSTest uses the default settings file. The following example forces the test to use the TestSettings1 settings file: The /resultsfile option In all the command executions, the MSTest utility stores the Test Results to a trace file. By default, the trace file name is assigned by MSTest using the login user ID, the machine name, and the current date and time. This can be customized to store the Test Results in a custom trace file using the /resultsfile option. For example, the next screenshot shows the custom trace file named as customtestresults.trx: The preceding screenshot shows the Test Results stored at the c:Satheesh location in the results file, customtestresult.trx. The /noresults option The /noresults option informs the MSTest application not to store the Test Results to the TRX file. This option increases the performance of the test execution. The /nologo option The /nologo option is to inform the MSTest tool not to display the copyright information that is usually shown at the beginning of the Test Run. The /detail option The /detail option is used for collecting the property values from each Test Run result. Each Test Result provides information about the test such as error messages, start time, end time, test name, description, test type, and many more. The /detail option is useful to get the property values after the Test Run. For example, the following screenshot shows the start and end time of the Test Run, and also the type of the Test Run: The /detail option can be specified multiple times to get multiple property values after the Test Run. 
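The screenshots showing the actual invocations are not reproduced here, but based on the option syntax described above, a combined run that applies a settings file, suppresses the banner, writes the results to a named trace file, and pulls the error message for each result might look like the following. The container and file paths are illustrative examples, not values taken from the original article:

mstest /nologo /testcontainer:unittestproject1.dll /testsettings:TestSettings1.testsettings /resultsfile:c:\temp\nightlyrun.trx /detail:errormessage

Conversely, replacing /resultsfile with /noresults would skip writing the trace file altogether in exchange for a faster run.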
Publishing Test Results Publishing Test Results is valid only if Team Explorer is installed, and if Visual Studio is connected to the Team Foundation Server (TFS). This is to publish the test data and results to the TFS Team Project. Please refer to Microsoft Developer Network (MSDN) for more information on installing and configuring TFS and Team Explorer. Test Results can be published using the command line utility and the various options along with the utility. The /publish option with MSTest will first run the test, and then set the flavor and platform for the test before publishing the data to the TFS. Some of these options are mandatory for publishing the Test Run details. The following are the different publishing options for the command line MSTest tool: The /publish option The /publish option should be followed by the uniform resource identifier (URI) of the TFS, if the TFS is not registered in the client. If it is registered, just use the name of the server to which the Test Result has to be published, as shown in the following command: /publish:[server name] Refer to the following examples: If the TFS Server is not registered in the client, then: /publish:http://MyTFSServer() If the TFS Server is registered with the client, then: /publish:MyTFSServer The /publishbuild option The /publishbuild option is used for publishing the builds. The parameter value is the unique name that identifies the build from the list of scheduled builds. The /flavor option Publishing the Test Rresults to TFS requires /flavor as mandatory. Flavor is a string value that is used in combination with the platform name, and should match with the completed build that can be identified by the /publishbuild option. The MSTest command will run the test, and then set the flavor and platform properties, before publishing the Test Run results to the TFS: /flavour:[flavour string value] For example: /flavor:Release /flavor:Debug The /platform option This is a mandatory string value used in combination with the /flavor option which should match the build option. /platform:[string value] For example: /platform:Mixed Platforms /platform:NET /platform:Win32 The /publishresultsfile option MSTest stores all the Test Results in the default trace files with the extension .trx. Using the /publishresultsfile option, the Test Results file can be published to TFS using the output/trace option. The name of the file is the input to this option. If the value is not specified, MSTest will publish the current Test Run trace file to TFS. /publishresultsfile:[file name string] For example, to publish the current Test Run trace file, use the /publishresultsfile option. To publish the Test Result, one can use a combination of different options we saw in previous sections, along with the option /publishresultsfile. The Test Results from the results file are published to the build output of the solution. The steps involved in publishing are to create the test, create a build definition, build the solution, execute the test, and then publish the result to the build output. Step 1 – create/use existing Test Project The following screenshot contains the solution EmployeeMaintenance. The solution contains a Test Project WebAndLoadTestProject1 with a web test WebTest2. The following screenshot shows the Test Project named WebAndLoadTestProject1: Step 2 – running the test On running the web test, by default the Test Result is stored in the trace file <file name>.trx. 
Step 3 – creating a build The /build service in Team Foundation Server has to be configured with a controller and agents. Each build controller manages a set of build agents. Unfortunately, the steps and the details behind creating the build types will not be covered in this article as it would be too long to discuss it. The following screenshot shows the /build service configured with controller and agents: To create the build definition using the Team Explorer, navigate to the Build definitions in Builds folder, under Team Project. Select new build definition, and then configure the options by choosing the projects in TFS and the local folder. In one of the steps, you can see the following screenshot for selecting the project and setting the configuration information for the build process: There are different configuration sections such as Required, Basic, and, Advanced, from where the project can be selected to include as part of this build definition setting such as build file formats, Agents Settings, work item creation on build failure, and other configurations. Step 4 – building the project Now that the project is created, configurations and properties are set, and we are ready to run the test, we will build and publish the Test Results. Select the New build definition and start the build queue process. The build service takes care of building the solution by applying the build definition, and on completion the result section shows the build summary. Step 5 – publishing the result So far, the test is run and the result is saved in the trace file, and also we have built the project using the build definition. The Test Run results should be published to the build. There are multiple options used for publishing the Test Results using the MSTest command line tool. The following command in the next screenshot publishes the Test Result to the specified build: The command line options used in the preceding screenshot shows the Test Result trace file, TFS Team Project, and build against which the Test Result should be published. The command line also has the platform and the flavor values matching the build configurations. After publishing the Test Results, if you open the build file, the test information along with the build summary is shown in the build summary. The information also contains a link to the trace file. TCM command line utility TCM is the command line utility used for importing automated tests to the Test Plan, running the test from the Test Plan, and then viewing a of tests and IDs corresponding to them. This utility is very useful if the IDE is not available. The /help or /? command is used to get the syntax and parameters for the tool. Following are the syntax and parameters for the tcm.exe tool: Importing tests to a Test Plan There wasn't any test case for the unit test, and running the test case was also from Visual Studio IDE. This section explains how to import the tests to a Test Plan and create the test cases automatically while importing through the command line. The Test Plans are created using the Test Manager to group the Test Suites and test cases. The following screenshot shows a few Test Plans created for the Team Project TeamProject1: The EmployeeMaintenance solution contains the unit Test Project UnitTestProject1 with a few methods out of which there are methods such as CalculateTotalPriceTest() and CalculateTotalPricewithTaxTest() with their category defined as TotalPrice. 
So far there are no test cases defined in any of the Test Plans in the Test Manger for these tests. Refer to the following screenshot: For any tests created using Visual Studio, the TCM utility can be used to import it to the Test Plan in Test Manager as test cases. The following command imports all tests with the category defined as TotalPrice from the UnitTestProject1 assembly into the Team Project TeamProject1. The category is defined to the tests to group it from all other available tests within the assembly. Refer to the following screenshot: The command execution result shows the summary of the import, along with the names of the tests matching the command parameters. Connect to the TeamProject1 using Test Manager and open any of the Test Plans within the project. On the Contents tab under the Plan option in Testing Center, click on Add from the toolbar in Test Suite section on the right. This will open up a new window to search for any available test cases to add to the Test Suite. By default, the Test Plan is the Test Suite, if no other Test Suite is created for the plan. In the new window, just click on the Run option to perform the default search with default parameters. You may notice that the search result shows two test cases in the name of the test methods which were imported from the Test Project. The test cases are named after the test method itself. Select either or both of the test cases and add them to the Test Suite. After adding the test case to the Test Suite and Test Plan, open the test case using the Open toolbar option. There won't be any step except the name of the test case and few other details. Include the details of the test steps to the test case, if required. Running tests in a Test Plan The tests cases associated with the tests can be run using the TCM command line utility without using the IDE. Whenever a test is run using the TCM, it requires additional information such as the environment and roles within the environment. Running the test case using TCM requires Test Points or the Test Suite, and the configuration information. TCM requires the IDs of the Test Plan, Test Suite, and configuration. The TCM command line can be used to retrieve all these details. To list all configurations from the Team Project, the TCM command is like the following result: The following is the command and output for listing all the Test Plans within the Team Project: To list all the Test Suites within the Plan, use the following TCM command with the options as shown in the next screenshot along with the Plan ID, collection, and the Team Project name. Use the Plan ID from the previous command output: Use the Config ID, Plan ID, and the Suite ID collected by using the TCM utility from the collection and the Team Project to run the test. This will create a run as shown in the following screenshot: The Test Run is created and the result can be viewed in Test Manager for analysis. Open the Test Manager and select the option Test under Testing Center. Select Analyze Test Runs from the menu bar. The Analyze Test Runs window shows the Test Runs for the Test Plan. The following screenshot shows a detailed view of the Test Run. The test is still in progress but you can see the test cases and the other details provided at the command: The Test Agent needs to be set up to run as a process instead of a service to run the automated tests to interact with desktop. Summary This article explained the use of multiple command line utilities such as VSTest. 
Console, MSTest, and TCM for running tests. These tools are very handy when no IDE is available, and compared to earlier versions of Visual Studio, many more features are exposed at the command line. The VSTest.Console utility comes with multiple options to run automated tests such as unit tests and Coded UI tests. The MSTest utility provides options for backward compatibility, along with multiple options to run automated tests and publish the results to Team Foundation Server. The TCM utility is used for importing tests and automatically creating test cases in Test Plans; it is very useful and saves a lot of manual work in Test Manager. Overall, these utilities provide a great deal of functionality at the command line and remove the dependency on the IDE.

Resources for Article:
Further resources on this subject:
Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [Article]
Visual Studio 2010 Test Types [Article]
Displaying SQL Server Data using a Linq Data Source [Article]

The Aliens Have Landed!

Packt
29 Jul 2013
28 min read
(For more resources related to this topic, see here.) The progression of testing Way back when testing used to be primarily manual, test cases were created and executed by developers or quality assurance team members. These test cases would comprise of anything from simple unit tests (testing single methods or classes of code) or integration tests (testing multiple components within code) or even functional tests (tests that ensure the system performs as required). As we began to develop differently, whether it was from the agile project methodology or extreme programming methodologies, we needed more robust tools to support our needs. This led to the advent of automated testing. Instead of a tester working with your application and running tests against it, they could simply press a few buttons, hit a few key strokes, and execute a 100 or 200 test case suite against your application to see the functionality. In some realms, something called a test harness was used. Test harnesses usually included running the compiled application in some kind of a sandbox environment that was probably something like production (this is after all the final binary that would be rolled out) that may or may not have pointed to a database (if it did, and it was smart, it probably pointed to a completely non-discreet database instance) to perform some level of testing. User input would be simulated and a report (possibly in some cryptic format that only few understood) would be generated indicating whether the application did what was expected or not. Since then, new tools such as JUnit, Selenium, SoapUI to name a few, have been introduced to add more functionality to your test cases. These are meant to drive both unit and functional testing of your application. They are meant to be standard tools, easy to use and reuse, and overall a platform that many developers can work with, and can be a desirable skill set for employers. Standardizing of tools also allows for more integrations to occur; it may be difficult to get leverage to build an integration with your own built tools, with many developers wanting an integration with widely used frameworks A and B. What is Arquillian If you haven't heard of Arquillian before (or are very new to it), this may be the section for you. Arquillian is a testing framework for Java that leverages JUnit and TestNG to execute test cases against a Java container. The Arquillian framework is broken up into three major sections: test runners (JUnit or TestNG), containers (Weld, OpenWebBeans, Tomcat, Glassfish, and so on), and test enrichers (integration of your test case into the container that your code is running in). ShrinkWrap is an external dependency for you to use with Arquillian; they are almost sibling projects. ShrinkWrap helps you define your deployments, and your descriptors to be loaded to the Java container you are testing against. The JUnit test container is used throughout. If you'd like to use TestNG with your code, you simply need to replace the JUnit container with the TestNG container, and have your test classes extend the Arquillian class found there. The JUnit test cases use a JUnit Runner to start Arquillian. The containers used will vary with each case. The Arquillian difference Arquillian can be considered a standardized test harness for JVM-based applications. 
It abstracts the container or application start-up logic away from your unit tests and instead drives a deployment runtime paradigm with your application, allowing you to deploy your program both from the command line and to a Java EE application server. Arquillian allows you to deploy your application to your targeted runtime to execute test cases. Your targeted runtime can be an embedded application server (or series of libraries), a managed application server (where Arquillian performs the calls necessary to start and stop the JVM), or even a remote application server (which can be local to your machine, remote in your corporate infrastructure, or even in the cloud).

Arquillian fits into certain areas of testing, which can vary based on the testing strategies for your application. If you are using Java EE 6, you may want to use an embedded CDI container (such as Weld) to unit test parts of your application. These tests could run hourly or every time someone commits a code change. You could also use Arquillian to automate your integration test cases, where you use a managed or embedded application server to run your application, or even just parts of your application. You can even use Arquillian to perform automated acceptance testing of your application, using other tools such as Selenium to drive requests through the user interface. This can also be used to smoke test deployments of applications.

Arquillian can be used with a wide variety of containers that support everything from JBoss 4.2 (slightly pre-Java EE 5) through Java EE 6, and it controls these containers in what Arquillian considers types: embedded, managed, and remote. Embedded application servers always run within the same JVM as your test cases. Managed containers run within a separate JVM and are started and stopped by Arquillian; a managed container will start on the first test that requires a deployment and stop once all tests have been executed, with no restart in between. Remote containers are, as the name implies, remote JVMs. This could be the same physical hardware that your tests run on or a remote piece of hardware that the application server runs on. The application server must already be running in order to deploy. Note that if there is a problem deploying, such as a managed application server that will not start or a remote application server that cannot be reached, Arquillian will fail once and assume that deployments will fail for the remaining test cases as well.

A few practical tips:

Do not mix the application servers used for automated testing with those you use for manual testing. Whether it requires a separate instance, a distinct domain, or a distinct profile (whichever your application server vendor supports), avoid mixing them. One of your biggest blockers may be your manually deployed application interfering with your automated testing application.

Even though remote application servers can be physically separated from your testing, they typically require the binaries to be locally available. Plan to have a copy of your application server available on your CI server(s) for this purpose.

Prepare your application for this kind of testing. Whether it is the automatic deployment or undeployment of resources (JMS queues, JDBC connections, users, and so on) or ensuring that the server is up and running (for example, a pre-build kill-and-restart process), make sure this can all happen from your build, either in your CI server or using scripts within your source repository.

Do not try to reuse application servers across applications.
If two test cases are running in parallel, you can run into inconsistent results.

The fundamentals of a test case

As our software has evolved, our strategy for testing it must evolve as well. We have become more dependent on techniques such as dependency injection (or inversion of control, IoC). When we take this into consideration, we realize that our testing has to change. Take a look at the following example:

@Test
public void testCalculationOfBusinessData() {
    CalculatorData cd = new CalculatorData(1, 3, 5);
    CalculatorService ms = new CalculatorServiceImpl();
    ms.calculateSum(cd);
    assertEquals(1 + 3 + 5, cd.getResult());
}

We can assume that the calculateSum method takes the int values passed into CalculatorData and sums them up. As a result, when we construct it using 1, 3, and 5, the total should come out to 9. This is a valid test case for our service layer, since our service layer knows what implementations exist and how they should be tested. If there was another implementation of CalculatorService that multiplied all results by 2, a separate test case or test class would exist to test that object. Let's say we look at the business layer that invokes this service object:

@Model
public class CalculatorController {

    @Inject
    private CalculatorService service;

    @Inject
    private CalculatorForm form;

    /** For the injected form, calculates the total of the input **/
    public void sum() {
        CalculatorData data = new CalculatorData(form.getX(), form.getY(), form.getZ());
        service.calculateSum(data);
        form.setSum(data.getCalculatedResult());
    }
}

This example uses JSR-330 annotations to inject references to CalculatorService, a service layer object that can perform basic calculator functions, and CalculatorForm, some sort of UI component with form input and output that can be read or written. If we want to test this class, we will immediately run into a problem: any invocation of the sum method outside of a JSR-330 (dependency injection for Java) container will result in a NullPointerException.

So what does Arquillian do to make our test case more legitimate? Let's take a look at the test case and review its anatomy to understand that better:

@RunWith(Arquillian.class)
public class CalculatorTest {

    @Deployment
    public static JavaArchive createArchive() {
        return ShrinkWrap.create(JavaArchive.class, "foo.jar")
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml")
                .addPackage(CalculatorData.class.getPackage());
    }

    @Inject CalculatorForm form;
    @Inject CalculatorController controller;

    @Test
    public void testInjectedCalculator() {
        form.setX(1);
        form.setY(3);
        form.setZ(5);
        controller.sum();
        assertEquals(9, form.getSum());
    }
}

There are a few pieces that make up this test case, each of which we'll review:

The @RunWith annotation: It tells JUnit to use the Arquillian runner for running this class.

The @Deployment annotation: It tells Arquillian to use the specified archive for deployment and testing purposes.

The injection points: In this case, CDI injection points represent the objects under test.

The test: This is where we process the actual test case. Using the injected objects, we simulate form input by inserting values for X, Y, and Z in the form, then invoke the controller's sum method, which would normally be called from your user interface. We then validate that the resulting sum matches our expectations.

What we gained was leveraging Arquillian to perform the same IoC injection that we would expect to see in our application.
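The article does not show the model or service implementation themselves, so for readers who want to run the snippets above, here is a minimal sketch of what they might look like. The class and method names come from the test and controller code; the field names, the setResult method, and the duplicated getResult/getCalculatedResult accessors are assumptions made purely so that both snippets compile. Each type would live in its own source file.

// Minimal data holder assumed by the tests.
public class CalculatorData {
    private final int x, y, z;
    private int result;

    public CalculatorData(int x, int y, int z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    public int getX() { return x; }
    public int getY() { return y; }
    public int getZ() { return z; }

    // The article's snippets use both getResult() and getCalculatedResult();
    // both are provided here so either snippet compiles.
    public void setResult(int result) { this.result = result; }
    public int getResult() { return result; }
    public int getCalculatedResult() { return result; }
}

// The service contract referenced by the controller and the tests.
public interface CalculatorService {
    void calculateSum(CalculatorData data);
}

// The summing implementation exercised by the first unit test.
public class CalculatorServiceImpl implements CalculatorService {
    @Override
    public void calculateSum(CalculatorData data) {
        data.setResult(data.getX() + data.getY() + data.getZ());
    }
}

With classes along these lines in place, both the plain JUnit test and the Arquillian-driven CalculatorTest shown above should compile and pass.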
In addition, we have the following dependencies within our Maven pom file:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.jboss.shrinkwrap.resolver</groupId>
            <artifactId>shrinkwrap-resolver-bom</artifactId>
            <version>2.0.0-alpha-1</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
        <dependency>
            <groupId>org.jboss.arquillian</groupId>
            <artifactId>arquillian-bom</artifactId>
            <version>${org.arquillian.bom.version}</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.10</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.jboss.spec</groupId>
        <artifactId>jboss-javaee-6.0</artifactId>
        <version>1.0.0.Final</version>
        <type>pom</type>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.jboss.shrinkwrap.resolver</groupId>
        <artifactId>shrinkwrap-resolver-impl-maven</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.jboss.arquillian.junit</groupId>
        <artifactId>arquillian-junit-container</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.jboss.arquillian.container</groupId>
        <artifactId>arquillian-weld-ee-embedded-1.1</artifactId>
        <version>1.0.0.CR3</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.jboss.weld</groupId>
        <artifactId>weld-core</artifactId>
        <version>1.1.8.Final</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-simple</artifactId>
        <version>1.6.4</version>
        <scope>test</scope>
    </dependency>
</dependencies>

Our dependencies are as follows:

Arquillian BOM: An overall bill of materials listing all artifacts for running Arquillian
ShrinkWrap: A separate API for creating deployments
JUnit and Arquillian JUnit Container: Arquillian support for working with JUnit
Arquillian Weld EE: An Arquillian container adapter; containers control what your deployment runtime should be

Testing profiles

One of the benefits of using Arquillian with Maven is that you can control how your tests execute using Maven profiles. With this approach, you can define a profile per container to test against. In some cases, you may want to test your application or library against multiple containers. This allows you to reuse deployments against different libraries. We still have one core limitation: you can only have one container adapter on your classpath at a time.

Let's suppose we take this same application but want to run it against Apache OpenWebBeans as well as Weld. Each Arquillian container has a default Maven profile configuration that can be used for testing. These containers are covered within the Arquillian documentation, found at https://docs.jboss.org/author/display/ARQ/Container+adapters.

The steps to try out containers are as follows:

Import the configuration defined by the container definition.
Run mvn clean install -Pconfiguration_name.

You can choose to set a default profile as well if you like. If you don't choose a default profile, you will need to specify one every time you want to run tests. Running the tests for this project against both the weld-ee-embedded-1.1 and openwebbeans-embedded-1 profiles should result in the same thing: a working test suite that is valid on both implementations. At the time of writing, I used Weld 1.1.8.Final and OpenWebBeans 1.1.3.
It is important to point out that these profile names are only useful if your application is designed purely for cross-platform testing and you want to run all test cases against each individual platform. If your application only targets a single platform, you may want to derive test cases that run on that platform as well as any subcomponents of that platform (for example, if you are a WebSphere v8 user, you may want your unit tests to run against OpenWebBeans and your integration tests against WebSphere; if you are a WebLogic user, you would want to use Weld and WebLogic for your testing).

Typically, when it comes to testing, you will use a distinct Maven profile to cover each stage of testing. You should set up a default Maven profile that runs only your basic tests (your unit tests); this will be set as activeByDefault. This profile should include any testing dependencies needed to run only these unit tests. You may optionally choose to run only certain parts of your test suite, which could be distinct packages under src/test/java or even standalone projects, children of your parent project, that are only run under certain circumstances. I prefer the former approach, since the use of conditional child projects can become confusing for developers.

Profiles are useful for conditional dependencies, since they include dependency and dependencyManagement sections in their pom files. You can also avoid dependency leaking. For example, most applications require the use of the full Java EE 6 APIs, but including these APIs with your Glassfish build will cause your tests to fail. Likewise, deploying to a managed application server may require different APIs than deploying to a remote application server.

Categorizing your test cases

One thing that Arquillian ultimately drives home is that names mean everything. There are two naming schemes that you should always follow; they relate to the questions "what" and "how". What components are you testing? What phase of testing are you in? How does this class relate to your test cases? How is this code being deployed? These questions really pop up due to the nature of Arquillian, and they show off its robustness.

Considering some of the main testing phases (unit test, integration test, system test, compatibility test, smoke test, acceptance test, performance test, usability test), each should relate to specific packages in your test code. Here is a proposed naming scheme. I will assume that your code starts with com.mycompany.fooapp, where com.mycompany is your company's domain and fooapp is the name of your application. You may have packages below this, such as com.mycompany.fooapp.model, com.mycompany.fooapp.controller, or even com.mycompany.fooapp.modulea.controller, all of which represent varying testing scenarios that you may consider.

com.mycompany.fooapp.test: This is the parent package for all test classes. There may or may not be any classes in this package.

com.mycompany.fooapp.test.utils: This is where all of your test utility code goes. This is where any deployments are actually built, though they are invoked from your test classes.

com.mycompany.fooapp.test.unit: This is where all unit tests should exist. The packages/classes under test should fall relatively under here. For example, com.mycompany.fooapp.test.unit.controller should test your controller logic.

com.mycompany.fooapp.test.integration: Likewise, all integration test cases should fall under here.
Following this pattern, you can derive your functional, acceptance, usability, and other test case package names, and you can easily define Maven profiles within your projects to test out your various layers. Let's suppose you want to define a unittest profile where the build runs all unit tests (usually lightweight tests that perhaps use an embedded container with Arquillian). You could do something like this:

<profile>
    <activation>
        <activeByDefault>true</activeByDefault>
    </activation>
    <id>unittest</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <configuration>
                    <includes>
                        <include>**/unit/**</include>
                    </includes>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>

This tells Maven to only run the tests in the unit test packages by default. This allows your builds to go by quickly and test the key components of your code when needed. Since the profile is active by default, these tests run any time you kick off mvn install or mvn test from the command line or from your continuous integration server.

Likewise, you may want to run more tests during your integration testing or system testing phases. These may or may not overlap with one another, but they would likely include your unit tests as well. You could use the following, very similar Maven profile to achieve that:

<profile>
    <id>integrationtest</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <configuration>
                    <includes>
                        <include>**/unit/**</include>
                        <include>**/integration/**</include>
                        <include>**/system/**</include>
                        <include>**/regression/**</include>
                    </includes>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>

With the way this profile is configured, any time you invoke mvn test -Pintegrationtest you will run all unit, integration, system, and regression test cases. These tests will probably take a while. If you're lucky and your application server supports an embedded mode, it probably has a quick start up/shutdown. These tests may be running against a remote container, though, so deployment and execution may take longer.

Enriching your tests

One of the core principles noted in the demo test case is that we injected a reference to the business object being tested, which itself had injection points that were satisfied as well. There are three core injection points supported by default with Arquillian, though others are supported by extensions, along with some custom injection points defined by Arquillian itself. They are as follows:

@Resource: It defines references to any JNDI entry as per standard naming and injection schemes.

@EJB: As long as you are including a deployed EJB in your archive, you can refer to it, provided your target container includes EJB support. EJB JNDI entry conventions remain intact. It is also worth noting that Local and Remote interfaces matter. When you deploy to a same-JVM application server or a remote application server, you must ensure that your injected test cases can see the EJB reference appropriately.

@Inject: This is available as long as you have deployed a valid bean archive (which requires beans.xml in META-INF or WEB-INF).

Running out of the container

Sometimes you will need to mix your Arquillian tests with non-Arquillian tests, or sometimes you have special test cases that require a deployment but should not be run with that deployment (simulating a remote client to that deployment, for example).
This may include a set of unit tests that have deployments to the container and some that do not require one. You can run a test outside of the container JVM by using the @RunAsClient annotation. This can be done at the class level or at the method level, allowing you to have multiple run types within a single test class. This approach is useful if you want to test a web service client, either SOAP or a REST API, ensuring that your client executes outside of the container JVM.

One of the custom injection points that Arquillian supports is @ArquillianResource URL baseUrl; this represents the base URL of a remote container deployment that you can use in your client-level tests. The ArquillianResource annotation supports an additional value attribute that can be the class of a deployed Servlet, in case you have multiple deployments occurring as part of your test. Once you have the URL for your application deployment, you can build the location of your web services so that your client application can use them for testing purposes. This would also be used if you wanted to functionally test your application via HTTP, perhaps using Selenium as the driver for the application. The other types of objects available for injection using this annotation are the Deployer used and the InitialContext of the remote application server. The Deployer gives you access to the underlying deployment, while InitialContext allows you to look up remote objects, for example EJBs, JMS Queues/Topics/ConnectionFactories, or any other remotely accessible resource.

Efficient test authoring

Now that we have a test that can run against multiple containers, we need to start adding more test cases to our suite. When we do this, we have to keep two key principles in mind: don't repeat yourself, and don't bloat your software. In many of our applications, we have a number of components that make up various layers. Some of them are dependent on one another, others are more generic. One of the key things to think about when planning your Arquillian testing is how your JAR files should look. Let's suppose we have an entire data object layer that has dependencies throughout the application. We must have those classes in every test deployment. However, we can usually include only the specific controllers and business beans needed by a particular test case.

Remember to create utilities to define your object structure; this gives you a single entry point for creating your deployment archive and allows for better extensibility, as the usage sketch after the prototype class below shows. Here is a prototype class that can be used to start; it supports creating both a JAR file and a full web application:

public class TestUtils {

    public static JavaArchive createBasicJar() {
        return ShrinkWrap.create(JavaArchive.class, "test.jar")
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml")
                .addPackages(false, getCorePackages());
    }

    public static WebArchive createWebApp() {
        return ShrinkWrap.create(WebArchive.class, "test.war")
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
                .addPackages(false, getCorePackages());
    }

    public static Package[] getCorePackages() {
        return new Package[]{CalculatorData.class.getPackage()};
    }
}

What these methods provide is a significant reduction in the code that is impacted by a package change or by code refactoring in your primary code base.
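As a usage sketch, a test's @Deployment method can then stay a one-liner and simply append whatever extra classes that particular test needs. The test class name below is made up for illustration, and because getCorePackages already pulls in the model's package, the extra addClass call is purely illustrative:

import static org.junit.Assert.assertNotNull;

import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class CalculatorControllerIT {

    @Deployment
    public static WebArchive deploy() {
        // Start from the shared archive and add only what this test needs.
        return TestUtils.createWebApp()
                .addClass(CalculatorController.class);
    }

    @Inject
    CalculatorController controller;

    @Test
    public void controllerIsInjected() {
        assertNotNull(controller);
    }
}

The design benefit is that when packages are renamed or classes move, only TestUtils changes; the individual test classes keep their one-line deployment methods.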
Let's suppose that we want to extend getCorePackages to also take in a number of classes, each of which, when added, adds its entire package to the deployment:

public static Package[] getCorePackages(Class<?>... classes) {
    List<Package> packages = new ArrayList<Package>();
    if (classes != null) {
        for (Class<?> c : classes) {
            packages.add(c.getPackage());
        }
    }
    packages.add(CalculatorData.class.getPackage());
    return packages.toArray(new Package[]{});
}

One benefit of this approach is that anyone who was using getCorePackages() does not need to change their code, since the argument is not required.

Note that the ShrinkWrap API has several addPackage/addPackages methods. The one used here has a Boolean first argument indicating whether to recurse through child packages to find your code. Going back a few pages to the naming conventions I proposed, what would happen if you added the package com.mycompany.fooapp to your bundle? All of your application classes, including test classes, would be added to the deployment you are creating. This is probably not what you would expect; as a result, the recommendation is to not recurse into child packages and instead list out each package you want added explicitly.

Another option to consider is to have your test classes delegate their work. Commonly thought of as the façade programming paradigm, this can be applied to your test code as well. If you have code that should be tested distinctly but with different deployments, you may want to use distinct deployment objects but reuse your test case. This may involve using a controller-type test case that simply delegates its test calls to another object that is meant purely to handle the testing logic. Your controller would have methods annotated with @Test and include your @Deployment(s), but would delegate logic to another class, potentially an injected test executor class.

ShrinkWrap – building your own app

One of the more curious things about Arquillian is that your test case is responsible for constructing the application to be deployed. When you're working with Maven this is especially odd, since all of the information needed to build the application is there, either implicitly based on the project structure or explicitly listed in your pom file. There are two realizations around this that are important:

Your test cases are meant to test anything from a subset of your code to your entire application. Arquillian is flexible enough to handle both extremes.

Arquillian works great with Maven, but it also works with other build tools such as Ant and Gradle.

In order to support the dynamic generation of your application, the ShrinkWrap project exists to help dynamically build Java archive files. There are four primary archive types supported in ShrinkWrap: Java Archives (plain JAR files), Web Archives (WAR files), Enterprise Archives (EAR files), and Resource Adapters (RARs). Your Arquillian test case can declare any number of these archive files to be created for testing purposes.

Another place where ShrinkWrap helps is the creation of deployment descriptors. These could be application.xml files, persistence.xml files, or any of the standard deployment descriptors that you would use in your application. Likewise, it has extensibility built in to allow the creation of new descriptors in a programmatic fashion. This article assumes that you are using the ShrinkWrap 2.0 APIs; one of the key features added is support for resolving dependency files by reading a Maven pom file.
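For illustration, here is a rough sketch of that pom-based resolution using the ShrinkWrap Maven resolver's fluent API. The API shown matches the 2.0 final releases and may differ slightly in the alpha version referenced in the pom above, and the commons-lang3 coordinate is just a placeholder dependency:

import java.io.File;

import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.jboss.shrinkwrap.resolver.api.maven.Maven;

public class ResolverExample {

    public static WebArchive createWebAppWithLibraries() {
        // Resolve a dependency (and its transitive dependencies) using the
        // versions declared in the project's pom.xml.
        File[] libraries = Maven.resolver()
                .loadPomFromFile("pom.xml")
                .resolve("org.apache.commons:commons-lang3")
                .withTransitivity()
                .asFile();

        // Attach the resolved JARs as WEB-INF/lib entries of the test archive.
        return ShrinkWrap.create(WebArchive.class, "test.war")
                .addAsLibraries(libraries);
    }
}

This keeps third-party libraries out of your test sources while still letting the deployed test archive mirror the real application's WEB-INF/lib.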
Another key thing you need to do is modify your test classpath to include some files from your core application. Here is an example of what to do, from another project I was working on recently:

<testResources>
    <testResource>
        <directory>src/test/resources</directory>
    </testResource>
    <testResource>
        <directory>src/main/resources</directory>
    </testResource>
    <testResource>
        <directory>src/main/webapp/WEB-INF</directory>
        <targetPath>web</targetPath>
    </testResource>
</testResources>

This will make it easier to reference your main application resources without requiring full paths or file manipulation.

ShrinkWrap also provides ways to build deployment descriptors. This is a programmatic approach to adding the descriptor to your deployment, including programmatically creating the descriptor. Because of the need to test using the same descriptors that ship with the production application, I have found it easier to reference an existing descriptor. However, in some cases it may make more sense to use one customized to your test applications. In that scenario, I would strongly recommend creating a utility method to build the descriptor. To do this, we add the following to the dependencyManagement section of our Maven pom.xml:

<dependency>
    <groupId>org.jboss.shrinkwrap.descriptors</groupId>
    <artifactId>shrinkwrap-descriptors-bom</artifactId>
    <version>2.0.0-alpha-4</version>
    <scope>import</scope>
    <type>pom</type>
</dependency>

Then declare the following dependency:

<dependency>
    <groupId>org.jboss.shrinkwrap.descriptors</groupId>
    <artifactId>shrinkwrap-descriptors-impl-javaee</artifactId>
</dependency>

Then add the following methods to our code:

public static StringAsset createWebXml() {
    return new StringAsset(Descriptors.create(WebAppDescriptor.class).exportAsString());
}

public static WebArchive createWebApp() {
    return ShrinkWrap.create(WebArchive.class, "test.war")
            .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
            .addAsWebInfResource(createWebXml(), "web.xml")
            .addPackages(false, getCorePackages());
}

This will generate the web.xml file expected for your test case. One of the benefits of using the descriptors is that we can actually import the one from our application and make changes as needed:

public static StringAsset createWebXml() {
    WebAppDescriptor descriptor = Descriptors.importAs(WebAppDescriptor.class)
            .fromFile(new File("web/web.xml"));
    descriptor.createWelcomeFileList().welcomeFile("someFile.jsp");
    return new StringAsset(descriptor.exportAsString());
}

As a result, this will use your base web.xml file but change the welcome file to something else. You can do this with a number of other descriptors, including persistence.xml, beans.xml, web-fragment.xml, and so on.

Getting the most from Arquillian

Arquillian does take some understanding to work with. In order to get the most from it, you have to work with it and invest time. Your standard rules still apply: Arquillian does not use any special rules when it comes to processing deployments. The rules about deployment descriptors, archive contents, and so on still apply. Keep to the rule of thumb that if it deploys to your application server, then you can deploy the same archive via Arquillian; just make sure that you are deploying the same archive. Note that you can use the archive's toString method to print out its contents when in doubt. This method accepts a formatter as well, to make the contents easier to read. Alternatively, you can export an archive using archive.as(ZipExporter.class).exportTo(File) if you want to manually review the file.
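As a small debugging aid, both ideas can be wrapped in a helper and called from a @Deployment method while you are diagnosing a packaging problem. This is only a sketch; the class name and the target/ output path are arbitrary choices:

import java.io.File;

import org.jboss.shrinkwrap.api.Archive;
import org.jboss.shrinkwrap.api.exporter.ZipExporter;
import org.jboss.shrinkwrap.api.formatter.Formatters;

public class ArchiveDebugUtils {

    public static void dump(Archive<?> archive) {
        // Print a verbose listing of everything packaged in the archive.
        System.out.println(archive.toString(Formatters.VERBOSE));

        // Write the archive to disk so it can be opened with any ZIP tool;
        // the boolean argument allows overwriting an existing file.
        archive.as(ZipExporter.class)
                .exportTo(new File("target/debug-" + archive.getName()), true);
    }
}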
Run as many tests as you can with Arquillian. Due to Arquillian's nature, you're going to start finding inconsistencies in your code if you don't test it with Arquillian. This could include unexpected dependency injection behavior, which Arquillian will process for you. Since Arquillian executes your code the way the application server would as requests come in, it makes your tests more consistent with how the code behaves in the real world. Testing more in Arquillian, even if it is just using a basic CDI container or OpenEJB container, will allow you to test more effectively. You make the best use of Arquillian when you use it throughout 100 percent of your test cases.

Finally, my last key piece of advice for getting the most from Arquillian is to remember not to over-complicate your build to make use of Arquillian. Arquillian works fine as a part of your standard build. It has no requirement to create distinct projects for each build type or to make overly complicated build structures on top of your application. If you are attempting to run Arquillian with your continuous integration server, then you must ensure that different test cases run either as different steps of the build or as separate build jobs.

Arquillian extensions

There are a number of extensions available for Arquillian; they are designed to extend Arquillian's functionality for domain-specific testing:

Persistence: use DBUnit and validate the results of interacting with the database.
REST: invoke REST APIs from Arquillian that were deployed as a part of the test case.
Spring: use a Spring context and additional Spring libraries with your test cases.
Drone/Selenium: functionally test your web applications using Arquillian.

Summary

My main goal in this article was to give you an overview of Arquillian and introduce you to some of its core concepts.

Resources for Article:

Further resources on this subject:
So, what is Spring for Android? [Article]
Web Services Testing and soapUI [Article]
SOAP and PHP 5 [Article]

Performance enhancements to ASP.NET

Packt
26 Jul 2013
40 min read
ASP.NET performance

Performance is one of the primary goals for any system. For a server, throughput over time actually specifies the performance of the hardware and software in the system. It is important to increase performance while decreasing the amount of hardware used to achieve that throughput; there must be a balance between the two. Performance is one of the key elements of web development, and in the last phase of ASP.NET 4.5 it was one of Microsoft's key concerns. They made a few major changes to the ASP.NET system to make it perform better. Performance starts at CPU utilization and reaches all the way back to the actual code you write. Each CPU cycle you consume to produce your response affects performance, and consuming a large number of CPU cycles will force you to add more and more CPUs to avoid site unavailability. As we move more and more towards cloud systems, performance is directly related to cost: CPU cycles cost money. Hence, to make a cost-effective system running on the cloud, unnecessary CPU cycles should always be avoided.

.NET 4.5 addresses the problem at its core with support for background GC. Background GC for the server introduces support for concurrent collection without blocking threads; hence the performance of the site is not compromised by garbage collection either. The multicore JIT, in addition, improves the start-up time of pages without additional work. Some of these improvements are quite tangible to developers as well as end users. They can be categorized as follows:

CPU and JIT improvements
ASP.NET feature improvements

The first category is generally intangible while the second is tangible. The CPU and JIT improvements, as we have already discussed, are related to server performance. JIT compilation improvements are not tangible to developers, which means they work automatically on the system without any code change, while the second category is actually related to code. We will focus here mainly on the tangible improvements in this recipe.

Getting ready

To get started, let us start Visual Studio 2012 and create an ASP.NET project. If you are opening the project for the first time, you can choose the ASP.NET Web Forms Application project template. Visual Studio 2012 comes with a cool template which virtually creates the layout of a blank site. Just create the project and run it and you will be presented with a blank site with all the default behaviors you need. This is done without writing a single line of code.

Now, if you look into Solution Explorer, the project is separated into folders, each with its own purpose. For instance, the Scripts folder includes all the JavaScript associated with the site. You can also see the Themes folder in Content, which includes the CSS files. Generally, for production-level sites, we have large numbers of JavaScript and CSS files that are sometimes very big, and they are downloaded to the client browser when the site is initially loaded. We specify each file's path using a script or link tag. If you are familiar with web clients you will know that websites request these files in separate requests after getting the HTTP response for the page. As browsers limit parallel downloads, the download of each file adds to the overall response time. Even at the completion of each download there is a pause, which we call network latency.
So, if you look at the entire page response of a website, you will see that a large amount of the response time is actually consumed by the download of these external files rather than by the actual page response. Let us create a page on the website, add a few JavaScript files, and look at the response time using Fiddler:

The preceding screenshot shows how the browser requests the resources. Notice that the first request is the actual request made by the client, which takes half a second, but the second half of that second is consumed by the requests triggered by the response from the server. The server responds with a number of CSS and JavaScript references in the header, which are eventually requested from the same web server one by one. Sometimes, if the JavaScript is heavy, it takes a lot of time to load these individual files on the client, which delays the response time for the web page. It is the same with images too. Even though external files are downloaded in separate streams, big images hurt the performance of the web page as well:

Here you can see that the source of the page contains calls to a number of files, each corresponding to a request. When the HTML is processed in the browser, it invokes each of these file requests one by one and replaces the document references with the downloaded files. As I have already mentioned, referencing more and more resources reduces the performance of a page; this huge number of requests makes the website very slow. The screenshot depicts the actual number of requests, the bytes sent and received, and the timing in seconds. If you look at some big applications, the performance of the page is reduced by a lot more than this.

To address this problem we take the following two approaches:

Minimizing the size of JavaScript and CSS by removing whitespace, newlines, tab spaces, and so on, and omitting unnecessary content
Bundling all the files of the same MIME type into one file to reduce the number of requests made by the browser

ASP.NET addresses both of these problems and introduces a new feature that can both minify the content of the JavaScript and CSS files and bundle all the JavaScript or CSS files together to produce one single file request from the site. To use this feature you need to first install the package. Open Visual Studio 2012 and select View | Other Windows | Package Manager Console as shown in the following screenshot. Package Manager Console will open the PowerShell window for package management inside Visual Studio:

Once the package manager is loaded, type the following command:

Install-Package Microsoft.Web.Optimization

This will load the optimization libraries inside Visual Studio. Alternatively, rather than opening Package Manager Console, you can open the NuGet package manager by right-clicking on the References folder of the project and selecting Add Library Package Reference. This will produce a nice dialog box to select and install the appropriate package.

In this recipe, we are going to cover how to take advantage of bundling and minification of website content in .NET 4.5.

How to do it...

In order to move ahead with the recipe, we will use a blank web solution instead of the template web solution that I created just now. To do this, start Visual Studio and select the ASP.NET Empty Web Application template. The project will be created without any pages but with a web.config file (a web.config file is similar to app.config but works in web environments). Add a new page to the project and name it home.aspx.
Leave it as it is, and go ahead by adding a folder to the solution and naming it Resources. Inside Resources, create two folders: one for JavaScript named js and another for stylesheets named css. Create a few JavaScript files inside the js folder and a few CSS files inside the css folder. Once you finish, the folder structure will look like the following:

Now let us add the files to the home page. Just drag-and-drop the js files one by one into the head section of the page; the IDE will produce the appropriate tags for scripts and CSS automatically. Now run the project. You will see that the CSS and the JavaScript are loaded correctly. To check, try using Fiddler. When you view the source of the page, you will see links that point to the JavaScript, CSS, or other resource files. These files are directly linked from the source, and hence if we navigate to them, the raw content of the JavaScript is shown.

Open Fiddler and refresh the page, keeping it open in Internet Explorer in debug mode. You will see that the browser issues four requests. Three of them are for external files and one is for the actual HTML page. Fiddler shows how the timeline of the requests is laid out: the first request is for the home.aspx file, while the others are automatically invoked by the browser to get the js and css files. You can also take note of the total size of the whole combined request for the page.

Let's close the browser, remove the references to the js and css files from the head tag where you dragged them, and add the following code to reference the folders rather than individual files:

<script src="Resources/js"></script>
<link rel="stylesheet" href="Resources/css" />

Open the Global.asax file (add it if it is not already there) and write the following line in Application_Start:

void Application_Start(object sender, EventArgs e)
{
    // Adds the default behavior
    BundleTable.Bundles.EnableDefaultBundles();
}

Once the line has been added, you can run the project and see the output. If you now look at the result in Fiddler, you will see that all the files inside the scripts folder are combined into a single file and the whole file is downloaded in a single request. If you have a large number of files, the bundling will show a considerable performance gain for the web page.

Bundling is not the only performance gain that we have achieved using the optimization library. Press F5 to run the application and try to look into the actual files that have been downloaded as js and css. You will see that the bundle has already been minified by discarding comments, blank spaces, new lines, and so on. Hence, the size of the bundles has also been reduced physically.

You can also add your own custom BundleTable entries. Generally, we add them inside the Application_Start section of the Global.asax file, like so:

Bundle mybundle = new Bundle("~/mycustombundle", typeof(JsMinify));
mybundle.AddFile("~/Resources/Main.js");
mybundle.AddFile("~/Resources/Sub1.js");
mybundle.AddDirectory("/Resources/Files", "*.js", false);
BundleTable.Bundles.Add(mybundle);

The preceding code creates a new bundle for the application that can be referenced later on. We can use AddFile to add individual files to the Bundle, or we can use AddDirectory to specify a whole directory with a particular search pattern. The last argument of AddDirectory specifies whether it needs to search subdirectories or not. JsMinify is the default rule processor for JavaScript files.
Similar to JsMinify, a class called CssMinify acts as the default rule for CSS minification. You can reference your custom bundle directly inside your page using the following directive:

<script src="mycustombundle" type="text/javascript" />

You will notice that the directive points directly to the custom bundle that has been created.

How it works...

Bundling and minification work through the introduction of the System.Web.Optimization namespace. BundleTable is a new class inside this namespace that keeps track of all the bundles that have been created in the solution. It maintains the list of all Bundle objects, that is, lists of JavaScript or CSS files, in a key-value pair collection. Once a request for a bundle is made, HttpRuntime dynamically combines the files and/or directories associated with the bundle into a single file response. Let us consider some of the other types that help in the transformation:

BundleResponse: This class represents the response after the resources are bundled and minified. BundleResponse keeps track of the actual response of the combined file.

IBundleTransform: This type specifies the contract for a transformation. Its main purpose is to provide the transformation for a particular resource. JsMinify and CssMinify are the default implementations of IBundleTransform.

Bundle: This class represents a resource bundle with a list of files or directories.

The IBundleTransform type specifies the rule for producing the BundleResponse class. To implement custom rules for a bundle, we need to implement this interface:

public class MyCustomTransform : IBundleTransform
{
    public void Process(BundleResponse bundleresponse)
    {
        // Write logic here to bundle and minify the response content.
    }
}

Here, the BundleResponse class is the actual response that we need to write the minified output to. Basically, the application uses the default BundleHandler class to initiate the transform. BundleHandler is an IHttpHandler that uses ProcessRequest to get the response for the request made by the browser. The process is summarized as follows:

HttpRuntime calls the default BundleHandler.ProcessRequest method to handle the bundling and minification request initiated by the browser.
ProcessRequest gets the appropriate bundle from the BundleTable class and calls Bundle.ProcessRequest.
The Bundle.ProcessRequest method first retrieves the bundle's URL and invokes Bundle.GetBundleResponse.
GetBundleResponse first performs a cache lookup. If there is no cached response available, it calls GenerateBundleResponse.
The GenerateBundleResponse method creates an instance of BundleResponse, sets the files to be processed in the correct order, and finally invokes IBundleTransform.Process.
The response is then written to BundleResponse from this method and the output is sent back to the client.

The preceding flow diagram summarizes how the transformation is handled by ASP.NET. The final call to IBundleTransform returns the response back to the browser.

There's more...

Now let's talk about some other options, or possibly some pieces of general information, that are relevant to this task.

How to configure the compilation of pages in ASP.NET websites

Compilation also plays a very vital role in the performance of websites. As we have already mentioned, background GC is available to servers with the .NET 4.5 release, which means that when the GC starts collecting unreferenced objects, there is no suspension of executing threads on the server. The GC can collect in the background.
The support for multicore JIT also improves the performance of code that has not yet been JIT-compiled. By default, .NET 4.5 supports multicore JIT. If you want to disable this option, you can use the following configuration:

<system.web>
    <compilation profileGuidedOptimizations="None" />
</system.web>

This configuration disables spreading the JIT across multiple cores. The server also enables a Prefetcher technology, similar to what Windows uses, to reduce the disk read cost of paging during application startup. The Prefetcher is enabled by default; you can disable it using the following:

<system.web>
    <compilation enablePrefetchOptimization="false" />
</system.web>

This setting disables the Prefetcher technology for the ASP.NET site. You can also configure your server to directly control how much memory the GC uses:

<runtime>
    <performanceScenario value="HighDensityWebHosting" />
</runtime>

The preceding configuration marks the website as a high-density website. This reduces the amount of memory consumed per session.

What is unobtrusive validation

Validation plays a very vital role for any application that accepts user input. We generally use ASP.NET data validators to specify validation for a particular control; validation forms the basis of any input. People use the validator controls available in ASP.NET (which include RequiredFieldValidator, RangeValidator, and so on) to validate controls when a page is submitted, when a control loses focus, or on whatever event the validator is associated with. Validators are among the most popular server-side controls; they handle client-side validation by producing an inline JavaScript block inside the actual page for each validator. Let us take an instance:

<asp:TextBox ID="Username" runat="server"></asp:TextBox>
<asp:RequiredFieldValidator ErrorMessage="Username is required!" ControlToValidate="Username" runat="server"></asp:RequiredFieldValidator>
<asp:RegularExpressionValidator ErrorMessage="Username can only contain letters!" ControlToValidate="Username" ValidationExpression="^[A-Za-z]+$" runat="server"></asp:RegularExpressionValidator>

The validators handle both client-side and server-side validation. When the preceding lines are rendered in the browser, they produce a mess of inline JavaScript. .NET 4.5 uses unobtrusive validation instead: the inline JavaScript is replaced by data attributes in the HTML. This produces plain, HTML-only markup that performs better than the inline script and is also much more understandable, neat, and clean. You can turn off the default behavior for the application just by adding this line to Application_Start in the Global.asax file:

void Application_Start(object sender, EventArgs e)
{
    // Disable UnobtrusiveValidation application wide
    ValidationSettings.UnobtrusiveValidationMode = UnobtrusiveValidationMode.None;
}

The preceding code will disable the feature for the application.

Applying appSettings configuration key values

Microsoft has implemented the ASP.NET web application engine in such a way that most of its configuration can be overridden by developers while developing applications. There is a special configuration file named Machine.config that provides the default configuration of each of the config sections for every application, while web.config is specific to an application hosted on IIS. IIS reads the configuration of each directory to apply it to the pages inside it.
As configuring a web application is such a basic need for any application, there is always a demand for templates covering a specific set of configuration values without rewriting the whole section inside web.config again, and there are some specific developer requirements that should be easily customizable without changing too much in the config. ASP.NET 4.5 introduces magic strings that can be used as configuration key values in the appSettings element to give special meaning to the configuration. For instance, if you want the built-in JavaScript encoding to encode the & character, you might use the following:

<appSettings>
    <add key="aspnet:JavaScriptDoNotEncodeAmpersand" value="false" />
</appSettings>

This will ensure that the & character is encoded as "\u0026", which is the JavaScript-escaped form of that character. When the value is true, the default JavaScript string encoding will not encode &.

On the other hand, if you need to allow ScriptResource.axd to serve arbitrary static files other than JavaScript, you can use another magic appSettings key to handle this:

<appSettings>
    <add key="aspnet:ScriptResourceAllowNonJsFiles" value="false" />
</appSettings>

This configuration will ensure that ScriptResource.axd does not serve any file other than those with the .js extension, even if the web page references one. Similarly, you can also enable UnobtrusiveValidationMode for the website using a separate magic string in appSettings:

<appSettings>
    <add key="ValidationSettings:UnobtrusiveValidationMode" value="WebForms" />
</appSettings>

This configuration will make the application render HTML5 data attributes for validators. There are a bunch of these appSettings magic strings that you can use in your configuration to give special meaning to the web application. Refer to http://bit.ly/ASPNETMagicStrings for more information.

DLL intern in ASP.NET servers

Just as the reuse of strings can be achieved using string intern tables, ASP.NET allows you to intern DLLs, which avoids loading multiple copies of the same DLL into memory from different physical locations. The interning functionality introduced with ASP.NET reduces the RAM requirement and the load time; even though the same DLL resides in multiple physical locations, it is loaded only once into memory and the same memory is shared by multiple processes. ASP.NET maintains symbolic links placed in the bin folder that map to a shared assembly. Sharing assemblies using symbolic links requires a new tool named aspnet_intern.exe, which lets you create and manage the store of interned assemblies. To intern the assemblies, we run the following command against the source directory:

aspnet_intern -mode exec -sourcedir "Temporary ASP.NET Files" -interndir "c:\assemblies"

This command analyzes the assemblies inside the Temporary ASP.NET Files directory and interns the shared ones into the c:\assemblies directory. Thus, once a DLL is loaded into memory, it will be shared by other requests.

How to work with statically-typed model binding in ASP.NET applications

Binding is a concept that attaches a source to a target such that when something is modified on the source, it automatically reflects on the target. The concept of binding is not new in the .NET framework; it has been there from the beginning. With server-side controls, when we set DataSource, we don't expect DataSource to automatically produce the output to be rendered into the actual HTML; we expect to call the DataBind method of the control.
Something magical happens in the background that generates the actual HTML from DataSource and produces the output. DataSource expects a collection of items where each item produces a single entry in the control. For instance, if we pass a collection as the data source of a grid, the data bind will enumerate the collection and each entry in the collection will create a row of the DataGrid. To evaluate each property of an individual element, we use DataBinder.Eval, which uses reflection to evaluate the contextual property against the actual data. Now, as we all know, DataBinder actually works on a string equivalent of the actual property name, so you cannot catch an error before you actually run the page. In the case of model binding, the bound control knows the type of the actual object. Model binding has the information about the actual object the collection is made of and can give you options such as IntelliSense and other advanced Visual Studio features to work with the item.

Getting ready

DataSource is a property of a databound element that takes a collection and provides a mechanism to repeat its output, replacing the contextual element of the collection with generated HTML. Each control generates HTML during the render phase of the ASP.NET page life cycle and returns the output to the client. The ASP.NET controls are built so elegantly that you can easily hook into their properties while the actual HTML is being created and get the contextual controls that make up the HTML, along with the contextual data element. For a template control such as Repeater, each ItemTemplate or AlternateItemTemplate property exposes a data item in its callback when it is actually rendered. This is basically the contextual object of DataSource on the nth iteration.

DataBinder.Eval is a special API that evaluates a property on any object using reflection. It is entirely a runtime evaluation and hence cannot catch any mistakes in the designer at compile time. The contextual object also doesn't carry any type-related information inside the control. With ASP.NET 4.5, the databound controls expose the contextual object as a generic type so that the contextual object is always strongly typed. The control exposes the ItemType property, which can also be used inside the HTML designer to specify the type of the contextual element. The type is picked up automatically by the Visual Studio IDE, which produces proper IntelliSense and provides compile-time error checking on the type defined by the control.

In this recipe we are going to see, step by step, how to create a control that is bound to a model and define the HTML using its inherent Item object.

How to do it...

Open Visual Studio and start an ASP.NET Web Application project. Create a class called Customer to implement the model. For simplicity, we are just using a plain class as our model:

public class Customer
{
    public string CustomerId { get; set; }
    public string Name { get; set; }
}

The Customer class has two properties: one that holds the identifier of the customer and another the name of the customer. Now let us add an ASPX file and add a Repeater control. The Repeater control has a property called ItemType (named ModelType in early previews of ASP.NET 4.5) that takes the fully qualified name of the model class. Here we pass the Customer class.
Once ItemType is set for the Repeater control, you can directly use the contextual object inside ItemTemplate just by using the Item keyword in a data-binding expression:

<asp:Repeater runat="server" ID="rptCustomers" ItemType="SampleBinding.Customer">
    <ItemTemplate>
        <span><%#: Item.Name %></span>
    </ItemTemplate>
</asp:Repeater>

Here in this Repeater control we have directly accessed the Name property of the Customer class. So, if we assign a list of Customer objects to its data source, it will bind the contextual objects appropriately. The ItemType property is available on all databound controls.

The databound controls in ASP.NET 4.5 also support CRUD operations. Controls such as GridView, FormView, and DetailsView expose properties to specify SelectMethod, InsertMethod, UpdateMethod, or DeleteMethod. These properties let you specify the methods that carry out the corresponding DML operations. Add a new page called Details.aspx and configure it as follows:

<asp:DetailsView ID="dvDepartments" runat="server"
    SelectMethod="dvDepartments_GetItem"
    UpdateMethod="dvDepartments_UpdateItem"
    InsertMethod="dvDepartments_InsertItem"
    DeleteMethod="dvDepartments_DeleteItem"
    ModelType="ModelBindingSample.ModelDepartment"
    AutoGenerateInsertButton="true">
</asp:DetailsView>

In the preceding code, you can see that I have specified all the DML methods. The code-behind will contain all of these methods, and you need to map the proper method to each operation.

How it works...

Every collection-bound control loops through the data source and renders the output. The bindable controls accept a collection so that they can repeat their template, running the same code over and over again with a contextual object passed for each index. The contextual element is present during the phase of rendering the HTML. ASP.NET 4.5 adds the ability to define the type of an individual item of the collection so that the template enforces this conversion and the contextual item is made available to the template. In other words, what we have been doing with Eval before can now be done easily using the Item contextual object, which is of the same type as defined in the ItemType property. The designer enumerates the properties into an IntelliSense menu, just like a C# code window, to make writing the code easier.

Each databound control in ASP.NET 4.5 allows CRUD operations. For every CRUD operation there is a specific event handler that can be configured to handle operations defined inside the control. You should remember that after each of these operations, the control calls DataBind again so that the data gets refreshed.

There's more...

ModelBinding is not the only thing that is important. Let us discuss some of the other important concepts that fit this category.

ModelBinding with filter operations

ModelBinding in ASP.NET 4.5 has been enhanced quite a bit to support most of the operations that we regularly need in our ASP.NET pages. Among the interesting features is support for filters when selecting data for a control.
Let us use DetailsView to introduce this feature: <asp:DropDownList ID="ddlDepartmentNames" runat="server" ItemType="ModelBindingSample.ModelDepartment" AutoPostBack="true" DataValueField="DepartmentId" DataTextField="DepartmentName" SelectMethod="GetDepartments"> </asp:DropDownList> <asp:DetailsView SelectMethod="dvDepartments_GetItems" ID="dvDepartments" UpdateMethod="dvDepartments_UpdateItem" runat="server" InsertMethod="dvDepartments_InsertItem" DeleteMethod="dvDepartments_DeleteItem" ItemType="ModelBindingSample.ModelCustomer" AutoGenerateIn sertButton="true"> </asp:DetailsView> Here you can see the DropDownList control calls Getdepartments to generate the list of departments available. The DetailsView control on the other hand uses the ModelCustomer class to generate the customer list. SelectMethod allows you to bind the control with the data. Now to get the filter out of SelectMethod we use the following code: public IQueryable<ModelCustomer> dvDepartments_GetItems([Control("ddlD epartmentNames")]string deptid) { // get customers for a specific id } This method will be automatically called when the drop-down list changes its value. The departmentid of the selected DropDownItem control is automatically passed into the method and the result is bound to DetailsView automatically. Remember, the dvDepartments_GetItems method always passes a Nullable parameter. So, if departmentid is declared as integer, it would have been passed as int? rather than int. The attribute on the argument specifies the control, which defines the value for the query element. You need to pass IEnumerable (IQueryable in our case) of the items to be bound to the control. You can also specify a filter using Querystring. You can use the following code: public IQueryable<Customer> GetCustomers([QueryString]string departmentid) { return null; }   This code will take departmentid from the query string and load the DataBound control instead of the control specified within the page. Introduction to HTML5 and CSS3 in ASP.NET applications Web is the media that runs over the Internet. It's a service that has already has us in its grasp. Literally, if you think of the Web, the first thing that can come into your mind is everything about HTML, CSS, and JavaScript. The browsers are the user agents that are used to communicate with the Web. The Web has been there for almost a decade and is used mostly to serve information about business, communities, social networks, and virtually everything that you can think of. For such a long period of time, users primarily use websites to see text-based content with minimum UI experiences and texts that can easily be consumed by search engines. In those websites, all that the browsers do is send a request for a page and the server serves the client with the appropriate page which is later rendered on the browser. But with the introduction to modern HTMLs, websites are gradually adopting interactivity in terms of CSS, AJAX, iFrame, or are even using sandboxed applications with the use of Silverlight, Flash, and so on. Silverlight and Adobe AIR (Flash) are specifically likely to be used when the requirement is great interactivity and rich clients. They totally look like desktop applications and interact with the user as much as they can. But the problems with a sandboxed application are that they are very slow and need every browser to install the appropriate plugin before they can actually navigate to the application. They are heavyweight and are not rendered by the browser engine. 
Even though they are so popular these days, most of the development still employs the traditional approach of HTML and CSS. Most businesses cannot afford the long loading waits or even as we move along to the lines of devices, most of these do not support them. The long term user requirements made it important to take the traditional HTML and CSS further, ornamenting it in such a way that these ongoing requirements could easily be solved using traditional code. The popularity of the ASP.NET technology also points to the popularity of HTML. Even though we are dealing with server-side controls (in case of ASP.NET applications), internally everything renders HTML to the browser. HTML5, which was introduced by W3C and drafted in June 2004, is going to be standardized in 2014 making most of the things that need desktop or sandboxed plugins easily carried out using HTML, CSS, and JavaScript. The long term requirement to have offline web, data storage, hardware access, or even working with graphics and multimedia is easily possible with the help of the HTML5 technology. So basically what we had to rely on (the sandbox browser plugins) is now going to be standardized. In this recipe, we are going to cover some of the interesting HTML5 features that need special attention. Getting ready HTML5 does not need the installation of any special SDK to be used. Most of the current browsers already support HTML5 and all the new browsers are going to support most of these features. The official logo of HTML5 has been considered as follows. HTML5 has introduced a lot of new advanced features but yet it also tries to simplify things that we commonly don't need to know but often need to remember in order to write code. For instance, the DocType element of an HTML5 document has been simplified to the following one line: <!DOCTYPE html> So, for an HTML5 document, the document type that specifies the page is simply HTML. Similar to DocType, the character set for the page is also defined in very simple terms. <meta charset="utf-8" /> The character set can be of any type. Here we specified the document to be UTF – 8. You do not need to specify the http-equiv attribute or content to define charset for the page in an HTML5 document according to the specification. Let us now jot down some of the interesting HTML5 items that we are going to take on in this recipe. Semantic tags, better markups, descriptive link relations, micro-data elements, new form types and field types, CSS enhancements and JavaScript enhancements. Not all browsers presently support every feature defined in HTML5. There are Modernizr scripts that can help as cross-browser polyfills for all browsers. You can read more information about it here. How to do it... The HTML5 syntax has been adorned with a lot of important tags which include header, nav, aside, figure, and footer syntaxes that help in defining better semantic meaning of the document: <body> <header> <hgroup> <h1>Page title</h1> <h2>Page subtitle</h2> </hgroup> </header> <nav> <ul> Specify navigation </ul> </nav> <section> <article> <header> <h1>Title</h1> </header> <section> Content for the section </section> </article> <article> <aside> Releated links </aside> <figure> <img src = "logo.jpg"/> <figcaption>Special HTML5 Logo</figcaption> </figure> <footer> Copyright © <time datetime="2010-11-08">2010</time>. </footer> </body> By reading the document, it clearly identifies the semantic meaning of the document. The header tag specifies the header information about the page. 
The nav tag defines the navigation panel. The Section tag is defined by articles and besides them, there are links. The img tag is adorned with the Figure tag and finally, the footer information is defined under the footer tag. A diagrammatic representation of the layout is shown as follows. The vocabulary of the page that has been previously defined by div and CSS classes are now maintained by the HTML itself and the whole document forms a meaning to the reader. HTML5 not only improves the semantic meaning of the document, it also adds new markup. For instance, take a look at the following code: <input list="options" type="text"/> <datalist id="options"> <option value="Abhishek"/> <option value="Abhijit"/> <option value="Abhik"/> </datalist> datalist specifies the autocomplete list for a control. A datalist item automatically pops up a menu while we type inside a textbox. The input tag specifies the list for autocomplete using the list attribute. Now if you start typing on the textbox, it specifies a list of items automatically. <details> <summary>HTML 5</summary> This is a sliding panel that comes when the HTML5 header is clicked </details> The preceding markup specifies a sliding panel container. We used to specify these using JavaScript, but now HTML5 comes with controls that handle these panels automatically. HTML5 comes with a progress bar. It supports the progress and meter tags that define the progress bar inside an HTML document: <meter min="0" max="100" low="40" high="90" optimum="100" value="91">A+</meter> <progress value="75" max="100">3/4 complete</progress> The progress shows 75 percent filled in and the meter shows a value of 91. HTML5 added a whole lot of new attributes to specify aria attributes and microdata for a block. For instance, consider the following code: <div itemscope itemtype="http://example.org/band"> <ul id="tv" role="tree" tabindex="0" aria-labelledby="node1"> <li role="treeitem" tabindex="-1" aria-expanded="true">Inside Node1</li> </li> </ul> Here, Itemscope defines the microdata and ul defines a tree with aria attributes. These data are helpful for different analyzers or even for automated tools or search engines about the document. There are new Form types that have been introduced with HTML5: <input type="email" value="[email protected]" /> <input type="date" min="2010-08-14" max="2011-08-14" value="2010-08-14"/> <input type="range" min="0" max="50" value="10" /> <input type="search" results="10" placeholder="Search..." /> <input type="tel" placeholder="(555) 555-5555" pattern="^(?d{3})?[-s]d{3}[-s]d{4}.*?$" /> <input type="color" placeholder="e.g. #bbbbbb" /> <input type="number" step="1" min="-5" max="10" value="0" /> These inputs types give a special meaning to the form. The preceding figure shows how the new controls are laid out when placed inside a HTML document. The controls are email, date, range, search, tel, color, and number respectively. HTML5 supports vector drawing over the document. We can use a canvas to draw 2D as well as 3D graphics over the HTML document: <script> var canvasContext = document.getElementById("canvas"). getContext("2d"); canvasContext.fillRect(250, 25, 150, 100); canvasContext.beginPath(); canvasContext.arc(450, 110, 100, Math.PI * 1/2, Math.PI * 3/2); canvasContext.lineWidth = 15; canvasContext.lineCap = 'round'; canvasContext.strokeStyle = 'rgba(255, 127, 0, 0.5)'; canvasContext.stroke(); </script> Consider the following diagram. 
The preceding code creates an arc on the canvas and a rectangle filled with the color black as shown in the diagram. The canvas gives us the options to draw any shape within it using simple JavaScript. As the world is moving towards multimedia, HTML5 introduces audio and video tags that allow us to run audio and video inside the browser. We do not need any third-party library or plugin to run audio or video inside a browser: <audio id="audio" src = "sound.mp3" controls></audio> <video id="video" src = "movie.webm" autoplay controls></video> The audio tag runs the audio and the video tag runs the video inside the browser. When controls are specified, the player provides superior browser controls to the user. With CSS3 on the way, CSS has been improved greatly to enhance the HTML document styles. For instance, CSS constructs such as .row:nth-child(even) gives the programmer control to deal with a particular set of items on the document and the programmer gains more granular programmatic approach using CSS. How it works... HTML5 is the standardization to the web environments with W3C standards. The HTML5 specifications are still in the draft stage (a 900-page specification available at http://www.w3.org/html/wg/drafts/html/master/), but most modern browsers have already started supporting the features mentioned in the specifications. The standardization is due in 2014 and by then all browsers need to support HTML5 constructs. Moreover, with the evolution of smart devices, mobile browsers are also getting support for HTML5 syntaxes. Most smart devices such as Android, iPhone, or Windows Phone now support HTML5 browsers and the HTML that runs over big devices can still show the content on those small browsers. HTML5 improves the richness of the web applications and hence most people have already started shifting their websites to the future of the Web. There's more... HTML5 has introduced a lot of new enhancements which cannot be completed using one single recipe. Let us look into some more enhancements of HTML5, which are important to know. How to work with web workers in HTML5 Web workers are one of the most awaited features of the entire HTML5 specification. Generally, if we think of the current environment, it is actually turning towards multicore machines. Today, it's verbal that every computer has at least two cores installed in their machine. Browsers are well capable of producing multiple threads that can run in parallel in different cores. But programmatically, we cannot have the flexibility in JavaScript to run parallel tasks in different cores yet. Previously, developers used setTimeout, setInterval, or XMLHttprequst to actually create non-blocking calls. But these are not truly a concurrency. I mean, if you still put a long loop inside setTimeout, it will still hang the UI. Actually these works asynchronously take some of the UI threads time slices but they do not actually spawn a new thread to run the code. As the world is moving towards client-side, rich user interfaces, we are prone to develop codes that are capable of computation on the client side itself. So going through the line, it is important that the browsers support multiple cores to be used up while executing a JavaScript. Web workers are actually a JavaScript type that enable you to create multiple cores and run your JavaScript in a separate thread altogether, and communicate the UI thread using messages in a similar way as we do for other languages. Let's look into the code to see how it works. 
var worker = new Worker('task.js');
worker.onmessage = function(event) {
  alert(event.data);
};
worker.postMessage('data');

Here we load task.js into a Worker. Worker is a type that runs the given JavaScript code in a new thread. The thread is started by calling postMessage on the worker instance. We have also attached a callback to the onmessage event, so that when the JavaScript inside task.js invokes postMessage, the message is received by this callback. Inside task.js we write:

self.onmessage = function(event) {
  // Do some CPU intensive work.
  self.postMessage("recv'd: " + event.data);
};

Here, after some CPU-intensive work, we use self.postMessage to send back the data we received from the UI thread, and the onmessage handler on the page is executed with the received message.

Working with Socket using HTML5

HTML5 supports full-duplex, bidirectional sockets that run over the Web. Browsers are capable of issuing socket requests directly. The important thing to note about sockets is that only the data is sent, without the overhead of HTTP headers and the other HTTP elements that accompany ordinary requests, so the bandwidth used by sockets is dramatically reduced. To use sockets from the browser, a new protocol has been specified by W3C as a part of the HTML5 specification. WebSocket is a protocol that supports two-way communication between the client and the server over a single TCP channel. To start the socket server, we are going to use node.js on the server side. Install node.js using the installer available at http://nodejs.org/dist/v0.6.6/node-v0.6.6.msi. Once you have installed node.js, start a server implementation of the socket:

var io = require('socket.io');
//Creates a HTTP Server
var socket = io.listen(8124);
//Bind the Connection Event
//This is called during connection
socket.sockets.on('connection', function(socket){
  //This will be fired when data is received from client
  socket.on('message', function(msg){
    console.log('Received from client ', msg);
  });
  //Emit a message to client
  socket.emit('greet', {hello: 'world'});
  //This will fire when the client has disconnected
  socket.on('disconnect', function(){
    console.log('Server has disconnected');
  });
});

In the preceding code, the server implementation has been made. The require('socket.io') call loads the socket.io module, which provides all the node.js APIs useful for the socket implementation. The connection event is fired on the server when any client connects to it. We listen on port 8124 on the server. socket.emit sends a message from the server to the client; here we emit a greet event that passes a JSON object with a hello property to the client. And finally, the disconnect event is called when the client disconnects the socket. Now, for the client side, we need to create an HTML file:

<html>
<title>WebSocket Client Demo</title>
<script src="http://localhost:8124/socket.io/socket.io.js"></script>
<script>
//Create a socket and connect to the server
var socket = io.connect('http://localhost:8124/');
socket.on("connect", function(){
  alert("Client has connected to the server");
});
socket.on('greet', function (data) {
  alert(data.hello);
});
</script>
</html>

Here we connect to the server at port 8124. The connect event is invoked first. We call an alert method when the client connects to the server.
Finally, we also use the greet event to pass data from the server to the client. Here, if we run both the server and the client, we will see two alerts; one when the connection is made and the other alert to greet. The greet message passes a JSON object that greets with world. The URL for the socket from the browser looks like so. [scheme] '://' [host] '/' [namespace] '/' [protocol version] '/' [transport id] '/' [session id] '/' ( '?' [query] ) Here, we see. Scheme: This can bear values such as http or https (for web sockets, the browser changes it to ws:// after the connection is established, it's an upgrade request) host: This is the host name of the socket server namespace: This is the Socket.IO namespace, the default being socket.io protocol version: The client support default is 1 transport id: This is for the different supported transports which includes WebSockets, xhr-polling, and so on session id: This is the web socket session's unique session ID for the client Getting GeoLocation from the browser using HTML5 As we are getting inclined more and more towards devices, browsers are trying to do their best to actually implement features to suit these needs. HTML5 introduces GeoLocation APIs to the browser that enable you to get the location of your position directly using JavaScript. In spite of it being very much primitive, browsers are capable of detecting the actual location of the user using either Wi-Fi, satellite, or other external sources if available. As a programmer, you just need to call the location API and everything is handled automatically by the browser. As geolocation bears sensitive information, it is important to ask the user for permission. Let's look at the following code. if (navigator.geolocation) { navigator.geolocation.getCurrentPosition(function(position) { var latLng = "{" + position.coords.latitude + "," + position.coords. longitude + "with accuracy: " + position.coords.accuracy; alert(latLng); }, errorHandler); } Here in the code we first detect whether the geolocation API is available to the current browser. If it is available, we can use getCurrentPosition to get the location of the current device and the accuracy of the position as well. We can also use navigator.geolocation.watchPosition to continue watching the device location at an interval when the device is moving from one place to another.   Resources for Article: Further resources on this subject: Planning for a successful integration .NET 4.5 Extension Methods on IQueryable Core .NET Recipes

Microsoft Dynamics CRM 2011 Overview

Packt
22 Jul 2013
10 min read
(For more resources related to this topic, see here.) Introduction to CRM 2011 Every organization is dependent on customers, and every organization is challenged to manage these customers. Businesses need a way to attract, sell to, service, and track every interaction these customers have with their organization. To do so, they often implement a Customer Relationship Management (CRM) system. Microsoft offers a CRM solution as part of their Dynamics suite, which allows companies to implement a system that manages more than just their customers. Microsoft Dynamics CRM is a web-based application that can be accessed online through an Internet browser or through Microsoft Office Outlook. Microsoft Dynamics CRM 2011 Web client Microsoft Dynamics CRM 2011 Outlook client Microsoft Dynamics CRM includes three main modules: Sales Marketing Customer service Each module contains entities, which are objects in CRM, and are used to help model data. Throughout this article, business examples will be used to explain various concepts. A fictitious company called Race2Win Insurance Company will be used to further illustrate these concepts. Sales module Microsoft Dynamics CRM allows businesses to optimize their sales processes and customer tracking efforts. With CRM, an organization can increase interactions with current customers, shorten sales cycles, and increase opportunity close rates to gain new customers. The sales module within CRM consists of the following entities: Leads Accounts Contacts Opportunities Marketing lists Products Sales literature Quotes Orders Invoices Competitors Quick campaigns Goals Goal metrics Rollup queries Each of these entities are explained in more detail further, but let's give a high-level overview of how users could potentially use these out of the box capabilities. Business scenarios The business scenarios that we'll cover in a moment outlines the use of the sales module for our fictitious insurance company. Although entities such as marketing lists, quick campaigns, and sales literature are found in the sales module, these will be discussed in the marketing business scenarios. For our scenarios, we'll illustrate examples from the viewpoint of various individual roles within the organization. Leads Race2Win Insurance Company needs a way to track leads they've acquired as a result of efforts by their marketing team. Microsoft Dynamics CRM allows the company to track and qualify leads. For example, let's say Race2Win is capturing information on the lead record related to a potential insurance customer. Based on information entered, business rules can be applied to categorize the lead as hot, warm, or cold. CRM has out of the box functionality to apply these business rules automatically. Once a lead is qualified, Dynamics CRM provides the ability to convert the lead into an opportunity, account, contact, or a combination of the three. Accounts The organizations or companies that Race2Win Insurance views as customers can be tracked as accounts in CRM. These accounts may be managed by one or more salespeople. A salesperson may receive commission on sales made to their managed accounts, thus making accounts critical. Accounts contain the address, industry, and various other demographic data about the customer. Contacts As part of the sales process, Race2Win Insurance Company's sales force may reach out to individuals or employees at a company. Individuals, either on their own or tied to a company, can be tracked as contacts in Microsoft Dynamics CRM. 
Like the account, contact records can store addresses and industry/professional information, but can also store more personal data, such as birthdays, anniversaries, and information about various family members. A salesperson may find personal information critical to start and maintain a personal relationship with people at a company. Opportunities Opportunities can allow the Race2Win sales force to track potential sales opportunities with their customers. The opportunities can go through various sales cycles, and can track the estimated versus actual revenue of a particular sale. Opportunities can have products from the product catalog associated to them in order to help estimate the revenue more accurately. The Race2Win Insurance salespeople receive their commission when an opportunity is won, and the commission amount is based on the estimated revenue. Products Race2Win Insurance Company has various insurance products and services it may sell. Products can be sold to different customers at different prices, and some products may be eligible for discounts based on the quantity ordered. Dynamics CRM allows all of the above functionality through the use of the product catalog. Quotes Let's say that during the course of a sales cycle, the customer wishes to receive more detailed pricing information such as an estimate from the Race2Win salesperson. Dynamics CRM allows a quote to be generated ad hoc or from an opportunity. In either case, products from the product catalog can be added, prices from the price list can be used, and discounts can be applied—all to create an accurate quote. Orders If a customer wishes to proceed with purchasing products or services, the opportunity can be closed as won. The salesperson will receive commission, and the process is now handed off to someone in order entry to create an order. Orders can be created ad hoc, from an opportunity, or quote. If an opportunity or quote has products, prices, and/or discounts, these can be inherited, making the generation of the order timely and accurate. Invoices An invoice can be generated for a customer in place of an order if needed. Many other organizations like Race2Win have their billing department typically create an invoice once the order is ready to be billed. An invoice can be created ad hoc or from an opportunity, quote, or order. Like an order, invoices can inherit the products, prices, and/or discounts from an opportunity, quote, or order. While quotes, orders, and invoices can be created in Microsoft Dynamics CRM, sometimes they are created in an ERP system and then imported into Dynamics CRM for reference or reporting. Competitors Race2Win Insurance Company has now been using the leads generated to attain new customers, increase sales, and generate quotes, orders, and invoices for customers quickly and efficiently. Like any successful organization, Race2Win has to deal with competition. Dynamics CRM gives the organization's sales force and management the ability to track competitors, analyze their sales strategies, products, and the business they win. Goals Throughout a period of time, say fiscal year, an organization like Race2Win can set up goals to track various metrics such as sales performance. These goals can be tracked for each individual, and managers have the ability to see each of their team member's goals. The entire business scenario we just outlined can be achieved by using the Microsoft Dynamics CRM sales module. 
Marketing module Microsoft Dynamics CRM allows an organization to launch and maintain marketing campaigns. The responses to a campaign can be tracked to gauge the effectiveness of a campaign. These responses can also be converted into accounts, contacts, leads, quotes, orders, or opportunities. The Dynamics CRM marketing module consists of the following entities: Leads Accounts Contacts Marketing lists Campaigns Quick campaigns Sales literature Products Each of these entities are explained in more detail further, but let's again look at a high-level overview. Business scenario The business scenarios that we'll cover in a moment outlines the use of the marketing module using our fictitious company, Race2Win Insurance Company. Race2Win's marketing team has analyzed its existing customer base and found areas to expand the company's business. They can market new products and services to their existing customer base, and try and turn potential customers into new ones. Marketing lists Race2Win has realized that it needs to market new insurance products to an existing list of accounts, and market existing products to potential customers in China. Microsoft Dynamics CRM allows Race2Win to set up a marketing list containing a static list of accounts. They can also use CRM to set up a dynamic marketing list of all leads where the address has the country set to China. Campaigns Using marketing lists, an organization can kick off, run, and evaluate marketing campaigns. Race2Win can kick off a campaign very quickly for the existing accounts and it can also execute a more lengthy campaign, containing multiple communication channels, for the leads in China. Depending on the campaign type, the marketing team can create planning activities, track the cost of the campaign,and target certain products. Sales literature At times, customers or even internal business users may want to know more about a product that an organization is selling or marketing. Race2Win is no different, dealing with multiple inquiries about products from customers on a weekly basis. The sales literature in CRM can offer collateral for marketing or sales teams to share. An example can be a brochure of their latest insurance products. The scenarios we just described illustrates how the Microsoft Dynamics CRM marketing module can be used to achieve any marketing goal. Customer service module Obtaining new customers is an important goal of any organization, but servicing existing customers is just as important. Microsoft Dynamics CRM offers a solution to service an organization's customers, manage service resources, and maintain a knowledge base repository that can be used to efficiently resolve future service incidents. The Dynamics CRM service module consists of the following entities: Cases Service calendar Accounts Contacts Knowledge base article Contracts Products Goals Goal metrics Rollup queries Each of these entities are explained in more detail further, but let's look at a high-level overview of how an organization could potentially use these out of the box capabilities. Business scenarios Thanks to CRM, Race2Win Insurance Company has been able to effectively market and sell its products to companies in a wide range of industries all over the world. Now it's time for Race2Win to start servicing their clients, and it's Dynamics CRM to the rescue once again. Cases Let us say a customer didn't receive their invoice for insurance products that they purchased. 
This customer e-mails this issue to a client service representative at Race2Win. With Dynamics CRM, this e-mail can be converted to a case. During that process of conversion, an account or contact can be linked (the case can reference a product if needed). Once converted, the original e-mail will be tied to the newly opened case. After the case is created, it can be assigned to a user, or routed to a queue. Throughout its lifecycle, activities such as e-mails, phone calls, and/or tasks can be tracked as a part of the case. Knowledge base articles In trying to resolve the case, users can access a repository of knowledge base articles to reference past issues and resolutions. Contracts Race2Win can use CRM to create service contracts to defne the type and level of support required. These contracts can then be referenced when new cases are opened. Service calendar If the servicing of customers requires the allocation of resources, whether that is an individual employee, a contractor, a facility or equipment, Dynamics CRM's scheduling features can assist. The service calendar can show users how each resource is being allocated for a given time period. With this knowledge, you can schedule resources more efficiently in order to deliver a prompt resolution to a case. As you can see, CRM has a rich feature set that can be used to service any type of client. We just went through a high-level introduction of the three main modules in Microsoft Dynamics CRM. This article is a guide to help pass the CRM 2011 Applications exam (MB2-868) and we will delve deeper into each of the modules to help you better understand the application. The next section gives you an introduction to Microsoft Dynamics CRM 2011 training and certifications.

Vaadin project with Spring and Handling login with Spring

Packt
22 Jul 2013
16 min read
(For more resources related to this topic, see here.) Setting up a Vaadin project with Spring in Maven We will set up a new Maven project for Vaadin application that will use the Spring framework. We will use a Java annotation-driven approach for Spring configuration instead of XML configuration files. This means that we will eliminate the usage of XML to the necessary minimum (for XML fans, don't worry there will be still enough XML to edit). In this recipe, we will set up a Spring project where we define a bean that will be obtainable from the Spring application context in the Vaadin code. As the final result, we will greet a lady named Adela, so we display Hi Adela! text on the screen. The brilliant thing about this is that we get the greeting text from the bean that we define via Spring. Getting ready First, we create a new Maven project. mvn archetype:generate -DarchetypeGroupId=com.vaadin -DarchetypeArtifactId=vaadin-archetype-application -DarchetypeVersion=LATEST -Dpackaging=war -DgroupId=com.packtpub.vaadin -DartifactId=vaadin-with-spring -Dversion=1.0 More information about Maven and Vaadin can be found at https://vaadin.com/book/-/page/getting-started.maven.html. How to do it... Carry out the following steps, in order to set up a Vaadin project with Spring in Maven: First, we need to add the necessary dependencies. Just add the following Maven dependencies into the pom.xml file: dependencies into the pom.xml file: <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-web</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>cglib</groupId> <artifactId>cglib</artifactId> <version>2.2.2</version> </dependency> In the preceding code, we are referring to the spring.version property. Make sure we have added the Spring version inside the properties tag in the pom.xml file. <properties> … <spring.version>3.1.2.RELEASE</spring.version> </properties> At the time of writing, the latest version of Spring was 3.1.2. Check the latest version of the Spring framework at http://www.springsource.org/spring-framework. The last step in the Maven configuration file is to add the new repository into pom.xml. Maven needs to know where to download the Spring dependencies. <repositories> … <repository> <id>springsource-repo</id> <name>SpringSource Repository</name> <url>http://repo.springsource.org/release</url> </repository> </repositories> Now we need to add a few lines of XML into the src/main/webapp/WEB-INF/web.xml deployment descriptor file. At this point, we make the first step in connecting Spring with Vaadin. The location of the AppConfig class needs to match the full class name of the configuration class. <context-param> <param-name>contextClass</param-name> <param-value> org.springframework.web.context.support.Annotation ConfigWebApplicationContext </param-value> </context-param> <context-param> <param-name>contextConfigLocation</param-name> <param-value>com.packtpub.vaadin.AppConfig </param-value> </context-param> <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> Create a new class AppConfig inside the com.packtpub.vaadin package and annotate it with the @Configuration annotation. 
Then create a new @Bean definition as shown: package com.packtpub.vaadin; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @Configuration public class AppConfig { @Bean(name="userService") public UserService helloWorld() { return new UserServiceImpl(); } } In order to have the recipe complete, we need to make a class that will represent a domain class. Create a new class called User. public class User { private String name; // generate getters and setters for name field } UserService is a simple interface defining a single method called getUser(). When the getUser() method is called in this recipe, we always create and return a new instance of the user (in the future, we could add parameters, for example login, and fetch user from the database). UserServiceImpl is the implementation of this interface. As mentioned, we could replace that implementation by something smarter than just returning a new instance of the same user every time the getUser() method is called. public interface UserService { public User getUser(); } public class UserServiceImpl implements UserService { @Override public User getUser() { User user = new User(); user.setName("Adela"); return user; } } Almost everything is ready now. We just make a new UI and get the application context from which we get the bean. Then, we call the service and obtain a user that we show in the browser. After we are done with the UI, we can run the application. public class AppUI extends UI { private ApplicationContext context; @Override protected void init(VaadinRequest request) { UserService service = getUserService(request); User user = service.getUser(); String name = user.getName(); Label lblUserName = new Label("Hi " + name + " !"); VerticalLayout layout = new VerticalLayout(); layout.setMargin(true); setContent(layout); layout.addComponent(lblUserName); } private UserService getUserService (VaadinRequest request) { WrappedSession session = request.getWrappedSession(); HttpSession httpSession = ((WrappedHttpSession) session).getHttpSession(); ServletContext servletContext = httpSession.getServletContext(); context = WebApplicationContextUtils.getRequired WebApplicationContext(servletContext); return (UserService) context.getBean("userService"); } } Run the following Maven commands in order to compile the widget set and run the application: mvn package mvn jetty:run How it works... In the first step, we have added dependencies to Spring. There was one additional dependency to cglib, Code Generation Library. This library is required by the @ Configuration annotation and it is used by Spring for making the proxy objects. More information about cglib can be found at http://cglib.sourceforge.net Then, we have added contextClass, contextConfigLocation and ContextLoaderListener into web.xml file. All these are needed in order to initialize the application context properly. Due to this, we are able to get the application context by calling the following code: WebApplicationContextUtils.getRequiredWebApplicationContext (servletContext); Then, we have made UserService that is actually not a real service in this case (we did so because it was not in the scope of this recipe). We will have a look at how to declare Spring services in the following recipes. In the last step, we got the application context by using the WebApplicationContextUtils class from Spring. 
WrappedSession session = request.getWrappedSession(); HttpSession httpSession = ((WrappedHttpSession) session).getHttpSession(); ServletContext servletContext = httpSession.getServletContext(); context = WebApplicationContextUtils.getRequired WebApplicationContext(servletContext); Then, we obtained an instance of UserService from the Spring application context. UserService service = (UserService) context.getBean("userService"); User user = service.getUser(); We can obtain a bean without knowing the bean name because it can be obtained by the bean type, like this context.getBean(UserService.class). There's more... Using the @Autowire annotation in classes that are not managed by Spring (classes that are not defined in AppConfig in our case) will not work, so no instances will be set via the @Autowire annotation. Handling login with Spring We will create a login functionality in this recipe. The user will be able to log in as admin or client. We will not use a database in this recipe. We will use a dummy service where we just hardcode two users. The first user will be "admin" and the second user will be "client". There will be also two authorities (or roles), ADMIN and CLIENT. We will use Java annotation-driven Spring configuration. Getting ready Create a new Maven project from the Vaadin archetype. mvn archetype:generate -DarchetypeGroupId=com.vaadin -DarchetypeArtifactId=vaadin-archetype-application -DarchetypeVersion=LATEST -Dpackaging=war -DgroupId=com.app -DartifactId=vaadin-spring-login -Dversion=1.0 Maven archetype generates the basic structure of the project. We will add the packages and classes, so the project will have the following directory and file structure: How to do it... Carry out the following steps, in order to create login with Spring framework: We need to add Maven dependencies in pom.xml to spring-core, spring-context, spring-web, spring-security-core, spring-security-config, and cglib (cglib is required by the @Configuration annotation from Spring). <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-web</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-core</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-config</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>cglib</groupId> <artifactId>cglib</artifactId> <version>2.2.2</version> </dependency> Now we edit the web.xml file, so Spring knows we want to use the annotation-driven configuration approach. The path to the AppConfig class must match full class name (together with the package name). <context-param> <param-name>contextClass</param-name> <param-value> org.springframework.web.context.support.Annotation ConfigWebApplicationContext </param-value> </context-param> <context-param> <param-name>contextConfigLocation</param-name> <param-value>com.app.config.AppConfig</param-value> </context-param> <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> We are referring to the AppConfig class in the previous step. 
Let's implement that class now. AppConfig needs to be annotated by the @Configuration annotation, so Spring can accept it as the context configuration class. We also add the @ComponentScan annotation, which makes sure that Spring will scan the specified packages for Spring components. The package names inside the @ComponentScan annotation need to match our packages that we want to include for scanning. When a component (a class that is annotated with the @Component annotation) is found and there is a @Autowire annotation inside, the auto wiring will happen automatically. package com.app.config; import com.app.auth.AuthManager; import com.app.service.UserService; import com.app.ui.LoginFormListener; import com.app.ui.LoginView; import com.app.ui.UserView; import org.springframework.context.annotation.Bean; import org.springframework.context. annotation.ComponentScan; import org.springframework.context. annotation.Configuration; import org.springframework.context. annotation.Scope; @Configuration @ComponentScan(basePackages = {"com.app.ui" , "com.app.auth", "com.app.service"}) public class AppConfig { @Bean public AuthManager authManager() { AuthManager res = new AuthManager(); return res; } @Bean public UserService userService() { UserService res = new UserService(); return res; } @Bean public LoginFormListener loginFormListener() { return new LoginFormListener(); } } We are defining three beans in AppConfig. We will implement them in this step. AuthManager will take care of the login process. package com.app.auth; import com.app.service.UserService; import org.springframework.beans.factory. annotation.Autowired; import org.springframework.security.authentication. AuthenticationManager; import org.springframework.security.authentication. BadCredentialsException; import org.springframework.security.authentication. UsernamePasswordAuthenticationToken; import org.springframework.security.core.Authentication; import org.springframework.security.core. AuthenticationException; import org.springframework.security.core. GrantedAuthority; import org.springframework.security.core. userdetails.UserDetails; import org.springframework.stereotype.Component; import java.util.Collection; @Component public class AuthManager implements AuthenticationManager { @Autowired private UserService userService; public Authentication authenticate (Authentication auth) throws AuthenticationException { String username = (String) auth.getPrincipal(); String password = (String) auth.getCredentials(); UserDetails user = userService.loadUserByUsername(username); if (user != null && user.getPassword(). equals(password)) { Collection<? extends GrantedAuthority> authorities = user.getAuthorities(); return new UsernamePasswordAuthenticationToken (username, password, authorities); } throw new BadCredentialsException("Bad Credentials"); } } UserService will fetch a user based on the passed login. UserService will be used by AuthManager. package com.app.service; import org.springframework.security.core. GrantedAuthority; import org.springframework.security.core. authority.GrantedAuthorityImpl; import org.springframework.security.core. authority.SimpleGrantedAuthority; import org.springframework.security.core. userdetails.UserDetails; import org.springframework.security.core. userdetails.UserDetailsService; import org.springframework.security.core. userdetails.UsernameNotFoundException; import org.springframework.security.core. 
userdetails.User; import org.springframework.stereotype.Service; import java.util.ArrayList; import java.util.List; public class UserService implements UserDetailsService { @Override public UserDetails loadUserByUsername (String username) throws UsernameNotFoundException { List<GrantedAuthority> authorities = new ArrayList<GrantedAuthority>(); // fetch user from e.g. DB if ("client".equals(username)) { authorities.add (new SimpleGrantedAuthority("CLIENT")); User user = new User(username, "pass", true, true, false, false, authorities); return user; } if ("admin".equals(username)) { authorities.add (new SimpleGrantedAuthority("ADMIN")); User user = new User(username, "pass", true, true, false, false, authorities); return user; } else { return null; } } } LoginFormListener is just a listener that will initiate the login process, so it will cooperate with AuthManager. package com.app.ui; import com.app.auth.AuthManager; import com.vaadin.navigator.Navigator; import com.vaadin.ui.*; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.security.authentication. UsernamePasswordAuthenticationToken; import org.springframework.security.core.Authentication; import org.springframework.security.core. AuthenticationException; import org.springframework.security.core.context. SecurityContextHolder; import org.springframework.stereotype.Component; @Component public class LoginFormListener implements Button.ClickListener { @Autowired private AuthManager authManager; @Override public void buttonClick(Button.ClickEvent event) { try { Button source = event.getButton(); LoginForm parent = (LoginForm) source.getParent(); String username = parent.getTxtLogin().getValue(); String password = parent.getTxtPassword().getValue(); UsernamePasswordAuthenticationToken request = new UsernamePasswordAuthenticationToken (username, password); Authentication result = authManager.authenticate(request); SecurityContextHolder.getContext(). setAuthentication(result); AppUI current = (AppUI) UI.getCurrent(); Navigator navigator = current.getNavigator(); navigator.navigateTo("user"); } catch (AuthenticationException e) { Notification.show("Authentication failed: " + e.getMessage()); } } } The login form will be made as a separate Vaadin component. We will use the application context and that way we get bean from the application context by ourselves. So, we are not using auto wiring in LoginForm. package com.app.ui; import com.vaadin.ui.*; import org.springframework.context.ApplicationContext; public class LoginForm extends VerticalLayout { private TextField txtLogin = new TextField("Login: "); private PasswordField txtPassword = new PasswordField("Password: "); private Button btnLogin = new Button("Login"); public LoginForm() { addComponent(txtLogin); addComponent(txtPassword); addComponent(btnLogin); LoginFormListener loginFormListener = getLoginFormListener(); btnLogin.addClickListener(loginFormListener); } public LoginFormListener getLoginFormListener() { AppUI ui = (AppUI) UI.getCurrent(); ApplicationContext context = ui.getApplicationContext(); return context.getBean(LoginFormListener.class); } public TextField getTxtLogin() { return txtLogin; } public PasswordField getTxtPassword() { return txtPassword; } } We will use Navigator for navigating between different views in our Vaadin application. We make two views. The first is for login and the second is for showing the user detail when the user is logged into the application. Both classes will be in the com.app.ui package. 
LoginView will contain just the components that enable a user to log in (text fields and button). public class LoginView extends VerticalLayout implements View { public LoginView() { LoginForm loginForm = new LoginForm(); addComponent(loginForm); } @Override public void enter(ViewChangeListener.ViewChangeEvent event) { } }; UserView needs to identify whether the user is logged in or not. For this, we will use SecurityContextHolder that obtains the SecurityContext that holds the authentication data. If the user is logged in, then we display some data about him/her. If not, then we navigate him/her to the login form. public class UserView extends VerticalLayout implements View { public void enter(ViewChangeListener.ViewChangeEvent event) { removeAllComponents(); SecurityContext context = SecurityContextHolder.getContext(); Authentication authentication = context.getAuthentication(); if (authentication != null && authentication.isAuthenticated()) { String name = authentication.getName(); Label labelLogin = new Label("Username: " + name); addComponent(labelLogin); Collection<? extends GrantedAuthority> authorities = authentication.getAuthorities(); for (GrantedAuthority ga : authorities) { String authority = ga.getAuthority(); if ("ADMIN".equals(authority)) { Label lblAuthority = new Label("You are the administrator. "); addComponent(lblAuthority); } else { Label lblAuthority = new Label("Granted Authority: " + authority); addComponent(lblAuthority); } } Button logout = new Button("Logout"); LogoutListener logoutListener = new LogoutListener(); logout.addClickListener(logoutListener); addComponent(logout); } else { Navigator navigator = UI.getCurrent().getNavigator(); navigator.navigateTo("login"); } } } We have mentioned LogoutListener in the previous step. Here is how that class could look: public class LogoutListener implements Button.ClickListener { @Override public void buttonClick(Button.ClickEvent clickEvent) { SecurityContextHolder.clearContext(); UI.getCurrent().close(); Navigator navigator = UI.getCurrent().getNavigator(); navigator.navigateTo("login"); } } Everything is ready for the final AppUI class. In this class, we put in to practice all that we have created in the previous steps. We need to get the application context. That is done in the first lines of code in the init method. In order to obtain the application context, we need to get the session from the request, and from the session get the servlet context. Then, we use the Spring utility class, WebApplicationContextUtils, and we find the application context by using the previously obtained servlet context. After that, we set up the navigator. @PreserveOnRefresh public class AppUI extends UI { private ApplicationContext applicationContext; @Override protected void init(VaadinRequest request) { WrappedSession session = request.getWrappedSession(); HttpSession httpSession = ((WrappedHttpSession) session).getHttpSession(); ServletContext servletContext = httpSession.getServletContext(); applicationContext = WebApplicationContextUtils. getRequiredWebApplicationContext(servletContext); Navigator navigator = new Navigator(this, this); navigator.addView("login", LoginView.class); navigator.addView("user", UserView.class); navigator.navigateTo("login"); setNavigator(navigator); } public ApplicationContext getApplicationContext() { return applicationContext; } } Now we can run the application. The password for usernames client and admin is pass. mvn package mvn jetty:run How it works... 
There are two tricky parts from the development point of view while making the application: First is how to get the Spring application context in Vaadin. For this, we need to make sure that contextClass, contextConfigLocation, and ContextLoaderListener are defined in the web.xml file. Then we need to know how to get Spring application context from the VaadinRequest. We certainly need a reference to the application context in UI, so we define the applicationContext class field together with the public getter (because we need access to the application context from other classes, to get Spring beans). The second part, which is a bit tricky, is the AppConfig class. That class represents annotated Spring application configuration (which is referenced from the web.xml file). We needed to define what packages Spring should scan for components. For this, we have used the @ComponentScan annotation. The important thing to keep in mind is that the @Autowired annotation will work only for Spring managed beans that we have defined in AppConfig. When we try to add the @Autowired annotation to a simple Vaadin component, the autowired reference will remain empty because no auto wiring happens. It is up to us to decide what instances should be managed by Spring and where we use the Spring application context to retrieve the beans. Summary In this article, we saw how to add Spring into the Maven project. We also took a look at handling login with Spring Resources for Article:   Further resources on this subject: Vaadin Portlets in Liferay User Interface Development [Article] Creating a Basic Vaadin Project [Article] Getting Started with Ext GWT [Article]

Java Development

Packt
18 Jul 2013
16 min read
(For more resources related to this topic, see here.) Creating a Java project To create a new Java project, navigate to File | New | Project . You will be presented with the New Project wizard window that is shown in the following screenshot: Choose the Java Project option, and click on Next . The next page of the wizard contains the basic configuration of the project that you will create. The JRE section allows you to use a specific JRE to compile and run your project. The Project layout section allows you to choose if both source and binary files are created in the project's root folder or if they are to be separated into different folders (src and bin by default). The latter is the default option. You can create your project inside a working set. This is a good idea if you have too many projects in your workspace and want to keep them organized. Check the Creating working sets section of this article for more information on how to use and manage working sets. The next page of the wizard contains build path options. In the Managing the project build path section of this article , we will talk more about these options. You can leave everything as the default for now, and make the necessary changes after the project is created. Creating a Java class To create a new Java class, right-click on the project in the Package Explorer view and navigate to New | Class . You will be presented with the New Java Class window, where you will input information about your class. You can change the class's superclass, and add interfaces that it implements, as well as add stubs for abstract methods inherited from interfaces and abstract superclasses, add constructors from superclasses, and add the main method. To create your class inside a package, simply enter its name in the appropriate field, or click on the Browse button beside it and select the package. If you input a package name that doesn't exist, Eclipse will create it for you. New packages can also be created by right-clicking on the project in the Package Explorer and navigating to New | Package . Right-clicking on a package instead of a project in the Project Explorer and navigating to New | Class will cause the class to be created inside that package. Creating working sets Working sets provide a way to organize your workspace's projects into subsets. When you have too many projects in your workspace, it gets hard to find the project you're looking for in the Package Explorer view. Projects you are not currently working on, for example, can be kept in separate working sets. They won't get in the way of your current work but will be there in case you need them. To create a new working set, open the Package Explorer's view menu (white triangle in the top-right corner of the view), and choose Select Working Set . Click on New and select the type of projects that the working set will contain (Java , in this case). On the next page, insert the name of the working set, and choose which projects it will contain. Once the working set is created, choose the Selected Working Sets option, and mark your working set. Click on OK , and the Package Explorer will only display the projects inside the working set you've just created. Once your working sets are created, they are listed in the Package Explorer's view menu. Selecting one of them will make it the only working set visible in the Package Explorer. To view more than one working set at once, choose the Select Working Set option and mark the ones you want to show. 
Creating working sets

Working sets provide a way to organize your workspace's projects into subsets. When you have too many projects in your workspace, it gets hard to find the project you're looking for in the Package Explorer view. Projects you are not currently working on, for example, can be kept in separate working sets. They won't get in the way of your current work but will be there in case you need them.

To create a new working set, open the Package Explorer's view menu (the white triangle in the top-right corner of the view), and choose Select Working Set. Click on New and select the type of projects that the working set will contain (Java, in this case). On the next page, insert the name of the working set, and choose which projects it will contain. Once the working set is created, choose the Selected Working Sets option, and mark your working set. Click on OK, and the Package Explorer will only display the projects inside the working set you've just created.

Once your working sets are created, they are listed in the Package Explorer's view menu. Selecting one of them will make it the only working set visible in the Package Explorer. To view more than one working set at once, choose the Select Working Set option and mark the ones you want to show. To view the whole workspace again, choose Deselect Working Set in the view menu.

You can also view all the working sets with their nested projects by selecting working sets as the top-level element of the Package Explorer view. To do this, navigate to Top Level Elements | Working Sets in the view menu.

Although you don't see projects that belong to other working sets when a working set is selected, they are still loaded in your workspace, and therefore use resources on your machine. To avoid wasting these resources, you can close unrelated projects by right-clicking on them and selecting Close Project. You can select all the projects in a working set by using the Ctrl + A keyboard shortcut.

If you have a large number of projects but never work with all of them at the same time (personal and business projects, different clients' projects, and so on), you can also create a specific workspace for each project set. To create a new workspace, navigate to File | Switch Workspace | Other in the menu, enter the folder name of your new workspace, and click on OK. You can choose to copy the current workspace's layout and working sets in the Copy Settings section.
Importing a Java project

If you are going to work on an existing project, there are a number of different ways you can import it into Eclipse, depending on how you have obtained the project's source code. To open the Import wizard, navigate to File | Import. Let's go through the options under the General category:

Archive file: Select this option if the project you are working on already exists in your workspace, and you just want to import an archive file containing new resources into it. The Import wizard will list all the resources inside the archive file so that you can select the ones you wish to import. To select the project into which the resources will be imported, click on the Browse button. You can also select the folder in which the resources are to be included. Click on Finish when you are done. The imported resources will be decompressed and copied into the project's folder.

Existing Projects into Workspace: If you want to import a new project, select this option from the Import wizard. If the project's source has been compressed into an archive file (the .zip, .tar, .jar, or .tgz format), there's no need to decompress it; just mark the Select archive file option on the following page of the wizard, and point to the archive file. If you have already decompressed the code, mark Select root directory and point to the project. The wizard will list all the Eclipse projects found in the folder or archive file. Select the ones you wish to import and click on Finish. You can add the imported projects to a specific working set and choose whether the projects are to be copied into your workspace folder or not. It's highly recommended to do so, for both simplicity and portability; you know where all your Eclipse projects are, and it's easy to back up or move all of them to a different machine.

File System: Use this wizard if you already have a project in your workspace and want to add existing resources from your filesystem to it. On the next page, select the resources you wish to import by checking them. Click on the Browse button to select the project and the folder where the resources will be imported. When you click on the Finish button, the resources will be copied to the project's folder inside your workspace.

Preferences: You can import Eclipse preferences files into your workspace by selecting this option. A preferences file contains code style and compiler preferences, the list of installed JREs, and the Problems view configuration. You can choose which of these preferences you wish to import from the selected configuration file.

Importing a project from Version Control Servers

Projects that are stored in version control servers can be imported directly into Eclipse. There are a number of version control systems, each with its pros and cons, and most of them are supported by Eclipse via plugins. Git is one of the most widely used systems for version control, but CVS is the only version control system supported by default. To import a project managed by CVS, navigate to CVS | Projects from CVS in the Import wizard, fill in the server information on the following page, and click on Finish.

Introducing Java views

Eclipse's user interface consists of elements called views. The following sections will introduce the main views related to Java development.

The Package Explorer view

The Package Explorer view is the default view used to display a project's contents. As the name implies, it uses the package hierarchy of the project to display its classes, regardless of the actual file hierarchy. This view also displays the project's build path. The following screenshot shows how the Package Explorer view looks:

The Java Editor view

The Java Editor is the Eclipse component used to edit Java source files. It is the main view in the Java perspective and is located in the middle of the screen. The following screenshot shows the Java Editor view:

The Java Editor is much more than an ordinary text editor. It contains a number of features that make it easy for newcomers to start writing Java code and increase the productivity of experienced Java programmers. Let's talk about some of these features.

Compiling errors and warnings annotations

As you will see in more detail in the Building and running section, Eclipse builds your code automatically after every saved modification by default. This allows Eclipse to get the Java compiler output and mark errors and warnings throughout the code, making them easier to spot and correct. Warnings are underlined in yellow and errors in red.
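As a hedged illustration of these markers (the class, field, and method names are invented for the example, not taken from the text), the following class would typically show a warning annotation on the unused private field, and would show an error annotation if the commented-out call to a non-existing method were reinstated.

public class AnnotationDemo {

    private int unusedCounter; // flagged with a warning: the value of this private field is never used

    public static void main(String[] args) {
        new AnnotationDemo().greet();
    }

    public void greet() {
        System.out.println("Hello from the Java Editor");
        // Un-commenting the next line adds an error marker, since the method does not exist:
        // missingMethod();
    }
}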
Content assist

This is probably the most used Java Editor feature, both for novice and for experienced Java programmers. It lists all the methods callable on a given instance, along with their documentation. This feature works by default for the core Java classes and for the classes in your workspace. To enable it for external libraries, you will have to configure the build path for your project. We'll talk more about build paths further in this article, in the Managing the project build path section.

To see this feature in action, open a Java Editor, and create a new String instance:

String s = new String();

Now add a reference to this String instance, followed by a dot, and press Ctrl + Space bar. You will see a list of all the String and Object methods. This is far more practical than searching for the class's API in the Java documentation or memorizing it. The following screenshot shows the content assist feature in action:

This list can be filtered by typing the beginning of the method's name after the dot. Let's suppose you want to replace some characters in this String instance. As a novice Java programmer, you are not sure whether there's a method for that; and if there is, you are not sure which parameters it receives. It's a fair guess that the method's name probably starts with replace, right? So go ahead and type:

s.replace

When you press Ctrl along with the space bar, you will get a list of all the String methods whose names start with replace. By choosing one of them and pressing Enter, the editor completes the code with the rest of the method's name and its parameters. It will even suggest some variables in your code that you might want to use as parameters, as shown in the following screenshot:

Content assist will work with all classes in the project's classpath. You can disable content assist's automatic activation by unmarking Enable auto activation in the Preferences window, under Java | Editor | Content Assist.
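To round the example off, here is a hedged sketch of the kind of calls content assist would complete for replace; the variable names and the replacement values are assumptions made for the illustration.

public class ReplaceDemo {

    public static void main(String[] args) {
        String s = "replace some characters";

        // replace(char oldChar, char newChar) -- one of the overloads content assist lists
        String t = s.replace('a', 'o');

        // replace(CharSequence target, CharSequence replacement) -- another listed overload
        String u = s.replace("characters", "letters");

        System.out.println(t); // prints "reploce some chorocters"
        System.out.println(u); // prints "replace some letters"
    }
}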
Code navigation

When the project you are working on is big enough, finding a class in the Package Explorer can be a pain. You will frequently find yourself asking, "In which package is that class again?". You can leave the source code of the classes you are working on open in different tabs, but soon enough you will have more open tabs than you would like. Eclipse has an easy solution for this. In the toolbar, select Navigate | Open Type. Now, just type in the class's name, and click on OK. If you don't remember the full name of the class, you can use the wildcard characters ? (matches one character) and * (matches any number of characters). You can also use only the uppercase letters of a CamelCase name (for example, SIOOBE for StringIndexOutOfBoundsException). The shortcut for the Open Type dialog is Ctrl + Shift + T.

There's also an equivalent feature for finding and opening resources other than Java classes, such as HTML files, images, and plain text files. The shortcut for the Open Resource dialog is Ctrl + Shift + R.

You can also navigate to a class's source file by holding Ctrl and clicking on a reference to that class in the code. To navigate to a method's implementation or definition directly, hold Ctrl and click on the method's call.

Another useful feature that makes it easy to browse through your project's source files is the Link With Editor feature in the Package Explorer view, as shown in the following screenshot:

By enabling it, the selected resource in the Package Explorer will always be the one that's open in the editor. Using this feature together with Open Type is certainly the easiest way of finding a resource in the Package Explorer.

Quick fix

Whenever there's an error or warning marker in your code, Eclipse might have some suggestions on how to get rid of it. To open the Quick Fix menu containing the suggestions, place the caret on the marked piece of code related to the error or warning, right-click on it, and choose Quick Fix. You can also use the shortcut Ctrl + 1 with the caret placed on the marked piece of code. The following screenshot shows the quick fix feature suggesting that you either get rid of an unused variable, create getters and setters for it, or add a SuppressWarnings annotation:

Let's see some of the most used quick fixes provided by Eclipse. You can take advantage of them to speed up your code writing. You can, for example, deliberately call a method that throws an exception without the try/catch block, and use the quick fix to generate the block instead of writing it yourself.

Unhandled exceptions: When a method that throws an exception is called, and the exception is neither caught nor thrown, Eclipse will mark the call with an error. You can use the quick fix feature to surround the code with a proper try/catch block automatically. Just open the Quick Fix menu, and choose Surround with Try/Catch. It will generate a catch block that calls the printStackTrace() method of the thrown exception. If the method is already inside a try block, you can also choose the Add catch clause to the surrounding try option. If the exception shouldn't be handled in the current method, you can also use the Add throws declaration quick fix.

References to nonexisting methods and variables: With quick fix, Eclipse can create a stub for methods that are referenced in the code but don't exist yet. To illustrate this feature's usefulness, let's suppose you are working on a class's code, and you realize that you will need a method that performs some specific operation on two integers, returning another integer value. You can simply use the method, pretending that it exists:

int b = 4;
int c = 5;
int a = performOperation(b,c);

The method call will be marked with an error that says performOperation is undefined. To create a stub for this method, place the caret over the method's name, open the Quick Fix menu, and choose create method performOperation(int, int). A private method will be created with the correct parameters and return type, as well as a TODO marker inside it, reminding you that you still have to implement the method. You can also use a quick fix to create methods in other classes. Using the same example, you can create the performOperation() method in a different class, such as the following:

OperationPerformer op = new OperationPerformer();
int a = op.performOperation(b,c);

Speaking of classes, quick fix can also create one if you add a call to a non-existing class constructor. Non-existing variables can also be created with quick fix. As with method creation, just refer to a variable that doesn't exist yet, place the caret over it, and open the Quick Fix menu. You can create the variable either as a local variable, a field, or a parameter.

Remove dead code: Unused methods, constructors, and fields with private visibility are all marked with warnings. While the quick fix provided for unused methods and constructors is the most evident one (remove the dead code), it's also possible to generate getters and setters for unused private fields with a quick fix.
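For reference, the class below is a hedged sketch of roughly what the two quick fixes described above leave behind: a Surround with Try/Catch block around a call that declares a checked exception, and a generated performOperation stub with its TODO marker. The surrounding class name and the encoded string are assumptions made for the example, not code from the text.

import java.io.UnsupportedEncodingException;

public class QuickFixDemo {

    public static void main(String[] args) {
        // Roughly what "Surround with Try/Catch" produces around a call
        // that declares a checked exception.
        try {
            byte[] bytes = "quick fix".getBytes("UTF-8");
            System.out.println(bytes.length);
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        }

        int b = 4;
        int c = 5;
        int a = performOperation(b, c);
        System.out.println(a);
    }

    // Roughly what "create method performOperation(int, int)" generates
    // when the call is made from a static context.
    private static int performOperation(int i, int j) {
        // TODO Auto-generated method stub
        return 0;
    }
}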
Customizing the editor

Like almost everything in Eclipse, you can customize the Java Editor's appearance and behavior. There are plenty of configurations in the Preferences window (Window | Preferences) that will allow you to tailor the editor to suit your needs. Appearance-related configurations are mostly found under General | Appearance | Colors and Fonts, and behavior and feature configurations are mostly under General | Editors | Text Editors. Since there are lots of different categories and configurations, the filter text in the Preferences window might help you find what you want. A short list of the preferences you will most likely want to change is as follows:

Colors and fonts: Navigate to General | Appearance. In the Colors and Fonts configuration screen, you can see that the options are organized by categories. The ones inside the Basic and Java categories will affect the Java Editor.

Enable/Disable spell checking: The Eclipse editor comes with a spellchecker. While in some cases it can be useful, in many others you won't find much use for it. To disable or configure it, navigate to General | Editors | Text Editors | Spelling.

Annotations: You can edit the way annotations (warnings and errors, among others) are shown in the editor by navigating to General | Editors | Text Editors | Annotations inside the Preferences window. You can change colors, the way annotations are highlighted in the code (underline, squiggly line, box, among others), and whether they are shown in the vertical bar beside the code.

Show Line Numbers: To show line numbers on the left-hand side of the editor, mark the corresponding checkbox under General | Editors | Text Editors. Right-clicking on the bar on the editor's left-hand side brings up a menu in which you can also enable or disable line numbers.

Authorizations in SAP HANA

Packt
16 Jul 2013
28 min read
(For more resources related to this topic, see here.) Roles In SAP HANA, as in most of SAP's software, authorizations are grouped into roles. A role is a collection of authorization objects, with their associated privileges. It allows us, as developers, to define self-contained units of authorization. In the same way that at the start of this book we created an attribute view allowing us to have a coherent view of our customer data which we could reuse at will in more advanced developments, authorization roles allow us to create coherent developments of authorization data which we can then assign to users at will, making sure that users who are supposed to have the same rights always have the same rights. If we had to assign individual authorization objects to users, we could be fairly sure that sooner or later, we would forget someone in a department, and they would not be able to access the data they needed to do their everyday work. Worse, we might not give quite the same authorizations to one person, and have to spend valuable time correcting our error when they couldn't see the data they needed (or worse, more dangerous and less obvious to us as developers, if the user could see more data than was intended). It is always a much better idea to group authorizations into a role and then assign the role to users, than assign authorizations directly to users. Assigning a role to a user means that when the user changes jobs and needs a new set of privileges; we can just remove the first role, and assign a second one. Since, we're just starting out using authorizations in SAP HANA, let's get into this good habit right from the start. It really will make our lives easier later on. Creating a role Role creation is done, like all other SAP HANA development, in the Studio. If your Studio is currently closed, please open it, and then select the Modeler perspective. In order to create roles, privileges, and users, you will yourself need privileges. Your SAP HANA user will need the ROLE ADMIN, USER ADMIN, and CREATE STRUCTURED PRIVILEGE system privileges in order to do the development work in this article. You will see in the Navigator panel we have a Security folder, as we can see here: Please find the Security folder and then expand this folder. You will see a subfolder called Roles. Right-click on the Roles folder and select New Role to start creating a role. On the screen which will open, you will see a number of tabs representing the different authorization objects we can create, as we can see here: We'll be looking at each of these in turn, in the following sections, so for the moment just give your role Name (BOOKUSER might be appropriate, if not very original). Granted roles Like many other object types in SAP HANA, once you have created a role, you can then use it inside another role. This onion-like arrangement makes authorizations a lot easier to manage. If we had, for example, a company with two teams: Sales   Purchasing   And two countries, say: France   Germany   We could create a role giving access to sales analytic views, one giving purchasing analytic views, one giving access to data for France, and one giving access to data for Germany. We could then create new roles, say Sales-France, which don't actually contain any authorization objects themselves, but contain only the Sales and the France roles. The role definition is much simpler to understand and to maintain than if we had directly created the Sales-France role and a Sales-Germany role with all the underlying objects. 
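The article builds these roles in the Studio, but the same nesting can also be expressed in SQL and scripted, for example over JDBC. The sketch below is only an illustration under assumptions: the role names, host, port, and credentials are invented for the example, the SAP HANA JDBC driver (ngdbc.jar) is assumed to be on the classpath, and this is not presented as the book's own procedure.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RoleNestingSketch {

    public static void main(String[] args) throws Exception {
        // Connection details are placeholders -- adjust the host, instance port, and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sap://hanahost:30015", "SYSTEM", "secret");
             Statement stmt = conn.createStatement()) {

            // Small, self-contained roles...
            stmt.execute("CREATE ROLE SALES");
            stmt.execute("CREATE ROLE FRANCE");

            // ...and a composite role that only groups them together.
            stmt.execute("CREATE ROLE SALES_FRANCE");
            stmt.execute("GRANT SALES TO SALES_FRANCE");
            stmt.execute("GRANT FRANCE TO SALES_FRANCE");
        }
    }
}

Granting the composite role to a user then brings both sets of privileges with it, which is exactly the maintenance benefit described above.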
Once again, as with other development objects, creating small self-contained roles and reusing them when possible will make your (maintenance) life easier. In the Granted Roles tab we can see the list of subroles this main role contains. Note that this list is only a pointer, you cannot modify the actual authorizations and the other roles given here, you would need to open the individual role and make changes there. Part of roles The Part of Roles tab in the role definition screen is exactly the opposite of the Granted Roles tab. This tab lists all other roles of which this role is a subrole. It is very useful to track authorizations, especially when you find yourself in a situation where a user seems to have too many authorizations and can see data they shouldn't be able to see. You cannot manipulate this list as such, it exists for information only. If you want to make changes, you need to modify the main role of which this role is a subrole. SQL privileges An SQL privilege is the lowest level at which we can define restrictions for using database objects. SQL privileges apply to the simplest objects in the database such as schemas, tables and so on. No attribute, analytical, or calculation view can be seen by SQL privileges. This is not strictly true, though you can consider it so. What we have seen as an analytical view, for example, the graphical definition, the drag and drop, the checkboxes, has been transformed into a real database object in the _SYS_BIC schema upon activation. We could therefore define SQL privileges on this database object if we wanted, but this is not recommended and indeed limits the control we can have over the view. We'll see a little later that SAP HANA has much finer-grained authorizations for views than this. An important thing to note about SQL privileges is that they apply to the object on which they are defined. They restrict access to a given object itself, but do not at any point have any impact on the object's contents. For example, we can decide that one of our users can have access to the CUSTOMER table, but we couldn't restrict their access to only CUSTOMER values from the COUNTRY USA. SQL privileges can control access to any object under the Catalog node in the Navigator panel. Let's add some authorizations to our BOOK schema and its contents. At the top of the SQL Privileges tab is a green plus sign button. Now click on this button to get the Select Catalog Object dialog, shown here: As you can see in the screenshot, we have entered the two letters bo into the filter box at the top of the dialog. As soon as you enter at least two letters into this box, the Studio will attempt to find and then list all database objects whose name contains the two letters you typed. If you continue to type, the search will be refined further. The first item in the list shown is the BOOK schema we created right back at the start of the book in the Chapter 2, SAP HANA Studio - Installation and First Look . Please select the BOOK item, and then click on OK to add it to our new role: The first thing to notice is the warning icon on the SQL Privileges tab itself: This means that your role definition is incomplete, and the role cannot be activated and used as yet. On the right of the screen, a list of checkbox options has appeared. These are the individual authorizations appropriate to the SQL object you have selected. In order to grant rights to a user via a role, you need to decide which of these options to include in the role. 
The individual authorization names are self-explicit. For example, the CREATE ANY authorization allows creation of new objects inside a schema. The INSERT or SELECT authorization might at first seem unusual for a schema, as it's not an object which can support such instructions. However, the usage is actually quite elegant. If a user has INSERT rights on the schema BOOK, then they have INSERT rights on all objects inside the schema BOOK. Granting rights on the schema itself avoids having to specify the names of all objects inside the schema. It also future-proofs your authorization concept, since new objects created in the schema will automatically inherit from the existing authorizations you have defined. On the far right of the screen, alongside each authorization is a radio button which gives an additional privilege, the possibility for a given user to, in turn, give the rights to a second user. This is an option which should not be given to all users, and so should not be present in all roles you create; the right to attribute privileges to users should be limited to your administrators. If you give just any user the right to pass on their authorizations further, you will soon find that you are no longer able to determine who can do what in your database. For the moment we are creating a simple role to show the working of the authorization concept in SAP HANA, so we will check all the checkboxes, and leave the radio buttons at No : There are some SQL privileges which are necessary for any user to be able to do work in SAP HANA. These are listed below. They give access to the system objects describing the development models we create in SAP HANA, and if a user does not have these privileges, nothing will work at all, the user will not be authorized to do anything. The SQL privileges you will need to add to the role in order to give access to basic SAP HANA system objects are: The SELECT privilege on the _SYS_BI schema   The SELECT privilege on the _SYS_REPO schema   The EXECUTE privilege on the REPOSITORY_REST procedure   Please add these SQL privileges to your role now, in order to obtain the following result: As you can see with the configuration we have just done, SQL privileges allow a user to access a given object and allow specific actions on the object. They do not however allow us to specify particular authorizations to the contents of the object. In order to use such fine-grained rights, we need to create an analytic privilege, and then add it to our role, so let's do that now. Analytic privileges An analytic privilege is an artifact unique to SAP HANA, it is not part of the standard SQL authorization concept. Analytic privileges allow us to restrict access to certain values of a given attribute, analytic, or calculation view. This means that we can create one view, which by default shows all available data, and then restrict what is actually visible to different users. We could restrict visible data by company code, by country, or by region. For example, our users in Europe would be allowed to see and work with data from our customers in Europe, but not those in the USA. An analytic privilege is created through the Quick Launch panel of Modeler , so please open that view now (or switch to the Quick Launch tab if it's already open). You don't need to close the role definition tab that's already open, we can leave it for now, create our analytic privilege, and then come back to the role definition later. From the Quick Launch panel, select Analytic Privilege , and then Create . 
As usual with SAP HANA, we are asked to give Name , Description , and select a package for our object. We'll call it AP_EU (for analytic privilege, Europe), use the name as the description, and put it into our book package alongside our other developments. As is common in SAP HANA, we have the option of creating an analytic privilege from scratch (Create New ) or copying an existing privilege (Copy From ). We don't currently have any other analytic privileges in our development, so leave Create New selected, then click on Next to go to the second screen of the wizard, shown here: On this page of the dialog, we are prompted to add development models to the analytic privilege. This will then allow us to restrict access to given values of these models. In the previous screenshot, we have added the CUST_REV analytic view to the analytic privilege. This will allow us to restrict access to any value we specify of any of the fields visible in the view. To add a view to the analytic privilege, just find it in the left panel, click on its name and then click on the Add button. Once you have added the views you require for your authorizations, click on the Finish button at the bottom of the window to go to the next step. You will be presented with the analytic privilege development panel, reproduced here: This page allows us to define our analytic privilege completely. On the left we have the list of database views we have included in the analytic privilege. We can add more, or remove one, using the Add and Remove buttons. To the right, we can see the Associated Attributes Restrictions and Assign Restrictions boxes. These are where we define the restrictions to individual values, or sets of values. In the top box, Associated Attributes Restrictions , we define on which attributes we want to restrict access (country code or region, maybe). In the bottom box, Assign Restrictions , we define the individual values on which to restrict (for example, for company code, we could restrict to value 0001, or US22; for region, we could limit access to EU or USA). Let's add a restriction to the REGION field of our CUST_REV view now. Click on the Add button next to the Associated Attributes Restrictions box, to see the Select Object dialog: As can be expected, this dialog lists all the attributes in our analytic view. We just need to select the appropriate attribute and then click on OK to add it to the analytic privilege. Measures in the view are not listed in the dialog. We cannot restrict access to a view according to numeric values. We cannot therefore, make restrictions to customers with a revenue over 1 million Euros, for example. Please add the REGION field to the analytic privilege now. Once the appropriate fields have been added, we can define the restrictions to be applied to them. Click on the REGION field in the Associated Attributes Restrictions box, then on the Add button next to the Assign Restrictions box, to define the restrictions we want to apply. As we can see, restrictions can be defined according to the usual list of comparison operators. These are the same operators we used earlier to define a restricted column in our analytic views. In our example, we'll be restricting access to those lines with a REGION column equal to EU, so we'll select Equal . In the Value column, we can either type the appropriate value directly, or use the value help button, and the familiar Value Help Dialog which will appear, to select the value from those available in the view. 
Please add the EU value, either by typing it or by having SAP HANA find it for us, now. There is one more field which needs to be added to our analytic privilege, and the reason behind might seem at first a little strange. This point is valid for SAP HANA SP5, up to and including (at least) release 50 of the software. If this point turns out to be a bug, then it might not be necessary in later versions of the software. The field on which we want to restrict user actions (REGION) is not actually part of the analytic view itself. REGION, if you recall, is a field which is present in CUST_REV , thanks to the included attribute view CUST_ATTR . In its current state, the analytic privilege will not work, because no fields from the analytic view are actually present in the analytic privilege. We therefore need to add at least one of the native fields of the analytic view to the analytic privilege. We don't need to do any restriction on the field; however it needs to be in the privilege for everything to work as expected. This is hinted at in SAP Note 1809199, SAP HANA DB: debugging user authorization errors. Only if a view is included in one of the cube restrictions and at least one of its attribute is employed by one of the dimension restrictions, access to the view is granted by this analytical privilege. Not an explicit description of the workings of the authorization concept, but close. Our analytic view CUST_REV contains two native fields, CURRENCY and YEAR. You can add either of these to the analytic privilege. You do not need to assign any restrictions to the field; it just needs to be in the privilege. Here is the state of the analytic privilege when development work on it is finished: The Count column lists the number of restrictions in effect for the associated field. For the CURRENCY field, no restrictions are defined. We just need (as always) to activate our analytic privilege in order to be able to use it. The activation button is the same one as we have used up until now to activate the modeling views, the round green button with the right-facing white arrow at the top-right of the panel, which you can see on the preceding screenshot. Please activate the analytic privilege now. Once that has been done, we can add it to our role. Return to the Role tab (if you left it open) or reopen the role now. If you closed the role definition tab earlier, you can get back to our role by opening the Security node in the Navigator panel, then opening Roles, and double-clicking on the BOOKUSER role. In the Analytic Privileges tab of the role definition screen, click on the green plus sign at the top, to add an analytic privilege to our role. The analytic privilege we have just created is called AP_EU, so type ap_eu into the search box at the top of the dialog window which will open. As soon as you have typed at least two characters, SAP HANA will start searching for matching analytic privileges, and your AP_EU privilege will be listed, as we can see here: Click on OK to add the privilege to the role. We will see in a minute the effect our analytic privilege has on the rights of a particular user, but for the moment we can take a look at the second-to-last tab in the role definition screen, System Privileges . System privileges As its name suggests, system privileges gives to a particular user the right to perform specific actions on the SAP HANA system itself, not just on a given table or view. 
These are particular rights which should not be given to just any user, but should be reserved to those users who need to perform a particular task. We'll not be adding any of these privileges to our role, however we'll take a look at the available options and what they are used for. Click on the green plus-sign button at the top of the System Privileges tab to see a list of the available privileges. By default the dialog will do a search on all available values; there are only fifteen or so, but you can as usual filter them down if you require using the filter box at the top of the dialog: For a full list of the system privileges available and their uses, please refer to the SAP HANA SQL Reference, available on the help.sap.com website at http://help.sap.com/hana/html/sql_grant.html. Package privileges The last tab in the role definition screen concerns Package Privileges . These allow a given user to access those objects in a package. In our example, the package is called book, so if we add the book package to our role in the Package Privileges tab, we will see the following result: Assigning package privileges is similar to assigning SQL privileges we saw earlier. We first add the required object (here our book package), then we need to indicate exactly which rights we give to the role. As we can see in the preceding screenshot, we have a series of checkboxes on the right-hand side of the window. At least one of these checkboxes must be checked in order to save the role. The individual rights have names which are fairly self-explanatory. REPO.READ gives access to read the package, whereas REPO.EDIT_NATIVE_OBJECTS allows modification of objects, for example. The role we are creating is destined for an end user who will need to see the data in a role, but should not need to modify the data models in any way (and in fact we really don't want them to modify our data models, do we?). We'll just add the REPO.READ privilege, on our book package, to our role. Again we can decide whether the end user can in turn assign this privilege to others. And again, we don't need this feature in our role. At this point, our role is finished. We have given access to the SQL objects in the BOOK schema, created an analytic privilege which limits access to the Europe region in our CUST_REV model, and given read-only access to our book package. After activation (always) we'll be able to assign our role to a test user, and then see the effect our authorizations have on what the user can do and see. Please activate the role now. Users Users are probably the most important part of the authorization concept. They are where all our problems begin, and their attempts to do and see things they shouldn't are the main reason we have to spend valuable time defining authorizations in the first place. In technical terms, a user is just another database object. They are created, modified, and deleted in the same way a modeling view is. They have properties (their name and password, for example), and it is by modifying these properties that we influence the actions that the person who connects using the user can perform. Up until now we have been using the SYSTEM user (or the user that your database administrator assigned to you). This user is defined by SAP, and has basically the authorizations to do anything with the database. Use of this user is discouraged by SAP, and the author really would like to insist that you don't use it for your developments. 
Accidents happen, and one of the great things about authorizations is that they help to prevent accidents. If you try to delete an important object with the SYSTEM user, you will delete it, and getting it back might involve a database restore. If however you use a development user with less authorization, then you wouldn't have been allowed to do the deletion, saving a lot of tears. Of course, the question then arises, why have you been using the SYSTEM user for the last couple of hundred pages of development. The answer is simple: if the author had started the book with the authorizations article, not many readers would have gotten past page 10. Let's create a new user now, and assign the role we have just created. From the Navigator panel, open the Security node, right-click on User , and select New User from the menu to obtain the user creation screen as shown in the following screenshot: Defining a user requires remarkably little information: User Name : The login that the user will use. Your company might have a naming convention for users. Users might even already have a standard login they use to connect to other systems in your enterprise. In our example, we'll create a user with the (once again rather unimaginative) name of BOOKU.   Authentication : How will SAP HANA know that the user connecting with the name of ANNE really is Anne? There are three (currently) ways of authenticating a user with SAP HANA. Password : This is the most common authentication system, SAP HANA will ask Anne for her password when she connects to the system. Since Anne is the only person who knows her password, we can be sure that Anne really is ANNE, and let her connect and do anything the user ANNE is allowed to do. Passwords in SAP HANA have to respect a certain format. By default this format is one capital, one lowercase, one number, and at least eight characters. You can see and change the password policy in the system configuration. Double-click on the system name in the Navigator panel, click on the Configuration tab, type the word pass into the filter box at the top of the tab, and scroll down to indexserver.ini and then password policy . The password format in force on your system is listed as password_layout . By default this is A1a, meaning capitals, numbers, and lowercase letters are allowed. The value can also contain the # character, meaning that special characters must also be contained in the password. The only special characters allowed by SAP HANA are currently the underscore, dollar sign, and the hash character. Other password policy defaults are also listed on this screen, such as maximum_password_lifetime (the time after which SAP HANA will force you to change your password).   Kerberos and SAML : These authentication systems need to be set up by your network administrator and allow single sign-on in your enterprise. This means that SAP HANA will be able to see the Windows username that is connecting to the system. The database will assume that the authentication part (deciding whether Anne really is ANNE) has already been done by Windows, and let the user connect.     Session Client : As we saw when we created attribute and analytic views back at the start of the book, SAP HANA understands the notion of client, referring to a partition system of the SAP ERP database. In the SAP ERP, different users can work in different Clients. In our development, we filtered on Client 100. 
A much better way of handling filtering is to define the default client for a user when we define their account. The Session Client field can be filled with the ERP Client in which the user works. In this way we do not need to filter on the analytic models, we can leave their client value at Dynamic in the view, and the actual value to use will be taken from the user record. Once again this means maintenance of our developments is a lot simpler. If you like, you can take a few minutes at the end of this article to create a user with a session client value of 100, then go back and reset our attribute and analytic views' default client value to Dynamic, reactivate everything, and then do a data preview with your test user. The result should be identical to that obtained when the view was filtered on client 100. However, if you then create a second user with a session client of 200, this second user will see different data.   We'll create a user with a password login, so type a password for your user now. Remember to adhere to the password policy in force on your system. Also note that the user will be required to change their password on first login. At the bottom of the user definition screen, as we can see from the preceding screenshot, we have a series of tabs corresponding to the different authorizations we can assign to our user. These are the same tabs we saw earlier when defining a role. As explained at the beginning of this article, it is considered best practice to assign authorizations to a role and then the role to a user, rather than assign authorizations directly to a user; this makes maintenance easier. For this reason we will not be looking at the different tabs for assigning authorizations to our user, other than the first one, Granted Roles . The Granted Roles tab lists, and allows adding and removing roles from the list assigned to the user. By default when we create a user, they have no roles assigned, and hence have no authorizations at all in the system. They will be able to log in to SAP HANA but will be able to do no development work, and will see no data from the system. Please click on the green plus sign button in the Granted Roles tab of the user definition screen, to add a role to the user account. You will be provided with the Select Role dialog, shown in part here: This dialog has the familiar search box at the top, so typing the first few letters of a role name will bring up a list of matching roles. Here our role was called BOOKUSER, so please do a search for it, then select it in the list and click on OK to add it to the user account. Once that is done, we can test our user to verify that we can perform the necessary actions with the role and user we have just created. We just need, as with all objects in SAP HANA, to activate the user object first. As usual, this is done with the round green button with the right-facing white arrow at the top-right of the screen. Please do this now. Testing our user and role The only real way to check if the authorizations we have defined are appropriate to the business requirements is to create a user and then try out the role to see what the user can and cannot see and do in the system. The first thing to do is to add our new user to the Studio so we can connect to SAP HANA using this new user. To do this, in the Navigator panel, right click on the SAP HANA system name, and select Add Additional User from the menu which appears. 
This will give you the Add additional user dialog, shown in the following screenshot:     Enter the name of the user you just created (BOOKU) and the password you assigned to the user. You will be required to change the password immediately: Click on Finish to add the user to the Studio. You will see immediately in the Navigator panel that we can now work with either our SYSTEM user, or our BOOKU user: We can also see straight away that BOOKU is missing the privileges to perform or manage data backups; the Backup node is missing from the list for the BOOKU user. Let's try to do something with our BOOKU user and see how the system reacts. The way the Studio lets you handle multiple users is very elegant, since the tree structure of database objects is duplicated, one per user, you can see immediately how the different authorization profiles affect the different users. Additionally, if you request a data preview from the CUST_REV analytic view in the book package under the BOOKU user's node in the Navigator panel, you will see the data according to the BOOKU user's authorizations. Requesting the same data preview from the SYSTEM user's node will see the data according to SYSTEM's authorizations. Let's do a data preview on the CUST_REV view with the SYSTEM user, for reference: As we can see, there are 12 rows of data retrieved, and we have data from the EU and NAR regions. If we ask for the same data preview using our BOOKU user, we can see much less data: BOOKU can only see nine of the 12 data rows in our view, as no data from the NAR region is visible to the BOOKU user. This is exactly the result we aimed to achieve using our analytic privilege, in our role, assigned to our user. Summary In this article, we have taken a look at the different aspects of the authorization concept in SAP HANA. We examined the different authorization levels available in the system, from SQL privileges, analytic privileges, system privileges, and package privileges. We saw how to add these different authorization concepts to a role, a reusable group of authorizations. We went on to create a new user in our SAP HANA system, examining the different types of authentications available, and the assignment of roles to users. Finally, we logged into the Studio with our new user account, and found out the first-hand effect our authorizations had on what the user could see and do. In the next article, we will be working with hierarchical data, seeing what hierarchies can bring to our reporting applications, and how to make the best use of them. Resources for Article : Further resources on this subject: SAP Netweaver: Accessing the MDM System [Article] SAP HANA integration with Microsoft Excel [Article] Exporting SAP BusinessObjects Dashboards into Different Environments [Article]