
How-To Tutorials - CMS & E-Commerce


Learn Cinder Basics – Now

Packt
21 Mar 2013
13 min read
(For more resources related to this topic, see here.)

What is creative coding

This is a really short introduction to what creative coding is, and I'm sure it is possible to find out much more about this topic on the Internet. Nevertheless, I will try to explain how it looks from my perspective.

Creative coding is a relatively new term for a field that combines coding and design. The central part of the term might be the "coding" one: to become a creative coder, you need to know how to write code and some other things about programming in general. The other part, the "creative" one, covers design and all the other things that can be combined with coding. Being skilled in coding and design at the same time lets you explain your ideas as working prototypes for interface designs, art installations, phone applications, and other fields. It can save the time and effort you would otherwise spend explaining your ideas to someone else so that he/she could help you. The creative coding approach may not work so well in large projects, unless more than one creative coder is involved.

A lot of new tools that make programming more accessible have emerged during the last few years. All of them are easy to use, but usually the less complicated a tool is, the less powerful it is, and vice versa.

A few words about Cinder

So we are up to some Cinder coding! Cinder is one of the most professional and powerful creative coding frameworks that you can get for free on the Internet. It can help you if you are creating a really complicated interactive real-time audio-visual piece, because it uses one of the most popular and powerful low-level programming languages out there, C++, and relies on a minimum of third-party code libraries. The creators of Cinder also try to use all the newest C++ language features, even those that are not standardized yet (but soon will be), by using the so-called Boost libraries.

This book is not intended as an A-to-Z guide to Cinder, the C++ programming language, or the areas of mathematics involved. It is a short introduction for those of us who have been working with similar frameworks or tools and already know some programming. As Cinder relies on C++, the more we know about it the better. Knowledge of ActionScript, Java, or even JavaScript will help you understand what is going on here.

Introducing the 3D space

To use Cinder with 3D we need to understand a bit about 3D computer graphics. The first thing we need to know is that 3D graphics are created in a three-dimensional space that exists somewhere in the computer and is afterwards transformed into a two-dimensional image that can be displayed on our computer screen.

Usually there is a projection (frustum) that has different properties, similar to the properties of cameras we have in the real world. The frustum takes care of rendering all the 3D objects that are visible inside it, and it is responsible for creating the 2D image that we see on the screen. As you can see in the preceding figure, all objects inside the frustum are rendered on the screen, while objects outside the view frustum are ignored.

OpenGL (which is used for drawing in Cinder) relies on the so-called rendering pipeline to map the 3D coordinates of the objects to 2D screen coordinates. Three kinds of matrices are used for this process: the model, view, and projection matrices.
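As a compact restatement of the pipeline just described (this equation is an added sketch, not part of the original article), a vertex given in the object's local coordinates reaches the screen via

    \[ \mathbf{v}_{\text{screen}} \;\leftarrow\; P \cdot V \cdot M \cdot \mathbf{v}_{\text{local}} \]

where \(M\) is the model matrix, \(V\) the view matrix, and \(P\) the projection matrix; the projected result is finally mapped to 2D viewport coordinates.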
The model matrix maps the 3D object's local coordinates to the world (or global) space, the view matrix maps them to the camera space, and finally the projection matrix takes care of the mapping to the 2D screen space. Older versions of OpenGL combine the model and view matrices into one, the modelview matrix.

The coordinate system in Cinder starts from the top-left corner of the screen. Any object placed there has the coordinates 0, 0, 0 (these are the values of x, y, and z respectively). The x axis extends to the right, y to the bottom, and z extends towards the viewer (us), as shown in the following figure:

Drawing in 3D

Let's try to draw something, taking into account that there is a third dimension. Create another project by using TinderBox and name it Basic3D. Open the project file (xcode/Basic3D.xcodeproj on Mac or vc10\Basic3D.sln on Windows). Open Basic3DApp.cpp in the editor and navigate to the draw() method implementation. Just after the gl::clear() call, add the following line to draw a cube:

    gl::drawCube( Vec3f(0,0,0), Vec3f(100,100,100) );

The first parameter defines the position of the center of the cube, the second defines its size. Note that we use Vec3f() values to define position and size in three (x, y, and z) dimensions. Compile and run the project. This will draw a solid cube at the top-left corner of the screen. We are able to see just one quarter of it because the center of the cube is the reference point. Let's move it to the center of the screen by transforming the previous line as follows:

    gl::drawCube( Vec3f(getWindowWidth()/2,getWindowHeight()/2,0), Vec3f(100,100,100) );

Now we are positioning the cube in the middle of the screen no matter what the window's width or height is, because we pass half of the window's width (getWindowWidth()/2) and half of the window's height (getWindowHeight()/2) as the values for the x and y coordinates of the cube's location. Compile and run the project to see the result. Play around with the size parameters to understand the logic behind it.

We may want to rotate the cube a bit. There is a built-in rotate() function that we can use. One thing we have to remember, though, is that we have to use it before drawing the object. So add the following line before gl::drawCube():

    gl::rotate( Vec3f(0,1,0) );

Compile and run the project. You should see a strange rotation animation around the y axis. The problem here is that the rotate() function rotates the whole 3D world of our application, including the object in it, and it does so by taking into account the scene coordinates. As the center of the 3D world (the place where all axes cross and are zero) is in the top-left corner, all rotation happens around this point. To change that we have to use the translate() function. It is used to move the scene (or canvas) before we call rotate() or drawCube(). To make our cube rotate around the center of the screen, we have to perform the following steps:

1. Use the translate() function to translate the 3D world to the center of the screen.
2. Use the rotate() function to rotate the 3D world.
3. Draw the object (drawCube()).
4. Use the translate() function to translate the scene back.

We have to use the translate() function to translate the scene back to its original location because each time we call translate() the values are added instead of being replaced.
In code it should look similar to the following:

    gl::translate( Vec3f(getWindowWidth()/2,getWindowHeight()/2,0) );
    gl::rotate( Vec3f::yAxis()*1 );
    gl::drawCube( Vec3f::zero(), Vec3f(100,100,100) );
    gl::translate( Vec3f(-getWindowWidth()/2,-getWindowHeight()/2,0) );

So now we get a smooth rotation of the cube around the y axis. The rotation angle around the y axis is increased in each frame by 1 degree as we pass the Vec3f::yAxis()*1 value to the rotate() function. Experiment with the rotation values to understand this a bit more.

What if we want the cube to be in a constant rotated position? We have to remember that the rotate() function works similarly to the translate() function: it adds values to the rotation of the scene instead of replacing them. Instead of rotating the object back, we will use the pushModelView() and popModelView() functions.

Rotation and translation are transformations. Every time you call translate() or rotate(), you are modifying the modelview matrix, and once something is done it is sometimes not so easy to undo. Every time you transform something, the changes are made on top of all previous transformations in the current state. So what is this state? Each state contains a copy of the current transformation matrices. By calling pushModelView() we enter a fresh state by making a copy of the current modelview matrix and storing it on the stack. We can then make some crazy transformations without worrying about how we will undo them. To go back, we call popModelView(), which pops (or deletes) the current modelview matrix from the stack and returns us to the state with the previous modelview matrix.

So let's try this out by adding the following code after the gl::clear() call:

    gl::pushModelView();
    gl::translate( Vec3f(getWindowWidth()/2,getWindowHeight()/2,0) );
    gl::rotate( Vec3f(35,20,0) );
    gl::drawCube( Vec3f::zero(), Vec3f(100,100,100) );
    gl::popModelView();

Compile and run our program now; you should see something similar to the following screenshot:

As we can see, before doing anything we create a copy of the current state with pushModelView(). Then we do the same as before: translate our scene to the middle of the screen, rotate it (this time 35 degrees around the x axis and 20 degrees around the y axis), and finally draw the cube! To reset the scene to the state it was in before, we use just one line of code, popModelView().

Using built-in eases

Now, say we want to make use of the easing algorithms that we saw in the EaseGallery sample. To do that, we have to change the code by following certain steps. To use the easing functions, we have to include the Easing.h header file:

    #include "cinder/Easing.h"

First we are going to add two more variables, startPosition and circleTimeBase:

    Vec2f startPosition[CIRCLE_COUNT];
    Vec2f currentPosition[CIRCLE_COUNT];
    Vec2f targetPosition[CIRCLE_COUNT];
    float circleRadius[CIRCLE_COUNT];
    float circleTimeBase[CIRCLE_COUNT];

Then, in the setup() method implementation, we have to change the currentPosition parts to startPosition and add an initial value to the circleTimeBase array members:

    startPosition[i].x = Rand::randFloat(0, getWindowWidth());
    startPosition[i].y = Rand::randFloat(0, getWindowHeight());
    circleTimeBase[i] = 0;

Next, we have to change the update() method so that it can be used along with the easing functions.
They are based on time, and they return a floating point value between 0 and 1 that defines the playhead position on an abstract 0 to 1 timeline:

    void BasicAnimationApp::update() {
        Vec2f difference;
        for (int i=0; i<CIRCLE_COUNT; i++) {
            difference = targetPosition[i] - startPosition[i];
            currentPosition[i] = easeOutExpo( getElapsedSeconds()-circleTimeBase[i] ) * difference + startPosition[i];
            if ( currentPosition[i].distance(targetPosition[i]) < 1.0f ) {
                targetPosition[i].x = Rand::randFloat(0, getWindowWidth());
                targetPosition[i].y = Rand::randFloat(0, getWindowHeight());
                startPosition[i] = currentPosition[i];
                circleTimeBase[i] = getElapsedSeconds();
            }
        }
    }

The changed parts of the preceding code snippet are the currentPosition[i] calculation and the bookkeeping at the end of the loop; the most important is the currentPosition[i] calculation. We take the distance between the start and end points of the timeline and multiply it by the position floating point number returned by our easing function, which in this case is easeOutExpo(). Again, it returns a floating point value between 0 and 1 that represents the position on an abstract 0 to 1 timeline. If we multiply any number by, say, 0.33f, we get one-third of that number; by 0.5f, one-half of that number; and so on. So we add this scaled distance to the circle's starting position and we get its current position!

Compile and run our application now. You should see something as follows:

Almost like a snow storm! We will add a small modification to the code, though. I will add a TWEEN_SPEED definition at the top of the code and multiply the time parameter passed to the ease function by it, so we can control the speed of the circles:

    #define TWEEN_SPEED 0.2

Change the following line in the update() method implementation:

    currentPosition[i] = easeOutExpo( (getElapsedSeconds()-circleTimeBase[i])*TWEEN_SPEED ) * difference + startPosition[i];

I did this because the default time base for each tween is 1 second. That means each transition takes exactly 1 second, and that's a bit too fast for our current situation. We want it to be slower, so we multiply the time we pass to the easing function by a floating point number that is less than 1.0f and greater than 0.0f. By doing that we ensure that the time is scaled down, and instead of 1 second we get 5 seconds for our transition. So try to compile and run this, and see for yourself!
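To restate the calculation above as a formula (this worked equation is an addition for clarity, not part of the original article):

    \[ \text{currentPosition}_i(t) \;=\; \text{startPosition}_i \;+\; \text{easeOutExpo}\big((t - t_{0,i})\cdot \text{TWEEN\_SPEED}\big)\cdot\big(\text{targetPosition}_i - \text{startPosition}_i\big) \]

where \(t\) is getElapsedSeconds() and \(t_{0,i}\) is circleTimeBase[i]. Since the ease function returns a value between 0 and 1 for inputs between 0 and 1, scaling the elapsed time by TWEEN_SPEED = 0.2 means its argument reaches 1 only after \(1 / 0.2 = 5\) seconds, which is where the 5-second transition comes from.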
Here is the full source code of our circle-animation application:

    #include "cinder/app/AppBasic.h"
    #include "cinder/gl/gl.h"
    #include "cinder/Rand.h"
    #include "cinder/Easing.h"

    #define CIRCLE_COUNT 100
    #define TWEEN_SPEED 0.2

    using namespace ci;
    using namespace ci::app;
    using namespace std;

    class BasicAnimationApp : public AppBasic {
    public:
        void setup();
        void update();
        void draw();
        void prepareSettings( Settings *settings );

        Vec2f startPosition[CIRCLE_COUNT];
        Vec2f currentPosition[CIRCLE_COUNT];
        Vec2f targetPosition[CIRCLE_COUNT];
        float circleRadius[CIRCLE_COUNT];
        float circleTimeBase[CIRCLE_COUNT];
    };

    void BasicAnimationApp::prepareSettings( Settings *settings ) {
        settings->setWindowSize(800,600);
        settings->setFrameRate(60);
    }

    void BasicAnimationApp::setup() {
        for(int i=0; i<CIRCLE_COUNT; i++) {
            currentPosition[i].x = Rand::randFloat(0, getWindowWidth());
            currentPosition[i].y = Rand::randFloat(0, getWindowHeight());
            targetPosition[i].x = Rand::randFloat(0, getWindowWidth());
            targetPosition[i].y = Rand::randFloat(0, getWindowHeight());
            circleRadius[i] = Rand::randFloat(1, 10);
            startPosition[i].x = Rand::randFloat(0, getWindowWidth());
            startPosition[i].y = Rand::randFloat(0, getWindowHeight());
            circleTimeBase[i] = 0;
        }
    }

    void BasicAnimationApp::update() {
        Vec2f difference;
        for (int i=0; i<CIRCLE_COUNT; i++) {
            difference = targetPosition[i] - startPosition[i];
            currentPosition[i] = easeOutExpo( (getElapsedSeconds()-circleTimeBase[i]) * TWEEN_SPEED ) * difference + startPosition[i];
            if ( currentPosition[i].distance( targetPosition[i] ) < 1.0f ) {
                targetPosition[i].x = Rand::randFloat(0, getWindowWidth());
                targetPosition[i].y = Rand::randFloat(0, getWindowHeight());
                startPosition[i] = currentPosition[i];
                circleTimeBase[i] = getElapsedSeconds();
            }
        }
    }

    void BasicAnimationApp::draw() {
        gl::clear( Color( 0, 0, 0 ) );
        for (int i=0; i<CIRCLE_COUNT; i++) {
            gl::drawSolidCircle( currentPosition[i], circleRadius[i] );
        }
    }

    CINDER_APP_BASIC( BasicAnimationApp, RendererGl )

Experiment with the properties and try to change the eases. Not all of them will work with this example, but at least you will understand how to use them to create smooth animations with Cinder.

Summary

This article explained what Cinder is, introduced the 3D space and drawing in 3D, and briefly covered using the built-in eases.

Resources for Article:

Further resources on this subject:
- 3D Vector Drawing and Text with Papervision3D: Part 1 [Article]
- Sage: 3D Data Plotting [Article]
- Building your First Application with Papervision3D: Part 2 [Article]

Creating an Animated Gauge with CSS3

Packt
20 Mar 2013
15 min read
(For more resources related to this topic, see here.)

A basic gauge structure

Let's begin with a new project; as usual we need to create an index.html file. This time the markup involved is so small and compact that we can add it right now:

    <!doctype html>
    <html>
    <head>
      <meta charset="utf-8">
      <meta http-equiv="X-UA-Compatible" content="IE=edge" />
      <title>Go Go Gauges</title>
      <link rel="stylesheet" type="text/css" href="css/application.css">
    </head>
    <body>
      <div data-gauge data-min="0" data-max="100" data-percent="50">
        <div data-arrow></div>
      </div>
    </body>
    </html>

The gauge widget is identified by the data-gauge attribute and defined with three other custom data attributes, namely data-min, data-max, and data-percent, which indicate the respective minimum and maximum value of the range and the current arrow position expressed in percentage value. Within the element marked with the data-gauge attribute, we have defined a div tag that will become the arrow of the gauge.

To start with the styling phase, we first need to equip ourselves with a framework that is easy to use and can give us the opportunity to generate CSS code. We decide to go for SASS, so we first need to install Ruby (http://www.ruby-lang.org/en/downloads/) and then enter the following from a command-line terminal:

    gem install sass

You would probably need to execute the following command if you are working in Unix/Linux environments:

    sudo gem install sass

Installing Compass

For this project we'll also use Compass, a SASS extension able to add some interesting features to our SASS stylesheet. To install Compass, we have to just enter gem install compass (or sudo gem install compass) in a terminal window. After the installation procedure is over, we have to create a small config.rb file in the root folder of our project using the following code:

    # Require any additional compass plugins here.
    # Set this to the root of your project when deployed:
    http_path = YOUR-HTTP-PROJECT-PATH
    css_dir = "css"
    sass_dir = "scss"
    images_dir = "img"
    javascripts_dir = "js"
    # You can select your preferred output style here (can be overridden via the command line):
    # output_style = :expanded or :nested or :compact or :compressed
    # To enable relative paths to assets via compass helper functions. Uncomment:
    relative_assets = true
    # To disable debugging comments that display the original location of your selectors. Uncomment:
    # line_comments = false
    preferred_syntax = :sass

The config.rb file helps Compass to understand the location of the various assets of the project; let's have a look at these options in detail:

- http_path: This must be set to the HTTP URL related to the project's root folder
- css_dir: This contains the relative path to the folder where the generated CSS files should be saved
- sass_dir: This contains the relative path to the folder that contains our .scss files
- images_dir: This contains the relative path to the folder that holds all the images of the project
- javascripts_dir: This is similar to images_dir, but for JavaScript files

There are other options available; we can decide whether the output CSS should be compressed or not, or we can ask Compass to use relative paths instead of absolute ones. For a complete list of all the options available, see the documentation at http://compass-style.org/help/tutorials/configuration-reference/.

Next, we can create the folder structure we just described, providing our project with the css, img, js, and scss folders.
Lastly, we can create an empty scss/application.scss file and start discovering the beauty of Compass.

CSS reset and vendor prefixes

We can ask Compass to regenerate the CSS file after each update to its SCSS counterpart. To do so, we need to execute the following command from the root of our project using a terminal:

    compass watch .

Compass provides an alternative to the Yahoo! reset stylesheet we used in our previous project. To include this stylesheet, all we have to do is add a SASS import directive to our application.scss file:

    @import "compass/reset";

If we check css/application.css, the following is the result (trimmed):

    /* line 17, ../../../../.rvm/gems/ruby-1.9.3-p194/gems/compass-0.12.2/frameworks/compass/stylesheets/compass/reset/_utilities.scss */
    html, body, div, span, applet, object, iframe,
    h1, h2, h3, h4, h5, h6, p, blockquote, pre,
    a, abbr, acronym, address, big, cite, code,
    del, dfn, em, img, ins, kbd, q, s, samp,
    small, strike, strong, sub, sup, tt, var,
    b, u, i, center,
    dl, dt, dd, ol, ul, li,
    fieldset, form, label, legend,
    table, caption, tbody, tfoot, thead, tr, th, td,
    article, aside, canvas, details, embed,
    figure, figcaption, footer, header, hgroup,
    menu, nav, output, ruby, section, summary,
    time, mark, audio, video {
      margin: 0;
      padding: 0;
      border: 0;
      font: inherit;
      font-size: 100%;
      vertical-align: baseline;
    }
    /* line 22, ../../../../.rvm/gems/ruby-1.9.3-p194/gems/compass-0.12.2/frameworks/compass/stylesheets/compass/reset/_utilities.scss */
    html {
      line-height: 1;
    }
    ...

Notice also how the generated CSS keeps a reference to the original SCSS; this comes in handy when it's a matter of debugging some unexpected behavior in our page.

The next @import directive will take care of the CSS3 experimental vendor prefixes. By adding @import "compass/css3" at the top of the application.scss file, we ask Compass to provide us with a lot of powerful methods for adding experimental prefixes automatically; for example, the following snippet:

    .round {
      @include border-radius(4px);
    }

is compiled into the following:

    .round {
      -moz-border-radius: 4px;
      -webkit-border-radius: 4px;
      -o-border-radius: 4px;
      -ms-border-radius: 4px;
      -khtml-border-radius: 4px;
      border-radius: 4px;
    }

Equipped with this new knowledge, we can now start developing the project.

Using rem

For this project we want to introduce rem, a measurement unit that is almost equivalent to em but is always relative to the root element of the page. So, basically, we can define a font size on the html element and then all sizes will be relative to it:

    html {
      font-size: 20px;
    }

Now 1rem corresponds to 20px. The problem with this measurement unit is that some browsers, such as Internet Explorer version 8 or less, don't actually support it. To work around this problem, we can use one of the following two fallback measurement units:

- em: The good news is that em, if perfectly tuned, works exactly like rem; the bad news is that this unit is relative to the element's font-size property and not to html. So, if we decide to pursue this method, we have to take extra care every time we deal with font-size.
- px: We can use a fixed pixel size. The downside of this choice is that in older browsers we give up the ability to dynamically change the proportions of our widget.

In this project, we will use pixels as our fallback unit of measurement.
The reason we have decided this is that one of the rem benefits is that we can easily change the size of the gauge by changing the font-size property with media queries; this is only possible where media queries and rem are supported.

Now we have to find a way to address most of the duplication that would emerge from having to write every statement containing a space measurement twice (rem and px). We can easily solve this problem by creating a SASS mixin within our application.scss file as follows (for more info on SASS mixins, we can refer to the specification page at http://sass-lang.com/docs/yardoc/file.SASS_REFERENCE.html#mixins):

    @mixin px_and_rem($property, $value, $mux) {
      #{$property}: 0px + ($value * $mux);
      #{$property}: 0rem + $value;
    }

So the next time, instead of writing the following:

    #my_style {
      width: 10rem;
    }

we can instead write:

    #my_style {
      @include px_and_rem(width, 10, 20);
    }

In addition to that, we can also save the multiplier coefficient between px and rem in a variable and use it in every call to this function and within the html declaration; let's also add this to application.scss:

    $multiplier: 20px;

    html {
      font-size: $multiplier;
    }

Of course, there are still some cases in which the @mixin directive that we just created doesn't work, and in such situations we'll have to handle this duality manually.

Basic structure of a gauge

Now we're ready to develop at least the basic structure of our gauge, which includes the rounded borders and the minimum and maximum range labels. The following code is what we need to add to application.scss:

    div[data-gauge] {
      position: absolute;

      /* width, height and rounded corners */
      @include px_and_rem(width, 10, $multiplier);
      @include px_and_rem(height, 5, $multiplier);
      @include px_and_rem(border-top-left-radius, 5, $multiplier);
      @include px_and_rem(border-top-right-radius, 5, $multiplier);

      /* centering */
      @include px_and_rem(margin-top, -2.5, $multiplier);
      @include px_and_rem(margin-left, -5, $multiplier);
      top: 50%;
      left: 50%;

      /* inset shadows, both in px and rem */
      box-shadow: 0 0 #{0.1 * $multiplier} rgba(99,99,99,0.8), 0 0 #{0.1 * $multiplier} rgba(99,99,99,0.8) inset;
      box-shadow: 0 0 0.1rem rgba(99,99,99,0.8), 0 0 0.1rem rgba(99,99,99,0.8) inset;

      /* border, font size, family and color */
      border: #{0.05 * $multiplier} solid rgb(99,99,99);
      border: 0.05rem solid rgb(99,99,99);
      color: rgb(33,33,33);
      @include px_and_rem(font-size, 0.7, $multiplier);
      font-family: verdana, arial, sans-serif;

      /* min label */
      &:before {
        content: attr(data-min);
        position: absolute;
        @include px_and_rem(bottom, 0.2, $multiplier);
        @include px_and_rem(left, 0.4, $multiplier);
      }

      /* max label */
      &:after {
        content: attr(data-max);
        position: absolute;
        @include px_and_rem(bottom, 0.2, $multiplier);
        @include px_and_rem(right, 0.4, $multiplier);
      }
    }

With box-shadow and border we can't use the px_and_rem mixin, so we duplicated these properties using px first and then rem. The following screenshot shows the result:

Gauge tick marks

How do we handle tick marks? One method would be to use images, but another interesting alternative is to benefit from multiple background support and create those tick marks out of gradients. For example, to create a vertical mark, we can use the following within the div[data-gauge] selector:

    linear-gradient(0deg, transparent 46%, rgba(99, 99, 99, 0.5) 47%, rgba(99, 99, 99, 0.5) 53%, transparent 54%)

Basically, we define a very small gradient between transparent and another color in order to obtain the tick mark.
That's the first step, but we've yet to deal with the fact that each tick mark must be defined with a different angle. We can solve this problem by introducing a SASS function that takes the number of tick marks to print and iterates up to that number while adjusting the angle of each mark. Of course, we also have to take care of experimental vendor prefixes, but we can count on Compass for that.

The following is the function. We can create a new file called scss/_gauge.scss for this and other gauge-related functions; the leading underscore tells SASS not to create a .css file out of this .scss file, because it will be included in a separate file.

    @function gauge-tick-marks($n, $rest){
      $linear: null;
      @for $i from 1 through $n {
        $p: -90deg + 180 / ($n+1) * $i;
        $linear: append($linear, linear-gradient($p, transparent 46%, rgba(99,99,99,0.5) 47%, rgba(99,99,99,0.5) 53%, transparent 54%), comma);
      }
      @return append($linear, $rest);
    }

We start with an empty list, appending the result of calling the linear-gradient Compass function (which handles the experimental vendor prefixes) with an angle that varies based on the current tick mark index. To test this function out, we first need to include _gauge.scss in application.scss:

    @import "gauge.scss";

Next, we can insert the function call within the div[data-gauge] selector in application.scss, specifying the number of tick marks required:

    @include background(gauge-tick-marks(11, null));

The background function is also provided by Compass, and it is just another mechanism to deal with experimental prefixes. Unfortunately, if we reload the project the results are far from expected:

Although we can see a total of 11 stripes, they are of the wrong size and in the wrong position. To resolve this, we will create some functions to set the correct values for background-size and background-position.

Dealing with background size and position

Let's start with background-size, the easier of the two. Since we want each of the tick marks to be exactly 1rem in size, we can proceed by creating a function that prints 1rem 1rem as many times as the number passed as a parameter; so let's add the following code to _gauge.scss:

    @function gauge-tick-marks-size($n, $rest){
      $sizes: null;
      @for $i from 1 through $n {
        $sizes: append($sizes, 1rem 1rem, comma);
      }
      @return append($sizes, $rest, comma);
    }

We have already noticed the append function; an interesting thing to know about it is that its last parameter lets us decide which separator is used to concatenate the strings being created. One of the available options is comma, which perfectly suits our needs. Now we can add a call to this function within the div[data-gauge] selector:

    background-size: gauge-tick-marks-size(11, null);

And the following is the result:

Now the tick marks are of the right size, but they are displayed one above the other and repeated all across the element. To avoid this behavior, we can simply add background-repeat: no-repeat just below the previous instruction:

    background-repeat: no-repeat;

On the other hand, to handle the position of the tick marks we need another SASS function; this time it's a little more complex and involves a bit of trigonometry. Each gradient must be placed as a function of its angle: x follows the cosine of that angle and y the sine, as written out below.
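This formula block is an addition for clarity and is not part of the original article: for the i-th of n tick marks, the position uses the angle

    \[ \theta_i = \frac{180^\circ \, i}{\,n+1\,}, \qquad p_x = 100\% \cdot \left(\frac{\cos\theta_i}{2} + \frac{1}{2}\right), \qquad p_y = 100\% \cdot \left(1 - \sin\theta_i\right) \]

The division by 2 with the added 1/2, and the 1 minus sine flip, shift the trigonometric values (which are centered on the middle of the circle) into the top-left-based percentages that background-position expects.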
The sin and cos functions are provided by Compass; we need just to handle the shift, because they are referred to the center of the circle whereas our css property's origin is in the upper-left corner:

    @function gauge-tick-marks-position($n, $rest){
      $positions: null;
      @for $i from 1 through $n {
        $angle: 0deg + 180 / ($n+1) * $i;
        $px: 100% * ( cos($angle) / 2 + 0.5 );
        $py: 100% * ( 1 - sin($angle) );
        $positions: append($positions, $px $py, comma);
      }
      @return append($positions, $rest, comma);
    }

Now we can go ahead and add a new line inside the div[data-gauge] selector:

    background-position: gauge-tick-marks-position(11, null);

And here's the much-awaited result:

The next step is to create a @mixin directive to hold these three functions together, so we can add the following to _gauge.scss:

    @mixin gauge-background($ticks, $rest_gradient, $rest_size, $rest_position) {
      @include background-image( gauge-tick-marks($ticks, $rest_gradient) );
      background-size: gauge-tick-marks-size($ticks, $rest_size);
      background-position: gauge-tick-marks-position($ticks, $rest_position);
      background-repeat: no-repeat;
    }

And replace what we placed inside div[data-gauge] in this article with a single invocation:

    @include gauge-background(11, null, null, null);

We've also left three additional parameters to define extra values for background, background-size, and background-position, so we can, for example, easily add a gradient background:

    @include gauge-background(11,
      radial-gradient(50% 100%, circle, rgb(255,255,255), rgb(230,230,230)),
      cover,
      center center
    );

And the following is the screenshot:

Creating the arrow

To create an arrow we can start by defining the circular element in the center of the gauge that holds the arrow. This is easy and doesn't introduce anything really new; here's the code that needs to be nested within the div[data-gauge] selector:

    div[data-arrow] {
      position: absolute;
      @include px_and_rem(width, 2, $multiplier);
      @include px_and_rem(height, 2, $multiplier);
      @include px_and_rem(border-radius, 5, $multiplier);
      @include px_and_rem(bottom, -1, $multiplier);
      left: 50%;
      @include px_and_rem(margin-left, -1, $multiplier);
      box-sizing: border-box;
      border: #{0.05 * $multiplier} solid rgb(99,99,99);
      border: 0.05rem solid rgb(99,99,99);
      background: #fcfcfc;
    }

The arrow itself is a more serious business; the basic idea is to use a linear gradient that adds a color only to half the element, starting from its diagonal. Then we can rotate the element in order to move the pointed end to its center.
The following is the code that needs to be placed within div[data-arrow]:

    &:before {
      position: absolute;
      display: block;
      content: '';
      @include px_and_rem(width, 4, $multiplier);
      @include px_and_rem(height, 0.5, $multiplier);
      @include px_and_rem(bottom, 0.65, $multiplier);
      @include px_and_rem(left, -3, $multiplier);
      background-image: linear-gradient(83.11deg, transparent, transparent 49%, orange 51%, orange);
      background-image: -webkit-linear-gradient(83.11deg, transparent, transparent 49%, orange 51%, orange);
      background-image: -moz-linear-gradient(83.11deg, transparent, transparent 49%, orange 51%, orange);
      background-image: -o-linear-gradient(83.11deg, transparent, transparent 49%, orange 51%, orange);
      @include apply-origin(100%, 100%);
      @include transform2d( rotate(-3.45deg) );
      box-shadow: 0px #{-0.05 * $multiplier} 0 rgba(0,0,0,0.2);
      box-shadow: 0px -0.05rem 0 rgba(0,0,0,0.2);
      @include px_and_rem(border-top-right-radius, 0.25, $multiplier);
      @include px_and_rem(border-bottom-right-radius, 0.35, $multiplier);
    }

To better understand the trick behind this implementation, we can temporarily add border: 1px solid red within the &:before selector and zoom in a bit on the result:

Moving the arrow

Now we want to position the arrow at the correct angle depending on the data-percent attribute value. To do so we have to take advantage of the power of SASS. In theory the CSS3 specification would allow us to set some property values using values taken from attributes, but in practice this is only possible when dealing with the content property. So what we're going to do is create a @for loop from 0 to 100 and print, in each iteration, a selector that matches a defined value of the data-percent attribute. Then we'll set a different rotate() property for each of the CSS rules. The following is the code; this time it must be placed within the div[data-gauge] selector:

    @for $i from 0 through 100 {
      $v: $i;
      @if $i < 10 {
        $v: '0' + $i;
      }
      &[data-percent='#{$v}'] > div[data-arrow] {
        @include transform2d(rotate(#{180deg * $i/100}));
      }
    }

If you are worried about the amount of CSS generated, you can decide to adjust the increment of the gauge, for example, to 10:

    @for $i from 0 through 10 {
      &[data-percent='#{$i*10}'] > div[data-arrow] {
        @include transform2d(rotate(#{180deg * $i/10}));
      }
    }

And the following is the result:
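As a closing note (this restatement is an addition, not part of the original article), the loop above simply implements the mapping

    \[ \theta(p) = 180^\circ \cdot \frac{p}{100} \]

so, for instance, the data-percent="50" value used in the markup at the beginning of the article rotates the arrow by 90 degrees, half way across the gauge.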

Responsive techniques

Packt
20 Mar 2013
9 min read
(For more resources related to this topic, see here.)

Media queries

Media queries are an important part of responsive layouts. They are a part of CSS that makes it possible to add styles specific to a certain medium. Media queries can target the output type, the screen size, the device orientation, and even the density of the display. But let's have a look at a simple example before we get lost in theory:

    #header {
      background-repeat: no-repeat;
      background-image: url(logo.png);
    }

    @media print {
      #header {
        display: none;
      }
    }

The @media print line in the preceding code snippet makes sure that all the nested styles are only used if the CSS file is used on a printer. To be a bit more precise, it will hide the element with the ID header once you print the document. The same could be achieved by creating two different files and including them with code as follows:

    <link rel="stylesheet" media="all" href="normal.css" />
    <link rel="stylesheet" media="print" href="print.css" />

It's not always clear whether you want to create a separate CSS file or not. Using too many CSS files might slow down the loading process a bit, while having one big file might make it a bit messier to handle. Having a separate print CSS is something you'll see rather often, while screen-resolution-dependent queries are usually kept in the main CSS file.

Here's another example where we use the screen width to break a two-column layout into a single-column layout if the screen gets smaller. The following screenshot shows you the layout on an average desktop computer as well as on a tablet:

The HTML code we need for our example looks as follows:

    <div class="box">1</div>
    <div class="box">2</div>

The CSS including the media queries could be as follows:

    .box {
      width: 46%;
      padding: 2%;
      float: left;
    }

    @media all and (max-width: 1023px) {
      .box {
        width: 96%;
        float: none;
      }
    }

By default, each box has a width of 46 percent and a padding of 2 percent on each side, adding up to a total width of 50 percent. If you look at the @media line, you can see a media query relevant to all media types but restricted to a maximum width of 1023 pixels. This means that if you view the page on a device with a screen width of less than 1023 pixels, the nested CSS styles will be used. In the preceding example, we're overriding the default width of 46 percent with 96 percent. In combination with the 2 percent padding that is still there, we're stretching the box to the full width of the screen.

Checking the maximum width can achieve a lot already, but there are other queries as well. Here are a few that could be useful:

- The following query matches the iPad, but only if the orientation is landscape. You could also check for portrait mode by using orientation: portrait.

    @media screen and (device-width: 768px) and (device-height: 1024px) and (orientation: landscape)

- If you want to display content specific to a high-resolution screen such as a retina display, use this:

    @media all and (-webkit-min-device-pixel-ratio: 2)

- Instead of checking the maximum width, you can also do it the other way round and check the minimum width. The following query could be used for styles that are only applied on a normal-sized screen:

    @media screen and (min-width: 1224px)

There are a lot more variables you can check in a media query.
You can find a nicely arranged list of queries, including a testing application, on the following site: http://cssmediaqueries.com/

Try to get an overview of what's possible; but don't worry, you'll hardly ever need more than four media queries.

How to scale pictures

We've seen how you can check the device type in a few different ways, and we have used this to change a two-column layout into a single-column layout. This allows you to rearrange the layout elements; but what happens to pictures if the container in which a picture is located changes size? If you look at the following screen, you can see a mock-up where the picture has to change its width from 1000 pixels to 700 pixels:

Look at the code that follows:

    <div id="container">
      <img src="picture.jpg" width="1000" height="400">
    </div>

Assuming the HTML code looks like this, if we added a responsive media query that resizes the container to match the width of the screen, we would have to cut off part of the picture. What we want instead is to scale the picture within the container so that it has the same or a smaller size than the container. The following CSS snippet can be added to any CSS file to make your pictures scale nicely in a responsive layout:

    img {
      max-width: 100%;
      height: auto;
    }

Once you've added this little snippet, your picture will never get bigger than its parent container. The max-width property is pretty obvious; it restricts the maximum size. But why is height: auto necessary? If you look at the preceding HTML code, you can see that our picture has a fixed width and height. If we only specified max-width and looked at the picture on a screen with a width of 500 pixels, we'd get a picture with the dimensions 500 x 400 pixels; the picture would be distorted. To avoid this, we specify height: auto to make sure the height stays in relation to the width of the picture.

Pictures on high-density screens

If you have ever printed some of your graphic ideas on paper and not just displayed them on a computer screen, you'll probably have heard of pixels per inch (ppi) and dots per inch (dpi) before. It's a measure of the number of pixels or dots per inch; you get a higher density and more details if you have more pixels or dots per inch. You might have read about retina screens: those displays have a density of around 300 dpi, about twice as much as an average computer monitor. You don't have to, but it's nice if you also give owners of such a device the chance to look at high-quality pictures.

What's necessary to deliver a high-quality picture to a retina screen? Some of you might think that you have to save an image with more ppi, but that's not what you should do when creating a retina-ready site. Devices with a retina display simply ignore the pixels-per-inch information stored in an image. However, the dimensions of your images matter: a picture saved for a retina display should have exactly twice the size of the original image. You could use an SVG picture instead, but you would still have to provide a fallback picture because SVG isn't supported as well as image formats like PNG. In theory, SVG would be even better because it is vector-based and scales perfectly at any resolution.
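To make the "twice the size" rule concrete (this quick calculation is an addition, using the logo dimensions from the example that follows), an image shown at 400 x 150 CSS pixels needs a file of

    \[ 400 \times 2 = 800 \quad \text{by} \quad 150 \times 2 = 300 \]

physical pixels to look sharp on a 2x (retina) display, while its CSS width and height stay at 400 x 150.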
Enough theory, let's look at an example that uses CSS to display a different image for a retina display:

    #logo {
      background-image: url(logo.png);
      background-size: 400px 150px;
      width: 400px;
      height: 150px;
    }

    @media screen and (-webkit-min-device-pixel-ratio: 2) {
      #logo {
        background-image: url([email protected]);
      }
    }

We've got an HTML element <div id="logo"></div> where we display a background picture to show our logo. If you look at the media query, you can see that we're checking a media feature called -webkit-min-device-pixel-ratio to detect a high-resolution display. In case the display has a pixel ratio of 2 or higher, we use an alternative picture ([email protected]) that is twice the size. Note that the width and height of the container stay the same.

What alternatives are there?

As we quickly mentioned, a vector-based format such as SVG would have some benefits but isn't supported on IE7 and IE8. However, you can still use SVG; you just have to make sure that there's a fallback image for those two browsers. Have a look at the following code:

    <!--[if lte IE 8]>
    <img src="logo.png" width="200" height="50" />
    <![endif]-->
    <!--[if gt IE 8]>
    <img src="logo.svg" width="200" height="50" />
    <![endif]-->
    <!--[if !IE]>-->
    <img src="logo.svg" width="200" height="50" />
    <!--<![endif]-->

In the preceding code, we're using conditional comments to make sure that old versions of Internet Explorer use the logo saved as a PNG, while modern browsers switch to SVG. It takes a bit more effort because you have to work with vectors and still save a PNG file, but if you want to make sure your logo or illustrations are displayed nicely when zoomed, use this trick.

Working with SVG is an option, but there's another vector-based solution that you might want to use in some situations. Simple text is always rendered using vectors: no matter what font size you're using, it will always have sharp edges. Most people probably think about letters, numbers, and punctuation marks when they think about fonts, but there are icon fonts as well. Keep in mind that you can create your own icon web font too. Check out http://fontello.com/, a nice tool that allows you to select the icons you need and use them as a web font. You can also create your very own icon web font from scratch; check out the following link for a detailed tutorial: http://www.webdesignerdepot.com/2012/01/how-to-make-your-own-iconwebfont/

One last word before we continue: retina screens are relatively new, so it's no surprise that some of the specifications for creating a perfect retina layout are still drafts. The Web is a fast-moving place; expect some changes and new features for inserting an image with multiple sources, and more vector features.

Summary

In this article, we learned about responsive techniques that can be added to our themes and saw how media queries are an important part of responsive layouts. The article also covered how to scale pictures for different types of devices and what it takes to display websites on retina screens.

Resources for Article:

Further resources on this subject:
- concrete5: Mastering Auto-Nav for Advanced Navigation [Article]
- Everything in a Package with concrete5 [Article]
- Creating mobile friendly themes [Article]

Magento : Payment and shipping method

Packt
13 Mar 2013
4 min read
(For more resources related to this topic, see here.)

Payment and shipping method

Magento CE comes with several payment and shipping methods out of the box. Since the total payment is calculated based on the order and the shipping cost, it makes sense to define our shipping method first. The available shipping methods can be found in the Magento Admin Panel under the Shipping Methods section in System | Configuration | Sales.

Flat Rate, Table Rates, and Free Shipping fall under the category of static methods, while the others are dynamic. Dynamic means that rates are retrieved from various shipping providers; static means that shipping rates are based on a predefined set of rules. For a live production store you might be interested in obtaining a merchant account for one of the dynamic methods, because they enable a potentially more precise shipping cost calculation with regard to product weight.

A clean installation of Magento CE comes with the Flat Rate shipping method turned on, so be sure to turn it off in production if it is not required by setting the Enabled option in System | Configuration | Sales | Shipping Methods | Flat Rate to No.

Setting up the dynamic methods is pretty easy; all you need to do is obtain the access data from the shipping provider, such as FedEx, and then configure that access data under the proper shipping method configuration area, for example, System | Configuration | Sales | Shipping Methods | FedEx.

Payment method configuration is available under System | Configuration | Sales | Payment Methods. Similar to shipping methods, these fall into two main groups, static and dynamic. Dynamic in this case means that an external payment gateway provider such as PayPal will actually charge the customer's credit card upon successful checkout. Static simply means that checkout will be completed, but you as a merchant will have to make sure that the customer has actually paid the order before shipping the products. A clean installation of Magento CE comes with Saved CC, Check / Money order, and Zero Subtotal Checkout turned on, so be sure to turn these off in production if they are not required.

How to do it...

To configure a shipping method:

1. Log in to the Magento Admin Panel and go to System | Configuration | Sales | Shipping Methods.
2. Select an appropriate shipping method, configure its options, and click on the Save Config button.

To configure a payment method:

1. Log in to the Magento Admin Panel and go to System | Configuration | Sales | Payment Methods.
2. Select an appropriate payment method, configure its options, and click on the Save Config button.

How it works...

Once a certain shipping method is turned on, it will be visible on the frontend to the customer during the checkout's so-called Shipping Method step, as shown in the screenshot that follows. The shipping method's price is based on the customer's shipping address, the products in the cart, applied promo rules, and possibly other parameters.

Numerous other shipping modules are provided at Magento Connect (http://www.magentocommerce.com/magento-connect/integrations/shippingfulfillment.html), and new ones are uploaded often, so this is by no means a final list of shipping methods.
Once a certain payment method is turned on, it will be visible on the frontend to the customer during the checkout's so-called Payment Information step, as shown in the following screenshot.

Additionally, there are numerous other payment modules provided at Magento Connect: http://www.magentocommerce.com/magento-connect/integrations/payment-gateways.html

Summary

In this article, we explained the payment and shipping methods you may need when building your own shop using Magento.

Resources for Article:

Further resources on this subject:
- Creating and configuring a basic mobile application [Article]
- Installing WordPress e-Commerce Plugin and Activating Third-party Themes [Article]
- Magento: Exploring Themes [Article]

Core .NET Recipes

Packt
26 Feb 2013
15 min read
(For more resources related to this topic, see here.)

Implementing the validation logic using the Repository pattern

The Repository pattern abstracts out data-based validation logic. It is a common misconception that to implement the Repository pattern you require a relational database such as MS SQL Server as the backend. Any collection can be treated as a backend for a Repository pattern; the only point to keep in mind is that the business logic or validation logic must treat it as a database for saving, retrieving, and validating its data.

In this recipe, we will see how to use a generic collection as the backend and abstract out the validation logic for it. The validation logic makes use of an entity that represents the data related to the user and a class that acts as the repository for the data, allowing certain operations. In this case, the operation will include checking whether the user ID chosen by the user is unique or not.

How to do it...

The following steps will help check the uniqueness of a user ID chosen by the user:

1. Launch Visual Studio .NET 2012.
2. Create a new project of the Class Library project type and name it CookBook.Recipes.Core.CustomValidation.
3. Add a folder to the project and set the folder name to DataModel.
4. Add a new class and name it User.cs.
5. Open the User class and create the following public properties, using the automatic property functionality of .NET:
   - UserName: string
   - DateOfBirth: DateTime
   - Password: string

   The final code of the User class will be as follows:

    namespace CookBook.Recipes.Core.CustomValidation
    {
        /// <summary>
        /// Contains details of the user being registered
        /// </summary>
        public class User
        {
            public string UserName { get; set; }
            public DateTime DateOfBirth { get; set; }
            public string Password { get; set; }
        }
    }

6. Next, let us create the repository. Add a new folder and name it Repository.
7. Add an interface to the Repository folder and name it IRepository.cs. The interface will be similar to the following code snippet:

    public interface IRepository
    {
    }

8. Open the IRepository interface and add the following methods:
   - AddUser: adds a new user; takes a User object as a parameter and returns void.
   - IsUsernameUnique: determines whether the username is already taken or not; takes a string as a parameter and returns a Boolean.

   After adding the methods, IRepository will look like the following code:

    namespace CookBook.Recipes.Core.CustomValidation
    {
        public interface IRepository
        {
            void AddUser(User user);
            bool IsUsernameUnique(string userName);
        }
    }

9. Next, let us implement IRepository. Create a new class in the Repository folder, name it MockRepository, and make it implement IRepository. The code will be as follows:

    namespace CookBook.Recipes.Core.Data.Repository
    {
        public class MockRepository : IRepository
        {
            #region IRepository Members

            /// <summary>
            /// Adds a new user to the collection
            /// </summary>
            /// <param name="user"></param>
            public void AddUser(User user)
            {
            }

            /// <summary>
            /// Checks whether a user with the username already exists
            /// </summary>
            /// <param name="userName"></param>
            /// <returns></returns>
            public bool IsUsernameUnique(string userName)
            {
            }

            #endregion
        }
    }

10. Now, add a private variable of type List<User> to the MockRepository class and name it _users. It will hold the registered users. It is a static variable so that it can hold usernames across multiple instantiations.
11. Add a constructor to the class.
Then initialize the _users list and add two users to the list:

    public class MockRepository : IRepository
    {
        #region Variables

        private static List<User> _users;

        #endregion

        public MockRepository()
        {
            _users = new List<User>();
            _users.Add(new User()
            {
                UserName = "wayne27",
                DateOfBirth = new DateTime(1950, 9, 27),
                Password = "knight"
            });
            _users.Add(new User()
            {
                UserName = "user2", // placeholder; the original username value is missing from this excerpt
                DateOfBirth = new DateTime(1955, 9, 27),
                Password = "justice"
            });
        }

        #region IRepository Members

        /// <summary>
        /// Adds a new user to the collection
        /// </summary>
        /// <param name="user"></param>
        public void AddUser(User user)
        {
        }

        /// <summary>
        /// Checks whether a user with the username already exists
        /// </summary>
        /// <param name="userName"></param>
        /// <returns></returns>
        public bool IsUsernameUnique(string userName)
        {
        }

        #endregion
    }

Now let us add the code to check whether the username is unique. Add the following statements to the IsUsernameUnique method:

    bool exists = _users.Exists(u => u.UserName == userName);
    return !exists;

The method turns out to be as follows:

    public bool IsUsernameUnique(string userName)
    {
        bool exists = _users.Exists(u => u.UserName == userName);
        return !exists;
    }

Modify the AddUser method so that it looks as follows:

    public void AddUser(User user)
    {
        _users.Add(user);
    }

How it works...

The core of the validation logic lies in the IsUsernameUnique method of the MockRepository class. The reason for placing the logic in a different class rather than in the attribute itself is to decouple the attribute from the logic to be validated. It is also an attempt to make it future-proof. In other words, if tomorrow we want to test the username against a list generated from an XML file, we won't have to modify the attribute; we will just change how IsUsernameUnique works, and the change will be reflected in the attribute. Also, creating a Plain Old CLR Object (POCO) to hold the values entered by the user stops the validation logic from directly accessing the source of the input, that is, the Windows application.

Coming back to the IsUsernameUnique method, it makes use of the predicate feature provided by .NET. A predicate allows us to loop over a collection and find a particular item; it can be a static function, a delegate, or a lambda. In our case it is a lambda:

    bool exists = _users.Exists(u => u.UserName == userName);

In the previous statement, .NET loops over _users and passes the current item to u. We then use the item held by u to check whether its username is equal to the username entered by the user. The Exists method returns true if the username is already present. However, we want to know whether the username is unique, so we flip the value returned by Exists in the return statement, as follows:

    return !exists;

Creating a custom validation attribute by extending the validation data annotation

.NET provides data annotations as part of the System.ComponentModel.DataAnnotations namespace. Data annotations are a set of attributes that provide out-of-the-box validation, among other things. However, sometimes none of the built-in validations will suit your specific requirements. In such a scenario, you will have to create your own validation attribute. This recipe shows how to do that by extending the validation attribute. The attribute developed will check whether the supplied username is unique or not. We will make use of the validation logic implemented in the previous recipe to create a custom validation attribute named UniqueUserValidator.

How to do it...
The following steps will help you create a custom validation attribute to meet your specific requirements:

1. Launch Visual Studio 2012 and open the CustomValidation solution.
2. Add a reference to System.ComponentModel.DataAnnotations.
3. Add a new class to the project and name it UniqueUserValidator.
4. Add the following using statements:

    using System.ComponentModel.DataAnnotations;
    using CookBook.Recipes.Core.CustomValidation.MessageRepository;
    using CookBook.Recipes.Core.Data.Repository;

5. Derive it from ValidationAttribute, which is part of the System.ComponentModel.DataAnnotations namespace. In code, it would be as follows:

    namespace CookBook.Recipes.Core.CustomValidation
    {
        public class UniqueUserValidator : ValidationAttribute
        {
        }
    }

6. Add a property of type IRepository to the class and name it Repository. Add a constructor and initialize Repository to an instance of MockRepository. Once the code is added, the class will be as follows:

    namespace CookBook.Recipes.Core.CustomValidation
    {
        public class UniqueUserValidator : ValidationAttribute
        {
            public IRepository Repository { get; set; }

            public UniqueUserValidator()
            {
                this.Repository = new MockRepository();
            }
        }
    }

7. Override the IsValid method of ValidationAttribute. Convert the object argument to a string, then call the IsUsernameUnique method of IRepository, pass the string value as a parameter, and return the result. After the modification, the code will be as follows:

    namespace CookBook.Recipes.Core.CustomValidation
    {
        public class UniqueUserValidator : ValidationAttribute
        {
            public IRepository Repository { get; set; }

            public UniqueUserValidator()
            {
                this.Repository = new MockRepository();
            }

            public override bool IsValid(object value)
            {
                string valueToTest = Convert.ToString(value);
                return this.Repository.IsUsernameUnique(valueToTest);
            }
        }
    }

We have completed the implementation of our custom validation attribute. Now let's test it out:

1. Add a new Windows Forms Application project to the solution and name it CustomValidationApp.
2. Add a reference to System.ComponentModel.DataAnnotations and to the CustomValidation project.
3. Rename Form1.cs to Register.cs.
4. Open Register.cs in design mode. Add controls for the username, date of birth, and password, and also add two buttons to the form. The form should look like the following screenshot:
5. Name the input control and button as given below. Since we are validating the User Name field, our main concern is the textbox for the username and the OK button; the names of the other controls have been left out for brevity.
   - Textbox: txtUsername
   - Button: btnOK
6. Switch to the code view mode. In the constructor, add an event handler for the Click event of btnOK, as shown in the following code:

    public Register()
    {
        InitializeComponent();
        this.btnOK.Click += new EventHandler(btnOK_Click);
    }

    void btnOK_Click(object sender, EventArgs e)
    {
    }

7. Open the User class of the CookBook.Recipes.Core.CustomValidation project and annotate the UserName property with UniqueUserValidator. After the modification, the User class will be as follows:

    namespace CookBook.Recipes.Core.CustomValidation
    {
        /// <summary>
        /// Contains details of the user being registered
        /// </summary>
        public class User
        {
            [UniqueUserValidator(ErrorMessage = "User name already exists")]
            public string UserName { get; set; }
            public DateTime DateOfBirth { get; set; }
            public string Password { get; set; }
        }
    }

8. Go back to Register.cs in the code view mode.
16. Add the following using statements:

using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using CookBook.Recipes.Core.CustomValidation;
using CookBook.Recipes.Core.Data.Repository;

17. Add the following code to the event handler of btnOK:

// create a new user
User user = new User() { UserName = txtUsername.Text, DateOfBirth = dtpDob.Value };
// create a validation context for the user instance
ValidationContext context = new ValidationContext(user, null, null);
// holds the validation errors
IList<ValidationResult> errors = new List<ValidationResult>();
if (!Validator.TryValidateObject(user, context, errors, true))
{
    foreach (ValidationResult result in errors)
        MessageBox.Show(result.ErrorMessage);
}
else
{
    IRepository repository = new MockRepository();
    repository.AddUser(user);
    MessageBox.Show("New user added");
}

18. Hit F5 to run the application. In the textbox add a username, say, dreamwatcher. Click on OK. You will get a message box stating New user added.
19. Enter the same username again and hit the OK button. This time you will get a message saying User name already exists. This means our attribute is working as desired.
20. Go to File | Save Solution As…, enter CustomValidation for Name, and click on OK. We will be making use of this solution in the next recipe.

How it works...

To understand how UniqueUserValidator works, we have to understand how it is implemented and how it is used. Let's start with how it is implemented. It extends ValidationAttribute, which is the base class for all the validation-related attributes provided by data annotations. So the declaration is as follows:

public class UniqueUserValidator : ValidationAttribute

This allows us to make use of the public and protected members of ValidationAttribute as if they were part of our attribute. Next, we have a property of type IRepository, which gets initialized to an instance of MockRepository. We have used the interface-based approach so that the attribute will only need a minor change if we decide to test the username against a database table or a list generated from a file. In such a scenario, we will just change the following statement:

this.Repository = new MockRepository();

The previous statement will be changed to something such as the following:

this.Repository = new DBRepository();

Next, we overrode the IsValid method. This is the method that gets called when we use UniqueUserValidator. The parameter of the IsValid method is an object, so we have to typecast it to a string and call the IsUsernameUnique method of the Repository property. That is what the following statements accomplish:

string valueToTest = Convert.ToString(value);
return this.Repository.IsUsernameUnique(valueToTest);

Now let us see how we used the validator. We did it by decorating the UserName property of the User class:

[UniqueUserValidator(ErrorMessage = "User name already exists")]
public string UserName { get; set; }

As I already mentioned, deriving from ValidationAttribute helps us in using its properties as well. That's why we can use ErrorMessage even though we have not implemented it ourselves. Next, we have to tell .NET to use the attribute to validate the username that has been set.
That is done by the following statements in the OK button's Click handler in the Register class:

ValidationContext context = new ValidationContext(user, null, null);
// holds the validation errors
IList<ValidationResult> errors = new List<ValidationResult>();
if (!Validator.TryValidateObject(user, context, errors, true))

First, we instantiate an object of ValidationContext. Its main purpose is to set up the context in which validation will be performed; in our case the context is the User object. Next, we call the TryValidateObject method of the Validator class with the User object, the ValidationContext object, and a list to hold the errors. We also tell the method that we need to validate all properties of the User object by setting the last argument to true. That's how we invoke the validation routine provided by .NET.

Using XML to generate a localized validation message

In the last recipe you saw that we can pass the error message to be displayed to the validation attribute. However, by default, the attributes accept only a message in the English language. To display a localized custom message, it needs to be fetched from an external source such as an XML file or a database. In this recipe, we will see how to use an XML file as a backend for localized messages.

How to do it...

The following steps will help you generate a localized validation message using XML:

1. Open CustomValidation.sln in Visual Studio 2012.
2. Add an XML file to the CookBook.Recipes.Core.CustomValidation project. Name it Messages.xml. In the Properties window, set Build Action to Embedded Resource.
3. Add the following to the Messages.xml file:

<?xml version="1.0" encoding="utf-8" ?>
<messages>
  <en>
    <message key="not_unique_user">User name is not unique</message>
  </en>
  <fr>
    <message key="not_unique_user">Nom d'utilisateur n'est pas unique</message>
  </fr>
</messages>

4. Add a folder to the CookBook.Recipes.Core.CustomValidation project. Name it MessageRepository.
5. Add an interface to the MessageRepository folder and name it IMessageRepository. Add a method to the interface and name it GetMessages. It will have IDictionary<string, string> as its return type and will accept a string value as a parameter. The interface will look like the following code:

namespace CookBook.Recipes.Core.CustomValidation.MessageRepository
{
    public interface IMessageRepository
    {
        IDictionary<string, string> GetMessages(string locale);
    }
}

6. Add a class to the MessageRepository folder. Name it XmlMessageRepository. Add the following using statements (System.Reflection is needed for the Assembly class used in the next step):

using System.Xml;
using System.Reflection;

7. Implement the IMessageRepository interface. The class will look like the following code once we implement the interface:

namespace CookBook.Recipes.Core.CustomValidation.MessageRepository
{
    public class XmlMessageRepository : IMessageRepository
    {
        #region IMessageRepository Members
        public IDictionary<string, string> GetMessages(string locale)
        {
            return null;
        }
        #endregion
    }
}

8. Modify GetMessages so that it looks like the following code:

public IDictionary<string, string> GetMessages(string locale)
{
    XmlDocument xDoc = new XmlDocument();
    xDoc.Load(Assembly.GetExecutingAssembly().GetManifestResourceStream("CustomValidation.Messages.xml"));
    XmlNodeList resources = xDoc.SelectNodes("messages/" + locale + "/message");
    SortedDictionary<string, string> dictionary = new SortedDictionary<string, string>();
    foreach (XmlNode node in resources)
    {
        dictionary.Add(node.Attributes["key"].Value, node.InnerText);
    }
    return dictionary;
}

Next let us see how to modify UniqueUserValidator so that it can localize the error message.

How it works...

The Messages.xml file and the GetMessages method of XmlMessageRepository form the core of the logic to generate a locale-specific message. Messages.xml contains the key to the message within a locale tag. We have created the locale tags using the two-letter ISO name of a locale. So, for English it is <en></en> and for French it is <fr></fr>. Each locale tag contains a message tag. The key attribute of the tag holds the key that tells us which message tag contains the error message. So our code will be as follows:

<message key="not_unique_user">User name is not unique</message>

not_unique_user is the key to the User name is not unique error message. In the GetMessages method, we first load the XML file. Since the file has been set as an embedded resource, we read it as a resource. To do so, we first get the executing assembly, that is, CustomValidation. Then we call GetManifestResourceStream and pass the qualified name of the resource, which in this case is CustomValidation.Messages.xml. That is what we achieve in the following statement:

xDoc.Load(Assembly.GetExecutingAssembly().GetManifestResourceStream("CustomValidation.Messages.xml"));

Then we construct an XPath expression to the message tags using the locale passed as the parameter. Using the XPath query we get the message nodes:

XmlNodeList resources = xDoc.SelectNodes("messages/" + locale + "/message");

After getting the node list, we loop over it to construct a dictionary. The value of the key attribute of each node becomes the key of a dictionary entry, and the text of the node becomes the corresponding value, as is evident from the following code:

SortedDictionary<string, string> dictionary = new SortedDictionary<string, string>();
foreach (XmlNode node in resources)
{
    dictionary.Add(node.Attributes["key"].Value, node.InnerText);
}

The dictionary is then returned by the method. Next, let's understand how this dictionary is used by UniqueUserValidator.
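Before moving on, here is a minimal sketch of one way UniqueUserValidator could consume this dictionary. This is an illustration only, not the recipe's exact code: the messageKey and locale constructor parameters are assumed names, and the repositories are instantiated directly for brevity.

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using CookBook.Recipes.Core.CustomValidation.MessageRepository;
using CookBook.Recipes.Core.Data.Repository;

namespace CookBook.Recipes.Core.CustomValidation
{
    public class UniqueUserValidator : ValidationAttribute
    {
        public IRepository Repository { get; set; }
        public IMessageRepository MessageRepository { get; set; }

        // messageKey and locale are illustrative parameters, not part of the recipe's code.
        public UniqueUserValidator(string messageKey, string locale)
        {
            this.Repository = new MockRepository();
            this.MessageRepository = new XmlMessageRepository();

            // Resolve the localized text once and hand it to the base attribute,
            // so the rest of the validation pipeline works unchanged.
            IDictionary<string, string> messages = this.MessageRepository.GetMessages(locale);
            if (messages != null && messages.ContainsKey(messageKey))
            {
                this.ErrorMessage = messages[messageKey];
            }
        }

        public override bool IsValid(object value)
        {
            string valueToTest = Convert.ToString(value);
            return this.Repository.IsUsernameUnique(valueToTest);
        }
    }
}

With something like this in place, the annotation on the User class could become [UniqueUserValidator("not_unique_user", "fr")], and the French message from Messages.xml would be displayed when the username is not unique.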
Apache Solr Configuration

Packt
19 Feb 2013
17 min read
(For more resources related to this topic, see here.)

During the writing of this article, I used Solr version 4.0 and Jetty version 8.1.5. If another version of Solr is mandatory for a feature to run, then it will be mentioned. If you don't have any experience with Apache Solr, please refer to the Apache Solr tutorial, which can be found at http://lucene.apache.org/solr/tutorial.html.

Running Solr on Jetty

The simplest way to run Apache Solr on a Jetty servlet container is to run the provided example configuration based on embedded Jetty. But that's not the case here. In this recipe, I would like to show you how to configure and run Solr on a standalone Jetty container.

Getting ready

First of all you need to download the Jetty servlet container for your platform. You can get your download package from an automatic installer (such as apt-get), or you can download it yourself from http://jetty.codehaus.org/jetty/.

How to do it...

The first thing is to install the Jetty servlet container, which is beyond the scope of this article, so we will assume that you have Jetty installed in the /usr/share/jetty directory or that you copied the Jetty files to that directory.

Let's start by copying the solr.war file to the webapps directory of the Jetty installation (so the whole path would be /usr/share/jetty/webapps). In addition to that we need to create a temporary directory in the Jetty installation, so let's create the temp directory in the Jetty installation directory.

Next we need to copy and adjust the solr.xml file from the context directory of the Solr example distribution to the context directory of the Jetty installation. The final file contents should look like the following code:

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <Set name="contextPath">/solr</Set>
  <Set name="war"><SystemProperty name="jetty.home"/>/webapps/solr.war</Set>
  <Set name="defaultsDescriptor"><SystemProperty name="jetty.home"/>/etc/webdefault.xml</Set>
  <Set name="tempDirectory"><Property name="jetty.home" default="."/>/temp</Set>
</Configure>

Downloading the example code
You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Now we need to copy the jetty.xml, webdefault.xml, and logging.properties files from the etc directory of the Solr distribution to the configuration directory of Jetty, so in our case to the /usr/share/jetty/etc directory.

The next step is to copy the Solr configuration files to the appropriate directory. I'm talking about files such as schema.xml, solrconfig.xml, solr.xml, and so on. Those files should be in the directory specified by the solr.solr.home system variable (in my case this was the /usr/share/solr directory).
Please remember to preserve the directory structure you'll see in the example deployment, so for example, the /usr/share/solr directory should contain the solr.xml file (and in addition zoo.cfg in case you want to use SolrCloud) with contents like so:

<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="collection1">
    <core name="collection1" instanceDir="collection1" />
  </cores>
</solr>

All the other configuration files should go to the /usr/share/solr/collection1/conf directory (place the schema.xml and solrconfig.xml files there along with any additional configuration files your deployment needs). Your cores may have other names than the default collection1, so please be aware of that.

The last thing about the configuration is to update the /etc/default/jetty file and add -Dsolr.solr.home=/usr/share/solr to the JAVA_OPTIONS variable of that file. The whole line with that variable could look like the following:

JAVA_OPTIONS="-Xmx256m -Djava.awt.headless=true -Dsolr.solr.home=/usr/share/solr/"

If you didn't install Jetty with apt-get or similar software, you may not have the /etc/default/jetty file. In that case, add the -Dsolr.solr.home=/usr/share/solr parameter to the Jetty startup.

We can now run Jetty to see if everything is OK. To start a Jetty that was installed, for example, using the apt-get command, use the following command:

/etc/init.d/jetty start

You can also run Jetty with a java command. Run the following command in the Jetty installation directory:

java -Dsolr.solr.home=/usr/share/solr -jar start.jar

If there were no exceptions during the startup, we have a running Jetty with Solr deployed and configured. To check if Solr is running, try going to the following address with your web browser: http://localhost:8983/solr/. You should see the Solr front page with cores, or a single core, mentioned. Congratulations! You just successfully installed, configured, and ran the Jetty servlet container with Solr deployed.

How it works...

For the purpose of this recipe, I assumed that we needed a single core installation with only the schema.xml and solrconfig.xml configuration files. A multicore installation is very similar; it differs only in terms of the Solr configuration files.

The first thing we did was copy the solr.war file and create the temp directory. The WAR file is the actual Solr web application. The temp directory will be used by Jetty to unpack the WAR file.

The solr.xml file we placed in the context directory enables Jetty to define the context for the Solr web application. As you can see in its contents, we set the context to be /solr, so our Solr application will be available under http://localhost:8983/solr/. We also specified where Jetty should look for the WAR file (the war property), where the web application descriptor file is (the defaultsDescriptor property), and finally where the temporary directory will be located (the tempDirectory property).

The next step is to provide configuration files for the Solr web application. Those files should be in the directory specified by the solr.solr.home system variable. I decided to use the /usr/share/solr directory to ensure that I'll be able to update Jetty without having to override or delete the Solr configuration files. When copying the Solr configuration files, you should remember to include all the files and the exact directory structure that Solr needs.
So in the directory specified by the solr.solr.home variable, the solr.xml file should be available, the one that describes the cores of your system. The solr.xml file is pretty simple: there should be a root element called solr. Inside it there should be a cores tag (with the adminPath attribute set to the address where Solr's cores administration API is available and the defaultCoreName attribute that says which is the default core). The cores tag is a parent for the core definitions; each core should have its own core tag with a name attribute specifying the core name and an instanceDir attribute specifying the directory where the core-specific files will be available (such as the conf directory).

If you installed Jetty with the apt-get command or similar, you will need to update the /etc/default/jetty file to include the solr.solr.home variable for Solr to be able to see its configuration directory.

After all those steps we are ready to launch Jetty. If you installed Jetty with apt-get or similar software, you can run Jetty with the first command shown in the example. Otherwise you can run Jetty with a java command from the Jetty installation directory. After running the example query in your web browser you should see the Solr front page with a single core. Congratulations! You just successfully configured and ran the Jetty servlet container with Solr deployed.

There's more...

There are a few tasks you can do to counter some problems when running Solr within the Jetty servlet container. Here are the most common ones that I encountered during my work.

I want Jetty to run on a different port

Sometimes it's necessary to run Jetty on a port other than the default one. We have two ways to achieve that:

- Adding an additional startup parameter, jetty.port. The startup command would look like the following:

  java -Djetty.port=9999 -jar start.jar

- Changing the jetty.xml file. To do that you need to change the following line:

  <Set name="port"><SystemProperty name="jetty.port" default="8983"/></Set>

  To:

  <Set name="port"><SystemProperty name="jetty.port" default="9999"/></Set>

Buffer size is too small

Buffer overflow is a common problem when our queries are getting too long and too complex, for example, when we use many logical operators or long phrases. When the standard header buffer is not enough you can resize it to meet your needs. To do that, you add the following line to the Jetty connector in the jetty.xml file. Of course the value shown in the example can be changed to the one that you need:

<Set name="headerBufferSize">32768</Set>

After adding the value, the connector definition should look more or less like the following snippet:

<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.bio.SocketConnector">
      <Set name="port"><SystemProperty name="jetty.port" default="8080"/></Set>
      <Set name="maxIdleTime">50000</Set>
      <Set name="lowResourceMaxIdleTime">1500</Set>
      <Set name="headerBufferSize">32768</Set>
    </New>
  </Arg>
</Call>

Running Solr on Apache Tomcat

Sometimes you need to choose a servlet container other than Jetty. Maybe your client has other applications running on another servlet container, or maybe you just don't like Jetty. Whatever your requirements are that put Jetty out of the scope of your interest, the first thing that comes to mind is a popular and powerful servlet container, Apache Tomcat. This recipe will give you an idea of how to properly set up and run Solr in the Apache Tomcat environment.
Getting ready

First of all we need an Apache Tomcat servlet container. It can be found at the Apache Tomcat website, http://tomcat.apache.org. I concentrated on the Tomcat 7.x line because at the time of writing this book it was mature and stable. The version that I used during the writing of this recipe was Apache Tomcat 7.0.29, which was the newest one at the time.

How to do it...

To run Solr on Apache Tomcat we need to follow these simple steps:

Firstly, you need to install Apache Tomcat. The Tomcat installation is beyond the scope of this book, so we will assume that you have already installed this servlet container in the directory specified by the $TOMCAT_HOME system variable.

The second step is preparing the Apache Tomcat configuration files. To do that we need to add the following attribute to the connector definition in the server.xml configuration file:

URIEncoding="UTF-8"

The portion of the modified server.xml file should look like the following code snippet:

<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" URIEncoding="UTF-8" />

The third step is to create a proper context file. To do that, create a solr.xml file in the $TOMCAT_HOME/conf/Catalina/localhost directory. The contents of the file should look like the following code:

<Context path="/solr" docBase="/usr/share/tomcat/webapps/solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String" value="/usr/share/solr/" override="true"/>
</Context>

The next thing is the Solr deployment. For that we need the apache-solr-4.0.0.war file, which contains the necessary files and libraries to run Solr; copy it to the Tomcat webapps directory and rename it solr.war.

The last thing we need to do is add the Solr configuration files. The files that you need to copy are files such as schema.xml, solrconfig.xml, and so on. Those files should be placed in the directory specified by the solr/home variable (in our case /usr/share/solr/). Please don't forget that you need to ensure the proper directory structure. If you are not familiar with the Solr directory structure please take a look at the example deployment that is provided with the standard Solr package.

Please remember to preserve the directory structure you'll see in the example deployment, so for example, the /usr/share/solr directory should contain the solr.xml file (and in addition zoo.cfg in case you want to use SolrCloud) with contents like so:

<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="collection1">
    <core name="collection1" instanceDir="collection1" />
  </cores>
</solr>

All the other configuration files should go to the /usr/share/solr/collection1/conf directory (place the schema.xml and solrconfig.xml files there along with any additional configuration files your deployment needs). Your cores may have other names than the default collection1, so please be aware of that.

Now we can start the servlet container by running the following command:

bin/catalina.sh start

In the log file you should see a message like this:

Info: Server startup in 3097 ms

To ensure that Solr is running properly, you can run a browser and point it to an address where Solr should be visible, like the following: http://localhost:8080/solr/

If you see the page with links to the administration pages of each of the cores defined, that means that your Solr is up and running.

How it works...
Let's start from the second step, as the installation part is beyond the scope of this book. As you probably know, Solr uses UTF-8 file encoding. That means that we need to ensure that Apache Tomcat is informed that all requests and responses should use that encoding. To do that, we modified the server.xml file in the way shown in the example.

The Catalina context file (called solr.xml in our example) says that our Solr application will be available under the /solr context (the path attribute). We also specified the WAR file location (the docBase attribute). We said that we are not using debug mode (the debug attribute), and we allowed the application to access other web application contexts (the crossContext attribute). The last thing is to specify the directory where Solr should look for the configuration files. We do that by adding the solr/home environment variable with the value attribute set to the path to the directory where we have put the configuration files.

The solr.xml file is pretty simple: there should be a root element called solr. Inside it there should be a cores tag (with the adminPath attribute set to the address where the Solr cores administration API is available and the defaultCoreName attribute describing which is the default core). The cores tag is a parent for the core definitions; each core should have its own core tag with a name attribute specifying the core name and an instanceDir attribute specifying the directory where the core-specific files will be available (such as the conf directory).

The shell command that is shown starts Apache Tomcat. There are some other options of the catalina.sh (or catalina.bat) script; the descriptions of these options are as follows:

- stop: This stops Apache Tomcat
- restart: This restarts Apache Tomcat
- debug: This starts Apache Tomcat in debug mode
- run: This runs Apache Tomcat in the current window, so you can see the output on the console from which you run Tomcat

After opening the example address in the web browser, you should see a Solr front page with a core (or cores if you have a multicore deployment). Congratulations! You just successfully configured and ran the Apache Tomcat servlet container with Solr deployed.

There's more...

There are some other common problems you may encounter when running Solr on Apache Tomcat.

Changing the port on which we see Solr running on Tomcat

Sometimes it is necessary to run Apache Tomcat on a port other than 8080, which is the default one. To do that, you need to modify the port attribute of the connector definition in the server.xml file located in the $TOMCAT_HOME/conf directory. If you would like your Tomcat to run on port 9999, this definition should look like the following code snippet:

<Connector port="9999" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" URIEncoding="UTF-8" />

While the original definition looks like the following snippet:

<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" URIEncoding="UTF-8" />

Installing a standalone ZooKeeper

You may know that in order to run SolrCloud (the distributed Solr installation) you need to have Apache ZooKeeper installed. ZooKeeper is a centralized service for maintaining configuration information, naming, and providing distributed synchronization. SolrCloud uses ZooKeeper to synchronize configuration and cluster states (such as elected shard leaders), and that's why it is crucial to have a highly available and fault tolerant ZooKeeper installation.
If you have a single ZooKeeper instance and it fails, then your SolrCloud cluster will crash too. So, this recipe will show you how to install ZooKeeper so that it's not a single point of failure in your cluster configuration.

Getting ready

The installation instructions in this recipe cover installing ZooKeeper version 3.4.3, but they should be usable for any minor release of Apache ZooKeeper. To download ZooKeeper please go to http://zookeeper.apache.org/releases.html. This recipe will show you how to install ZooKeeper in a Linux-based environment. You also need Java installed.

How to do it...

Let's assume that we decided to install ZooKeeper in the /usr/share/zookeeper directory of our server and we want to have three servers (with IP addresses 192.168.1.1, 192.168.1.2, and 192.168.1.3) hosting the distributed ZooKeeper installation.

After downloading the ZooKeeper installation, we create the necessary directory:

sudo mkdir /usr/share/zookeeper

Then we unpack the downloaded archive to the newly created directory. We do that on all three servers.

Next we need to change our ZooKeeper configuration file and specify the servers that will form the ZooKeeper quorum, so we edit the /usr/share/zookeeper/conf/zoo.cfg file and add the following entries:

clientPort=2181
dataDir=/usr/share/zookeeper/data
tickTime=2000
initLimit=10
syncLimit=5
server.1=192.168.1.1:2888:3888
server.2=192.168.1.2:2888:3888
server.3=192.168.1.3:2888:3888

And now, we can start the ZooKeeper servers with the following command:

/usr/share/zookeeper/bin/zkServer.sh start

If everything went well you should see something like the following:

JMX enabled by default
Using config: /usr/share/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

And that's all. Of course you can also add the ZooKeeper service to start automatically during your operating system startup, but that's beyond the scope of the recipe and the book itself.

How it works...

Let's skip the first part, because creating the directory and unpacking the ZooKeeper server there is quite simple. What I would like to concentrate on are the configuration values of the ZooKeeper server. The clientPort property specifies the port on which our SolrCloud servers should connect to ZooKeeper. The dataDir property specifies the directory where ZooKeeper will hold its data. So far, so good, right?

Now for the more advanced properties: the tickTime property, specified in milliseconds, is the basic time unit for ZooKeeper. The initLimit property specifies how many ticks the initial synchronization phase can take. Finally, the syncLimit property specifies how many ticks can pass between sending a request and receiving an acknowledgement.

There are also three additional properties present: server.1, server.2, and server.3. These three properties define the addresses of the ZooKeeper instances that will form the quorum. Each entry has three values separated by colon characters: the first part is the IP address of the ZooKeeper server, and the second and third parts are the ports used by ZooKeeper instances to communicate with each other. One thing worth remembering is that each server in the quorum also needs a myid file in its data directory (/usr/share/zookeeper/data in this example) containing only its server number (1, 2, or 3), so that the instance knows which server.N entry refers to it.
An Introduction to Risk Analysis

Packt
18 Feb 2013
21 min read
(For more resources related to this topic, see here.)

Risk analysis

First, we must understand what risk is and how it is calculated, and then implement a solution to mitigate or reduce the calculated risk. At this point in the process of developing agile security architecture, we have already defined our data. The following sections assume we know what the data is, just not the true impact to the enterprise if a threat is realized.

What is risk analysis?

Simply stated, risk analysis is the process of assessing the components of risk (threats, impact, and probability) as they relate to an asset, in our case enterprise data. To ascertain risk, the probability of impact to enterprise data must first be calculated. A simple risk analysis output may be the decision to spend capital to protect an asset based on the value of the asset and the scope of impact if the risk is not mitigated. This is the most general form of risk analysis, and there are several methods that can be applied to produce a meaningful output.

Risk analysis is directly impacted by the maturity of the organization, in terms of being able to show value to the enterprise as a whole and understanding the applied risk methodology. If the enterprise does not have a formal risk analysis capability, it will be difficult for the security team to use this method to properly implement security architecture for enterprise initiatives. Without this capability, the enterprise will either spend on the products with the best marketing, or not spend at all. Let's take a closer look at the risk analysis components and figure out where useful analysis data can be obtained.

Assessing threats

First, we must define what a threat is in order to identify probable threats. It may be difficult to determine threats to the enterprise data if this analysis has never been completed. A threat is anything that can act negatively towards the enterprise assets. It may be a person, a virus, malware, or a natural disaster. Due to the broad scope of threats, actions may be purposeful or unintentional in nature, adding to the unpredictability of impact. Once a threat is defined, the attributes of threats must be identified and documented. The documentation of threats should include the type of threat, identified threat groupings, motivations if any, and methods of action.

In order to gain an understanding of pertinent threats to the enterprise, researching past events may be helpful. Historically, there have been challenges to getting realistic breach data, but better reporting of post-breach findings continues to reduce the uncertainty of analysis. Another way to get data is to leverage the security technologies already implemented to build a realistic perspective of threats. The following are a few sample questions to guide you on the discovery of threats:

- What is being detected by the existing infrastructure?
- What are others in the same industry observing?
- What post-breach data is available in the same industry vertical?
- Who would want access to this data?
- What would motivate a person to attempt unauthorized access to the data?
Possible motivations include:

- Data theft
- Destruction
- Notoriety
- Hacktivism
- Retaliation

A sample table of data type, threat, and motivation is shown as follows:

Data                                        Threat                 Motivation
Credit card numbers                         Hacker                 Theft, Cybercrime
Trade secrets                               Competitor             Competitive advantage
Personally Identifiable Information (PII)   Disgruntled employee   Retaliation, Destruction
Company confidential documents              Accidental leak        None
Client list                                 Natural disaster       None

This should be developed with as much detail as possible to form a realistic view of threats to the enterprise. There may also be several variations of threats and motivations for threat action on enterprise data. For example, accessing trade secrets by a competitor may be for competitive advantage, or a hacker may take action as part of hacktivism to bring negative press to the enterprise. The more you can elaborate on the possible threats and motivations that exist, the better you will be able to reduce the list to probable threats by challenging the data you have gathered. It is important to continually challenge the logic used in order to have the most realistic perspective.

Assessing impact

Now that the probable threats have been identified, what kind of damage can be done, or what negative impact can be enacted upon the enterprise and the data? Impact is the outcome of threats acting against the enterprise. This could be a denial-of-service condition where the agent, a hacker, uses a tool to starve the enterprise's Internet-facing web servers of resources, denying service to legitimate users. Another impact could be the loss of customer credit cards, resulting in online fraud, reputation loss, and countless dollars in cleanup and remediation efforts.

There are immediate impacts and residual impacts. Immediate impacts are rather easy to determine because, typically, this is what we see in the news if it is a big enough issue. Hopefully, the impact data does not come from first-hand experience, but if it does, executives should take action and learn from their mistakes. If there is no real-life experience with the impact, researching breach data on Internet sites such as DataLossDB (http://datalossdb.org) will help. Also, understanding the value of the data to the enterprise and its customers will aid in impact calculation. I think the latter impact analysis is more useful, but if the enterprise is unsure, then relying on breach data may be the only option. The following are a few sample discovery questions for business impact analysis:

- How is the enterprise affected by threat actions?
- Will we go out of business?
- Will we lose market share?
- If the data is deleted or manipulated, can it be recovered or restored?
- If the building is destroyed, do we have disaster recovery and business continuity capabilities?

To get a more accurate assessment of the probable impact or total cost to the enterprise, map out what data is most desirable to steal, destroy, or manipulate. Align the identified threats to the identified data, and apply an impact level to the data indicating whether the enterprise would suffer critical to minor loss. These should be as accurate as possible. Work the scenarios out on paper and base the impact analysis on the outcome of the exercises. The following is a sample table presenting the identification and assessment of impact based on threat for a retailer. This is generally called a business impact analysis.
Data                                        Threat                 Impact
Credit card numbers                         Hacker                 Critical
Trade secrets                               Competitor             Medium
PII                                         Disgruntled employee   High
Company confidential documents              Accidental leak        Low
Client list                                 Natural disaster       Medium

The enterprise's industry vertical may affect the impact analysis. For instance, a retailer may suffer greater impact if credit card numbers are stolen than if its client list is stolen. Both scenarios have impact, but one may warrant greater protection and more restricted access to limit the scope of impact and reduce immediate and residual loss. Business impact should be measured by how the threat actions affect the business overall. Is it an annoyance or does it mean the business can no longer function? Natural disasters should also be accounted for and considered when assessing enterprise risk.

Assessing probability

Now that all conceived threats have been identified along with the business impact for each scenario, how do we really determine risk? Shouldn't risk be based on how likely the threat is to take action, succeed, and cause an impact? Yes! The threat can be the most perilous thing imagined, but if threat actions may only occur once in three thousand years, investment in protecting against the threat may not be warranted, at least in the near term. Probability data is as difficult, if not more difficult, to find than threat data. However, this calculation has the most influence on the derived risk. If the identified impact is expected to happen twice a year and the business impact is critical, perhaps security budget should be allocated to security mechanisms that mitigate or reduce the impact. The risk of the latter scenario would be higher because it is more probable; not possible, but probable. Anything is possible.

I have heard an analogy that makes the point. In the game of Russian roulette, a semi-automatic pistol either has a bullet in the chamber or it does not; this is possibility. With a revolver and a quick spin of the cylinder, you now have a 1 in 6 chance that a bullet will be fired when the firing pin strikes forward. This is oversimplified to illustrate possibility versus probability. There are several variables in the example that could affect the outcome, such as a misfire or the safety catch being enabled, stopping the gun's ability to fire. These would be calculated to form an accurate risk value. Make sense? This is how we need to approach probability.

Technically, it is a semi-accurate estimation, because there is just not enough detailed information on breaches and attacks to draw absolute conclusions. One approach may be to research what is happening in the same industry using online resources and peer groups, and then make intelligent estimates to determine whether the enterprise could be affected too. Generally, there are outlier scenarios that require the utmost attention regardless; start here if these have not been identified as a probable risk scenario for the enterprise. The following are a few sample probability estimation questions:

- Has this event occurred before to the enterprise?
- Is there data to suggest it is happening now?
- Are there documented instances for similar enterprises?
- Do we know anything in regard to its rate of occurrence?
- Is the identified threat and impact really probable?
The following table is the continuation of our risk analysis for our fictional retailer:

Data                                        Threat                 Impact     Probability
Credit card numbers                         Hacker                 Critical   High
Trade secrets                               Competitor             Medium     Low
PII                                         Disgruntled employee   High       Medium
Company confidential documents              Accidental leak        Low        Low
Client list                                 Natural disaster       Medium     High

Based on the outcome of the probability exercises for the identified threats and impacts, risk can be calculated and the appropriate course of action(s) developed and implemented.

Assessing risk

Now that the enterprise has agreed on what data has value, identified threats to the data, rated the impact to the enterprise, and estimated the probability of the impact occurring, the next logical step is to calculate the risk of the scenarios. Essentially, there are two methods to analyze and present risk: qualitative and quantitative. The decision to use one over the other should be based on the maturity of the enterprise's risk office. In general, a quantitative risk analysis will use descriptive labels just as a qualitative method does; however, there is more financial and mathematical analysis involved in quantitative analysis.

Qualitative risk analysis

Qualitative risk analysis provides a perspective of risk in levels with labels such as Critical, High, Medium, and Low. The enterprise must still define what each level means from a general financial perspective. For instance, a Low risk level may equate to a monetary loss of $1,000 to $100,000. The dollar ranges associated with each risk level will vary by enterprise. This must be agreed on by the entire enterprise so that when risk is discussed, everyone knows what each label means financially. Do not confuse the estimated financial loss with the more detailed quantitative risk analysis approach; it is a simple valuation metric for deciding how much investment should be made based on probable monetary loss.

The following section is an example qualitative risk analysis presenting the type of input required for the analysis. Notice that this is not a deep analysis of each of these inputs; it is designed to provide a relatively accurate perspective of risk associated with the scenario being analyzed.

Qualitative risk analysis exercise

Scenario: Hacker attacks website to steal credit card numbers located in a backend database.
Threat: External hacker.
Threat capability: Novice to pro.
Threat capability logic: There are several script-kiddie level tools available to wage SQL injection attacks. SQL injection is also well documented, and professional hackers can use advanced techniques in conjunction with the automated tools.
Vulnerability: 85 percent (how effective the threat would be against current mitigating mechanisms).
Estimated impact: High, Medium, Low (as indicated in the following table).

Risk     Estimated loss ($)
High     > 1,000,000
Medium   500,000 to 900,000
Low      < 500,000

Quantitative risk analysis

Quantitative risk analysis is an in-depth assessment of what the monetary loss would be to the enterprise if the identified risk were realized. In order to facilitate this analysis, the enterprise must have a good understanding of its processes to determine a relatively accurate dollar amount for items such as systems, data restoration services, and the man-hour breakdown for recovery or remediation of an impacting event.
Typically, enterprises with a mature risk office will undertake this type of analysis to drive priority budget items or find areas to increase insurance, effectively transferring business risk. This will also allow for accurate communication to the board and enterprise executives, who can then know at any given time the amount of risk the enterprise has assumed. With the quantitative approach, a more accurate assessment of the threat types, threat capabilities, vulnerability, threat action frequency, and expected loss per threat action is required, and each must be as accurate as possible. As with qualitative risk analysis, the output of this analysis has to be compared to the cost to mitigate the identified threat. Ideally, the cost to mitigate would be less than the loss expectancy over a determined period of time. This is a simple return on investment (ROI) calculation.

Let's look again at the scenario used in the qualitative analysis and run it through a quantitative analysis. We will then compare it against the price of a security product that would mitigate the risk to see if it is worth the capital expense. Before we begin the quantitative risk analysis, there are a couple of terms that need to be explained:

- Annual loss expectancy (ALE): The ALE is the calculation of what the financial loss to the enterprise would be over a single year period. This is directly related to threat frequency. In the scenario the event is expected once every three years, so dividing the single loss expectancy by three gives the ALE.
- Cost of protection (COP): The COP is the capital expense associated with the purchase or implementation of a security mechanism to mitigate or reduce the risk scenario. An example would be a firewall that costs $150,000, or $50,000 per year of protection over the three-year loss expectancy period. If the cost of protection over the same period is lower than the loss, this is a good indication that the capital expense is financially worthwhile.

Quantitative risk analysis exercise

Scenario: Hacker attacks website to steal credit card numbers located in a backend database.
Threat: External hacker.
Threat capability: Novice to pro.
Threat capability logic: There are several script-kiddie level tools available to wage SQL injection attacks. SQL injection is also well documented, and professional hackers can use advanced techniques in conjunction with the automated tools.
Vulnerability: 85 percent (how effective the threat would be against current mitigating mechanisms).
Single loss expectancy: $250,000.
Threat frequency: once every three years.
ALE: $83,000 (roughly $250,000 divided by 3).
COP: $150,000 (over 3 years, or $50,000 per year).

We will divide both the total loss and the cost of protection over three years because, typically, capital expenses are depreciated over three to four years, and the loss is expected once every three years. This gives us the ALE and COP for the equation used in the cost-benefit analysis. This is a simplified example, but the math would look as follows:

$83,000 (ALE) - $50,000 (COP) = $33,000 (cost benefit)

The annual loss is $33,000 more than the annual cost to protect against the threat. The assumption in our example is that the $250,000 figure represents 85 percent of the total asset value, the portion the threat is expected to reach given our existing 15 percent protection capability; the full asset value would therefore be approximately $294,000. This step can be shortcut out of the equation if the ALE and rate of occurrence are already known. When trying to figure out threat capability, try to be as realistic as possible about the threat first.
This will help us to better assess vulnerability, because you will have a more accurate perspective on how realistic the threat is to the enterprise. For instance, if your scenario requires cracking advanced encryption and extensive system experience, the threat capability would be expert, indicating that current security controls may be acceptable for the majority of threat agents, reducing probability and calculated risk. We tend to exaggerate in security to justify a purchase. We need to stop this trend and focus on the best areas to spend precious budget dollars.

The ultimate goal of a quantitative risk analysis is to ensure that spending on protection does not far exceed the probable loss from the threat the enterprise is protecting against. This is beneficial for the security team in justifying the expense of security budget line items. When the analysis is complete, there should still be a qualitative risk label associated with the risk. Using the above scenario with an annualized risk of $50,000 indicates this scenario is extremely low risk based on the risk levels defined in the qualitative risk exercise, even if the SLE is used. Does this analysis accurately represent acceptable loss? After an assessment is complete it is good practice to ensure all assumptions still hold true, especially the risk labels and associated monetary amounts.

Applying risk analysis to trust models

Well, now we can apply our risk methodology to our trust models to decide if we can continue with our implementation as is, or whether we need to change our approach based on risk. Our trust models, which are essentially use cases, rely on completing the risk analysis, which in turn decides the trust level and security mechanisms required to reduce the enterprise risk to an acceptable level. It would be foolish to think that we can shove all requests for similar access directly into one of these buckets without further analysis to determine the real risk associated with the request. After completing one of the risk analysis types we just covered, risk guidance can be provided for the scenario (and I stress guidance). For the sake of simplicity an implementation path may be chosen, but it will lead to compromises in the overall security of the enterprise and is cautioned against.

I have re-presented the table of one scenario, the external application user. This is a better representation of how a trust model should look, with risk and security enforcement established for the scenario. If an enterprise is aware of how it conducts business, then a focused effort in this area should produce a realistic list of interactions with data: by whom, with what level of trust, and, based on risk, what controls need to be present and enforced by policy and standards.

User type                      External
Allowed access                 Tier 1 DMZ only, Least privilege
Trust level                    1 - Not trusted
Risk                           Medium
Policy                         Acceptable use, Monitoring, Access restrictions
Required security mechanisms   FW, IPS, Web application firewall

The user is assumed to have access to log in to the web application and therefore have more possible interaction with the backend database(s). This should be a focal point for testing, because this is the biggest area of risk in this scenario. Threats such as SQL injection, which can be waged against a web application with little to no experience, are commonplace. Enterprises that have e-commerce websites typically do not restrict who can create an account. This should be an input to the trust decision and ultimately the security architecture applied.
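Before comparing the two methods, it is worth making the arithmetic behind the quantitative exercise above explicit. The dollar amounts and the three-year period are the exercise's own assumptions; the annualized rate of occurrence (ARO) notation is standard risk terminology that the article implies but does not name, and the ALE is rounded as in the text:

\begin{aligned}
\mathrm{ALE} &= \mathrm{SLE} \times \mathrm{ARO} = \$250{,}000 \times \tfrac{1}{3} \approx \$83{,}000 \\
\mathrm{COP_{annual}} &= \$150{,}000 / 3 = \$50{,}000 \\
\mathrm{Cost\ benefit} &= \mathrm{ALE} - \mathrm{COP_{annual}} \approx \$83{,}000 - \$50{,}000 = \$33{,}000
\end{aligned}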
Deciding on a risk analysis methodology

We have covered the two general types of risk analysis, qualitative and quantitative, but which is best? It depends on several factors: the risk awareness of the enterprise, the risk analysts' capabilities, the available risk analysis data, and the influence of risk in the enterprise. If the idea of risk analysis or IT risk analysis is new to the enterprise, then a slow approach with qualitative analysis is recommended to get everyone thinking about risk and what it means to the business. It will be imperative to get enterprise-wide agreement on the risk labels. Using the less involved method does not mean you will not be questioned on the data used in the analysis, so be prepared to defend the data used and explain the estimation methods leveraged.

If it is decided to use a quantitative risk analysis method, a considerable amount of effort is required, along with meticulous loss figures and knowledge of the environment. This method is considered the most effective, requiring risk expertise, resources, and an enterprise-wide commitment to risk analysis. This method is more accurate, though it can be argued that since both methods require some level of estimation, the accuracy lies in accurate estimation skills. I use the Douglas Hubbard school of thought on estimating with 90 percent accuracy. You will find his works at his website, http://www.hubbardresearch.com/. I highly recommend his title How to Measure Anything: Finding the Value of "Intangibles" in Business, Tantor Media, to learn estimation skills. It may be beneficial to have an external firm perform the analysis if the engagement is significant in size.

The benefit of both should be that the enterprise is able to make risk-aware decisions on how to securely implement IT solutions. Both should be presented with common risk levels such as High, Medium, and Low; essentially the common language everyone can speak, knowing a range of financial risk without all the intimate details of how the risk level was arrived at.

Other thoughts on risk and new enterprise endeavors

Now that you have been presented with the types of risk analysis, they should be applied as tools to best approach the new technologies being implemented in the networks of our enterprises. Unfortunately, there are broad brush strokes of trusted and untrusted approaches being applied that may or may not be accurate without risk analysis as a decision input. Two examples where this can be very costly are the new BYOD and cloud initiatives. At first glance these are the two most risky business maneuvers an enterprise can attempt from an information security perspective. Deciding if this really is the case requires an analysis based on trust models and data-centric security architecture. If the proper security mechanisms are implemented and security is applied from users to data, the risk can be reduced to a tolerable level.

The BYOD business model has many benefits to the enterprise, especially capital expense reduction. However, implementing a BYOD or cloud solution without further analysis of risk can introduce significant risk beyond the benefit of the initiative. Do not be quick to spread fear in order to avoid facing the changing landscape we have worked so hard to build and secure. It is different, but at one time, what we know today as the norm was new too. Be cautious but creative, or IT security will be discredited for what will be perceived as a difficult interaction. This is not the desired perception for IT security.
Strive to understand the business case and the risk to business assets (data, systems, people, processes, and so on), and then apply sound security architecture as we have discussed so far. Begin evangelizing the new approach to security in the enterprise by developing trust models that everyone can understand. Use this as the introduction to agile security architecture and get input to create models based on risk. By providing a risk-based perspective on emerging technologies and other radical requests, a methodical approach can bring better adoption and overall increased security in the enterprise.

Summary

In this article, we took a look at analyzing risk by presenting quantitative and qualitative methods, including an exercise to understand the approach. The overall goal of security is to be integrated into business processes, so it is truly a part of the business and not an expensive afterthought simply there to patch a security problem.

Resources for Article:

Further resources on this subject:
- Microsoft Enterprise Library: Security Application Block [Article]
- Microsoft Enterprise Library: Authorization and Security Cache [Article]
- Getting Started with Enterprise Library [Article]
Packaging Content Types and Feeds Importers

Packt
25 Jan 2013
8 min read
(For more resources related to this topic, see here.)

Features

Let's get started. First, we will look at some background information on what Features does. The code that the Features module gives us is in the form of module files sitting in a module folder that we can save to our /sites/all/modules directory, as we would do for any other contributed module. Using this method, we will have the entire configuration that we spent hours building saved into a module file and in code.

The Features module will keep track of the tweaks we make to our content type configuration or importer for us. If we make changes to our type or importer, we simply save a new version of our Features module. The Features module configuration and setup screen is at Structure | Features, or you can go to this path: admin/structure/features. There is no generic configuration for Features that you need to worry about setting up. If you have the Feeds module installed as we do, you'll see two example features that the Feeds module provides: Feeds Import and Feeds News. You can use these provided features or create your own. We're going to create our own in the next section. You should see the following screen at this point:

Building a content type feature

We have two custom content types so far on our site, Fire Department and Organization Type. Let's package up the Fire Department content type as a feature so that the Features module can start to keep track of the content type configuration and any changes we make going forward.

Creating and enabling the feature

First click on the Create Feature tab on your Features administration screen. The screen will load a new create feature form. Now follow these steps to create your first feature. We're going to package up our Fire Department content type:

1. Enter a name for your feature. This should be something specific such as Fire Department Content Type.
2. Add a description for the feature. This should be something like This feature packages up our Fire Department Content type configuration.
3. You can create a specific package for your feature. This will help to organize and group your features on the main Features admin screen. Let's call this package Content Types.
4. Version your feature. This is very important as your feature is going to be a module. It's a good idea to increment the version number of your feature each time you make a change to it. Our first version will be 7.x-1.0.
5. Leave the URL of update XML blank for now. By this point you should see the following:
6. Now we're going to add our components to the feature. As this feature will be our Fire Department content type configuration, we need to choose this content type as our component. In the drop-down box select Content types: node.
7. Now check the Fire Department checkbox. When you do this you'll see a timer icon appear for a second, and then magically all of your content type fields, associated taxonomy, and dependencies will appear in the table to the right. This means that your feature is adding the entire content type configuration.

Features is a smart module. It will automatically associate any fields, taxonomy, or other dependencies and requirements with your specific feature configuration. As our content type has taxonomy vocabularies associated with it (in the form of the term reference fields), you'll notice that both country and fire_department_type are in the Taxonomy row of the feature table.
You should now see the following:

Now click on the Download feature button at the bottom of the screen to download the actual module code for our Fire Department feature module. Clicking on Download feature will download the .tar file of the module to your local computer. Find the .tar file and then extract it into your /sites/all/modules/ directory. For organizational best practice, I recommend placing it into a /custom directory within /sites/all/modules, as this is really a custom module. You should now see a folder called fire_department_content_type in your /sites/all/modules/custom folder. This folder contains the feature module files that you just downloaded.

Now, if you go back to your main Features administration screen, you will see a new tab titled Content Types that contains your new feature module called Fire Department Content Type. Currently this feature is disabled; you can see the version number in the same row. Go ahead and check the checkbox next to your feature and then click on the Save settings button. What you are doing here is enabling your feature as a module on your site, and from now on your content type's configuration will always be running from this codebase.

When you click on Save settings, your feature should be enabled and showing Default status. When a feature is in the Default state, your configuration (in this case, the Fire Department content type) matches your feature module's codebase. This specific feature is now set up to keep track of any changes that may occur to the content type. So, for example, if you added a new field to your content type or tweaked any of its existing fields, display formatters, or any other part of its configuration, that feature module would have a status of Overridden. We'll demonstrate this in the next section.

The custom feature module

Before we show the Overridden status, however, let's take a look at the actual custom feature module code that we've saved. You'll recall that we added a new folder for our Fire Department content type feature to our /sites/all/modules/custom folder. If you look inside the feature module's folder, you'll see the following files, which follow the same constructs as any Drupal module:

- fire_department_content_type.features.field.inc
- fire_department_content_type.features.inc
- fire_department_content_type.features.taxonomy.inc
- fire_department_content_type.info
- fire_department_content_type.module

Anyone familiar with Drupal modules will see that this is indeed a Drupal module with .info, .module, and .inc files. If you inspect the .info file in an editor, you'll see the following code (this is an excerpt):

    name = Fire Department Content Type
    description = This feature packages up our Fire Department Content type configuration
    core = 7.x
    package = Content Types
    version = 7.x-1.0
    project = fire_department_content_type
    dependencies[] = features

The brunt of our module is in the fire_department_content_type.features.field.inc file. This file contains all of our content type's fields, defined as a series of entries in the $fields array (see the following excerpt of code):

    /**
     * @file
     * fire_department_content_type.features.field.inc
     */

    /**
     * Implements hook_field_default_fields().
     */
    function fire_department_content_type_field_default_fields() {
      $fields = array();

      // Exported field: 'node-fire_department-body'.
      $fields['node-fire_department-body'] = array(
        'field_config' => array(
          'active' => '1',
          'cardinality' => '1',
          'deleted' => '0',
          'entity_types' => array(
            0 => 'node',
          ),
          'field_name' => 'body',
          'foreign keys' => array(
            'format' => array(
              'columns' => array(
                'format' => 'format',
              ),
              'table' => 'filter_format',

If you view the taxonomy.inc file, you'll see two arrays that return the vocabularies we're referencing via the term reference fields of our content type. As you can see, this module has packaged up our entire content type configuration. It's beyond the scope of this book to get into more detail about the actual module files, but you can see how powerful this can be. If you are a module developer, you could add code to the specific feature module's files to extend and expand your content type directly from the code, and this would then be synced to your feature module codebase. Generally, you do not tweak a feature this way, but you do have access to the code and can make changes there if needed. What we'll be doing instead is overriding our feature from the content type configuration level.

Additionally, if you load your site's modules admin screen and scroll down until you see the new package called Content Types, you'll see your feature module enabled here on the modules admin screen:

If you disable the feature module here, it will also be disabled on your Features admin screen. Best practice dictates that you should first disable a feature module via the Features admin screen; this will then disable the module on the modules admin screen.

Securing Portal Contents

Packt
24 Jan 2013
8 min read
(For more resources related to this topic, see here.)

Introduction

This article discusses the configurations aimed at providing security features to portals and all the related components. We will see that we can work using either the web console or the XML configuration files. As you would expect, the latter is more flexible in most instances. Many of the configuration snippets shown in the article are based on Enterprise Deployment Descriptors (DD). Keep in mind that XML always remains the best option for configuring a product. We will configure GateIn in different ways to show how to adapt some of the internal components to your needs.

Enterprise Deployment Descriptors (DD) are configuration files related to an enterprise application component that must be deployed in an application server. The goal of the deployment descriptor is to define how a component must be deployed in the container, configuring the state of the application and its internal components. These configuration files were introduced in the Java Enterprise Platform to manage the deployment of Java Enterprise components such as Web Applications, Enterprise Java Beans, Web Services, and so on. Typically, for each specific container, you have a different definition of the descriptor, depending on vendors and standard specifications.

Typically, a portal consists of pages related to a public section and a private section. Depending on the purpose, of course, we can also work with a completely private portal. The two main mechanisms used in any user-based application are the following:

- Authentication
- Authorization

In this article we will discuss authorization: how to configure and manage permissions for all the objects involved in the portal. As an example, a User is a member of a Group, which provides him with some authorizations. These authorizations are the things that members of the Group can do in the portal. On the other side, a page is defined with some permissions, which say which Groups can access it. Now, we are going to see how to configure and manage these permissions for pages, components in a page, and so on in the portal.

Securing portals

The authorization model of the portal is based on the association between the following actors: groups, memberships, users, and any content inside the portal (pages, categories, or portlets). In this recipe, we will assign the admin role to a set of pages under a specific URL of the portal. This configuration can be found in the default portal provided with GateIn, so you can take the complete code from there.

Getting ready

Locate the web.xml file inside your portal application.

How to do it...

We need to configure the web.xml file, assigning the admin role to the pages under the URL http://localhost:8080/portal/admin/* in the following way:

    <security-constraint>
      <web-resource-collection>
        <web-resource-name>admin authentication</web-resource-name>
        <url-pattern>/admin/*</url-pattern>
        <http-method>POST</http-method>
        <http-method>GET</http-method>
      </web-resource-collection>
      <auth-constraint>
        <role-name>admin</role-name>
      </auth-constraint>
      <user-data-constraint>
        <transport-guarantee>NONE</transport-guarantee>
      </user-data-constraint>
    </security-constraint>

The role must be declared in a separate section, outside the security-constraint tag, through the security-role tag. The role-name tag defines the ID of the role:

    <security-role>
      <description>the admin role</description>
      <role-name>admin</role-name>
    </security-role>

How it works...
GateIn allows you to add different roles for every section of the portal, simply by adding a path expression that can include a set of sub-pages using wildcard notation (/*). This is done by first defining all the needed roles using the security-role element, and then defining a security-constraint element for each set of pages that you want to involve. PicketLink is also used for users and memberships, and can manage the organization of the groups.

There's more...

Configuring GateIn with JAAS

GateIn uses JAAS (Java Authentication and Authorization Service) as its security model. JAAS is the most common framework used in the Java world to manage authentication and authorization. The goal of this framework is to separate the responsibility for users' permissions from the Java application. In this way, you have a bridge for permissions management between your application and the security provider. For more information about JAAS, please see the following URL:

http://docs.oracle.com/javase/6/docs/technotes/guides/security/jaas/JAASRefGuide.html

Java EE application servers and JSP/servlet containers, such as JBoss and Tomcat, also support JAAS with specific deployment descriptors. The default JAAS module implemented in GateIn synchronizes the users and roles from the database. In order to add your portal to a specific realm, add the following snippet in web.xml:

    <login-config>
      ...
      <realm-name>gatein-domain</realm-name>
      ...
    </login-config>

Notice that a realm can be managed by JAAS or another authorization framework; from the point of view of the Java Enterprise Edition, it is not important which one is used. gatein-domain is the ID of the default GateIn domain, which we will use as the default reference for the following recipes.

See also

- The Securing with JBoss AS recipe
- The Securing with Tomcat recipe

Securing with JBoss AS

In this recipe, we will configure GateIn with JAAS using JBoss AS (5.x and 6.x).

Getting ready

Locate the WEB-INF folder inside your portal application.

How to do it...

Create a new file named jboss-web.xml in the WEB-INF folder with the following content:

    <jboss-web>
      <security-domain>java:/jaas/gatein-domain</security-domain>
    </jboss-web>

How it works...

This is the JNDI URL where the JAAS module will be referenced. This URL will automatically look up the JAAS modules called gatein-domain. The configuration of the modules can be found inside the file gatein-jboss-beans.xml. Usually, this file is inside the deployed <PORTAL_WAR_ROOT>/META-INF, but it could be placed anywhere inside the deploy directory of JBoss, thanks to the auto-discovery feature provided by JBoss AS. Here is an example:

    <deployment>
      <application-policy name="gatein-domain">
        <authentication>
          <login-module code="org.gatein.wci.security.WCILoginModule" flag="optional">
            <module-option name="portalContainerName">portal</module-option>
            <module-option name="realmName">gatein-domain</module-option>
          </login-module>
          <login-module code="org.exoplatform.web.security.PortalLoginModule" flag="required">
          ...
      </application-policy>
    </deployment>

JAAS allows adding several login modules, which will be executed in cascade according to the flag attribute. The following is a description of the valid values for the flag attribute and their respective semantics, as given in the Java standard API:

- Required: The LoginModule is required to succeed. Whether it succeeds or fails, authentication still proceeds to the next LoginModule in the list.
- Requisite: The LoginModule is required to succeed. If it succeeds, authentication continues with the next LoginModule in the list. If it fails, control immediately returns to the application and authentication does not proceed to the next LoginModule.
- Sufficient: The LoginModule is not required to succeed. If it does succeed, control immediately returns to the application and authentication does not proceed to the next LoginModule. If it fails, authentication continues with the next LoginModule.
- Optional: The LoginModule is not required to succeed. Whether it succeeds or fails, authentication still proceeds to the next LoginModule.

Look at the recipe Choosing the JAAS modules for details about each login module.

See also

- The Securing portals recipe
- The Securing with Tomcat recipe
- The Choosing the JAAS modules recipe

Securing with Tomcat

In this recipe, we will configure a JAAS realm using Tomcat 6.x.x/7.x.x.

Getting ready

Locate the declaration of the realm inside <PORTAL_WAR_ROOT>/META-INF/context.xml.

How to do it...

Change the default configuration to suit your needs, as described in the previous recipe. The default configuration is the following:

    <Context path='/portal' docBase='portal' debug='0'
             reloadable='true' crossContext='true' privileged='true'>
      <Realm className='org.apache.catalina.realm.JAASRealm'
             appName='gatein-domain'
             userClassNames='org.exoplatform.services.security.jaas.UserPrincipal'
             roleClassNames='org.exoplatform.services.security.jaas.RolePrincipal'
             debug='0' cache='false'/>
      <Valve className='org.apache.catalina.authenticator.FormAuthenticator'
             characterEncoding='UTF-8'/>
    </Context>

Then change the default configuration of the JAAS domain that is defined in the TOMCAT_HOME/conf/jaas.conf file. Here is the default configuration:

    gatein-domain {
      org.gatein.wci.security.WCILoginModule optional;
      org.exoplatform.services.security.jaas.SharedStateLoginModule required;
      org.exoplatform.services.security.j2ee.TomcatLoginModule required;
    };

How it works...

As we have seen in the previous recipe, we can configure the modules in Tomcat using a different configuration file. This means that we can change and add login modules that are related to a specific JAAS realm. The context.xml file is stored inside the web application. If you don't want to modify this file, you can add a new file called portal.xml in the conf folder to override the current configuration.

See also

- The Securing with JBoss AS recipe
- The Choosing the JAAS modules recipe

Creating and configuring a basic mobile application

Packt
17 Jan 2013
3 min read
(For more resources related to this topic, see here.)

How to do it...

Follow these steps:

1. Inside your Magento Admin Panel, navigate to Mobile | Manage Apps on the main menu.
2. Click on the Add App button in the top-right corner. The New App screen will be shown.
3. Since we have to create a separate application for each mobile device type, let's choose our first targeted platform. Under the Device Type list, we can choose iPad, iPhone, or Android. For the purpose of this recipe, since the procedure is almost the same for all device types, I will choose Android.
4. After choosing the desired Device Type, click on the Continue button, and then click on the General tab under Manage Mobile App.
5. First, we have to fill in the box named App Name. Choose an appropriate name for your mobile application and insert it there.
6. Under the Store View list, make sure to choose our earlier defined Store View with the updated mobile theme exceptions, our mobile copyright information, and category thumbnail images.
7. Set the Catalog Only App option to No.
8. Click on the Save and Continue Edit button in the top-right corner of the screen. You will now notice a warning message from Magento that says something like the following:

   Please upload an image for "Logo in Header" field from Design Tab.
   Please upload an image for "Banner on Home Screen" field from Design Tab.

   Don't worry; Magento simply expects us to add the basic images that we prepared for our mobile app. So let's add them.
9. Click on the Design tab on the left-hand side of the screen.
10. Locate the Logo in Header label and click on the Browse... button on the right to upload the prepared small header logo image. Make sure to upload an image with the proper dimensions for the selected device type (iPhone, iPad, or Android).
11. In the same way, click on the Browse... button to the right of the Banner on Home Screen label and choose the appropriate prepared and resized banner image.
12. Now, let's click on the Save and Continue Edit button in order to save our settings.

How it works

For each device type, we have to create a new Magento Mobile application in our Magento Mobile Admin Panel. Once we select a Device Type and click on the Save button, we cannot change the Device Type later for that application. If we have chosen the wrong Device Type, the only solution is to delete this app and create a new one with the proper settings. The same applies to the Store View chosen when configuring a new app.

There's more...

When our configuration is saved for the first time, an auto-generated App Code will appear on the screen. This is the code that uniquely identifies our Device Type and the assigned application so that it is properly recognized by Magento Mobile. For example, defand1 means that this application is the first defined application for the default Store View targeted at Android (def = default store view, and = android).

How to use the mobile application as catalog only

In step 7 we set Catalog Only App to No. However, if we don't need checkout and payment in our mobile app and want to use it just as a catalog to show products to our mobile customers, we simply set the Catalog Only option to Yes.

Summary

So this is how we create the basic configuration for our mobile app.

Resources for Article:

Further resources on this subject:

- Integrating Twitter with Magento [Article]
- Integrating Facebook with Magento [Article]
- Getting Started with Magento Development [Article]
Adding Geographic Capabilities via the GeoPlaces Theme

Packt
03 Jan 2013
6 min read
(For more resources related to this topic, see here.)

Introducing the GeoPlaces theme

The GeoPlaces theme (http://templatic.com/app-themes/geo-places-city-directory-WordPress-theme/), by Templatic (http://templatic.com), is a cool theme that allows you to create and manage a city directory website. For a live demo of the site, visit http://templatic.com/demos/?theme=geoplaces4.

An overview of the GeoPlaces theme

The GeoPlaces theme is created as an out-of-the-box solution for city directory websites. It allows end users to submit places and events to your site. Best of all, you can even monetize the site by charging a listing fee. Some of the powerful features include the following:

- Widgetized homepage
- Menu widgets
- Featured events and listings
- Custom fields
- Payment options
- Price packages page view

Let's now move on to setting up the theme.

Setting up the GeoPlaces theme

We'll start with the installation of the GeoPlaces theme.

Installation

The steps for installing the GeoPlaces theme are as follows:

1. First, purchase and download your theme (in a zip folder) from Templatic.
2. Unzip the zipped file and place the GeoPlaces folder in your wp-content/themes folder.
3. Log in to your WordPress site, which you have set up, and activate the theme. Alternatively, you can upload the theme's zip folder via the admin interface, by going to Appearance | Install Themes | Upload.

If everything goes well, you should see the following on the navigation bar of your admin page:

If you see the previous screenshot in your navigation, then you are ready to move on to the next step.

Populating the site with sample data

After a successful installation of the theme, you can go ahead and play around with the site by creating sample data. The GeoPlaces theme comes with a nifty function that allows you to populate your site with sample data. Navigate to wp-admin/themes.php and you should see the following:

Notice the message box asking if you want to install and populate your site with sample data. Click on the large green button and sample data will automatically be populated. Once done, you should see the following:

You can choose to delete the sample data should you want to, but for now, let's leave the sample data in place for browsing purposes.

Playing with sample data

Now that we have populated the site with sample data, it's time to explore it.

Checking out cities

With our site populated with sample data, let's take our WordPress site for a spin:

1. First, navigate to your homepage; you should be greeted by a splash page that looks as follows:
2. Now select New York and you will be taken to a page with a Google Map that looks like the following screenshot:

GeoPlaces leverages the Google Maps API to provide geographic capabilities to the theme. Feel free to click on the map and other places, such as Madison Square Park. If you click on Madison Square Park, you will see a page that describes it. More importantly, on the right-hand side of the page, you should see something like the following:

Notice the Address row? The address is derived from the Google Maps API. How does it work? Let's try adding a place to find out.

Adding a place from the frontend

Here's how we can add a "place" from the frontend of the site:

1. To add a place, you must first sign in. Sign in from the current page by clicking on the Sign In link found at the top right-hand side of the page.
2. Sign in with your credentials. Notice that you remain on the frontend of the site, as opposed to the administration side.
3. Now click on the Add place link found on the upper right-hand side of the webpage. You should see the following:
4. You will be greeted by a long webpage that requires you to fill up the various fields needed to list a place. You should take note of this, as shown in the following screenshot:
5. Try typing Little Italy in the Address field and click on the Set address on map button. You should notice that the map is now marked, and the Address Latitude and Address Longitude fields are now filled up for you. Your screen for this part of the webpage should now look as follows:
6. The geographically related fields are now filled up. Continue to fill up the other fields, such as the description of this listing, the type of Google map view, special offers, e-mail address, website, and other social media related fields.

With these steps, you should have a new place listing in no time.

Adding a place from the admin side

What you have just done is added a place listing from the frontend, as an end user (although you are logged in as admin). So, how do you add a place listing from the admin side of your WordPress site? First, log in to your site if you have not yet done so. Next, navigate to your admin homepage, and go to Places | Add a Place. You will see a page that resembles the Create a New Post page. Scroll down further and you should notice that the fields here are exactly the same as those you see on the frontend of the site. For example, fields for the geographic information are also found on this page:

Adding a city from the admin side

To add a city, all you have to do is log in to the admin side of the site via /wp-admin. Once logged in, go to GeoPlaces | Manage City and click on Add City. From there you'll be able to fill up the details of the city.

Summary

We saw how to manage our WordPress site, covering topics such as populating the site with sample data, adding place listings, and adding a city. You should have a general idea of the geographic capabilities of the theme and how to add a new place listing. Notice how the theme takes the heavy lifting away by providing built-in geographic functionality through the Google Maps API. We also saw how themes and plugins can be used to extend WordPress.

Resources for Article:

- WordPress Mobile Applications with PhoneGap: Increasing Traffic to Your Blog with WordPress MU 2.8: Part 2 [Article]
- WordPress 3: Designing your Blog [Article]
- Adapting to User Devices Using Mobile Web Technology [Article]

Components - Reusing Rules, Conditions, and Actions

Packt
03 Jan 2013
4 min read
(For more resources related to this topic, see here.)

Getting ready

Enable the Rules and Rules UI modules on your site.

How to do it...

1. Go to Configuration | Workflow | Rules | Components.
2. Add a new component and set the plugin to Condition set (AND).
3. Enter a name for the component and add a parameter Entity | Node.
4. Add a Condition, Data comparison; set the value to the author of the node, set OPERATOR to equals, enter 1 in the Data value field, and tick Negate.
5. Add an OR group by clicking on Add or, as shown in the following screenshot:
6. Add a Condition, Node | Content is of type, and set it to Article.
7. Add a Condition, Entity | Entity has field; set Entity to node, and select the field field_image, as shown in the following screenshot:
8. Organize the Conditions so that the last two Conditions are in the OR group we created before.
9. Create a new rule configuration and set the Event to Comment | After saving a new comment.
10. Add a new Condition and select the component that we created. An example is shown in the following screenshot:
11. Select comment:node as the parameter.
12. Add a new Action, System | Show a message on the site, and configure the message.

How it works...

Components require parameters to be specified, which act as placeholders for the objects we want to execute a rule configuration on. Depending on our goal, we can select from the core Rules data types, entities, or lists. In this example, we've added a Node parameter to the component because we wanted to check who the node's author is, whether it's an article, and whether it has an image field. Then, in our Condition, we've provided the actual object against which the Condition is evaluated. If you're familiar with programming, you'll see that components are just like functions: they expect parameters and can be re-used in other scenarios.

There's more...

The main benefit of using Rules components is that we can re-use complex Conditions, Actions, and other rule configurations. That means we don't have to configure the same settings over and over again; instead, we can create components and use them in our rule configurations. Other benefits include exportability: components can be exported individually, which is very useful when using configuration management, such as Features. Components can also be executed from the UI, which is very useful for debugging and can save a lot of development time.

Other component types

Apart from Condition sets, there are a few other component types we can use. They are as follows:

Action set

As the name suggests, this is a set of Actions, executed one after the other. It can be useful when we have a certain chain of Actions that we want to execute in various scenarios.

Rule

We can also create a rule configuration as a component to be used in other rule configurations. Think about a scenario where you want to perform an action on a list of node references (which would require a looped Action), but only if those nodes were created before 2012. While it is not possible to create a Condition within an Action, we can create a Rule component, add a Condition and an Action within the component itself, and then use it as the Action of the other rule configuration.

Rule set

Rule sets are a set of Rules, executed one after the other. They can be useful when we want to execute a chain of Rules when an event occurs.

Parameters and provided variables

Condition sets require parameters, which are input data for the component. These are the variables that need to be specified so that the Condition can evaluate to FALSE or TRUE. Action sets, Rules, and Rule sets can also provide variables, which means they can return data after the action is executed.

Summary

This article explained the benefits of using Rules components by creating a Condition that can be re-used in other rule configurations.

Resources for Article:

Further resources on this subject:

- Drupal 7 Preview [Article]
- Creating Content in Drupal 7 [Article]
- Drupal FAQs [Article]

Advanced Indexing and Array Concepts

Packt
26 Dec 2012
6 min read
(For more resources related to this topic, see here.)

Installing SciPy

SciPy is the scientific Python library and is closely related to NumPy. In fact, SciPy and NumPy used to be one and the same project many years ago. In this recipe, we will install SciPy.

How to do it...

In this recipe, we will go through the steps for installing SciPy.

Installing from source: If you have Git installed, you can clone the SciPy repository and then build and install it using the following commands:

    git clone https://github.com/scipy/scipy.git
    python setup.py build
    python setup.py install --user

This installs to your home directory and requires Python 2.6 or higher. Before building, you will also need to install the following packages on which SciPy depends:

- BLAS and LAPACK libraries
- C and Fortran compilers

There is a chance that you have already installed this software as part of the NumPy installation.

Installing SciPy on Linux: Most Linux distributions have SciPy packages. We will go through the necessary steps for some of the popular Linux distributions:

- In order to install SciPy on Red Hat, Fedora, and CentOS, run the following command line instruction:

    yum install python-scipy

- In order to install SciPy on Mandriva, run the following command line instruction:

    urpmi python-scipy

- In order to install SciPy on Gentoo, run the following command line instruction:

    sudo emerge scipy

- On Debian or Ubuntu, we need to type the following:

    sudo apt-get install python-scipy

Installing SciPy on Mac OS X: Apple Developer Tools (XCode) is required, because it contains the BLAS and LAPACK libraries. It can be found either in the App Store, or on the installation DVD that came with your Mac, or you can get the latest version from Apple Developer Connection at https://developer.apple.com/technologies/tools/. Make sure that everything, including all the optional packages, is installed. You probably already have a Fortran compiler installed for NumPy. The binaries for gfortran can be found at http://r.research.att.com/tools/.

Installing SciPy using easy_install or pip: Install with either of the following two commands:

    sudo pip install scipy
    easy_install scipy

Installing on Windows: If you have Python installed already, the preferred method is to download and use the binary distribution. Alternatively, you may want to install the Enthought Python distribution, which comes with other scientific Python software packages.

Check your installation: Check the SciPy installation with the following code:

    import scipy
    print scipy.__version__
    print scipy.__file__

This should print the correct SciPy version.

How it works...

Most package managers will take care of any dependencies for you. However, in some cases, you will need to install them manually. Unfortunately, this is beyond the scope of this book. If you run into problems, you can ask for help at:

- The #scipy IRC channel of freenode, or
- The SciPy mailing lists at http://www.scipy.org/Mailing_Lists

Installing PIL

PIL, the Python Imaging Library, is a prerequisite for the image processing recipes in this article.

How to do it...

Let's see how to install PIL.

- Installing PIL on Windows: Install using the Windows executable from the PIL website at http://www.pythonware.com/products/pil/.
- Installing on Debian or Ubuntu: On Debian or Ubuntu, install PIL using the following command:

    sudo apt-get install python-imaging

- Installing with easy_install or pip: At the time of writing this book, it appeared that the package managers of Red Hat, Fedora, and CentOS did not have direct support for PIL. Therefore, please follow this step if you are using one of these Linux distributions. Install with either of the following commands:

    easy_install PIL
    sudo pip install PIL

Resizing images

In this recipe, we will load a sample image of Lena, which is available in the SciPy distribution, into an array. This article is not about image manipulation, by the way; we will just use the image data as an input. Lena Soderberg appeared in a 1972 Playboy magazine. For historical reasons, one of those images is often used in the field of image processing. Don't worry; the picture in question is completely safe for work.

We will resize the image using the repeat function. This function repeats an array, which in practice means resizing the image by a certain factor.

Getting ready

A prerequisite for this recipe is to have SciPy, Matplotlib, and PIL installed.

How to do it...

1. Load the Lena image into an array. SciPy has a lena function, which can load the image into a NumPy array:

    lena = scipy.misc.lena()

   Some refactoring has occurred since version 0.10, so if you are using an older version, the correct code is:

    lena = scipy.lena()

2. Check the shape. Check the shape of the Lena array using the assert_equal function from the numpy.testing package; this is an optional sanity check:

    numpy.testing.assert_equal((LENA_X, LENA_Y), lena.shape)

3. Resize the Lena array. Resize the Lena array with the repeat function. We give this function a resize factor in the x and y directions:

    resized = lena.repeat(yfactor, axis=0).repeat(xfactor, axis=1)

4. Plot the arrays. We will plot the Lena image and the resized image in two subplots that are part of the same grid. Plot the Lena array in a subplot:

    matplotlib.pyplot.subplot(211)
    matplotlib.pyplot.imshow(lena)

   The Matplotlib subplot function creates a subplot. This function accepts a 3-digit integer as the parameter, where the first digit is the number of rows, the second digit is the number of columns, and the last digit is the index of the subplot, starting with 1. The imshow function shows images. Finally, the show function displays the end result.

5. Plot the resized array in another subplot and display it. The index is now 2:

    matplotlib.pyplot.subplot(212)
    matplotlib.pyplot.imshow(resized)
    matplotlib.pyplot.show()

The following screenshot is the result, with the original image (first) and the resized image (second):

The following is the complete code for this recipe:

    import scipy.misc
    import sys
    import matplotlib.pyplot
    import numpy.testing

    # This script resizes the Lena image from SciPy.
    if(len(sys.argv) != 3):
        print "Usage python %s yfactor xfactor" % (sys.argv[0])
        sys.exit()

    # Loads the Lena image into an array
    lena = scipy.misc.lena()

    # Lena's dimensions
    LENA_X = 512
    LENA_Y = 512

    # Check the shape of the Lena array
    numpy.testing.assert_equal((LENA_X, LENA_Y), lena.shape)

    # Get the resize factors
    yfactor = float(sys.argv[1])
    xfactor = float(sys.argv[2])

    # Resize the Lena array
    resized = lena.repeat(yfactor, axis=0).repeat(xfactor, axis=1)

    # Check the shape of the resized array
    numpy.testing.assert_equal((yfactor * LENA_X, xfactor * LENA_Y), resized.shape)

    # Plot the Lena array
    matplotlib.pyplot.subplot(211)
    matplotlib.pyplot.imshow(lena)

    # Plot the resized array
    matplotlib.pyplot.subplot(212)
    matplotlib.pyplot.imshow(resized)
    matplotlib.pyplot.show()

How it works...

The repeat function repeats arrays, which, in this case, resulted in changing the size of the original image. The Matplotlib subplot function creates a subplot, and the imshow function shows images. Finally, the show function displays the end result.

See also

- The Installing SciPy recipe
- The Installing PIL recipe
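To make the effect of repeat on array dimensions concrete, here is a minimal sketch of my own (not from the original recipe); it uses a tiny toy array instead of the Lena image, and the integer factors 2 and 3 are arbitrary:

    import numpy as np

    # A tiny 2x3 "image" so the effect of repeat is easy to see.
    img = np.arange(6).reshape(2, 3)
    print(img.shape)        # (2, 3)

    # Repeat rows twice and columns three times, mirroring
    # lena.repeat(yfactor, axis=0).repeat(xfactor, axis=1).
    resized = img.repeat(2, axis=0).repeat(3, axis=1)
    print(resized.shape)    # (4, 9)

    # Each original pixel now occupies a 2x3 block of identical values,
    # which is why repeat behaves like a simple nearest-neighbour upscale.
    print(resized)

Running it shows the first axis growing by the axis=0 factor and the second axis by the axis=1 factor, exactly as in the shape check of the full recipe.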
Extending WordPress to the Mobile World

Packt
26 Dec 2012
6 min read
Introducing jQuery Mobile

jQuery Mobile (http://jquerymobile.com/) is a unified HTML5-based user interface framework for most popular mobile device platforms. It is based on jQuery (http://jquery.com/) and jQuery UI (http://jqueryui.com/). Our focus in this section is on jQuery Mobile, so let's get our hands dirty. We'll start by implementing jQuery Mobile using the example we created in Chapter 3, Extending WordPress Using JSON-API.

Installing jQuery Mobile and theming

Installing jQuery Mobile is straightforward and easy:

1. Open up app_advanced.html and copy and paste the following code directly within the <head> tags:

    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="http://code.jquery.com/mobile/1.1.1/jquery.mobile-1.1.1.min.css" />
    <script src="http://code.jquery.com/jquery-1.7.1.min.js"></script>
    <script src="http://code.jquery.com/mobile/1.1.1/jquery.mobile-1.1.1.min.js"></script>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>

2. Now save your code and open up app_advanced.html in your favourite browser. You should see the following screen:

Well, it looks like the webpage has received some form of theming, but it looks a little weird. This is because we have not yet implemented the HTML elements that jQuery Mobile requires.

Again, as mentioned in the previous chapter, the code sample assumes that your app has Internet access and hence access to jQuery and jQuery Mobile's CDN. This might reduce the app's startup time. To avoid problems related to having no network or flaky connectivity, one basic thing you can do is package your app together with a local copy of jQuery and jQuery Mobile.

Let us move on to the next section and see how we can fix this.

jQuery Mobile page template

Let's go back to app_advanced.html and do some editing. Focus on the HTML elements found within the <body> tags and change them to look like the following code snippet:

    <div id="main" data-role="page">
      <div data-role="header">
        <div data-role="controlgroup" data-type="horizontal">
          <a href="#" id="previous" data-role="button">Previous</a>
          <a href="#" id="next" data-role="button">Next</a>
          <!-- <button type="button" id="create" data-role="button">Create</button> -->
          <a href="#create_form" data-role="button" data-transition="slide">Create</a>
        </div>
      </div>
      <div id="contents" data-role="content"></div>
    </div>

    <div data-role="page" id="create_form" data-theme="c">
      <div data-role="header" addBackBtn="true">
        <a href="#" data-rel="back">Back</a>
        <h1>Create a new Post</h1>
      </div>
      <div id="form" style="padding:15px;">
        Title: <br /><input type="text" name="post_title" id="post_title" /><br />
        Content: <br />
        <textarea name="post_contents" id="post_contents"></textarea>
        <br />
        <input type="submit" value="Submit" id="create_post"/>
        <div id="message"></div>
      </div>
    </div>

Now save your code and open it in your favourite web browser. You should see the following screen:

The app now looks great! Feel free to click on the Next button and see how the app works. How does this all work? For a start, check out the highlighted lines of code. In the world of HTML5, the additional attributes we wrote, such as data-role="page" or data-theme="c", are known as custom data attributes. jQuery Mobile makes use of these to denote the things we need in our mobile web app. For example, data-role="page" denotes that this particular element (in our case, a div element) is a page component. Similarly, data-theme="c" in our case refers to a particular CSS style. For more information about data themes, feel free to check out http://jquerymobile.com/test/docs/content/content-themes.html.

Animation effects

Now let us experiment a little with animation effects. We can create animation effects by simply leveraging what we know about jQuery. What about jQuery Mobile? There are several animation effects that are distinct to jQuery Mobile, and in this section we will try them out in the form of page transitions. We will create a page transition effect using the following steps:

1. Click on the Create button, and we will get a page transition effect to a new page, where we see our post creation form.
2. On this Create a new Post form, as usual, type in some appropriate text in the Title and Content fields.
3. Finally, click on the Submit button.

Let's see how we can achieve the page transition effect:

1. We need to make changes to our code. For the sake of simplicity, delete all the HTML code found within the <body> tags in app_advanced.html, and then copy the following code into your <body> tags:

    <div id="main" data-role="page">
      <div data-role="header">
        <div data-role="controlgroup" data-type="horizontal">
          <a href="#" id="previous" data-role="button">Previous</a>
          <a href="#" id="next" data-role="button">Next</a>
          <!-- <button type="button" id="create" data-role="button">Create</button> -->
          <a href="#create_form" data-role="button" data-transition="slide">Create</a>
        </div>
      </div>
      <div id="contents" data-role="content"></div>
    </div>

    <div data-role="page" id="create_form" data-theme="c">
      <div data-role="header" addBackBtn="true">
        <a href="#" data-rel="back">Back</a>
        <h1>Create a new Post</h1>
      </div>
      <div id="form" style="padding:15px;">
        Title: <br /><input type="text" name="post_title" id="post_title" /><br />
        Content: <br />
        <textarea name="post_contents" id="post_contents"></textarea>
        <br />
        <input type="submit" value="Submit" id="create_post"/>
        <div id="message"></div>
      </div>
    </div>

   Take note that we have used the data-transition="slide" attribute, so we get a "slide" effect. For more details or options, visit http://jquerymobile.com/test/docs/pages/page-transitions.html.

2. Now, save your code and open it in your favorite web browser. Click on the Create button, and you will first see a slide transition, followed by the post creation form, as follows:
3. Now type in some text, and you will see that jQuery Mobile takes care of the CSS effects in this form as well:
4. Now click on the Submit button, and you will see a Success message below the Submit button, as shown in the following screenshot:

If you see the Success message, as shown in the earlier screenshot, congratulations! We can now move on to extending our PhoneGap app, which we built in Chapter 4, Building Mobile Applications Using PhoneGap.

Getting Started with CouchDB and Futon

Packt
27 Nov 2012
11 min read
(For more resources related to this topic, see here.)

What is CouchDB?

The first sentence of CouchDB's definition (as defined by http://couchdb.apache.org/) is as follows:

CouchDB is a document database server, accessible through the RESTful JSON API.

Let's dissect this sentence to fully understand what it means. Let's start with the term database server.

Database server

CouchDB employs a document-oriented database management system that serves a flat collection of documents with no schema, grouping, or hierarchy. This is a concept that NoSQL has introduced, and it is a big departure from relational databases (such as MySQL), where you would expect to see tables, relationships, and foreign keys. Every developer has experienced a project where they have had to force a relational database schema onto a project that really didn't require the rigidity of tables and complex relationships. This is where CouchDB does things differently: it stores all of the data in a self-contained object with no set schema. The following diagram will help to illustrate this:

In order to handle the ability for many users to belong to one-to-many groups in a relational database (such as MySQL), we would create a users table, a groups table, and a link table called users_groups. This practice is common to most web applications.

Now look at the CouchDB documents. There are no tables or link tables, just documents. These documents contain all of the data pertaining to a single object. This diagram is very simplified; if we wanted to create more logic around the groups in CouchDB, we would have to create group documents, with a simple relationship between the user documents and the group documents.

Let's dig into what documents are and how CouchDB uses them.

Documents

To illustrate how you might use documents, first imagine that you are filling out the paper form of a job application. This form has information about you, your address, and past addresses. It also has information about many of your past jobs, education, certifications, and much more. A document would save all of this data exactly in the way you would see it in the physical form: all in one place, without any unnecessary complexity.

In CouchDB, documents are stored as JSON objects that contain key and value pairs. Each document has reserved fields for metadata such as id, revision, and deleted. Besides the reserved fields, documents are 100 percent schema-less, meaning that each document can be formatted and treated independently, with as many different variations as you might need.

Example of a CouchDB document

Let's take a look at an example of what a CouchDB document might look like for a blog post:

    {
      "_id": "431f956fa44b3629ba924eab05000553",
      "_rev": "1-c46916a8efe63fb8fec6d097007bd1c6",
      "title": "Why I like Chicken",
      "author": "Tim Juravich",
      "tags": [
        "Chicken",
        "Grilled",
        "Tasty"
      ],
      "body": "I like chicken, especially when it's grilled."
    }

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

JSON format

The first thing you might notice is the strange markup of the document, which is JavaScript Object Notation (JSON). JSON is a lightweight data-interchange format based on JavaScript syntax and is extremely portable. CouchDB uses JSON for all communication with it.
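Because a CouchDB document is plain JSON, it maps directly onto native data structures in most languages. The following short Python sketch is my own illustration (not from the original text) and reuses the field values from the blog post example above; it shows a document built as an ordinary dictionary and serialized with the standard json module:

    import json

    # Build the blog post document as an ordinary Python dictionary.
    # _id is one of CouchDB's reserved metadata fields; _rev would be
    # added by the server once the document has been saved.
    doc = {
        "_id": "431f956fa44b3629ba924eab05000553",
        "title": "Why I like Chicken",
        "author": "Tim Juravich",
        "tags": ["Chicken", "Grilled", "Tasty"],
        "body": "I like chicken, especially when it's grilled.",
    }

    # Serialize the dictionary to the JSON text that would be sent to CouchDB...
    payload = json.dumps(doc)

    # ...and parse a JSON response back into a dictionary.
    parsed = json.loads(payload)
    print(parsed["title"])   # Why I like Chicken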
Key-value storage

The next thing that you might notice is that there is a lot of information in this document. There are key-value pairs that are simple to understand, such as "title", "author", and "body", but you'll also notice that "tags" is an array of strings. CouchDB lets you embed as much information as you want directly into a document. This is a concept that might be new to relational database users, who are used to normalized and structured databases.

Reserved fields

Let's look at the two reserved fields: _id and _rev.

_id is the unique identifier of the document. This means that _id is mandatory, and no two documents can have the same value. If you don't define an _id on creation of a document, CouchDB will choose a unique one for you.

_rev is the revision version of the document and is the field that drives CouchDB's version control system. Each time you save a document, the revision number is required so that CouchDB knows which version of the document is the newest. This is required because CouchDB does not use a locking mechanism, meaning that if two people are updating a document at the same time, the first one to save his/her changes wins. One of the unique things about CouchDB's revision system is that each time a document is saved, the original document is not overwritten; a new document is created with the new data, while CouchDB stores a backup of the previous document in its original form in an archive. Old revisions remain available until the database is compacted, or some cleanup action occurs.

The last piece of the definition sentence is the RESTful JSON API. So, let's cover that next.

RESTful JSON API

In order to understand REST, let's first define HyperText Transfer Protocol (HTTP). HTTP is the underlying protocol of the Internet that defines how messages are formatted and transmitted, and how services should respond when using a variety of methods. These methods consist of four main verbs: GET, PUT, POST, and DELETE. In order to fully understand how the HTTP methods function, let's first define REST.

Representational State Transfer (REST) is a stateless protocol that accesses addressable resources through HTTP methods. Stateless means that each request contains all of the information necessary to completely understand and use the data in the request, and addressable resources means that you can access each object via a URL. That might not mean a lot in itself, but, by putting all of these ideas together, it becomes a powerful concept. Let's illustrate the power of REST by looking at two examples:

Resource: http://localhost/collection
- GET: Read a list of all of the items inside of collection
- PUT: Update the collection with another collection
- POST: Create a new collection
- DELETE: Delete the collection

Resource: http://localhost/collection/abc123
- GET: Read the details of the abc123 item inside of collection
- PUT: Update the details of abc123 inside of collection
- POST: Create a new object abc123 inside of collection
- DELETE: Delete abc123 from collection

By looking at the table, you can see that each resource is in the form of a URL. The first resource is collection, and the second resource is abc123, which lives inside of collection. Each of these resources responds differently when you pass different methods to it. This is the beauty of REST and HTTP working together. Notice the actions used in the table: Read, Update, Create, and Delete. These actions are, in themselves, another concept, and it, of course, has its own term: CRUD.
The unflattering term CRUD stands for Create, Read, Update, and Delete, and it is the concept that REST uses to define what happens to a defined resource when an HTTP method is combined with a resource in the form of a URL. If you were to boil all of this down, you would come to the following diagram:

This diagram means:

- In order to CREATE a resource, you can use either the POST or PUT method
- In order to READ a resource, you need to use the GET method
- In order to UPDATE a resource, you need to use the PUT method
- In order to DELETE a resource, you need to use the DELETE method

As you can see, the concept of CRUD makes it really clear which method you need to use when you want to perform a specific action.

Now that we've looked at what REST means, let's move on to the term API, which stands for Application Programming Interface. While there are a lot of different use cases and concepts of APIs, an API is what we'll use to programmatically interact with CouchDB.

Now that we have defined all of the terms, the RESTful JSON API can be defined as follows: we have the ability to interact with CouchDB by issuing an HTTP request to the CouchDB API with a defined resource, an HTTP method, and any additional data. Combining all of these things means that we are using REST. After CouchDB processes our REST request, it returns a JSON-formatted response with the result of the request.

All of this background knowledge will start to make sense as we play with CouchDB's RESTful JSON API, going through each of the HTTP methods, one at a time. We will use curl to explore each of the HTTP methods by issuing raw HTTP requests.

Time for action – getting a list of all databases in CouchDB

Let's issue a GET request to access CouchDB and get a list of all of the databases on the server.

1. Run the following command in Terminal:

    curl -X GET http://localhost:5984/_all_dbs

2. Terminal will respond with the following:

    ["_users"]

What just happened?

We used Terminal to trigger a GET request to CouchDB's RESTful JSON API. We used one of curl's options, -X, to define the HTTP method; in this instance, we used GET. GET is the default method, so technically you could omit -X if you wanted to. Once CouchDB processes the request, it sends back a list of the databases that are in the CouchDB server. Currently, there is only the _users database, which is a default database that CouchDB uses to authenticate users.

Time for action – creating new databases in CouchDB

In this exercise, we'll issue a PUT request, which will create a new database in CouchDB.

1. Create a new database by running the following command in Terminal:

    curl -X PUT http://localhost:5984/test-db

2. Terminal will respond with the following:

    {"ok":true}

3. Try creating another database with the same name by running the following command in Terminal:

    curl -X PUT http://localhost:5984/test-db

4. Terminal will respond with the following:

    {"error":"file_exists","reason":"The database could not be created, the file already exists."}

5. Okay, that didn't work. So let's try to create a database with a different name by running the following command in Terminal:

    curl -X PUT http://localhost:5984/another-db

6. Terminal will respond with the following:

    {"ok":true}

7. Let's quickly check the details of the test-db database and see more detailed information about it. To do that, run the following command in Terminal:

    curl -X GET http://localhost:5984/test-db

8. Terminal will respond with something similar to this (I re-formatted mine for readability):

    {
      "committed_update_seq": 1,
      "compact_running": false,
      "db_name": "test-db",
      "disk_format_version": 5,
      "disk_size": 4182,
      "doc_count": 0,
      "doc_del_count": 0,
      "instance_start_time": "1308863484343052",
      "purge_seq": 0,
      "update_seq": 1
    }

What just happened?

We used Terminal to trigger a PUT request to create a database through CouchDB's RESTful JSON API, passing test-db as the name of the database that we wanted to create at the end of the CouchDB root URL. When the database was successfully created, we received a message saying that everything went okay.

Next, we created a PUT request to create another database with the same name, test-db. Because there can't be more than one database with the same name, we received an error message.

We then used a PUT request to create a new database again, named another-db. When the database was successfully created, we received a message saying that everything went okay.

Finally, we issued a GET request to our test-db database to find out more information about the database. It's not important to know exactly what each of these statistics means, but it's a useful way to get an overview of a database. It's worth noting that the URL called in the final GET request is the same URL we called when we first created the database. The only difference is that we changed the HTTP method from PUT to GET. This is REST in action!
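If you prefer to script these calls rather than type curl commands, the same requests can be made from Python. This is a minimal sketch of my own (not from the original text); it assumes CouchDB is running at localhost:5984 and that the third-party requests library is installed (pip install requests):

    import requests

    COUCH = "http://localhost:5984"

    # GET /_all_dbs -- list every database on the server (like curl -X GET).
    print(requests.get(COUCH + "/_all_dbs").json())

    # PUT /test-db -- create a database; CouchDB answers {"ok": true} on success.
    resp = requests.put(COUCH + "/test-db")
    print(resp.status_code, resp.json())

    # PUT the same name again -- CouchDB refuses with a "file_exists" error.
    resp = requests.put(COUCH + "/test-db")
    print(resp.status_code, resp.json())

    # GET /test-db -- read back the database details (doc_count, disk_size, and so on).
    print(requests.get(COUCH + "/test-db").json())

The resource URLs are exactly the ones used with curl above; only the client changes, which is the point of a RESTful API.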