

Implementation of SASS

Packt
21 Mar 2013
8 min read
(For more resources related to this topic, see here.)

Downloading and installing SASS

We're going to kick off the recipes by downloading and setting up SASS ready for use; this will get your system ready to compile SASS files as you create them.

Getting ready

For this recipe you will need a few things. You will need your choice of text editor; I normally use TextPad, which is available on a commercial license at http://www.textpad.com, although please feel free to use whichever editor you prefer. You will also need a copy of RubyInstaller for Windows, which you can download from http://www.rubyinstaller.org/downloads; at the time of writing, the latest version is 1.9.3. If you are a Mac OS X user, you will already have Ruby installed as part of the operating system; Linux users can download and install it through their distribution's package manager.

How to do it...

Let's begin by running rubyinstaller-1.9.3-p286.exe and clicking on Run when prompted. At the Select Setup Language window, select your preferred language (the default is English). At the License Agreement window, select I accept the license, then click on Next. At the Installation Destination and Optional Tasks window, select Add Ruby executables to your PATH and Associate .rb and .rbw files with this Ruby installation, then click on Install. Ruby will now install; you will see a progress window on the screen while the software is being installed, and a dialog window will appear when this is completed. Click on Finish to close the window.

The next stage is to install SASS. For this, bring up a command prompt, type gem install sass, and press Enter. Ruby will now go ahead and install SASS; you will see an update similar to the following screenshot. You may find this takes a short while, with little apparent change on screen; there is no need to worry, as it will still be downloading and installing SASS.
Now that SASS is installed, we can go ahead and start creating SASS files. Open up your text editor, add the following lines, and save the file as example1.scss in a folder called c:\sass:

    $blue: #3bbfce;
    $margin: 16px;

    .content-navigation {
      border-color: $blue;
      color: darken($blue, 9%);
    }

    .border {
      padding: $margin / 2;
      margin: $margin / 2;
      border-color: $blue;
    }

We now need to activate the compiler. If you don't already have a session open, bring up a command prompt and enter the following:

    sass --watch c:\sass\example1.scss:c:\sass\example1.css

If you get a "permission denied" error when running sass --watch, make sure you are running the command prompt as an administrator. This activates the SASS compiler, which will then automatically generate a CSS file whenever a change is made to the SCSS file. As soon as you save the file, the SASS compiler will pick up the change and update the appropriate CSS file accordingly. If you look at the example1.css file that has been generated in the c:\sass folder, you will see the resulting CSS, which you can then attach to your project:

    .content-navigation {
      border-color: #3bbfce;
      color: #2ca2af;
    }

    .border {
      padding: 8px;
      margin: 8px;
      border-color: #3bbfce;
    }

How it works...

In this recipe, we've installed Ruby and SASS, and looked at how to run the sass --watch command to set the SASS compiler to automatically compile SCSS files into CSS as you save them. In this instance, we created a basic SCSS file, which SASS has parsed; it works out where variable "placeholders" have been used and replaces them with the appropriate value specified at the start of the file. Any variables included in the SASS file that are not subsequently used are automatically dropped by SASS. When using Ruby 1.9, SASS is able to automatically determine which character encoding it should use, and will handle this in the same way as CSS (which is UTF-8 by default, or a more local encoding for some users).
You can change this if needed; simply add @charset "ENCODING-NAME"; at the beginning of the stylesheet, where ENCODING-NAME is the name of an encoding that can be converted to Unicode. While the creation of CSS files using this method is simple, it is a manual process that has to be run outside of your text editor. In the next recipe, we'll take a look at how you can add support for compiling code from within your text editor, using Sublime Text 2 as the example editor.

Viewing SASS in a browser

When you have compiled your SASS files into valid CSS, an important next step is to test that they work. If you start seeing unexpected results, you will want to troubleshoot the CSS code. The trouble is, most browsers don't include support for tracing a compiled style back to its original SASS code; we can fix that (at least for Firefox) by using FireSASS.

Getting ready

For this exercise, you will need a copy of Firefox with Firebug installed. You can get the latter from http://www.getfirebug.com.

How to do it...

Let's get FireSASS installed. To do this, browse to https://addons.mozilla.org/en-US/firefox/addon/firesass-for-firebug/, then click on the Add to Firefox button. Firefox will prompt you to allow it to install; click on Allow, then on Install Now in the window that appears. You will need to restart Firefox for FireSASS to complete its installation.

To test it, we need to create a SASS file with some example code and use it within an HTML test page. Go ahead and add the following to a copy of the template, and save it as example2.html within the c:\sass folder we created earlier:

    <body>
      <form action="">
        Name: <input type="text" class="input" />
        Password: <input type="password" class="input" />
        <input type="submit" value="This is a button" id="submitfrm" />
      </form>
    </body>

Add the following to a new SCSS file within Sublime Text 2.
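The recipe refers to "a copy of the template" without reproducing it, so as a hedged sketch, a minimal example2.html wrapping the form markup might look like the following. The stylesheet name example2.css is an assumption, matching the output file named in the compiler command used in this recipe:

```html
<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>FireSASS test page</title>
  <!-- example2.css is the file the SASS compiler generates from example2.scss -->
  <link rel="stylesheet" href="example2.css">
</head>
<body>
  <form action="">
    Name: <input type="text" class="input" />
    Password: <input type="password" class="input" />
    <input type="submit" value="This is a button" id="submitfrm" />
  </form>
</body>
</html>
```

Any minimal valid HTML5 page with the stylesheet linked in the head will do; only the link element and the form markup matter for this exercise.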
Save this as example2.scss; you will also need to link the compiled CSS into the <head> section of your page:

    $color-button: #d24444;

    #submitfrm {
      color: #fff;
      background: $color-button;
      border: 1px solid $color-button - #222;
      padding: 5px 12px;
    }

Activate the SASS compiler from the command prompt, using the following command:

    sass --watch c:\sass\example2.scss:c:\sass\example2.css --debug-info

If all is well, you should see the following screenshot when you view the file in your browser:

How it works...

FireSASS works by replacing the line number of the CSS style in use with the line number from the original SCSS file. It relies on the --watch function being activated with --debug-info enabled. Using the previous example, we can see from the screenshot that the border color calculation of border: 1px solid $color-button - #222 returned a color of #B02222, which is a slightly darker shade of red than the main button itself. The beauty of this is that no matter what color you decide to use, the calculation will automatically return the right shade for the border. Not convinced? Change the $color-button variable to something completely different; I've chosen #3bbfce. Now recompile the SASS file in Sublime Text 2, and the result is a nice shade of blue. Okay, so we've only changed the color for one button in this example; it doesn't matter whether you make a change for one button or many, because using this code means you only need to change one variable to update any button that uses the same variable in your code.

There's more...

If you look in the folder where you are storing your SASS files, you may notice the presence of a .sass-cache folder, inside which there will be one or more .scssc files. These are cache files, which live in the same folder as your source files by default. They are created by SASS when initially compiling SCSS files.
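The color subtraction shown above is done channel by channel: 0xd2 - 0x22 = 0xb0 for red, and 0x44 - 0x22 = 0x22 for green and blue, giving #b02222. The original does not reproduce the generated file, but as a sketch, the compiled example2.css should look roughly like this:

```css
#submitfrm {
  color: #fff;
  background: #d24444;
  /* $color-button - #222, computed per channel by the compiler */
  border: 1px solid #b02222;
  padding: 5px 12px;
}
```

This is why the border always stays a consistently darker shade of whatever color the variable holds.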
This dramatically speeds up the compilation process, particularly when you have a large number of files to compile. It works even better if the styles for a project have been separated into several files, which SASS can then compile into one large master file.

Summary

In this article, we installed Ruby and SASS, used the sass --watch command to compile SCSS files into CSS automatically, and installed FireSASS so that compiled styles can be traced back to their original SCSS source in Firefox.

Resources for Article :

Further resources on this subject: Inheritance in Python [Article] Managing the IT Portfolio using Troux Enterprise Architecture [Article] Data Migration Scenarios in SAP Business ONE Application- part 1 [Article]


Learn Cinder Basics – Now

Packt
21 Mar 2013
13 min read
(For more resources related to this topic, see here.)

What is creative coding

This is a really short introduction to what creative coding is, and I'm sure it is possible to find out much more about this topic on the Internet. Nevertheless, I will try to explain how it looks from my perspective. Creative coding is a relatively new term for a field that combines coding and design. The central part of the term might be the "coding" one: to become a creative coder, you need to know how to write code and some other things about programming in general. The other part, the "creative" one, covers design and all the other things that can be combined with coding. Being skilled in coding and design at the same time lets you express your ideas as working prototypes for interface designs, art installations, phone applications, and other fields. It can save the time and effort you would spend explaining your ideas to someone else so that he/she could help you. The creative coding approach may not work so well in large projects, unless more than one creative coder is involved. A lot of new tools that make programming more accessible have emerged during the last few years. All of them are easy to use, but usually the less complicated a tool is, the less powerful it is, and vice versa.

A few words about Cinder

So we are up to some Cinder coding! Cinder is one of the most professional and powerful creative coding frameworks that you can get for free on the Internet. It can help you if you are creating a really complicated interactive real-time audio-visual piece, because it uses one of the most popular and powerful low-level programming languages out there, C++, and relies on a minimum of third-party code libraries. The creators of Cinder also try to use all the newest C++ language features, even those that are not standardized yet (but soon will be), by using the so-called Boost libraries.
This book is not intended as an A-to-Z guide to Cinder, the C++ programming language, or the areas of mathematics involved. It is a short introduction for those of us who have been working with similar frameworks or tools and already know some programming. As Cinder relies on C++, the more we know about the language the better. Knowledge of ActionScript, Java, or even JavaScript will help you understand what is going on here.

Introducing the 3D space

To use Cinder with 3D we need to understand a bit about 3D computer graphics. The first thing we need to know is that 3D graphics are created in a three-dimensional space that exists somewhere in the computer and is transformed into a two-dimensional image that can be displayed on our computer screen afterwards. Usually there is a projection volume (the view frustum) that has properties similar to those of cameras in the real world. The frustum determines which 3D objects are visible and is responsible for creating the 2D image that we see on the screen. As you can see in the preceding figure, all objects inside the frustum are rendered on the screen; objects outside the view frustum are ignored. OpenGL (which is used for drawing in Cinder) relies on the so-called rendering pipeline to map the 3D coordinates of the objects to 2D screen coordinates. Three kinds of matrices are used in this process: the model, view, and projection matrices. The model matrix maps the 3D object's local coordinates to the world (or global) space, the view matrix maps them to the camera space, and finally the projection matrix takes care of the mapping to the 2D screen space. Older versions of OpenGL combine the model and view matrices into one, the modelview matrix. The coordinate system in Cinder starts from the top-left corner of the screen. Any object placed there has the coordinates 0, 0, 0 (the values of x, y, and z respectively).
The x axis extends to the right, y to the bottom, and z extends towards the viewer (us), as shown in the following figure:

Drawing in 3D

Let's try to draw something, taking into account that there is a third dimension. Create another project by using TinderBox and name it Basic3D. Open the project file (xcode/Basic3D.xcodeproj on Mac or vc10\Basic3D.sln on Windows). Open Basic3DApp.cpp in the editor and navigate to the draw() method implementation. Just after the gl::clear() call, add the following line to draw a cube:

    gl::drawCube( Vec3f(0,0,0), Vec3f(100,100,100) );

The first parameter defines the position of the center of the cube, the second defines its size. Note that we use Vec3f() variables to define position and size in three (x, y, and z) dimensions. Compile and run the project. This will draw a solid cube at the top-left corner of the screen. We are able to see just one quarter of it because the center of the cube is the reference point. Let's move it to the center of the screen by transforming the previous line as follows:

    gl::drawCube( Vec3f(getWindowWidth()/2,getWindowHeight()/2,0), Vec3f(100,100,100) );

Now we are positioning the cube in the middle of the screen no matter what the window's width or height is, because we pass half of the window's width (getWindowWidth()/2) and half of the window's height (getWindowHeight()/2) as the x and y coordinates of the cube's location. Compile and run the project to see the result. Play around with the size parameters to understand the logic behind it. We may want to rotate the cube a bit. There is a built-in rotate() function that we can use. One thing we have to remember, though, is that we have to call it before drawing the object. So add the following line before gl::drawCube():

    gl::rotate( Vec3f(0,1,0) );

Compile and run the project. You should see a strange rotation animation around the y axis.
The problem here is that the rotate() function rotates the whole 3D world of our application, including the objects in it, and it does so relative to the scene coordinates. As the center of the 3D world (the point where all axes cross at zero) is in the top-left corner, all rotation happens around that point. To change that, we have to use the translate() function. It is used to move the scene (or canvas) before we call rotate() or drawCube(). To make our cube rotate around the center of the screen, we have to perform the following steps:

1. Use the translate() function to translate the 3D world to the center of the screen.
2. Use the rotate() function to rotate the 3D world.
3. Draw the object (drawCube()).
4. Use the translate() function to translate the scene back.

We have to translate the scene back to its original location because each time we call translate(), the values are added instead of being replaced. In code it should look similar to the following:

    gl::translate( Vec3f(getWindowWidth()/2,getWindowHeight()/2,0) );
    gl::rotate( Vec3f::yAxis()*1 );
    gl::drawCube( Vec3f::zero(), Vec3f(100,100,100) );
    gl::translate( Vec3f(-getWindowWidth()/2,-getWindowHeight()/2,0) );

So now we get a smooth rotation of the cube around the y axis. The rotation angle around the y axis is increased by 1 degree in each frame, as we pass the Vec3f::yAxis()*1 value to the rotate() function. Experiment with the rotation values to understand this a bit more. What if we want the cube to be in a constant rotated position? We have to remember that the rotate() function works like the translate() function: it adds values to the rotation of the scene instead of replacing them. Instead of rotating the object back, we will use the pushModelView() and popModelView() functions. Rotation and translation are transformations. Every time you call translate() or rotate(), you are modifying the modelview matrix, and once a transformation has been applied, it is not always easy to undo.
Every time you transform something, the changes are made on top of all previous transformations in the current state. So what is this state? Each state contains a copy of the current transformation matrices. By calling pushModelView() we enter a fresh state, making a copy of the current modelview matrix and storing it on the stack. We can then make some crazy transformations without worrying about how to undo them. To go back, we call popModelView(), which pops (or deletes) the current modelview matrix from the stack and returns us to the state with the previous modelview matrix. So let's try this out by adding the following code after the gl::clear() call:

    gl::pushModelView();
    gl::translate( Vec3f(getWindowWidth()/2,getWindowHeight()/2,0) );
    gl::rotate( Vec3f(35,20,0) );
    gl::drawCube( Vec3f::zero(), Vec3f(100,100,100) );
    gl::popModelView();

Compile and run our program now; you should see something similar to the following screenshot. As we can see, before doing anything we create a copy of the current state with pushModelView(). Then we do the same as before: translate our scene to the middle of the screen, rotate it (this time 35 degrees around the x axis and 20 degrees around the y axis), and finally draw the cube! To reset the scene to the state it was in before, we need just one line of code: popModelView().

Using built-in eases

Now, say we want to make use of the easing algorithms that we saw in the EaseGallery sample. To do that, we have to change the code by following certain steps.
To use the easing functions, we have to include the Easing.h header file:

    #include "cinder/Easing.h"

First we are going to add two more variables, startPosition and circleTimeBase:

    Vec2f startPosition[CIRCLE_COUNT];
    Vec2f currentPosition[CIRCLE_COUNT];
    Vec2f targetPosition[CIRCLE_COUNT];
    float circleRadius[CIRCLE_COUNT];
    float circleTimeBase[CIRCLE_COUNT];

Then, in the setup() method implementation, we have to change the currentPosition parts to startPosition and add an initial value to the circleTimeBase array members:

    startPosition[i].x = Rand::randFloat(0, getWindowWidth());
    startPosition[i].y = Rand::randFloat(0, getWindowHeight());
    circleTimeBase[i] = 0;

Next, we have to change the update() method so that it can be used along with the easing functions. They are based on time, and they return a floating point value between 0 and 1 that defines the playhead position on an abstract 0 to 1 timeline:

    void BasicAnimationApp::update() {
        Vec2f difference;
        for (int i=0; i<CIRCLE_COUNT; i++) {
            difference = targetPosition[i] - startPosition[i];
            currentPosition[i] = easeOutExpo(
                getElapsedSeconds()-circleTimeBase[i]) * difference
                + startPosition[i];
            if ( currentPosition[i].distance(targetPosition[i]) < 1.0f ) {
                targetPosition[i].x = Rand::randFloat(0, getWindowWidth());
                targetPosition[i].y = Rand::randFloat(0, getWindowHeight());
                startPosition[i] = currentPosition[i];
                circleTimeBase[i] = getElapsedSeconds();
            }
        }
    }

The highlighted parts in the preceding code snippet are those that have been changed. The most important part is the currentPosition[i] calculation. We take the distance between the start and end points of the timeline and multiply it by the position floating point number returned by our easing function, in this case easeOutExpo(). Again, it returns a floating point value between 0 and 1 that represents the position on an abstract 0 to 1 timeline.
If we multiply any number by, say, 0.33f, we get one-third of that number; by 0.5f, one-half of that number; and so on. So, we add this scaled distance to the circle's starting position and we get its current position! Compile and run our application now. You should see something as follows: almost like a snow storm! We will add a small modification to the code, though. I will add a TWEEN_SPEED definition at the top of the code and multiply the time parameter passed to the ease function by it, so we can control the speed of the circles:

    #define TWEEN_SPEED 0.2

Change the following line in the update() method implementation:

    currentPosition[i] = easeOutExpo( (getElapsedSeconds()-circleTimeBase[i])*TWEEN_SPEED ) * difference + startPosition[i];

I did this because the default time base for each tween is 1 second. That means each transition happens in exactly 1 second, which is a bit too fast for our current situation. We want it to be slower, so we multiply the time we pass to the easing function by a floating point number that is greater than 0.0f and less than 1.0f. By doing that we ensure that the time is scaled down, and instead of 1 second we get 5 seconds for our transition. So try to compile and run this, and see for yourself!
Here is the full source code of our circle-creation app:

    #include "cinder/app/AppBasic.h"
    #include "cinder/gl/gl.h"
    #include "cinder/Rand.h"
    #include "cinder/Easing.h"

    #define CIRCLE_COUNT 100
    #define TWEEN_SPEED 0.2

    using namespace ci;
    using namespace ci::app;
    using namespace std;

    class BasicAnimationApp : public AppBasic {
    public:
        void setup();
        void update();
        void draw();
        void prepareSettings( Settings *settings );

        Vec2f startPosition[CIRCLE_COUNT];
        Vec2f currentPosition[CIRCLE_COUNT];
        Vec2f targetPosition[CIRCLE_COUNT];
        float circleRadius[CIRCLE_COUNT];
        float circleTimeBase[CIRCLE_COUNT];
    };

    void BasicAnimationApp::prepareSettings( Settings *settings ) {
        settings->setWindowSize(800,600);
        settings->setFrameRate(60);
    }

    void BasicAnimationApp::setup() {
        for(int i=0; i<CIRCLE_COUNT; i++) {
            currentPosition[i].x = Rand::randFloat(0, getWindowWidth());
            currentPosition[i].y = Rand::randFloat(0, getWindowHeight());
            targetPosition[i].x = Rand::randFloat(0, getWindowWidth());
            targetPosition[i].y = Rand::randFloat(0, getWindowHeight());
            circleRadius[i] = Rand::randFloat(1, 10);
            startPosition[i].x = Rand::randFloat(0, getWindowWidth());
            startPosition[i].y = Rand::randFloat(0, getWindowHeight());
            circleTimeBase[i] = 0;
        }
    }

    void BasicAnimationApp::update() {
        Vec2f difference;
        for (int i=0; i<CIRCLE_COUNT; i++) {
            difference = targetPosition[i] - startPosition[i];
            currentPosition[i] = easeOutExpo(
                (getElapsedSeconds()-circleTimeBase[i]) * TWEEN_SPEED )
                * difference + startPosition[i];
            if ( currentPosition[i].distance( targetPosition[i] ) < 1.0f ) {
                targetPosition[i].x = Rand::randFloat(0, getWindowWidth());
                targetPosition[i].y = Rand::randFloat(0, getWindowHeight());
                startPosition[i] = currentPosition[i];
                circleTimeBase[i] = getElapsedSeconds();
            }
        }
    }

    void BasicAnimationApp::draw() {
        gl::clear( Color( 0, 0, 0 ) );
        for (int i=0; i<CIRCLE_COUNT; i++) {
            gl::drawSolidCircle( currentPosition[i], circleRadius[i] );
        }
    }

    CINDER_APP_BASIC( BasicAnimationApp, RendererGl )

Experiment with the properties
and try to change the eases. Not all of them will work with this example, but at least you will understand how to use them to create smooth animations with Cinder.

Summary

This article explained what Cinder is, introduced the 3D space and drawing in 3D, and briefly covered using the built-in eases.

Resources for Article :

Further resources on this subject: 3D Vector Drawing and Text with Papervision3D: Part 1 [Article] Sage: 3D Data Plotting [Article] Building your First Application with Papervision3D: Part 2 [Article]


Responsive techniques

Packt
20 Mar 2013
9 min read
(For more resources related to this topic, see here.)

Media queries

Media queries are an important part of responsive layouts. They are the part of CSS that makes it possible to add styles specific to a certain medium. Media queries can target the output type, screen size, device orientation, and even the density of the display. But let's have a look at a simple example before we get lost in theory:

    #header {
      background-repeat: no-repeat;
      background-image: url(logo.png);
    }

    @media print {
      #header {
        display: none;
      }
    }

The highlighted line in the preceding code snippet makes sure that all the nested styles are only used if the CSS file is used on a printer. To be a bit more precise, it will hide the element with the ID header once you print the document. The same could be achieved by creating two different files and including them with code as follows:

    <link rel="stylesheet" media="all" href="normal.css" />
    <link rel="stylesheet" media="print" href="print.css" />

It's not always clear whether you want to create a separate CSS file or not. Using too many CSS files might slow down the loading process a bit, but having one big file might make it messier to handle. Having a separate print CSS is something you'll see rather often, while screen-resolution-dependent queries are usually kept in the main CSS file. Here's another example, where we use the screen width to break a two-column layout into a single-column layout if the screen gets smaller. The following screenshot shows you the layout on an average desktop computer as well as on a tablet. The HTML code we need for our example looks as follows:

    <div class="box">1</div>
    <div class="box">2</div>

The CSS, including the media queries, could be as follows:

    .box {
      width: 46%;
      padding: 2%;
      float: left;
    }

    @media all and (max-width: 1023px) {
      .box {
        width: 96%;
        float: none;
      }
    }

By default, each box has a width of 46 percent and a padding of 2 percent on each side, adding up to a total width of 50 percent.
If you look at the highlighted line, you can see a media query relevant to all media types but with a restriction to a maximum width of 1023 pixels. This means that if you view the page on a device with a screen width of 1023 pixels or less, the nested CSS styles will be used. In the preceding example, we're overriding the default width of 46 percent with 96 percent. In combination with the 2 percent padding that is still there, this stretches the box to the width of the screen. Checking the maximum width can achieve a lot already, but there are other queries as well. Here are a few that could be useful. The following query matches the iPad, but only if the orientation is landscape. You could also check for portrait mode by using orientation: portrait.

    @media screen and (device-width: 768px) and (device-height: 1024px) and (orientation: landscape)

If you want to display content specific to a high-resolution screen such as a retina display, use this:

    @media all and (-webkit-min-device-pixel-ratio: 2)

Instead of checking the maximum width, you can also do it the other way round and check the minimum width. The following query could be used to apply styles only on a normal-sized screen:

    @media screen and (min-width: 1224px)

There are a lot more variables you can check in a media query. You can find a nicely arranged list of queries, including a testing application, on the following site: http://cssmediaqueries.com/. Try to get an overview of what's possible; but don't worry, you'll hardly ever need more than four media queries.

How to scale pictures

We've seen how you can check the device type in a few different ways, and we used this to change a two-column layout into a single-column layout. This allows you to rearrange the layout elements; but what happens to pictures if the container in which a picture is located changes size?
If you look at the following screen, you can see a mock-up where the picture has to change its width from 1000 pixels to 700 pixels. Look at the code that follows:

    <div id="container">
      <img src="picture.jpg" width="1000" height="400">
    </div>

Assuming the HTML code looks like this, if we added a responsive media query that resizes the container to match the width of the screen, we'd cut off part of the picture. What we want to do is scale the picture within the container so it has the same or a smaller size than the container. The following CSS snippet can be added to any CSS file to make your pictures scale nicely in a responsive layout:

    img {
      max-width: 100%;
      height: auto;
    }

Once you've added this little snippet, your pictures will never get bigger than their parent containers. The property max-width is pretty obvious; it restricts the maximum size. But why is height: auto necessary? If you look at the preceding HTML code, you can see that our picture has a fixed width and height. If we only specified max-width and looked at the picture on a screen with a width of 500 pixels, we'd get a picture with the dimensions 500 x 400 pixels; the picture would be distorted. To avoid this, we specify height: auto; to make sure the height stays in proportion to the width of the picture.

Pictures on high-density screens

If you have ever printed some of your graphic ideas on paper and not just displayed them on a computer screen, you'll probably have heard of pixels per inch (ppi) and dots per inch (dpi) before. It's a measure of the number of pixels or dots per inch; you get a higher density and more detail with more pixels or dots per inch. You might have read about retina screens; those displays have a density of around 300 dpi, about twice as much as an average computer monitor. You don't have to, but it's nice if you also give owners of such a device the chance to look at high-quality pictures.
What's necessary to deliver a high-quality picture to a retina screen? Some of you might think that you have to save an image with more ppi, but that's not what you should do when creating a retina-ready site. Devices with a retina display simply ignore the pixels-per-inch information stored in an image. However, the dimensions of your images matter: a picture saved for a retina display should have exactly twice the size of the original image. You could use an SVG picture instead, but you would still have to provide a fallback picture, because SVG isn't supported as widely as image formats like PNG. In theory, SVG would be even better because it is vector-based and scales perfectly at any resolution. Enough theory; let's look at an example that uses CSS to display a different image on a retina display:

    #logo {
      background-image: url(logo.png);
      background-size: 400px 150px;
      width: 400px;
      height: 150px;
    }

    @media screen and (-webkit-min-device-pixel-ratio: 2) {
      #logo {
        background-image: url(logo@2x.png);
      }
    }

We've got an HTML element <div id="logo"></div> where we display a background picture to show our logo. If you look at the media query, you can see that we're checking a variable called -webkit-min-device-pixel-ratio to detect a high-resolution display. In case the display has a pixel ratio of 2 or higher, we use an alternative picture that is twice the size; note that the width and height of the container stay the same.

What alternatives are there?

As we quickly mentioned, a vector-based format such as SVG has some benefits but isn't supported on IE7 and IE8. However, you can still use SVG; you just have to make sure there's a fallback image for those two browsers.
Have a look at the following code:

    <!--[if lte IE 8]>
    <img src="logo.png" width="200" height="50" />
    <![endif]-->
    <!--[if gt IE 8]>
    <img src="logo.svg" width="200" height="50" />
    <![endif]-->
    <!--[if !IE]>-->
    <img src="logo.svg" width="200" height="50" />
    <!--<![endif]-->

In the preceding code, we're using conditional comments to make sure that old versions of Internet Explorer use the logo saved as a PNG, but switch to SVG for modern browsers. It takes a bit more effort because you have to work with vectors and still save a PNG file, but if you want to make sure your logo or illustrations are nicely displayed when zoomed, use this trick. Working with SVG is an option, but there's another vector-based solution that you might want to use in some situations. Simple text is always rendered using vectors: no matter what font size you're using, it will always have sharp edges. Most people probably think about letters, numbers, and punctuation marks when they think about fonts, but there are icon fonts as well. Keep in mind that you can create your own icon web font too. Check out http://fontello.com/, a nice tool that allows you to select the icons you need and use them as a web font. You can also create your very own icon web font from scratch; check out the following link for a detailed tutorial: http://www.webdesignerdepot.com/2012/01/how-to-make-your-own-iconwebfont/

One last word before we continue: retina screens are relatively new, so it's no surprise that some of the specifications for creating a perfect retina layout are still drafts. The Web is a fast-moving place; expect some changes and new features for inserting an image with multiple sources, along with more vector features.

Summary

In this article, we learned about responsive techniques and how media queries are an important part of responsive layouts. This article also showed how to scale pictures on different types of devices.
It also helped you understand what it takes to display websites on retina screens.

Resources for Article: Further resources on this subject: concrete5: Mastering Auto-Nav for Advanced Navigation [Article] Everything in a Package with concrete5 [Article] Creating mobile friendly themes [Article]
Packt
20 Mar 2013
15 min read

Creating an Animated Gauge with CSS3

(For more resources related to this topic, see here.)

A basic gauge structure

Let's begin with a new project; as usual, we need to create an index.html file. This time the markup involved is so small and compact that we can add it right now:

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <title>Go Go Gauges</title>
  <link rel="stylesheet" type="text/css" href="css/application.css">
</head>
<body>
  <div data-gauge data-min="0" data-max="100" data-percent="50">
    <div data-arrow></div>
  </div>
</body>
</html>

The gauge widget is identified by the data-gauge attribute and defined with three other custom data attributes, namely data-min, data-max, and data-percent, which indicate the minimum and maximum values of the range and the current arrow position expressed as a percentage. Within the element marked with the data-gauge attribute, we have defined a div tag that will become the arrow of the gauge. To start with the styling phase, we first need to equip ourselves with a framework that is easy to use and can generate CSS code for us. We decide to go with SASS, so we first need to install Ruby (http://www.ruby-lang.org/en/downloads/) and then enter the following from a command-line terminal:

gem install sass

You would probably need to execute the following command instead if you are working in a Unix/Linux environment:

sudo gem install sass

Installing Compass

For this project we'll also use Compass, a SASS extension able to add some interesting features to our SASS stylesheets. To install Compass, we just have to enter gem install compass (or sudo gem install compass) in a terminal window. After the installation procedure is over, we have to create a small config.rb file in the root folder of our project using the following code:

# Require any additional compass plugins here.
# Set this to the root of your project when deployed:
http_path = YOUR-HTTP-PROJECT-PATH
css_dir = "css"
sass_dir = "scss"
images_dir = "img"
javascripts_dir = "js"
# You can select your preferred output style here (can be overridden via the command line):
# output_style = :expanded or :nested or :compact or :compressed
# To enable relative paths to assets via compass helper functions. Uncomment:
relative_assets = true
# To disable debugging comments that display the original location of your selectors. Uncomment:
# line_comments = false
preferred_syntax = :sass

The config.rb file helps Compass understand the location of the various assets of the project; let's have a look at these options in detail:

http_path: This must be set to the HTTP URL related to the project's root folder
css_dir: This contains the relative path to the folder where the generated CSS files should be saved
sass_dir: This contains the relative path to the folder that contains our .scss files
images_dir: This contains the relative path to the folder that holds all the images of the project
javascripts_dir: This is similar to images_dir, but for JavaScript files

There are other options available; we can decide whether the output CSS should be compressed or not, or we can ask Compass to use relative paths instead of absolute ones. For a complete list of all the options available, see the documentation at http://compass-style.org/help/tutorials/configuration-reference/. Next, we can create the folder structure we just described, providing our project with the css, img, js, and scss folders. Lastly, we can create an empty scss/application.scss file and start discovering the beauty of Compass.

CSS reset and vendor prefixes

We can ask Compass to regenerate the CSS file after each update to its SCSS counterpart. To do so, we need to execute the following command from the root of our project using a terminal:

compass watch .

Compass provides an alternative to the Yahoo!
reset stylesheet we used in our previous project. To include this stylesheet, all we have to do is add a SASS import directive to our application.scss file:

@import "compass/reset";

If we check css/application.css, the following is the result (trimmed):

/* line 17, ../../../../.rvm/gems/ruby-1.9.3-p194/gems/compass-0.12.2/frameworks/compass/stylesheets/compass/reset/_utilities.scss */
html, body, div, span, applet, object, iframe, h1, h2, h3, h4, h5, h6, p, blockquote, pre, a, abbr, acronym, address, big, cite, code, del, dfn, em, img, ins, kbd, q, s, samp, small, strike, strong, sub, sup, tt, var, b, u, i, center, dl, dt, dd, ol, ul, li, fieldset, form, label, legend, table, caption, tbody, tfoot, thead, tr, th, td, article, aside, canvas, details, embed, figure, figcaption, footer, header, hgroup, menu, nav, output, ruby, section, summary, time, mark, audio, video {
  margin: 0;
  padding: 0;
  border: 0;
  font: inherit;
  font-size: 100%;
  vertical-align: baseline;
}
/* line 22, ../../../../.rvm/gems/ruby-1.9.3-p194/gems/compass-0.12.2/frameworks/compass/stylesheets/compass/reset/_utilities.scss */
html {
  line-height: 1;
}
...

Notice also how the generated CSS keeps a reference to the original SCSS; this comes in handy when it's a matter of debugging some unexpected behavior in our page. The next @import directive will take care of the CSS3 experimental vendor prefixes. By adding @import "compass/css3" at the top of the application.scss file, we ask Compass to provide us with a lot of powerful methods for adding experimental prefixes automatically; for example, the following snippet:

.round {
  @include border-radius(4px);
}

is compiled into the following:

.round {
  -moz-border-radius: 4px;
  -webkit-border-radius: 4px;
  -o-border-radius: 4px;
  -ms-border-radius: 4px;
  -khtml-border-radius: 4px;
  border-radius: 4px;
}

Equipped with this new knowledge, we can now start deploying the project.
Using rem

For this project we want to introduce rem, a measurement unit that is almost equivalent to em but is always relative to the root element of the page. So, basically, we can define a font size on the html element and then all other sizes will be relative to it:

html {
  font-size: 20px;
}

Now 1rem corresponds to 20px. The problem with this measurement unit is that some browsers, such as Internet Explorer version 8 or less, don't actually support it. To find a way around this problem, we can use two different fallback measurement units:

em: The good news is that em, if perfectly tuned, works exactly like rem; the bad news is that this measurement unit is relative to the element's own font-size property, not to html. So, if we decide to pursue this method, we have to take extra care every time we deal with font-size.
px: We can use a fixed pixel size. The downside of this choice is that in older browsers we give up the ability to change the proportions of our widget dynamically.

In this project, we will use pixels as our fallback unit. The reason we have decided this is that one of the benefits of rem is that we can change the size of the gauge easily by changing the font-size property with media queries, and this is only possible where media queries and rem are supported anyway. Now, we have to find a way to avoid most of the duplication that would emerge from having to write every statement containing a space measurement twice (rem and px).
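The duplication we want to avoid always follows the same pattern: each measurement appears once in pixels (the value multiplied by the root font-size) and once in rem. A throwaway JavaScript sketch of that pairing, purely illustrative and not part of the article's code, makes the arithmetic explicit:

```javascript
// Given a rem value and the root font-size in px (20px in this
// project), produce both declarations: the px fallback for old
// browsers and the preferred rem declaration.
function pxAndRem(property, value, multiplierPx) {
  return [
    property + ': ' + (value * multiplierPx) + 'px;', // fallback
    property + ': ' + value + 'rem;'                  // preferred
  ];
}

console.log(pxAndRem('width', 10, 20));
// [ 'width: 200px;', 'width: 10rem;' ]
```

Because browsers keep the last declaration they understand, emitting the px line first and the rem line second gives rem-capable browsers the scalable unit and everyone else the fixed one.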
We can easily solve this problem by creating a SASS mixin within our application.scss file as follows (for more info on SASS mixins, we can refer to the specification page at http://sass-lang.com/docs/yardoc/file.SASS_REFERENCE.html#mixins):

@mixin px_and_rem($property, $value, $mux) {
  #{$property}: 0px + ($value * $mux);
  #{$property}: 0rem + $value;
}

So the next time, instead of writing the following:

#my_style {
  width: 10rem;
}

we can write:

#my_style {
  @include px_and_rem(width, 10, 20);
}

In addition to that, we can save the multiplier coefficient between px and rem in a variable and use it in every call to this mixin and within the html declaration; let's also add this to application.scss:

$multiplier: 20px;

html {
  font-size: $multiplier;
}

Of course, there are still some cases in which the @mixin directive that we just created doesn't work, and in such situations we'll have to handle this duality manually.

Basic structure of a gauge

Now we're ready to develop at least the basic structure of our gauge, which includes the rounded borders and the minimum and maximum range labels.
The following code is what we need to add to application.scss:

div[data-gauge] {
  position: absolute;
  /* width, height and rounded corners */
  @include px_and_rem(width, 10, $multiplier);
  @include px_and_rem(height, 5, $multiplier);
  @include px_and_rem(border-top-left-radius, 5, $multiplier);
  @include px_and_rem(border-top-right-radius, 5, $multiplier);
  /* centering */
  @include px_and_rem(margin-top, -2.5, $multiplier);
  @include px_and_rem(margin-left, -5, $multiplier);
  top: 50%;
  left: 50%;
  /* inset shadows, both in px and rem */
  box-shadow: 0 0 #{0.1 * $multiplier} rgba(99,99,99,0.8), 0 0 #{0.1 * $multiplier} rgba(99,99,99,0.8) inset;
  box-shadow: 0 0 0.1rem rgba(99,99,99,0.8), 0 0 0.1rem rgba(99,99,99,0.8) inset;
  /* border, font size, family and color */
  border: #{0.05 * $multiplier} solid rgb(99,99,99);
  border: 0.05rem solid rgb(99,99,99);
  color: rgb(33,33,33);
  @include px_and_rem(font-size, 0.7, $multiplier);
  font-family: verdana, arial, sans-serif;
  /* min label */
  &:before {
    content: attr(data-min);
    position: absolute;
    @include px_and_rem(bottom, 0.2, $multiplier);
    @include px_and_rem(left, 0.4, $multiplier);
  }
  /* max label */
  &:after {
    content: attr(data-max);
    position: absolute;
    @include px_and_rem(bottom, 0.2, $multiplier);
    @include px_and_rem(right, 0.4, $multiplier);
  }
}

With box-shadow and border, we can't use the px_and_rem mixin, so we duplicated these properties using px first and then rem. The following screenshot shows the result:

Gauge tick marks

How to handle tick marks? One method would be using images, but another interesting alternative is to benefit from multiple background support and create those tick marks out of gradients.
For example, to create a vertical mark, we can use the following within the div[data-gauge] selector:

linear-gradient(0deg, transparent 46%, rgba(99,99,99,0.5) 47%, rgba(99,99,99,0.5) 53%, transparent 54%)

Basically, we define a very small gradient between transparent and another color in order to obtain the tick mark. That's the first step, but we have yet to deal with the fact that each tick mark must be drawn at a different angle. We can solve this problem by introducing a SASS function that takes the number of tick marks to print and iterates up to that number while adjusting the angle of each mark. Of course, we also have to take care of experimental vendor prefixes, but we can count on Compass for that. The following is the function. We can create a new file called scss/_gauge.scss for this and other gauge-related functions; the leading underscore tells SASS not to create a .css file out of this .scss file, because it will be included in a separate file.

@function gauge-tick-marks($n, $rest){
  $linear: null;
  @for $i from 1 through $n {
    $p: -90deg + 180 / ($n+1) * $i;
    $linear: append($linear, linear-gradient($p, transparent 46%, rgba(99,99,99,0.5) 47%, rgba(99,99,99,0.5) 53%, transparent 54%), comma);
  }
  @return append($linear, $rest);
}

We start with an empty list, appending the result of calling the linear-gradient Compass function, which handles experimental vendor prefixes, with an angle that varies with the current tick-mark index. To test this function out, we first need to include _gauge.scss in application.scss:

@import "gauge.scss";

Next, we can insert the function call within the div[data-gauge] selector in application.scss, specifying the number of tick marks required:

@include background(gauge-tick-marks(11, null));

The background function is also provided by Compass, and it is just another mechanism to deal with experimental prefixes.
Unfortunately, if we reload the project, the results are far from expected:

Although we can see a total of 11 stripes, they are of the wrong size and in the wrong position. To resolve this, we will create some functions to set the correct values for background-size and background-position.

Dealing with background size and position

Let's start with background-size, the easier one. Since we want each of the tick marks to be exactly 1rem in size, we can proceed by creating a function that prints "1rem 1rem" as many times as the number passed as parameter; so let's add the following code to _gauge.scss:

@function gauge-tick-marks-size($n, $rest){
  $sizes: null;
  @for $i from 1 through $n {
    $sizes: append($sizes, 1rem 1rem, comma);
  }
  @return append($sizes, $rest, comma);
}

We have already noticed the append function; an interesting thing to know about it is that its last parameter lets us decide which separator must be used to concatenate the values being collected. One of the available options is comma, which perfectly suits our needs. Now, we can add a call to this function within the div[data-gauge] selector:

background-size: gauge-tick-marks-size(11, null);

And the following is the result:

Now the tick marks are of the right size, but they are displayed one above the other and are repeated all across the element. To avoid this behavior, we can simply add background-repeat: no-repeat just below the previous instruction:

background-repeat: no-repeat;

On the other hand, to handle the position of the tick marks we need another SASS function; this time it's a little more complex and involves a bit of trigonometry. Each gradient must be placed as a function of its angle: x is the cosine of that angle and y the sine.
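Before writing the SASS version, the cosine/sine mapping, shifted to CSS's top-left origin, can be sanity-checked with a standalone JavaScript sketch (our own illustration using the same formulas; the article implements this in SASS):

```javascript
// Compute background-position pairs (in %) for n tick marks spread
// over a 180-degree arc: x from the cosine shifted into 0..100,
// y from the sine flipped because the CSS origin is top-left.
function tickPositions(n) {
  const positions = [];
  for (let i = 1; i <= n; i++) {
    const angle = (Math.PI / (n + 1)) * i;        // 0..180 degrees, in radians
    const px = 100 * (Math.cos(angle) / 2 + 0.5); // 100..0 left to right
    const py = 100 * (1 - Math.sin(angle));       // 0 at the top of the arc
    positions.push([px, py]);
  }
  return positions;
}

const pos = tickPositions(11);
console.log(pos[5]); // middle tick, approximately [50, 0]: top center
```

With 11 ticks, the sixth one sits at 90 degrees, which lands at 50% horizontally and 0% vertically, exactly the top center of the half-circle gauge.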
The sin and cos functions are provided by Compass; we just need to handle the shift, because they are relative to the center of the circle, whereas our CSS property's origin is the upper-left corner:

@function gauge-tick-marks-position($n, $rest){
  $positions: null;
  @for $i from 1 through $n {
    $angle: 0deg + 180 / ($n+1) * $i;
    $px: 100% * ( cos($angle) / 2 + 0.5 );
    $py: 100% * (1 - sin($angle));
    $positions: append($positions, $px $py, comma);
  }
  @return append($positions, $rest, comma);
}

Now we can go ahead and add a new line inside the div[data-gauge] selector:

background-position: gauge-tick-marks-position(11, null);

And here's the much-awaited result:

The next step is to create a @mixin directive to hold these three functions together, so we can add the following to _gauge.scss:

@mixin gauge-background($ticks, $rest_gradient, $rest_size, $rest_position) {
  @include background-image( gauge-tick-marks($ticks, $rest_gradient) );
  background-size: gauge-tick-marks-size($ticks, $rest_size);
  background-position: gauge-tick-marks-position($ticks, $rest_position);
  background-repeat: no-repeat;
}

And replace what we placed inside div[data-gauge] earlier with a single invocation:

@include gauge-background(11, null, null, null);

We've also left three additional parameters to define extra values for background, background-size, and background-position, so we can, for example, easily add a gradient background:

@include gauge-background(11, radial-gradient(50% 100%, circle, rgb(255,255,255), rgb(230,230,230)), cover, center center);

And the following is the screenshot:

Creating the arrow

To create the arrow, we can start by defining the circular element in the center of the gauge that holds the arrow.
This is easy and doesn't introduce anything really new; here's the code that needs to be nested within the div[data-gauge] selector:

div[data-arrow] {
  position: absolute;
  @include px_and_rem(width, 2, $multiplier);
  @include px_and_rem(height, 2, $multiplier);
  @include px_and_rem(border-radius, 5, $multiplier);
  @include px_and_rem(bottom, -1, $multiplier);
  left: 50%;
  @include px_and_rem(margin-left, -1, $multiplier);
  box-sizing: border-box;
  border: #{0.05 * $multiplier} solid rgb(99,99,99);
  border: 0.05rem solid rgb(99,99,99);
  background: #fcfcfc;
}

The arrow itself is a more serious business; the basic idea is to use a linear gradient that colors only half the element, starting from its diagonal. Then we can rotate the element in order to move the pointed end to its center. The following is the code that needs to be placed within div[data-arrow]:

&:before {
  position: absolute;
  display: block;
  content: '';
  @include px_and_rem(width, 4, $multiplier);
  @include px_and_rem(height, 0.5, $multiplier);
  @include px_and_rem(bottom, 0.65, $multiplier);
  @include px_and_rem(left, -3, $multiplier);
  background-image: linear-gradient(83.11deg, transparent, transparent 49%, orange 51%, orange);
  background-image: -webkit-linear-gradient(83.11deg, transparent, transparent 49%, orange 51%, orange);
  background-image: -moz-linear-gradient(83.11deg, transparent, transparent 49%, orange 51%, orange);
  background-image: -o-linear-gradient(83.11deg, transparent, transparent 49%, orange 51%, orange);
  @include apply-origin(100%, 100%);
  @include transform2d(rotate(-3.45deg));
  box-shadow: 0px #{-0.05 * $multiplier} 0 rgba(0,0,0,0.2);
  box-shadow: 0px -0.05rem 0 rgba(0,0,0,0.2);
  @include px_and_rem(border-top-right-radius, 0.25, $multiplier);
  @include px_and_rem(border-bottom-right-radius, 0.35, $multiplier);
}

To better understand the trick behind this implementation, we can temporarily add border: 1px solid red within the &:before selector and zoom in a bit:

Moving the arrow
Now we want to position the arrow at the correct angle depending on the data-percent attribute value. To do so, we have to take advantage of the power of SASS. In theory, the CSS3 specification would allow us to set some properties using values taken from attributes, but in practice this is only possible when dealing with the content property. So what we're going to do is create a @for loop from 0 to 100 and print, in each iteration, a selector that matches a defined value of the data-percent attribute. Then we'll set a different rotate() transform for each of the CSS rules. The following is the code; this time it must be placed within the div[data-gauge] selector:

@for $i from 0 through 100 {
  $v: $i;
  @if $i < 10 {
    $v: '0' + $i;
  }
  &[data-percent='#{$v}'] > div[data-arrow] {
    @include transform2d(rotate(#{180deg * $i/100}));
  }
}

If you are too scared about the amount of CSS generated, you can adjust the increment of the gauge, for example, to 10:

@for $i from 0 through 10 {
  &[data-percent='#{$i*10}'] > div[data-arrow] {
    @include transform2d(rotate(#{180deg * $i/10}));
  }
}

And the following is the result:
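As a closing sanity check, the two small rules the loop bakes into the generated CSS, zero-padding of single-digit percentages and the percent-to-angle mapping, can be restated in plain JavaScript (an illustration of the logic above, not code from the article):

```javascript
// data-percent values below 10 are zero-padded so they match the
// selectors the SASS loop generates (e.g. data-percent="07").
function percentAttr(i) {
  return i < 10 ? '0' + i : String(i);
}

// Each percent maps linearly onto the 180-degree half circle.
function rotationFor(percent) {
  return 180 * percent / 100; // degrees
}

console.log(percentAttr(7));  // "07"
console.log(rotationFor(50)); // 90
```

This also documents a subtle requirement of the pure-CSS approach: whatever writes the data-percent attribute must emit "07", not "7", or no generated selector will match.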

Packt
20 Mar 2013
8 min read

Quick start – Firebug window overview and inspecting

(For more resources related to this topic, see here.)

Console panel

The Console panel offers a JavaScript command line, a very powerful weapon of Firebug. This feature provides you with the power of executing arbitrary JavaScript expressions and commands on the fly at the command line, without even reloading the document. You can validate and execute any JavaScript on the command line before integrating it into your web page. When something goes wrong, Firebug will quickly let you know the details and information about:

JavaScript errors and warnings
CSS errors
XML errors
External errors
Chrome errors and messages
Network errors
Strict warnings (performance penalty)
Stack trace with errors
XHR (XMLHttpRequest) info

HTML panel

The HTML panel displays the generated HTML of the currently opened web page. It allows you to edit HTML on the fly and play with your HTML DOM (Document Object Model) in Firefox. It differs from the normal source code view, because it also displays all manipulations of the DOM tree. On the right-hand side it shows the CSS styles defined for the currently selected tag, the computed styles for it, layout information, and the DOM variables assigned to it under different tabs.

CSS panel

When you want to alter CSS rules, the CSS panel is the right place for this. It allows you to tweak the CSS stylesheet to your taste. You can use this panel for viewing and editing CSS stylesheets on the fly and view the results live in the current document. This tab is mostly used by CSS developers to tweak the pixels, position, look and feel, or area of an HTML element. It is also useful for web developers when they want to view those elements whose CSS display property is set to none. Furthermore, it offers an exclusive editing mode in which you can edit the content of the CSS files directly via a text area.

Script panel

The Script panel is the next gem that Firebug provides for us.
It allows you to debug JavaScript code in the browser. The Script panel integrates a powerful debugging tool based on features such as different kinds of breakpoints, step-by-step execution of scripts, a display for the variable stack, watch expressions, and more.

Things you can perform under the Script panel

A few of the tasks you can perform under the Script panel while playing with JavaScript are as follows:

View the list of JavaScript files that are loaded with your document
Debug the JavaScript code
Insert and remove breakpoints
Insert and remove conditional breakpoints
View the stack trace
View the list of breakpoints
Add new watch expressions
Keep an eye on the watches for viewing the values of variables
Debug an Ajax call

DOM panel

HTML elements are also known as DOM elements. The Document Object Model (DOM) represents the hierarchy of the HTML elements and functions, and you can traverse it in any direction using JavaScript. DOM elements have parent, child, and sibling relationships between them; for objects and arrays the panel therefore offers an expandable tree view. Clicking on objects and arrays limits the display of the DOM tree to the context of those objects and shows the current element path as a breadcrumb list on the DOM panel's toolbar. The DOM panel in Firebug shows:

DOM properties
DOM functions
DOM constants
User-defined properties
User-defined functions
Inline event handlers
Enumerable properties
Own properties

Net panel

The Net panel allows you to monitor the network traffic initiated by your web page. It shows all collected and computed information in tabular form. Each row in the table represents one request or response round trip made by the web page.
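Each of those rows carries a size and a timing; aggregating them gives the page-level numbers discussed next. The following is a rough sketch of that aggregation with made-up request data (our own illustration of what the panel computes, not Firebug code):

```javascript
// Aggregate per-request records, like the rows in the Net panel,
// into page-level performance numbers.
function summarize(requests) {
  return {
    count: requests.length,
    totalBytes: requests.reduce((sum, r) => sum + r.bytes, 0),
    slowestMs: Math.max(...requests.map(r => r.ms)),
  };
}

// Hypothetical traffic for one page load:
const requests = [
  { url: '/index.html', bytes: 4200, ms: 120 },
  { url: '/css/app.css', bytes: 1800, ms: 45 },
  { url: '/js/app.js', bytes: 9600, ms: 210 },
];
console.log(summarize(requests));
// { count: 3, totalBytes: 15600, slowestMs: 210 }
```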
You can quickly measure the performance of your web page by analyzing the following information:

Time taken to load the page
The size of each file
The loading time of each file
The number of requests the browser sends to load the page

The Net panel also provides the facility to filter the type of request or response you want to focus on. You can choose filters to view only HTML, CSS, JS, Image, Flash, Media, and even XHR (Ajax) requests.

Cookies panel

The Cookies panel allows you to view, manage, and debug cookies within your browser via Firebug. Within the context of the current domain, this panel displays a list of all cookies associated with the current web page. Each entry in the list shows the basic information about a cookie (Name, Value, Domain, Path, Expires, and so on).

Inspecting page components

Sometimes something in your page doesn't quite look correct and you want to know why; you want to understand how a particular section of the page has been constructed. Firebug's Inspect functionality allows you to do that in the fastest possible manner, using minimal clicks or mouse strokes. As you move around the web page, Firebug draws visual frames around the various page components and sections and shows the HTML and CSS behind each one. In fact, this is one of the most frequently used features while tweaking the look and feel of your web application. Inspecting page elements and making tweaks (editing HTML attributes, CSS rules, and so on) are real time savers during the fine-tuning of a web application's look and feel. In this section, you will learn how to perform one of the core tasks of Firebug: inspecting and editing HTML. The following steps will help you perform this task. You can activate Firebug by pressing the F12 key or by clicking on the (bug) icon on the top-right side of the browser's toolbar. Similarly, you can close or deactivate Firebug by pressing the F12 key or by clicking on the (bug) icon again.
Step 1 – open Firebug

To start inspecting your web page, you first need to activate Firebug:

Go to your web application's page in the browser
Press the F12 key or click on the (bug) icon on the top-right side of the Firefox browser's toolbar

Step 2 – activate the inspection tool

In this step, you activate the inspection tool and search for the block, component, or HTML element in your web page using the mouse:

Click on the Inspect button on the Firebug console toolbar (the Inspect button looks like a mouse arrow pointer on the toolbar of Firebug), as shown in the following screenshot:
Move your mouse cursor over the page component or section that you want to inspect

Step 3 – selecting the element

While moving the mouse pointer, a visual frame is drawn around the element and the HTML source of the element is shown highlighted in the Firebug window. Now, after activating the inspection tool, select the element to inspect: click on the page component or section to select the element in the HTML panel.

Quick inspect

You can even inspect an element with minimal fuss by right-clicking on it in the Firefox window and selecting the Inspect Element with Firebug option from the context menu. This allows you to inspect an element without first activating the Firebug window, so you can inspect with minimal mouse clicks when the Firebug window is not already open.

Editing page components

Firefox and most other browsers have a feature for viewing the source of the HTML document sent by the server. Firebug's HTML panel shows you what the HTML currently looks like. In addition to the HTML panel, there are three tabs on the right which let you view and modify properties of an individual element, including the CSS rules being applied to it, the layout of the element in the document, and its DOM properties. Firebug's HTML panel has more advanced features than the default View Source of Firefox.
It shows the HTML DOM in a hierarchical tree view with highlight colors. It allows you to expand or collapse the HTML DOM for navigation and provides easy visualization of the HTML page. It is a viewer as well as an editor: it allows you to edit or delete HTML elements and attributes on the fly, and the changes are immediately applied to the page currently being viewed and rendered by the browser. In order to edit the HTML source on a web page, do the following:

Open Firebug.
Click on the HTML tab. This will show the source of the document.
Simply click on the Edit button under the HTML panel. On clicking this button, the HTML panel turns into an editable text area. You can now edit the HTML and see it take effect instantly, as shown in the following screenshot:

Summary

Firebug is armed with a lot of weapons to deal with UI and JavaScript bugs. You have the JavaScript command line and the Console, HTML, CSS, Script, DOM, Net, and Cookies panels to tackle a variety of issues. Inspection is the most common task, one you will perform almost every time you find yourself surrounded by hair-pulling client-side issues. Editing and viewing changes on the fly (without touching the server-side code) is the specialty of Firebug.

Resources for Article: Further resources on this subject: Configuring the ChildBrowser plugin [Article] Tips and Tricks for Working with jQuery and WordPress [Article] Skinner's Toolkit for Plone 3 Theming (Part 1) [Article]

Packt
19 Mar 2013
8 min read

Animating a built-in button

(For more resources related to this topic, see here.)

Note that there might be more states to a fully functioning button; for example, there is also a Disabled state. But whether a button is disabled or not usually does not depend on mouse movements or positions, it does not have to be animated, and we do not describe it here. Our purpose in this section is not to create a fully functioning button, but rather to demonstrate some generic concepts for re-styling a control and providing custom animations for it. Let's create a Silverlight project containing a single page with a button in its center. The following is the resulting XAML code of the main page:

<UserControl x:Class="AnimatingButtonStates.MainPage">
  <Grid x:Name="LayoutRoot" Background="White">
    <Button Width="100" Height="25" Background="LightGray" Content="Press Me" />
  </Grid>
</UserControl>

We want to re-style this button completely, modifying its shape, border, and colors, and creating custom animations for the transitions between states. Now we'll create a very simple custom template for the button by changing the button code to the following:

<!-- Here we provide a custom button template -->
<Button>
  <Button.Template>
    <ControlTemplate TargetType="Button">
      <Grid x:Name="TopLevelButtonGrid">
        <!-- Button border -->
        <Border x:Name="ButtonBorder"
                HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
                CornerRadius="5"
                Background="{TemplateBinding Background}"
                BorderBrush="{TemplateBinding BorderBrush}"
                BorderThickness="{TemplateBinding BorderThickness}">
        </Border>
        <!-- button content is placed here -->
        <ContentPresenter HorizontalAlignment="Center" VerticalAlignment="Center" />
      </Grid>
    </ControlTemplate>
  </Button.Template>
</Button>

If we run the code as it is, we shall be able to see the button and its Press Me content, but the button will not react visually to mouse-over or press events.
That is because, once we replace the button's template, we have to provide our own handling of the visual changes for the different button states. Now, let's discuss how we want the button to look in the different states and how we want it to handle the transitions between them. When the mouse is over the button, we want a blue border to appear; the animation to achieve this can be fast or even instantaneous. When the button is pressed, we want it to scale down significantly, then scale up and down several times, each time with lower amplitude, before reaching a steady pressed state. Note that control template developers and designers usually try to avoid changing colors within animations (they are considered more complex and less intuitive); instead, they try to achieve color-changing effects by changing the opacities of several template parts. So, to turn the border blue on mouse over, let's create another border element, MouseOverBorder, with a blue BorderBrush and non-zero BorderThickness within the control template. In the Normal state, its Opacity property will be 0 and it will be completely transparent; when the state of the button changes to MouseOver, the opacity of this border will change to 1. After we add the MouseOverBorder element together with the visual state manager functionality, the resulting template code will look as follows:

<Button>
  <Button.Template>
    <ControlTemplate TargetType="Button">
      <Grid x:Name="TopLevelButtonGrid">
        <VisualStateManager.VisualStateGroups>
          <VisualStateGroup>
            <VisualStateGroup.Transitions>
              <!-- duration for the MouseOver animation is set here to 0.2 seconds -->
              <VisualTransition To="MouseOver" GeneratedDuration="0:0:0.2" />
            </VisualStateGroup.Transitions>
            <VisualState x:Name="Normal" />
            <VisualState x:Name="MouseOver">
              <VisualState.Storyboard>
                <Storyboard>
                  <!-- animation performed when the button gets into the "MouseOver" state -->
                  <DoubleAnimation Storyboard.TargetName="MouseOverBorder"
                                   Storyboard.TargetProperty="Opacity"
                                   To="1" />
                </Storyboard>
              </VisualState.Storyboard>
            </VisualState>
          </VisualStateGroup>
        </VisualStateManager.VisualStateGroups>
        <!-- Button border -->
        <Border x:Name="ButtonBorder"
                HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
                CornerRadius="5"
                Background="{TemplateBinding Background}"
                BorderBrush="{TemplateBinding BorderBrush}"
                BorderThickness="{TemplateBinding BorderThickness}">
        </Border>
        <!-- MouseOverBorder has opacity 0 normally. Only when the mouse moves over the button is the opacity changed to 1 -->
        <Border x:Name="MouseOverBorder"
                HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
                CornerRadius="5"
                BorderBrush="Blue" BorderThickness="2" Opacity="0">
        </Border>
        <!-- button content is placed here -->
        <ContentPresenter HorizontalAlignment="Center" VerticalAlignment="Center" />
      </Grid>
    </ControlTemplate>
  </Button.Template>
</Button>

Now, if we start the application, we'll see that the border of the button becomes blue when the mouse pointer is placed over it, and returns to its usual color when the pointer moves away from the button, as shown in the following screenshot:

The next step is to animate the Pressed state. To achieve this, we add a ScaleTransform object to the top-level grid of the button's template:

<ControlTemplate TargetType="Button">
  <Grid x:Name="TopLevelButtonGrid" RenderTransformOrigin="0.5,0.5">
    <Grid.RenderTransform>
      <!-- scale transform is used to shrink the button when it is pressed -->
      <ScaleTransform x:Name="OnPressedScaleTransform" ScaleX="1" ScaleY="1" />
    </Grid.RenderTransform>
    ...

The purpose of the ScaleTransform object is to shrink the button once it is pressed. Originally, its ScaleX and ScaleY parameters are set to 1, while the animation that starts when the button is pressed changes them to 0.5. This animation is defined within the VisualState named Pressed:

<VisualStateGroup>
  ...
    <VisualState x:Name="Pressed">
        <VisualState.Storyboard>
            <Storyboard>
                <!-- The animation performed when the button enters the "Pressed" state
                     scales the button down by a factor of 0.5 in both dimensions -->
                <DoubleAnimation Storyboard.TargetProperty="ScaleX"
                                 Storyboard.TargetName="OnPressedScaleTransform"
                                 To="0.5" />
                <DoubleAnimation Storyboard.TargetProperty="ScaleY"
                                 Storyboard.TargetName="OnPressedScaleTransform"
                                 To="0.5" />
            </Storyboard>
        </VisualState.Storyboard>
    </VisualState>
    ...
</VisualStateGroup>

The VisualState defines the animation storyboard to be triggered once the button switches to the Pressed state. We can also add a VisualTransition to the VisualStateGroup element's Transitions property:

<VisualStateManager.VisualStateGroups>
    <VisualStateGroup>
        <VisualStateGroup.Transitions>
            ...
            <VisualTransition To="Pressed" GeneratedDuration="0:0:0.5">
                <VisualTransition.GeneratedEasingFunction>
                    <!-- The elastic ease provides a few attenuating bounces
                         before the pressed button reaches a steady state -->
                    <ElasticEase />
                </VisualTransition.GeneratedEasingFunction>
            </VisualTransition>
        </VisualStateGroup.Transitions>
        ...
    </VisualStateGroup>
</VisualStateManager.VisualStateGroups>

VisualTransition elements allow us to modify the animation behavior depending on what the original and final states of the transition are. A VisualTransition has properties such as From and To for specifying the original and final states. In our case, we set only its To property to Pressed, which means that it applies to transitions from any state to the Pressed state. The VisualTransition sets the duration of the animation to 0.5 seconds and adds the ElasticEase easing function to it, which results in the button-size bouncing effect.

Now that we have started talking about easing functions, we can explain in detail how they work and give examples of other easing functions. Easing functions provide a way to modify Silverlight (and WPF) animations.
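To make the role of an easing function concrete, here is a language-neutral sketch in plain JavaScript (not Silverlight code). The elasticOut formula below is a common textbook approximation, shown only for illustration; Silverlight's ElasticEase is configurable and not guaranteed to match it exactly.

```javascript
// A common elastic "out" easing curve: a decaying oscillation toward 1.
// Textbook approximation for illustration; not the exact ElasticEase.
function elasticOut(x) {
  if (x === 0) return 0;
  if (x === 1) return 1;
  return Math.pow(2, -10 * x) * Math.sin((10 * x - 0.75) * (2 * Math.PI) / 3) + 1;
}

// Interpolate an animated value between v1 and v2 over a period T,
// shaped by the easing function f applied to normalized time t / T.
function animatedValue(v1, v2, t, T, f) {
  return (v2 - v1) * f(t / T) + v1;
}

// Scaling from 1 down to 0.5 over 0.5 seconds, as in the Pressed
// transition: the value starts at 1, oscillates around 0.5 along the
// way, and settles at exactly 0.5 when t reaches T.
var start = animatedValue(1, 0.5, 0, 0.5, elasticOut); // 1
var end = animatedValue(1, 0.5, 0.5, 0.5, elasticOut); // 0.5
```

The bouncing effect comes entirely from the shape of f: between t = 0 and t = T the eased progress overshoots and undershoots 1, so the interpolated scale bounces around its target before coming to rest.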
A good article describing easing functions can be found at http://tinyurl.com/arbitrarypathanimations. The easing formula presented in this article is:

v = (V2 - V1) * f(t/T) + V1

Here v is the resulting animation value, t is the time parameter, T is the time period in question (either the time between two frames in an animation with frames, or the time between the From and To values in the case of a simple animation), V2 and V1 are the animation values at the end and beginning of the animation, respectively, in the absence of easing, and f is the easing function. In this formula, we assumed a linear animation (not a spline one).

There is a bunch of built-in easing functions that come together with the Silverlight framework, for example, BackEase, BounceEase, CircleEase, and so on. For a comprehensive list of built-in easing functions, please check the following website: http://tinyurl.com/silverlighteasing. Most easing functions have parameters described on this website. As an exercise, you can change the easing function in the previous VisualTransition XAML code, modify its parameters, and observe the changes in the button animation.

Summary

Here we have just learned how to animate a built-in button using Silverlight.

Resources for Article:

Further resources on this subject:
Customized Effects with jQuery 1.4 [Article]
Python Multimedia: Fun with Animations using Pyglet [Article]
Animating Properties and Tweening Pages in Android 3-0 [Article]
Remotely Preview and test mobile web pages on actual devices with Adobe Edge Inspect

Packt
19 Mar 2013
8 min read
(For more resources related to this topic, see here.)

Creating a sample mobile web application page for our testing purpose

Let's first check out a very simple demo application that we will set up for our test. Our demo application will be a very simply structured HTML page targeted at mobile browsers. The main purpose behind building it is to showcase the various inspection and debugging capabilities of Adobe Edge Inspect.

Now, let's get started by creating a directory named adobe_inspect_test inside your local web server's webroot directory. Since I have WAMP server installed on my Windows computer, I have created the directory inside the www folder (which is the webroot for WAMP server). Create a new empty HTML file named index.html inside the adobe_inspect_test directory. Fill it with the following code snippet:

<html>
<head>
    <title>Simple demo</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0"/>
    <style type="text/css">
        html, body, p, div, br, input {
            margin: 0;
            padding: 0;
        }
        html, body {
            font-family: Helvetica;
            font-size: 14px;
            font-weight: bold;
            color: #222;
            width: 100%;
            height: 100%;
        }
        #wrapper {
            width: 100%;
            height: 100%;
            overflow: hidden;
        }
        .divStack {
            padding: 10px;
            margin: 20px;
            text-align: center;
        }
        #div1 { background: #ABA73C; }
        #div2 { background: #606873; }
        input[type=button] { padding: 5px; }
    </style>
    <script type="text/javascript">
        window.addEventListener('load', init, false);
        function init() {
            document.getElementById('btn1').addEventListener('click', button1Clicked, false);
            document.getElementById('btn2').addEventListener('click', button2Clicked, false);
        }
        function button1Clicked() {
            console.log('Button 1 Clicked');
        }
        function button2Clicked() {
            console.log('Button 2 Clicked');
        }
    </script>
</head>
<body>
    <div id="wrapper">
        <div id="div1" class="divStack"><p>First Div</p></div>
        <div id="div2" class="divStack"><p>Second Div</p></div>
        <div class="divStack">
            <input id="btn1" type="button" value="Button1" />
            <input id="btn2" type="button" value="Button2" />
        </div>
    </div>
</body>
</html>

As you can make out, we have two div elements (#div1 and #div2) and two buttons (#btn1 and #btn2). We will play around with the two div elements, making changes to their HTML markup and CSS styles when we start remote debugging. And then, with our two buttons, we will see how we can check JavaScript console log messages remotely.

Adobe Edge Inspect is compatible with Google Chrome only, so throughout our testing we will be running our demo application in Chrome. Let's run the index.html page from our web server. This is how it looks as of now:

Now, let's check if the two buttons are generating the console log messages. For that, right-click inside Chrome and select Inspect element. This will open up the Chrome web developer tools window. Click on the Console tab. Now click on the two buttons and you will see console messages based on the button clicked. The following screenshot shows it:

Everything seems to work fine, and with that our demo application is ready to be tested on a mobile device with Adobe Edge Inspect. With the demo application running in Chrome and your mobile devices paired to your computer, you will instantly see the same page opening in the Edge Inspect client app on all your mobile devices. The following image shows how the page looks on an iPhone paired to my computer.

Open the Edge Inspect web inspector window

Now that you can see our demo application on all your paired mobile devices, we are ready to remotely inspect and debug on a targeted mobile device. Click on the Edge Inspect extension icon in Chrome and select a device for inspection. I am selecting the iPhone from the list of paired devices. Now click on the Remote Inspection button to the right of the selected device name. The following image should help you:

This will open up the Edge Inspect web inspector window, also known as the weinre (WEb INspector REmote) web inspector.
This looks very similar to the Chrome web inspector window, doesn't it? So if you have experience with the Chrome web debugging tools, this should look familiar to you. The following screenshot shows it:

As you can see, by default the Remote tab opens up. The title bar says target not connected. So, although your mobile device is paired, it is not yet ready for remote inspection. Under Devices you can see your paired mobile device name with the URL of the page opened in it. Now, click on the device name to connect it. As soon as you do that, you will notice that it turns green. Congratulations! You are now ready for remote inspection as your mobile device is connected. The following screenshot shows it:

Changing HTML markup and viewing the results

Your target mobile device (the iPhone in my case) and the debug client (the Edge Inspect web inspector) are now connected to the weinre debug server, and we are ready to make some changes to the HTML. Click on the Elements tab in the Edge Inspect web inspector on your computer and you will see the HTML markup of our demo application in it. It may take a few seconds to load since the communication is via Ajax over HTTP. Now, hover your mouse over any element and you will instantly see it being highlighted on your mobile device.

Now let's make some changes to the HTML and see if they are reflected on the target mobile device. Let's change the text inside #div2 from Second Div to Second Div edited. The following screenshot shows the change made in #div2 in the Edge Inspect web inspector on my computer:

And magic! It is changed on the iPhone too. Cool, isn't it? The following screenshot shows the new text inside #div2. This is what remote debugging is all about. We are utilizing the power of Adobe Edge Inspect to overcome the limitations of the pure debugging tools in mobile browsers. Instead, make changes on your computer and see them directly on your handset.
Similarly, you can make other markup changes, such as removing an HTML element, changing element properties, and so on.

Changing CSS style rules

Now, let's make some CSS changes. Select an element in the Elements tab and the corresponding CSS styles are listed on the right. Now make changes to a style and the results will be reflected on the mobile device as well. I have selected the First Div (#div1) element. Let's remove its padding. Uncheck the checkbox against padding and the 10 px padding is removed. The following screenshot shows the CSS change made in the Edge Inspect web inspector on my computer:

And the following image shows the result being reflected on the iPhone instantly:

Similarly, you can change other style rules, such as width, height, padding, and margin, and see the changes directly on your device.

Viewing console log messages

Click on the Console tab in the Edge Inspect web inspector to open it. Now click/tap on the buttons one by one on your mobile device. You will see that log messages are being printed on the console for the corresponding button clicked/tapped on the mobile device. This way you can debug JavaScript code remotely. Edge Inspect lacks JavaScript debugging using breakpoints (which would have been really handy, letting us watch local and global variables and function arguments by pausing execution and observing the state), but by using console messages you can at least know that your JavaScript code is executing correctly up to the point where the log is generated. So basic script debugging can be done. The following screenshot shows the console messages printed remotely from the paired iPhone:

Similarly, you can target another device for remote inspection and see the changes directly on the device. With that, we have covered how to remotely debug and test web applications running in a mobile device browser.
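One practical caveat when relying on console messages: some older mobile browsers do not define the console object at all, so raw console.log calls can throw outside a debugging session. A small defensive wrapper, a generic pattern assumed here for illustration rather than anything Edge Inspect requires, keeps the page safe either way:

```javascript
// Defensive logging helper: degrades to a no-op on browsers that do not
// define the console object. A generic pattern for illustration, not
// part of the Adobe Edge Inspect API.
function safeLog(message) {
  if (typeof console !== 'undefined' && typeof console.log === 'function') {
    console.log(message);
  }
}

safeLog('Button 1 Clicked'); // appears in the weinre console when connected
```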
Summary

In this article, we have seen the most important feature of Adobe Edge Inspect: how we can remotely preview, inspect, and debug a mobile web page. We first created a sample mobile web page, then changed its HTML markup and CSS styles through remote testing with Adobe Edge Inspect, and saw the results instantly on a paired device.

Resources for Article:

Further resources on this subject:
Adapting to User Devices Using Mobile Web Technology [Article]
Getting Started on UDK with iOS [Article]
Getting Started with Internet Explorer Mobile [Article]


Creating mobile friendly themes

Packt
15 Mar 2013
3 min read
(For more resources related to this topic, see here.)

Getting ready

Before we start working on the code, we'll need to copy the basic gallery example code from the previous example and rename the folder and files to be prefixed with mobile. After this is done, our folder and file structure should look like the following screenshot:

How to do it...

We'll start off with our galleria.mobile.js theme JavaScript file:

(function($) {
    Galleria.addTheme({
        name: 'mobile',
        author: 'Galleria How-to',
        css: 'galleria.mobile.css',
        defaults: {
            transition: 'fade',
            swipe: true,
            responsive: true
        },
        init: function(options) {
        }
    });
}(jQuery));

The only difference here from our basic theme example is that we've enabled the swipe parameter for navigating images and the responsive parameter so we can use different styles for different device sizes.

Then, we'll provide the additional CSS file using media queries to match only smartphones. Add the following rules to the already present code in the galleria.mobile.css file:

/* for smartphones */
@media only screen and (max-width: 480px) {
    .galleria-stage,
    .galleria-thumbnails-container,
    .galleria-image-nav,
    .galleria-info {
        width: 320px;
    }
    #galleria {
        height: 300px;
    }
    .galleria-stage {
        max-height: 410px;
    }
}

Here, we're targeting any device with a width of less than 480 pixels. This should match smartphones in landscape and portrait mode. These styles will override the default styles when the width of the browser is less than 480 pixels.

Then, we wire it up just like the previous theme example. Modify the galleria.mobile-example.html file to include the following code snippet for bootstrapping the gallery:

<script>
    $(document).ready(function(){
        Galleria.loadTheme('galleria.mobile.js');
        Galleria.run('#galleria');
    });
</script>

We should now have a gallery that scales well for smaller device sizes, as shown in the following screenshot:

How it works...
The responsive option tells Galleria to actively detect screen size changes and to redraw the gallery when they are detected. Then, our responsive CSS rules style the gallery differently for different device sizes. You can also provide additional rules so that the gallery is styled differently in portrait versus landscape mode. Galleria will detect when the device screen orientation has changed and apply the new styles accordingly.

Media queries

A very good list of media queries that can be used to target different devices for responsive web design is available at http://css-tricks.com/snippets/css/media-queries-for-standard-devices/.

Testing for mobile

The easiest way to test the mobile theme is to simply resize the browser window to the size of the mobile device. This simulates the mobile device screen size and allows the use of the standard web development tools that modern browsers provide.

Integrating with existing sites

In order for this to work effectively with existing websites, the existing styles will also have to play nicely with mobile devices. This means that an existing site, if trying to integrate a mobile gallery, will also need to have its own styles scale correctly to mobile devices. If this is not the case and the mobile device switches to zoom-like navigation mode for the site, the mobile gallery styles won't ever kick in.

Summary

In this article we have seen how to create mobile friendly themes using responsive web design.

Resources for Article:

Further resources on this subject:
jQuery Mobile: Organizing Information with List Views [Article]
An Introduction to Rhomobile [Article]
Creating and configuring a basic mobile application [Article]


Common design patterns and how to prototype them

Packt
15 Mar 2013
7 min read
(For more resources related to this topic, see here.)

When you tackle a design challenge and decide to re-use a solution that was used in the past, you probably have a design pattern. Design patterns are commonly used concepts that beg to be re-used. They save time, they are well recognized by a majority of users, they provide their intended functionality, and they always work (well, assuming you selected the right design pattern and used it in the right place). In this article, we will prototype one of the commonly used design patterns: the Module tabs pattern.

Once you have created a design pattern prototype, you can copy and paste it into an Axure project. A better alternative is to create a custom widgets library and load it into your Axure environment, making it handy for every new prototype you make.

Module tabs

The Module tabs pattern allows the user to browse through a series of tabs without refreshing the page.

Why use it?

The Module tabs design pattern is commonly used in the following situations:

When we have a group of closely related content, and we do not want to expose it to the user all at once (in order to decrease cognitive load, because we have limited space, or maybe in order to avoid a second navigation bar).

When we do not care for (or maybe even prefer) a use case where a visitor interacts with only one pane at a time, thereby limiting their ability to easily compare information or process data simultaneously.

Hipmunk.com is using it

A module tabs design is used on www.hipmunk.com for a quick toggle between Flights and Hotels. By default, the user is exposed to Hipmunk's prime use case, which is the flight search.

Prototyping it

This is a classic use of the Axure dynamic panels. We will use a mixture of button-shaped widgets along with a dynamic panel. A module tabs design is comprised of a set of clickable tabs, located horizontally on top of a dynamic content area. Tabs can also be located vertically on the left side of the content area.
We will start with creating a content area having three states:

1. Drag a Rectangle-Dropshadow widget from the Better Defaults library into the wireframe and resize it to fit a large enough content area. This will be our background image for all content states.
2. Now drag a Dynamic Panel widget, locate it on top of the rectangle, and resize it to fit the blank area.
3. Label the dynamic panel with a descriptive name (the Label field is in the Widget Properties pane). In this example we will use the label: content-area dPanel.
4. Examine your dynamic panel on the Dynamic Panel Manager pane, and add two more states under it. Since we are in the travel business, you can name them flight, hotel, and car (these states are going to include each tab's associated content).
5. Put some text in each of the three dynamic panel states to easily designate them upon state change (later on you can replace this text with some real content).

In the screenshot that follows, you can see the content-area dPanel with its three states.

Next, we will add three tab buttons to control these three states:

1. We are going to experiment a little bit with styles and aesthetics, so let's hide the dynamic panel object from view by clicking on the small, blue box next to its name on the Dynamic Panel Manager pane. Hiding a dynamic panel has no functional implication; it will still be generated in the prototype.
2. Drag one basic Rectangle widget, and place it above our large dropshadow rectangle. Resize the widget so that it will have the right proportions. In the following screenshot, you can see the new rectangle widget location. Also, you can see what happens when we hide the dynamic panel from view:

When a tab is selected (becomes active), it is important to make it look like it is part of the active area. We can achieve this by giving it the same color as the active state background, as well as eliminating any border line separating them.
But let's start with reshaping the rectangle widget, making it look like a tab:

1. Right-click on the Rectangle widget and edit its shape (click on Edit button shape and select a shape). Choose the rounded top shape. This shape has no bottom border line, and that's why we like it.
2. Notice the small yellow triangle on top of the rounded top. Try to drag it to the left. You will see that we can easily control the rounded corners' angle, making the widget look much more like a tab button (see the screenshot that follows).
3. Now that we have a nice-looking tab button, we can move forward. By default, the tab should have a gray color, which designates an unselected state. So, select the widget and click on the Fill color button on the Axure toolbar to set its default color to gray.
4. Once the tab is selected, it needs to be painted white, so that it easily blends with the content area background color under it. To do that, right-click on the tab and edit its selected style (Edit button shape | Edit selected style); the fill color should be set to white (select the preview checkbox to verify that).
5. We are almost done. Duplicate the tab and place three identical widgets above the content area. Move them one pixel down so that they overlap and hide the rectangle border line underneath them. In the screenshot that follows, you can see how the three tabs are placed on top of the white rectangle in such a way that they hide the rectangle's border line.
6. Examine the split square in the upper-left corner of each tab widget you created. You can preview the rollover style and selected state style of the widget by hovering over and clicking on it. Try it!

The last thing to do is to add interactions. We would like to set a tab to the selected state upon mouse click, and to change the state of content-area dPanel accordingly. Put a name on each tab (Flight, Hotel, and Car), and give each tab a descriptive label.
Set the OnClick event for each tab to perform two actions:

Set Widget(s) to selected state: sets the widget that was clicked to its selected style.
Set Panel state(s) to state(s): sets the dynamic panel state to the relevant content (Flight, Hotel, or Car).

Since we would like to have only one selected widget at a time, we should put the three of them in a selection group. Do that by selecting all three widgets, right-clicking, and choosing the menu option Assign Selection Group.

Each time this page is loaded, the Flight tab needs to be selected. To do that, we will configure the OnPageLoad event to set the Flight tab to be selected (find it in the pane located under the wireframe). By the way, there is no need to set the dynamic panel state to flight as well, since it is already located on top of the three states in the Dynamic Panel Manager pane, making it the default state.

In the screenshot that follows, you can see the Page Interactions pane located under the wireframe area. You can also see that the flight state is the one on top of the list, which makes it the dynamic panel's default view.

The menu navigation widget cannot be used as an alternative for the tab buttons design explained here. The reason is that the navigation menu does not support the Selection Group option.

Summary

In this article, we learned about the Module tabs pattern, one of the commonly used UI design patterns, and how to prototype it using the key features of Axure. We can now work more efficiently with Axure and will be able to deliver detailed designs much faster.

Resources for Article:

Further resources on this subject:
Using Prototype Property in JavaScript [Article]
Creating Your First Module in Drupal 7 Module Development [Article]
Axure RP 6 Prototyping Essentials: Advanced Interactions [Article]


Magento: Payment and shipping method

Packt
13 Mar 2013
4 min read
(For more resources related to this topic, see here.)

Payment and shipping method

Magento CE comes with several payment and shipping methods out of the box. Since the total payment is calculated based on the order and shipping cost, it makes sense to first define our shipping method. The available shipping methods can be found in the Magento Admin Panel under the Shipping Methods section in System | Configuration | Sales.

The Flat Rate, Table Rates, and Free Shipping methods fall under the category of static methods, while the others are dynamic. Dynamic means retrieval of rates from various shipping providers. Static means that shipping rates are based on a predefined set of rules. For a live production store you might be interested in obtaining a merchant account for one of the dynamic methods, because they enable potentially more precise shipping cost calculation with regard to product weight.

A clean installation of Magento CE comes with the Flat Rate shipping method turned on, so be sure to turn it off in production if it is not required, by setting the Enabled option in System | Configuration | Sales | Shipping Methods | Flat Rate to No.

Setting up the dynamic methods is pretty easy; all you need to do is obtain the access data from the shipping provider, such as FedEx, and then configure that access data under the proper shipping method configuration area, for example, System | Configuration | Sales | Shipping Methods | FedEx.

Payment method configuration is available under System | Configuration | Sales | Payment Methods. Similar to shipping methods, these are divided into two main groups, static and dynamic. Dynamic in this case means that an external payment gateway provider such as PayPal will actually charge the customer's credit card upon successful checkout. Static simply means that checkout will be completed, but you as a merchant will have to make sure that the customer actually paid for the order prior to shipping the products to them.
A clean installation of Magento CE comes with Saved CC, Check / Money order, and Zero Subtotal Checkout turned on, so be sure to turn these off in production if they are not required.

How to do it...

To configure a shipping method:

1. Log in to the Magento Admin Panel and go to System | Configuration | Sales | Shipping Methods.
2. Select an appropriate shipping method, configure its options, and click on the Save Config button.

To configure a payment method:

1. Log in to the Magento Admin Panel and go to System | Configuration | Sales | Payment Methods.
2. Select an appropriate payment method, configure its options, and click on the Save Config button.

How it works...

Once a certain shipping method is turned on, it will be visible on the frontend to the customer during the checkout's so-called Shipping Method step, as shown in the screenshot that follows. The shipping method's price is based on the customer's shipping address, the products in the cart, applied promo rules, and possibly other parameters.

Numerous other shipping modules are provided at Magento Connect at http://www.magentocommerce.com/magento-connect/integrations/shipping-fulfillment.html, and new ones are uploaded often, so this is by no means a final list of shipping methods.

Once a certain payment method is turned on, it will be visible on the frontend to the customer during the checkout's so-called Payment Information step, as shown in the following screenshot:

Additionally, there are numerous other payment modules provided at Magento Connect at http://www.magentocommerce.com/magento-connect/integrations/payment-gateways.html.

Summary

In this article, we have explained the payment and shipping methods you may require when you build your own shop using Magento.

Resources for Article:

Further resources on this subject:
Creating and configuring a basic mobile application [Article]
Installing WordPress e-Commerce Plugin and Activating Third-party Themes [Article]
Magento: Exploring Themes [Article]

YUI Test

Packt
11 Mar 2013
5 min read
(For more resources related to this topic, see here.)

YUI Test is one of the most popular JavaScript unit testing frameworks. Although YUI Test is part of the Yahoo! User Interface (YUI) JavaScript library (YUI is an open source JavaScript and CSS library that can be used to build Rich Internet Applications), it can be used to test any independent JavaScript code that does not use the YUI library. YUI Test provides a simple syntax for creating JavaScript test cases that can run either from the browser or from the command line; it also provides a clean mechanism for testing asynchronous (Ajax) JavaScript code. If you are familiar with the syntax of xUnit frameworks (such as JUnit), you will find the YUI Test syntax familiar.

In YUI Test, there are different ways to display test results. You can display the test results in the browser console or develop your custom test runner pages to display the test results. It is preferable to develop custom test runner pages in order to display the test results in all browsers, because some browsers do not support the console object. The console object is supported in Firefox with Firebug installed, Safari 3+, Internet Explorer 8+, and Chrome.

Before writing your first YUI test, you need to know the structure of a custom YUI test runner page. We will create the test runner page, BasicRunner.html, which will be the basis for all the test runner pages used in this article. In order to build the test runner page, first of all you need to include the YUI JavaScript file yui-min.js, served from the Yahoo! Content Delivery Network (CDN), in the BasicRunner.html file. At the time of this writing, the latest version of YUI Test is 3.6.0, which is the one used in this article.

After including the YUI JavaScript file, we need to create and configure a YUI instance using the YUI().use API. The YUI().use API takes the list of YUI modules to be loaded.
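The YUI().use bootstrapping step just described can be sketched as follows. In a real page, the YUI() global comes from yui-min.js loaded off the Yahoo! CDN; the minimal stub below merely stands in for it so this sketch is self-contained and runnable:

```javascript
// Minimal stand-in for the YUI() global that yui-min.js would provide.
// It only models the shape of the call used below: use(...moduleNames,
// callback), where the callback receives the YUI instance Y.
function YUI() {
  return {
    use: function () {
      var args = Array.prototype.slice.call(arguments);
      var callback = args.pop();        // the last argument is the callback
      var Y = { loadedModules: args };  // stand-in for the YUI instance
      callback(Y);                      // real YUI invokes this asynchronously
      return Y;
    }
  };
}

var seenModules = null;
YUI().use('test', 'console', function (Y) {
  // Y represents the YUI instance; tests are written inside this callback
  seenModules = Y.loadedModules;
});
```

The important point is the calling convention: module names first, then a single callback that only runs once those modules have been fetched and attached.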
For the purpose of testing, we need the YUI 'test' and 'console' modules (the 'test' module is responsible for creating the tests, while the 'console' module is responsible for displaying the test results in a nifty console component). Then, the YUI().use API takes the test's callback function, which is called asynchronously once the modules are loaded. The Y parameter in the callback function represents the YUI instance.

As shown in the following code snippet taken from the BasicRunner.html file, you can write the tests in the provided callback and then create a console component using the Y.Console object:

code 1

The console object is rendered as a block element by setting the style attribute to 'block', and the results within the console can be displayed in the sequence of their execution by setting the newestOnTop attribute to false. Finally, the console component is created on the log div element. Now you can run the tests, and they will be displayed automatically by the YUI console component. The following screenshot shows the BasicRunner.html file's console component without any developed tests:

Writing your first YUI test

A YUI test can contain test suites, test cases, and test functions. A YUI test suite is a group of related test cases. Each test case includes one or more test functions for the JavaScript code. Every test function should contain one or more assertions in order to perform the tests and verify the outputs. The YUI Test.Suite object is responsible for creating a YUI test suite, while the YUI Test.Case object creates a YUI test case. The add method of the Test.Suite object is used for attaching the test case object to the test suite. The following code snippet shows an example of a YUI test suite:

code 1

As shown in the preceding code snippet, two test cases are created. The first test case is named testcase1; it contains two test functions, testFunction1 and testFunction2.
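A complete runner page combining the pieces described above might look like the following. This is a hedged reconstruction based on standard YUI 3 conventions, not the book's exact listing: the CDN URL, the assertions inside the test functions, and the markup are assumptions (the article only specifies the 'test' and 'console' modules, the style and newestOnTop settings, and the log div).

```html
<!-- Sketch of a BasicRunner.html-style page; CDN path assumed for YUI 3.6.0 -->
<html>
<head>
  <script src="http://yui.yahooapis.com/3.6.0/build/yui/yui-min.js"></script>
</head>
<body>
  <div id="log"></div>
  <script>
  YUI().use('test', 'console', function (Y) {
    // A test case: functions whose names start with "test" are run as tests.
    var testcase1 = new Y.Test.Case({
      name: 'testcase1',
      testFunction1: function () { Y.Assert.isTrue(true); },
      testFunction2: function () { Y.Assert.areEqual(2, 1 + 1); }
    });

    // Group related test cases into a suite.
    var suite = new Y.Test.Suite('Testsuite');
    suite.add(testcase1);

    // Console component rendered on the log div, oldest results first.
    new Y.Console({
      style: 'block',
      newestOnTop: false
    }).render('#log');

    Y.Test.Runner.add(suite);
    Y.Test.Runner.run();
  });
  </script>
</body>
</html>
```

Opening such a page in a browser should run the suite and display pass/fail results in the console component on the log div.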
In YUI Test, you can create a test function simply by starting the function name with the word "test". The second test case is named testcase2, and it contains a single test function, testAnotherFunction. A test suite is created with the name Testsuite. Finally, testcase1 and testcase2 are added to the Testsuite test suite.

In YUI Test, you also have the option of creating a friendly test name for a test function, as follows:

code 1

The "some Testcase" test case contains two tests with the names "The test should do X" and "The test should do Y".

Summary

In this article, we learned the basics of the YUI Test framework, which is used for testing JavaScript applications, and walked through how to write a YUI test using the Test.Suite object (responsible for creating a YUI test suite) and the Test.Case object (for creating a YUI test case).

Resources for Article :

Further resources on this subject: Using Javascript Effects to enhance your Joomla! website for Visitors [Article] Exclusive Offer On jQuery Books: Winner Of 2010 Open-Source JavaScript Libraries [Article] Basics of Exception Handling Mechanism in JavaScript Testing [Article]
A look into the high-level programming operations for the PHP language

Packt
11 Mar 2013
3 min read
(For more resources related to this topic, see here.)

Accessing documentation

PhpStorm offers four different operations that will help you access documentation: Quick Definition, Quick Documentation, Parameter Info, and External Documentation. The first one, Quick Definition, presents the definition of a given symbol. You can use it for a variable, function, method, or class. Quick Documentation allows easy access to DocBlocks. It can be used for all kinds of symbols: variables, functions, methods, and classes. The next operation, Parameter Info, presents simplified information about a function or method interface. Finally, External Documentation will help you access the official PHP documentation available at php.net.

Their shortcuts are as follows:

Quick Definition (Ctrl + Shift + I, Mac: alt + Space bar)
Quick Documentation (Ctrl + Q, Mac: F1)
Parameter Info (Ctrl + P, Mac: command + P)
External Documentation (Shift + F1, Mac: shift + F1)

The Esc (Mac: shift + esc) hotkey will close any of the previous windows.

If you place the cursor inside the parentheses of $s = str_replace(); and run Parameter Info (Ctrl + P, Mac: command + P), you will get a hint showing all the parameters of the str_replace() function. If that is not enough, place the cursor inside the str_replace function name and press Shift + F1 (Mac: shift + F1). You will get the manual page for the function.

If you want to test the next operation, open the project created in the Quick start – your first PHP application section and place the cursor inside the class name Controller in the src/My/HelloBundle/Controller/DefaultController.php file. The place where you should place the cursor is denoted with the bar | in the following code:

class DefaultController extends Cont|roller
{
}

The Quick Definition operation will show you the class definition:

The Quick Documentation operation will show you the documentation defined with PhpDoc blocks. PhpDoc is a formal standard for commenting on PHP code.
The official documentation is available at http://www.phpdoc.org.

Generators

PhpStorm enables you to do the following:

Implement magic methods
Override inherited methods
Generate constructors, getters, setters, and docblocks

All of these operations are available in Code | Generate (Alt + Insert, Mac: command + N). Perform the following steps:

Create a new class Person and place the cursor at the position of |:

class Person {
    |
}

The Generate dialog box will contain the available operations; the Implement Methods dialog box contains all the available magic methods.

Create a class with two private properties:

class Lorem {
    private $ipsum;
    private $dolor;
}

Then go to Code | Generate | Getters and Setters. In the dialog box, select both properties and press OK. PhpStorm will generate the following methods:

class Lorem {
    private $ipsum;
    private $dolor;

    public function setDolor($dolor) {
        $this->dolor = $dolor;
    }

    public function getDolor() {
        return $this->dolor;
    }

    public function setIpsum($ipsum) {
        $this->ipsum = $ipsum;
    }

    public function getIpsum() {
        return $this->ipsum;
    }
}

Next, go to Code | Generate | DocBlocks and in the dialog box select all the properties and methods. PhpStorm will generate docblocks for each of the selected properties and methods, for example:

/**
 * @param $dolor
 */
public function setDolor($dolor) {
    $this->dolor = $dolor;
}

Summary

We just learned that in some cases you don't have to type the code at all, as it can be generated automatically. The generators discussed in this article lift the burden of typing setters, getters, and magic functions from your shoulders. We also looked at different ways to access documentation.

Resources for Article :

Further resources on this subject: Installing PHP-Nuke [Article] An Introduction to PHP-Nuke [Article] Creating Your Own Theme—A Wordpress Tutorial [Article]
Modules

Packt
08 Mar 2013
6 min read
(For more resources related to this topic, see here.)

We are going to master modules, which will help us break our application up into sensible components. Our configuration file for modules is given as follows:

var guidebookConfig = function($routeProvider) {
  $routeProvider
    .when('/', {
      controller: 'ChaptersController',
      templateUrl: 'view/chapters.html'
    })
    .when('/chapter/:chapterId', {
      controller: 'ChaptersController',
      templateUrl: 'view/chapters.html'
    })
    .when('/addNote/:chapterId', {
      controller: 'AddNoteController',
      templateUrl: 'view/addNote.html'
    })
    .when('/deleteNote/:chapterId/:noteId', {
      controller: 'DeleteNoteController',
      templateUrl: 'view/addNote.html'
    });
};

var Guidebook = angular.module('Guidebook', []).config(guidebookConfig);

The last line is where we define our Guidebook module. This creates a namespace for our application. Look at how we defined the add and delete note controllers:

Guidebook.controller('AddNoteController',
  function ($scope, $location, $routeParams, NoteModel) {
    var chapterId = $routeParams.chapterId;
    $scope.cancel = function() {
      $location.path('/chapter/' + chapterId);
    }
    $scope.createNote = function() {
      NoteModel.addNote(chapterId, $scope.note.content);
      $location.path('/chapter/' + chapterId);
    }
  }
);

Guidebook.controller('DeleteNoteController',
  function ($scope, $location, $routeParams, NoteModel) {
    var chapterId = $routeParams.chapterId;
    NoteModel.deleteNote(chapterId, $routeParams.noteId);
    $location.path('/chapter/' + chapterId);
  }
);

Since Guidebook is a module, we're really making a call to module.controller() to define our controller. We also called Guidebook.service(), which is really just module.service(), to define our models. One advantage of defining controllers, models, and other components as part of a module is that it groups them together under a common name.
This way, if we have multiple applications on the same page, or content from another framework, it's easy to remember which components belong to which application. The second advantage of defining components as part of a module is that AngularJS will do some heavy lifting for us. Take our NoteModel, for instance. We defined it as a service by calling module.service(). AngularJS provides some extra functionality to services within a module. One example of that extra functionality is dependency injection. This is why, in our DeleteNoteController, we can include our note model simply by asking for it in the controller's function:

Guidebook.controller('DeleteNoteController',
  function ($scope, $location, $routeParams, NoteModel) {
    var chapterId = $routeParams.chapterId;
    NoteModel.deleteNote(chapterId, $routeParams.noteId);
    $location.path('/chapter/' + chapterId);
  }
);

This only works because we registered both DeleteNoteController and NoteModel as parts of the same module.

Let's talk a little more about NoteModel. When we define controllers on our module, we call module.controller(). For our models, we called module.service(). Why isn't there a module.model()? Well, it turns out models are a good bit more complicated than controllers. In our Guidebook application, our models were relatively simple, but what if we were mapping our models to some external API, or a full-fledged relational database? Because there are so many different types of models with vastly different levels of complexity, AngularJS provides three ways to define a model: as a service, as a factory, and as a provider. We'll now see how we defined a service in our NoteModel:

Guidebook.service('NoteModel', function() {
  this.getNotesForChapter = function(chapterId) { ... };
  this.addNote = function(chapterId, noteContent) { ... };
  this.deleteNote = function(chapterId, noteId) { ... };
});

It turns out this is the simplest type of model.
We're simply defining an object with a number of functions that our controllers can call. This is all we needed in our case, but what if we were doing something a little more complicated? Let's pretend that instead of using HTML5 local storage to store and retrieve our note data, we're getting it from a web server somewhere. We'll need to add some logic to set up and manage the connection with that server. The service definition can help us a little here with the setup logic; we're returning a function, so we could easily add some initialization logic. When we get into managing the connection, though, things get a little messy. We'll probably want a way to refresh the connection, in case it drops. Where would that go? With our service definition, our only option is to add it as a function, like we have for getNotesForChapter(). This is really hacky; we're now exposing how the model is implemented to the controller, and worse, we would be allowing the controller to refresh the connection. In this case, we would be better off using a factory. Here's what our NoteModel would look like if we had defined it as a factory:

Guidebook.factory('NoteModel', function() {
  return {
    getNotesForChapter: function(chapterId) { ... },
    addNote: function(chapterId, noteContent) { ... },
    deleteNote: function(chapterId, noteId) { ... }
  };
});

Now we can add more complex initialization logic to our model. We could define a few functions for managing the connection privately in our factory initialization, and give our data methods references to them. The following is what that might look like:

Guidebook.factory('NoteModel', function() {
  var refreshConnection = function() { ... };
  return {
    getNotesForChapter: function(chapterId) {
      ...
      refreshConnection();
      ...
    },
    addNote: function(chapterId, noteContent) {
      ...
      refreshConnection();
      ...
    },
    deleteNote: function(chapterId, noteId) {
      ...
      refreshConnection();
      ...
    }
  };
});

Isn't that so much cleaner?
We've once again kept our model's internal workings completely separate from our controller. Let's get even crazier. What if we backed our models with a complex database server with multiple endpoints? How could we configure which endpoint to use? With a service or a factory, we could initialize this in the model. But what if we have a whole bunch of models? Do we really want to hardcode that logic into each one individually? At this point, the model shouldn't be making this decision. We want to choose our endpoints in a configuration file somewhere. Reading from a configuration file in either a service or a factory would be awkward. Models should focus solely on storing and retrieving data, not messing around with configuration updates. Provider to the rescue! If we define our NoteModel as a provider, we can do all kinds of configuration from some neatly abstracted configuration file. Here's how our NoteModel looks if we convert it into a provider:

Guidebook.provider('NoteModel', function() {
  this.endpoint = 'defaultEndpoint';

  this.setEndpoint = function(newEndpoint) {
    this.endpoint = newEndpoint;
  };

  this.$get = function() {
    var endpoint = this.endpoint;
    var refreshConnection = function() {
      // reconnect to endpoint
    };
    return {
      getNotesForChapter: function(chapterId) {
        ...
        refreshConnection();
        ...
      },
      addNote: function(chapterId, noteContent) {
        ...
        refreshConnection();
        ...
      },
      deleteNote: function(chapterId, noteId) {
        ...
        refreshConnection();
        ...
      }
    };
  };
});

Now if we add a configuration file to our application, we can configure the endpoint from outside the model:

Guidebook.config(function(NoteModelProvider) {
  NoteModelProvider.setEndpoint('anEndpoint');
});

Providers give us a very powerful architecture. If we have several models and several endpoints, we can configure all of them in one single configuration file. This is ideal, as our models remain single-purpose and our controllers still have no knowledge of the internals of the models they depend on.
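The reason the factory and provider forms can keep refreshConnection private is a plain JavaScript closure. Here is the same idea stripped of Angular entirely; the names (createNoteModel, refreshCount) are illustrative only, a sketch of the closure technique rather than Angular's implementation:

```javascript
// Framework-free sketch of the factory pattern's private-helper trick.
function createNoteModel() {
  let refreshCount = 0; // private: lives only in this closure

  const refreshConnection = function () {
    refreshCount += 1; // a real model would reconnect to its server here
  };

  // Only the data methods are exposed; refreshConnection is unreachable
  // from the outside, so controllers cannot misuse it.
  return {
    getNotesForChapter: function (chapterId) {
      refreshConnection();
      return { chapterId: chapterId, refreshes: refreshCount };
    },
  };
}

const model = createNoteModel();
model.getNotesForChapter('chapter1');
const second = model.getNotesForChapter('chapter1');
// second.refreshes is 2, and model.refreshConnection is undefined
```

Every call to a data method goes through the private helper, yet nothing outside the closure can call or replace the helper itself, which is exactly the separation argued for above.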
Summary

Thus, we saw in this article that modules provide an easy, flexible way to create components in AngularJS. We've seen three different ways to define a model, and we've discussed the benefits of using a module to create an application namespace.

Resources for Article :

Further resources on this subject: Framework Comparison: Backbase AJAX framework Vs Other Similar Framework (Part 1) [Article] Getting Started With Spring MVC - Developing the MVC components [Article] ASP.NET MVC Framework [Article]
Planning your lessons using iPad

Packt
04 Mar 2013
5 min read
(For more resources related to this topic, see here.)

Getting ready

We begin by downloading the Planbook app from the App Store. It costs $9.99 but is definitely worth the money. You may also like to download its Mac app from the Mac App Store, if you wish to keep your iPad and Mac in sync.

How to do it...

The home screen of the app shows the available Planbooks, but as it is your first launch of the app, there will be no available Planbooks. To start creating lesson plans, you need to choose the option Create New Planbook or import one from your Dropbox folder. Planbook provides excellent customization features to suit your exact needs. You can add as many courses as you want by clicking on Add Courses. You can specify the dates in the Dates you want included in your Planbook textbox as a date range, and even select the type of schedule you use, as illustrated in the next screenshot:

Now that you have specified the major features of your plan, you need to click on Continue to specify the time range of each course of a day. Planbook provides standard iOS date pickers to choose start and end times for each course. After putting in all the details for a lesson plan, you need to click on Continue, which will take you back to the home screen, where you can now see the Planbook you just made in the list of available files. If you wish to, you can edit the name of your Planbook, delete it, or e-mail it. As we want to move forward to create your detailed plan, you should select Open Planbook File for your plan.

You should now see your lesson plan for one entire week. Each course is represented by a different color for each day, and its contents are shown in the very same place, minimizing unnecessary navigation. As you have created the complete outline of your lesson plan, it is time to specify the content of each course. It's easy—just tap on the course you want to create content for, and a screen with many user-friendly customization features will appear.
This is shown in the next screenshot:

Here you can enter your course content, divided into six different text fields in the Text Fields column at the right-hand side, with even the title of each field being editable! You can specify assignments in the Assignments textbox for the course, use attachments via the Files and Web Links in the Attachments option related to the course content, and even add keywords in Keywords, which might help you with easy lookup.

If you wish to see your lesson plan on the basis of a single day, simply click on the Day tab at the bottom, as shown in the following screenshot, and you will see a nice, clean day plan. Tapping on a course still works to edit it! You can tap on a course with two fingers if you want to view it in detail. You will find this small feature very useful when you need to have a quick look at the course content while in class.

One of the reasons I was so impressed with this app was the set of powerful editing and navigation options it provides. The gear icon at the bottom-right corner of each course pops up a box of tools. You can use this box to:

Bump Lesson: You can shift the course further by one day using this option. The course content for the selected day becomes blank.

Yank Lesson: This option removes the course for the selected day. The entire course shifts back by one day.

Make Non-School Day: This option either shifts all the courses of the day further by one day, or deletes the content of all courses of the day, depending on the user's choice.

You have a Courses button in the top-right corner, which lets you show/hide courses in the plan. You also have a Share button beside the Courses button that will let you print or e-mail the Planbook, in a day-wise or week-wise arrangement depending on the active tab. To navigate between weeks/days, swipe from the left- to right-hand side or from the right- to left-hand side to move to the previous or next week/day respectively.
You can also click on the Week or Day options present at the bottom of the screen to navigate to that particular week/day.

How it works...

We have now created a Planbook as per our requirements. You can create multiple Planbooks in the same way. The Planbook app saves each Planbook as a file on the device that you can retrieve, modify, and delete at any later stage, even when you are offline.

There's more...

The iPad App Store has many other apps apart from Planbook (which we chose due to its simplicity and powerful features) that facilitate planning lessons. There is an app, My LessonPlan, which costs $2.99, but is more advantageous in classes where even students have access to personal devices. iPlanLessons, costing $6.99, is another powerful app and can be considered an alternative to Planbook.

Summary

In this article we saw how the Planbook app makes it very easy to create, modify, and share plans for daily planning purposes. We also saw how it is an easy-to-use app for a beginner who is not comfortable and proficient with iPad apps.

Resources for Article :

Further resources on this subject: iPhone User Interface: Starting, Stopping, and Multitasking Applications [Article] iPhone: Issues Related to Calls, SMS, and Contacts [Article] Create a Local Ubuntu Repository using Apt-Mirror and Apt-Cacher [Article]
Core .NET Recipes

Packt
26 Feb 2013
15 min read
(For more resources related to this topic, see here.)

Implementing the validation logic using the Repository pattern

The Repository pattern abstracts out data-based validation logic. It is a common misconception that to implement the Repository pattern you require a relational database, such as MS SQL Server, as the backend. Any collection can be treated as a backend for a Repository pattern. The only point to keep in mind is that the business logic or validation logic must treat it as a database for saving, retrieving, and validating its data.

In this recipe, we will see how to use a generic collection as the backend and abstract out the validation logic for it. The validation logic makes use of an entity that represents the data related to the user, and a class that acts as the repository for the data, allowing certain operations. In this case, the operations will include checking whether the user ID chosen by the user is unique or not.

How to do it...

The following steps will help check the uniqueness of a user ID that is chosen by the user:

Launch Visual Studio .NET 2012. Create a new project of the Class Library project type. Name it CookBook.Recipes.Core.CustomValidation.

Add a folder to the project and set the folder name to DataModel. Add a new class and name it User.cs.

Open the User class and create the following public properties:

Property name | Data type
UserName | String
DateOfBirth | DateTime
Password | String

Use the automatic property functionality of .NET to create the properties. The final code of the User class will be as follows:

namespace CookBook.Recipes.Core.CustomValidation
{
    /// <summary>
    /// Contains details of the user being registered
    /// </summary>
    public class User
    {
        public string UserName { get; set; }
        public DateTime DateOfBirth { get; set; }
        public string Password { get; set; }
    }
}

Next, let us create the repository. Add a new folder and name it Repository. Add an interface to the Repository folder and name it IRepository.cs.
The interface will be similar to the following code snippet:

public interface IRepository
{
}

Open the IRepository interface and add the following methods:

Name | Description | Parameter(s) | Return type
AddUser | Adds a new user | User object | void
IsUsernameUnique | Determines whether the username is already taken or not | string | Boolean

After adding the methods, IRepository will look like the following code:

namespace CookBook.Recipes.Core.CustomValidation
{
    public interface IRepository
    {
        void AddUser(User user);
        bool IsUsernameUnique(string userName);
    }
}

Next, let us implement IRepository. Create a new class in the Repository folder and name it MockRepository. Make the MockRepository class implement IRepository. The code will be as follows:

namespace CookBook.Recipes.Core.Data.Repository
{
    public class MockRepository : IRepository
    {
        #region IRepository Members

        /// <summary>
        /// Adds a new user to the collection
        /// </summary>
        /// <param name="user"></param>
        public void AddUser(User user)
        {
        }

        /// <summary>
        /// Checks whether a user with the username already exists
        /// </summary>
        /// <param name="userName"></param>
        /// <returns></returns>
        public bool IsUsernameUnique(string userName)
        {
        }

        #endregion
    }
}

Now, add a private variable of type List<User> in the MockRepository class. Name it _users. It will hold the registered users. It is a static variable so that it can hold usernames across multiple instantiations. Add a constructor to the class.
Then initialize the _users list and add two users to the list:

public class MockRepository : IRepository
{
    #region Variables

    private static List<User> _users;

    #endregion

    public MockRepository()
    {
        _users = new User();
        _users = new List<User>();
        _users.Add(new User()
        {
            UserName = "wayne27",
            DateOfBirth = new DateTime(1950, 9, 27),
            Password = "knight"
        });
        _users.Add(new User()
        {
            // the username of this second user is missing from the original listing
            DateOfBirth = new DateTime(1955, 9, 27),
            Password = "justice"
        });
    }

    #region IRepository Members

    /// <summary>
    /// Adds a new user to the collection
    /// </summary>
    /// <param name="user"></param>
    public void AddUser(User user)
    {
    }

    /// <summary>
    /// Checks whether a user with the username already exists
    /// </summary>
    /// <param name="userName"></param>
    /// <returns></returns>
    public bool IsUsernameUnique(string userName)
    {
    }

    #endregion
}

Now let us add the code to check whether the username is unique. Add the following statements to the IsUsernameUnique method:

bool exists = _users.Exists(u => u.UserName == userName);
return !exists;

The method turns out to be as follows:

public bool IsUsernameUnique(string userName)
{
    bool exists = _users.Exists(u => u.UserName == userName);
    return !exists;
}

Modify the AddUser method so that it looks as follows:

public void AddUser(User user)
{
    _users.Add(user);
}

How it works...

The core of the validation logic lies in the IsUsernameUnique method of the MockRepository class. The reason to place the logic in a different class, rather than in the attribute itself, was to decouple the attribute from the logic to be validated. It is also an attempt to make it future-proof. In other words, if tomorrow we want to test the username against a list generated from an XML file, we don't have to modify the attribute. We will just change how IsUsernameUnique works, and that change will be reflected in the attribute. Also, creating a Plain Old CLR Object (POCO) to hold the values entered by the user stops the validation logic code from directly accessing the source of input, that is, the Windows application.
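The Exists-plus-flip logic in IsUsernameUnique can be mirrored in plain JavaScript, where Array.prototype.some plays the role of List&lt;T&gt;.Exists with a lambda predicate. This is an illustrative analogue, not part of the recipe; the second seeded user is omitted because its username is not shown in the original listing:

```javascript
// Plain-JavaScript analogue of MockRepository.IsUsernameUnique.
const users = [{ userName: 'wayne27' }];

function isUsernameUnique(userName) {
  // some() loops over the collection with a predicate, like List<T>.Exists
  const exists = users.some((u) => u.userName === userName);
  // Exists answers "is it taken?"; flip it to answer "is it unique?"
  return !exists;
}
```

Registering 'wayne27' again would be rejected, while a fresh name such as 'dreamwatcher' passes, matching the test run later in this recipe.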
Coming back to the IsUsernameUnique method, it makes use of the predicate feature provided by .NET. A predicate allows us to loop over a collection and find a particular item. A predicate can be a static function, a delegate, or a lambda. In our case it is a lambda:

bool exists = _users.Exists(u => u.UserName == userName);

In the previous statement, .NET loops over _users and passes the current item to u. We then make use of the item held by u to check whether its username is equal to the username entered by the user. The Exists method returns true if the username is already present. However, we want to know whether the username is unique. So we flip the value returned by Exists in the return statement, as follows:

return !exists;

Creating a custom validation attribute by extending the validation data annotation

.NET provides data annotations as a part of the System.ComponentModel.DataAnnotations namespace. Data annotations are a set of attributes that provide out-of-the-box validation, among other things. However, sometimes none of the built-in validations will suit your specific requirements. In such a scenario, you will have to create your own validation attribute. This recipe will tell you how to do that by extending the validation attribute. The attribute developed will check whether the supplied username is unique or not. We will make use of the validation logic implemented in the previous recipe to create a custom validation attribute named UniqueUserValidator.

How to do it...

The following steps will help you create a custom validation attribute to meet your specific requirements:

Launch Visual Studio 2012. Open the CustomValidation solution. Add a reference to System.ComponentModel.DataAnnotations. Add a new class to the project and name it UniqueUserValidator.
Add the following using statements:

using System.ComponentModel.DataAnnotations;
using CookBook.Recipes.Core.CustomValidation.MessageRepository;
using CookBook.Recipes.Core.Data.Repository;

Derive it from ValidationAttribute, which is a part of the System.ComponentModel.DataAnnotations namespace. In code, it would be as follows:

namespace CookBook.Recipes.Core.CustomValidation
{
    public class UniqueUserValidator : ValidationAttribute
    {
    }
}

Add a property of type IRepository to the class and name it Repository. Add a constructor and initialize Repository to an instance of MockRepository. Once the code is added, the class will be as follows:

namespace CookBook.Recipes.Core.CustomValidation
{
    public class UniqueUserValidator : ValidationAttribute
    {
        public IRepository Repository { get; set; }

        public UniqueUserValidator()
        {
            this.Repository = new MockRepository();
        }
    }
}

Override the IsValid method of ValidationAttribute. Convert the object argument to a string. Then call the IsUsernameUnique method of IRepository, pass the string value as a parameter, and return the result. After the modification, the code will be as follows:

namespace CookBook.Recipes.Core.CustomValidation
{
    public class UniqueUserValidator : ValidationAttribute
    {
        public IRepository Repository { get; set; }

        public UniqueUserValidator()
        {
            this.Repository = new MockRepository();
        }

        public override bool IsValid(object value)
        {
            string valueToTest = Convert.ToString(value);
            return this.Repository.IsUsernameUnique(valueToTest);
        }
    }
}

We have completed the implementation of our custom validation attribute. Now let's test it out. Add a new Windows Forms Application project to the solution and name it CustomValidationApp. Add a reference to the System.ComponentModel.DataAnnotations and CustomValidation projects. Rename Form1.cs to Register.cs. Open Register.cs in the design mode. Add controls for username, date of birth, and password, and also add two buttons to the form.
The form should look like the following screenshot:

Name the input control and button as given in the following table:

Control | Name
Textbox | txtUsername
Button | btnOK

Since we are validating the User Name field, our main concern is with the textbox for the username and the OK button. I have left out the names of the other controls for brevity. Switch to the code view mode. In the constructor, add an event handler for the Click event of btnOK as shown in the following code:

public Register()
{
    InitializeComponent();
    this.btnOK.Click += new EventHandler(btnOK_Click);
}

void btnOK_Click(object sender, EventArgs e)
{
}

Open the User class of the CookBook.Recipes.Core.CustomValidation project. Annotate the UserName property with UniqueUserValidator. After modification, the User class will be as follows:

namespace CookBook.Recipes.Core.CustomValidation
{
    /// <summary>
    /// Contains details of the user being registered
    /// </summary>
    public class User
    {
        [UniqueUserValidator(ErrorMessage = "User name already exists")]
        public string UserName { get; set; }
        public DateTime DateOfBirth { get; set; }
        public string Password { get; set; }
    }
}

Go back to Register.cs in the code view mode.
Add the following using statements:

using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using CookBook.Recipes.Core.CustomValidation;
using CookBook.Recipes.Core.Data.Repository;

Add the following code to the event handler of btnOK:

//create a new user
User user = new User()
{
    UserName = txtUsername.Text,
    DateOfBirth = dtpDob.Value
};

//create a validation context for the user instance
ValidationContext context = new ValidationContext(user, null, null);

//holds the validation errors
IList<ValidationResult> errors = new List<ValidationResult>();

if (!Validator.TryValidateObject(user, context, errors, true))
{
    foreach (ValidationResult result in errors)
        MessageBox.Show(result.ErrorMessage);
}
else
{
    IRepository repository = new MockRepository();
    repository.AddUser(user);
    MessageBox.Show("New user added");
}

Hit F5 to run the application. In the textbox, add a username, say, dreamwatcher. Click on OK. You will get a message box stating that the new user has been added. Enter the same username again and hit the OK button. This time you will get a message saying User name already exists. This means our attribute is working as desired.

Go to File | Save Solution As…, enter CustomValidation for Name, and click on OK. We will be making use of this solution in the next recipe.

How it works...

To understand how UniqueUserValidator works, we have to understand how it is implemented and how it is used/called. Let's start with how it is implemented. It extends ValidationAttribute. The ValidationAttribute class is the base class for all the validation-related attributes provided by data annotations. So the declaration is as follows:

public class UniqueUserValidator : ValidationAttribute

This allows us to make use of the public and protected methods/attribute arguments of ValidationAttribute as if they were a part of our attribute. Next, we have a property of type IRepository, which gets initialized to an instance of MockRepository.
We have used the interface-based approach so that the attribute will need only a minor change if we decide to test the username against a database table or a list generated from a file. In such a scenario, we would change the following statement:

```csharp
this.Repository = new MockRepository();
```

to something such as:

```csharp
this.Repository = new DBRepository();
```

Next, we override the IsValid method. This is the method that gets called when UniqueUserValidator is applied. The parameter of IsValid is an object, so we have to typecast it to string and call the IsUsernameUnique method of the Repository property. That is what the following statements accomplish:

```csharp
string valueToTest = Convert.ToString(value);
return this.Repository.IsUsernameUnique(valueToTest);
```

Now let us see how we used the validator. We did it by decorating the UserName property of the User class:

```csharp
[UniqueUserValidator(ErrorMessage = "User name already exists")]
public string UserName { get; set; }
```

As already mentioned, deriving from ValidationAttribute lets us use its properties as well; that is why we can use ErrorMessage even though we have not implemented it ourselves. Next, we have to tell .NET to use the attribute to validate the username that has been set. That is done by the following statements in the OK button's Click handler in the Register class:

```csharp
ValidationContext context = new ValidationContext(user, null, null);
// holds the validation errors
IList<ValidationResult> errors = new List<ValidationResult>();
if (!Validator.TryValidateObject(user, context, errors, true))
```

First, we instantiate an object of ValidationContext. Its main purpose is to set up the context in which validation will be performed; in our case, the context is the User object. Next, we call the TryValidateObject method of the Validator class with the User object, the ValidationContext object, and a list to hold the errors.
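Assembling the fragments above, a complete, minimal version of the attribute might look like the following sketch. Note that IRepository and MockRepository are reduced here to in-memory stand-ins so the example is self-contained; the recipe's real repository types live in CookBook.Recipes.Core.Data.Repository and are not shown in this excerpt, so their internals are assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Minimal stand-ins for the recipe's repository types (assumed internals)
public interface IRepository
{
    bool IsUsernameUnique(string username);
}

public class MockRepository : IRepository
{
    // Pretend "dreamwatcher" is already registered
    private static readonly List<string> users = new List<string> { "dreamwatcher" };

    public bool IsUsernameUnique(string username)
    {
        return !users.Contains(username);
    }
}

public class UniqueUserValidator : ValidationAttribute
{
    public IRepository Repository { get; set; }

    public UniqueUserValidator()
    {
        // Swap in a DBRepository (or similar) here to validate
        // against a database instead of the mock
        this.Repository = new MockRepository();
    }

    public override bool IsValid(object value)
    {
        // The attribute receives the property value as object
        string valueToTest = Convert.ToString(value);
        return this.Repository.IsUsernameUnique(valueToTest);
    }
}
```

Decorating a property with [UniqueUserValidator(ErrorMessage = "User name already exists")] and calling Validator.TryValidateObject then behaves as described in the recipe: the duplicate name fails validation and the supplied ErrorMessage is reported.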
We also tell the method that we need to validate all properties of the User object by setting the last argument to true. That is how we invoke the validation routine provided by .NET.

Using XML to generate a localized validation message

In the last recipe you saw that we can pass the error message to be displayed to the validation attribute. However, by default, the attributes accept only a message in the English language. To display a localized custom message, it needs to be fetched from an external source such as an XML file or a database. In this recipe, we will see how to use an XML file as a backend for localized messages.

How to do it...

The following steps will help you generate a localized validation message using XML:

1. Open CustomValidation.sln in Visual Studio .NET 2012.
2. Add an XML file to the CookBook.Recipes.Core.CustomValidation project. Name it Messages.xml. In the Properties window, set Build Action to Embedded Resource.
3. Add the following to the Messages.xml file:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<messages>
  <en>
    <message key="not_unique_user">User name is not unique</message>
  </en>
  <fr>
    <message key="not_unique_user">Nom d'utilisateur n'est pas unique</message>
  </fr>
</messages>
```

4. Add a folder to the CookBook.Recipes.Core.CustomValidation project. Name it MessageRepository.
5. Add an interface to the MessageRepository folder and name it IMessageRepository. Add a method named GetMessages to the interface; it will have IDictionary<string, string> as the return type and will accept a string value as a parameter. The interface will look like the following code:

```csharp
namespace CookBook.Recipes.Core.CustomValidation.MessageRepository
{
    public interface IMessageRepository
    {
        IDictionary<string, string> GetMessages(string locale);
    }
}
```

6. Add a class to the MessageRepository folder. Name it XmlMessageRepository. Add the following using statement:

```csharp
using System.Xml;
```

7. Implement the IMessageRepository interface.
The class will look like the following code once we implement the interface:

```csharp
namespace CookBook.Recipes.Core.CustomValidation.MessageRepository
{
    public class XmlMessageRepository : IMessageRepository
    {
        #region IMessageRepository Members

        public IDictionary<string, string> GetMessages(string locale)
        {
            return null;
        }

        #endregion
    }
}
```

Modify GetMessages so that it looks like the following code (you will also need a using System.Reflection; directive for the Assembly class):

```csharp
public IDictionary<string, string> GetMessages(string locale)
{
    XmlDocument xDoc = new XmlDocument();
    xDoc.Load(Assembly.GetExecutingAssembly()
        .GetManifestResourceStream("CustomValidation.Messages.xml"));
    XmlNodeList resources = xDoc.SelectNodes("messages/" + locale + "/message");
    SortedDictionary<string, string> dictionary = new SortedDictionary<string, string>();
    foreach (XmlNode node in resources)
    {
        dictionary.Add(node.Attributes["key"].Value, node.InnerText);
    }
    return dictionary;
}
```

Next, let us see how to modify UniqueUserValidator so that it can localize the error message.

How it works...

The Messages.xml file and the GetMessages method of XmlMessageRepository form the core of the logic to generate a locale-specific message. Messages.xml contains the key to the message within the locale tag. We have created the locale tags using the two-letter ISO name of each locale: for English it is <en></en> and for French it is <fr></fr>. Each locale tag contains message tags, and the key attribute of a tag tells us which message tag contains which error message. So our code contains:

```xml
<message key="not_unique_user">User name is not unique</message>
```

Here, not_unique_user is the key to the User name is not unique error message. In the GetMessages method, we first load the XML file. Since the file has been set as an embedded resource, we read it as a resource. To do so, we first get the executing assembly, that is, CustomValidation.
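The selection logic can be exercised in isolation without an embedded resource by loading the same XML from an in-memory string with LoadXml instead of the resource stream; the LocaleDemo class below is a hypothetical test harness, not part of the recipe, and its XML content mirrors Messages.xml above:

```csharp
using System;
using System.Collections.Generic;
using System.Xml;

public static class LocaleDemo
{
    // Same XPath-based lookup as XmlMessageRepository.GetMessages,
    // but reading the XML from a string for easy experimentation
    public static IDictionary<string, string> GetMessages(string xml, string locale)
    {
        XmlDocument xDoc = new XmlDocument();
        xDoc.LoadXml(xml); // in-memory variant of the embedded-resource load
        XmlNodeList resources = xDoc.SelectNodes("messages/" + locale + "/message");
        SortedDictionary<string, string> dictionary = new SortedDictionary<string, string>();
        foreach (XmlNode node in resources)
        {
            // key attribute -> dictionary key, element text -> dictionary value
            dictionary.Add(node.Attributes["key"].Value, node.InnerText);
        }
        return dictionary;
    }
}
```

Calling GetMessages with the Messages.xml content and "fr" returns a dictionary whose not_unique_user entry holds the French message, which is exactly what the embedded-resource version produces for the same locale.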
Then we call GetManifestResourceStream and pass the qualified name of the resource, which in this case is CustomValidation.Messages.xml. That is what we achieve in the following statement:

```csharp
xDoc.Load(Assembly.GetExecutingAssembly()
    .GetManifestResourceStream("CustomValidation.Messages.xml"));
```

Then we construct an XPath expression to the message tags using the locale passed as the parameter, and use it to select the message nodes:

```csharp
XmlNodeList resources = xDoc.SelectNodes("messages/" + locale + "/message");
```

After getting the node list, we loop over it to construct a dictionary. The value of the key attribute of each node becomes the key of the dictionary entry, and the inner text of the node becomes the corresponding value, as is evident from the following code:

```csharp
SortedDictionary<string, string> dictionary = new SortedDictionary<string, string>();
foreach (XmlNode node in resources)
{
    dictionary.Add(node.Attributes["key"].Value, node.InnerText);
}
```

The dictionary is then returned by the method. Next, let's understand how this dictionary is used by UniqueUserValidator.
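The wiring between the dictionary and UniqueUserValidator is covered after this excerpt, so the following is only a plausible sketch under stated assumptions: the attribute looks up the not_unique_user key for the current locale and falls back to the key itself when no translation exists. StubMessageRepository here stands in for XmlMessageRepository reading Messages.xml; the book's actual implementation may differ.

```csharp
using System;
using System.Collections.Generic;

// Interface as declared in the recipe
public interface IMessageRepository
{
    IDictionary<string, string> GetMessages(string locale);
}

// Hypothetical in-memory stand-in for XmlMessageRepository
public class StubMessageRepository : IMessageRepository
{
    public IDictionary<string, string> GetMessages(string locale)
    {
        if (locale == "fr")
            return new Dictionary<string, string>
                { { "not_unique_user", "Nom d'utilisateur n'est pas unique" } };
        return new Dictionary<string, string>
            { { "not_unique_user", "User name is not unique" } };
    }
}

public static class LocalizedLookup
{
    // One plausible way the validator could resolve its error message
    public static string GetError(IMessageRepository repo, string locale, string key)
    {
        IDictionary<string, string> messages = repo.GetMessages(locale);
        string message;
        // Fall back to the key itself if no translation exists
        return messages.TryGetValue(key, out message) ? message : key;
    }
}
```

A validator following this sketch could call GetError with, for example, CultureInfo.CurrentUICulture.TwoLetterISOLanguageName as the locale, and assign the result to its ErrorMessage before validation fails.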