
How-To Tutorials - Front-End Web Development

341 Articles

WPF 4.5 Application and Windows

Packt
24 Sep 2012
14 min read
Creating a window

Windows are the typical top-level controls in WPF. By default, a MainWindow class is created by the application wizard and automatically shown upon running the application. In this recipe, we'll take a look at creating and showing other windows that may be required during the lifetime of an application.

Getting ready

Make sure Visual Studio is up and running.

How to do it...

We'll create a new class derived from Window and show it when a button is clicked:

1. Create a new WPF application named CH05.NewWindows.
2. Right-click on the project node in Solution Explorer, and select Add | Window….
3. In the resulting dialog, type OtherWindow in the Name textbox and click on Add.
4. A file named OtherWindow.xaml should open in the editor. Add a TextBlock to the existing Grid, as follows:

```xml
<TextBlock Text="This is the other window" FontSize="20"
           VerticalAlignment="Center" HorizontalAlignment="Center" />
```

5. Open MainWindow.xaml. Add a Button to the Grid with a Click event handler:

```xml
<Button Content="Open Other Window" FontSize="30"
        Click="OnOpenOtherWindow" />
```

6. In the Click event handler, add the following code:

```csharp
void OnOpenOtherWindow(object sender, RoutedEventArgs e) {
    var other = new OtherWindow();
    other.Show();
}
```

7. Run the application, and click the button. The other window should appear and live happily alongside the main window.

How it works...

A Window is technically a ContentControl, so it can contain anything. It's made visible using the Show method. This keeps the window open as long as it's not explicitly closed using the classic close button, or by calling the Close method. The Show method opens the window as modeless—meaning the user can return to the previous window without restriction. We can click the button more than once, and consequently more Window instances will show up.

There's more...

The first window shown can be configured using the Application.StartupUri property, typically set in App.xaml. It can be changed to any other window. For example, to show the OtherWindow from the previous section as the first window, open App.xaml and change the StartupUri property to OtherWindow.xaml:

```xml
StartupUri="OtherWindow.xaml"
```

Selecting the startup window dynamically

Sometimes the first window is not known in advance, perhaps depending on some state or setting. In this case, the StartupUri property is not helpful. We can safely delete it, and provide the initial window (or even windows) by overriding the Application.OnStartup method as follows (you'll need to add a reference to the System.Configuration assembly for the following to compile):

```csharp
protected override void OnStartup(StartupEventArgs e) {
    Window mainWindow = null;
    // check some state or setting as appropriate
    if (ConfigurationManager.AppSettings["AdvancedMode"] == "1")
        mainWindow = new OtherWindow();
    else
        mainWindow = new MainWindow();
    mainWindow.Show();
}
```

This allows complete flexibility in determining what window or windows should appear at application startup.
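For reference, the AdvancedMode switch read through ConfigurationManager above would live in the application's configuration file. This is a minimal sketch of such an App.config; the key name comes from the recipe, the rest is standard .NET configuration boilerplate:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- Set to "1" to start with OtherWindow instead of MainWindow -->
    <add key="AdvancedMode" value="1" />
  </appSettings>
</configuration>
```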
Accessing command line arguments

The WPF application created by the New Project wizard does not expose the ubiquitous Main method. WPF provides this for us – it instantiates the Application object and eventually loads the main window pointed to by the StartupUri property. The Main method, however, is not just a starting point for managed code; it also provides an array of strings as the command line arguments passed to the executable (if any). As Main is now beyond our control, how do we get the command line arguments?

Fortunately, the same OnStartup method provides a StartupEventArgs object, in which the Args property is mirrored from Main. The downloadable source for this chapter contains the project CH05.CommandLineArgs, which shows an example of its usage. Here's the OnStartup override:

```csharp
protected override void OnStartup(StartupEventArgs e) {
    string text = "Hello, default!";
    if (e.Args.Length > 0)
        text = e.Args[0];
    var win = new MainWindow(text);
    win.Show();
}
```

The MainWindow instance constructor has been modified to accept a string that is later used by the window. If a command line argument is supplied, it is used.

Creating a dialog box

A dialog box is a Window that is typically used to get some data from the user before some operation can proceed. This is sometimes referred to as a modal window (as opposed to modeless, or non-modal). In this recipe, we'll take a look at how to create and manage such a dialog box.

Getting ready

Make sure Visual Studio is up and running.

How to do it...

We'll create a dialog box that's invoked from the main window to request some information from the user:

1. Create a new WPF application named CH05.Dialogs.
2. Add a new Window named DetailsDialog.xaml (a DetailsDialog class is created). Visual Studio opens DetailsDialog.xaml.
3. Set some Window properties: FontSize to 16, ResizeMode to NoResize, SizeToContent to Height, and make sure the Width is set to 300:

```xml
ResizeMode="NoResize" SizeToContent="Height" Width="300" FontSize="16"
```

4. Add four rows and two columns to the existing Grid, and add some controls for a simple data entry dialog as follows:

```xml
<Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
    <ColumnDefinition Width="Auto" />
    <ColumnDefinition />
</Grid.ColumnDefinitions>
<TextBlock Text="Please enter details:" Grid.ColumnSpan="2"
           Margin="4,4,4,20" HorizontalAlignment="Center"/>
<TextBlock Text="Name:" Grid.Row="1" Margin="4"/>
<TextBox Grid.Column="1" Grid.Row="1" Margin="4" x:Name="_name"/>
<TextBlock Text="City:" Grid.Row="2" Margin="4"/>
<TextBox Grid.Column="1" Grid.Row="2" Margin="4" x:Name="_city"/>
<StackPanel Grid.Row="3" Orientation="Horizontal" Margin="4,20,4,4"
            Grid.ColumnSpan="2" HorizontalAlignment="Center">
    <Button Content="OK" Margin="4" />
    <Button Content="Cancel" Margin="4" />
</StackPanel>
```

5. The dialog should expose two properties for the name and city the user has typed in. Open DetailsDialog.xaml.cs and add two simple properties:

```csharp
public string FullName { get; private set; }
public string City { get; private set; }
```

6. We need to show the dialog from somewhere in the main window. Open MainWindow.xaml, and add the following markup to the existing Grid:

```xml
<Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition />
</Grid.RowDefinitions>
<Button Content="Enter Data" Click="OnEnterData"
        Margin="4" FontSize="16"/>
<TextBlock FontSize="24" x:Name="_text" Grid.Row="1"
           VerticalAlignment="Center" HorizontalAlignment="Center"/>
```

7. In the OnEnterData handler, add the following:

```csharp
private void OnEnterData(object sender, RoutedEventArgs e) {
    var dlg = new DetailsDialog();
    if (dlg.ShowDialog() == true) {
        _text.Text = string.Format(
            "Hi, {0}! I see you live in {1}.",
            dlg.FullName, dlg.City);
    }
}
```

8. Run the application. Click the button and watch the dialog appear. The buttons don't work yet, so your only choice is to close the dialog using the regular close button.
Clearly, the return value from ShowDialog is not true in this case. When the OK button is clicked, the properties should be set accordingly. Add a Click event handler to the OK button, with the following code:

```csharp
private void OnOK(object sender, RoutedEventArgs e) {
    FullName = _name.Text;
    City = _city.Text;
    DialogResult = true;
    Close();
}
```

The Close method dismisses the dialog, returning control to the caller. The DialogResult property indicates the value returned from the call to ShowDialog when the dialog is closed. Add a Click event handler for the Cancel button with the following code:

```csharp
private void OnCancel(object sender, RoutedEventArgs e) {
    DialogResult = false;
    Close();
}
```

Run the application and click the button. Enter some data and click on OK; the main window displays the greeting built from the values you entered.

How it works...

A dialog box in WPF is nothing more than a regular window shown using ShowDialog instead of Show. This forces the user to dismiss the window before she can return to the invoking window. ShowDialog returns a Nullable<bool> (which can be written as bool? in C#), meaning it can have three values: true, false, and null. The meaning of the return value is mostly up to the application, but typically true indicates the user dismissed the dialog with the intention of making something happen (usually by clicking some OK or other confirmation button), and false means the user changed her mind and would like to abort. The null value can be used as a third indicator for some other application-defined condition.

The DialogResult property indicates the value returned from ShowDialog, because there is no other way to convey the return value from the dialog invocation directly. That's why the OK button handler sets it to true and the Cancel button handler sets it to false (this also happens when the regular close button is clicked, or Alt + F4 is pressed).

Most dialog boxes are not resizable. This is indicated by setting the ResizeMode property of the Window to NoResize. However, because of WPF's flexible layout, it is relatively easy to keep a dialog resizable (and still manageable) where it makes sense (such as when entering a potentially large amount of text in a TextBox – it would make sense if the TextBox could grow when the dialog is enlarged).

There's more...

Most dialogs can be dismissed by pressing Enter (indicating the data should be used) or pressing Esc (indicating no action should take place). This is possible to do by setting the OK button's IsDefault property to true and the Cancel button's IsCancel property to true. The default button is typically drawn with a heavier border to indicate it's the default button, although this ultimately depends on the button's control template. If these settings are specified, the handler for the Cancel button is not needed: clicking Cancel or pressing Esc automatically closes the dialog (and sets DialogResult to false). The OK button handler is still needed as usual, but it may be invoked by pressing Enter, no matter which control has the keyboard focus within the Window. The CH05.DefaultButtons project from the downloadable source for this chapter demonstrates this in action.
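As an illustration, the button row from the dialog above could be rewritten as follows to get this behavior; only the IsDefault, IsCancel, and Click attributes are new relative to the recipe's markup:

```xml
<StackPanel Grid.Row="3" Orientation="Horizontal" Margin="4,20,4,4"
            Grid.ColumnSpan="2" HorizontalAlignment="Center">
    <!-- Enter invokes OK from anywhere in the window; the Click
         handler is still required to set DialogResult -->
    <Button Content="OK" Margin="4" IsDefault="True" Click="OnOK" />
    <!-- Esc (or a click) closes the dialog with DialogResult = false;
         no Click handler is needed -->
    <Button Content="Cancel" Margin="4" IsCancel="True" />
</StackPanel>
```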
Modeless dialogs

A dialog can be shown as modeless, meaning it does not force the user to dismiss it before returning to other windows in the application. This is done with the usual Show method call – just like any Window. The term dialog in this case usually denotes some information expected from the user that affects other windows, sometimes with the help of another button labelled "Apply". The problem here is mostly logical—how to convey the information change. The best way would be using data binding, rather than manually modifying various objects. We'll take an extensive look at data binding in the next chapter.

Using the common dialog boxes

Windows has its own built-in dialog boxes for common operations, such as opening files, saving a file, and printing. Using these dialogs is very intuitive from the user's perspective, because she has probably used those dialogs before in other applications. WPF wraps some of these (native) dialogs. In this recipe, we'll see how to use some of the common dialogs.

Getting ready

Make sure Visual Studio is up and running.

How to do it...

We'll create a simple image viewer that uses the Open common dialog box to allow the user to select an image file to view:

1. Create a new WPF application named CH05.CommonDialogs.
2. Open MainWindow.xaml. Add the following markup to the existing Grid:

```xml
<Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition />
</Grid.RowDefinitions>
<Button Content="Open Image" FontSize="20" Click="OnOpenImage"
        HorizontalAlignment="Center" Margin="4" />
<Image Grid.Row="1" x:Name="_img" Stretch="Uniform" />
```

3. Add a Click event handler for the button. In the handler, we'll first create an OpenFileDialog instance and initialize it (add a using directive for the Microsoft.Win32 namespace):

```csharp
void OnOpenImage(object sender, RoutedEventArgs e) {
    var dlg = new OpenFileDialog {
        Filter = "Image files|*.png;*.jpg;*.gif;*.bmp",
        Title = "Select image to open",
        InitialDirectory = Environment.GetFolderPath(
            Environment.SpecialFolder.MyPictures)
    };
```

4. Now we need to show the dialog and use the selected file (if any):

```csharp
    if (dlg.ShowDialog() == true) {
        try {
            var bmp = new BitmapImage(new Uri(dlg.FileName));
            _img.Source = bmp;
        }
        catch (Exception ex) {
            MessageBox.Show(ex.Message, "Open Image");
        }
    }
}
```

5. Run the application. Click the button, navigate to an image file, and select it. The image should appear in the window.

How it works...

The OpenFileDialog class wraps the Win32 open/save file dialog, providing easy access to its capabilities. It's just a matter of instantiating the object, setting some properties, such as the file types (the Filter property), and then calling ShowDialog. This call, in turn, returns true if the user selected a file and false otherwise (null is never returned, although the return type is still defined as Nullable<bool> for consistency).

The look of the Open file dialog box may be different in various Windows versions. This is mostly unimportant unless some automated UI testing is done. In this case, the way the dialog looks or operates may have to be taken into consideration when creating the tests.

The filename itself is returned in the FileName property (full path). Multiple selections are possible by setting the Multiselect property to true (in this case the FileNames property returns the selected files).

There's more...

WPF similarly wraps the Save As common dialog with the SaveFileDialog class (in the Microsoft.Win32 namespace as well). Its use is very similar to OpenFileDialog (in fact, both inherit from the abstract FileDialog class).
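For completeness, here's a sketch of the Save As counterpart; the filter string, the DefaultExt value, and the SaveImage helper are illustrative choices, not part of the recipe:

```csharp
var dlg = new SaveFileDialog {
    Filter = "PNG image|*.png|All files|*.*",
    DefaultExt = ".png",   // appended when the user omits an extension
    Title = "Save image as"
};
if (dlg.ShowDialog() == true) {
    // dlg.FileName holds the full path chosen by the user
    SaveImage(dlg.FileName);  // hypothetical helper; substitute your own save logic
}
```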
What about folder selection (instead of files)? The WPF OpenFileDialog does not support that. One solution is to use Windows Forms' FolderBrowserDialog class. Another good solution is to use the Windows API Code Pack described shortly.

Another common dialog box WPF wraps is PrintDialog (in System.Windows.Controls). This shows the familiar print dialog, with options to select a printer, orientation, and so on. The most straightforward way to print would be calling PrintVisual (after calling ShowDialog), providing anything that derives from the Visual abstract class (which includes all elements). General printing is a complex topic and is beyond the scope of this book.

What about colors and fonts?

Windows also provides common dialogs for selecting colors and fonts. However, these are not wrapped by WPF. There are several alternatives:

- Use the equivalent Windows Forms classes (FontDialog and ColorDialog, both from System.Windows.Forms)
- Wrap the native dialogs yourself
- Look for alternatives on the Web

The first option is possible, but has two drawbacks: first, it requires adding a reference to the System.Windows.Forms assembly; this adds a dependency at compile time and increases memory consumption at run time, for very little gain. The second drawback has to do with the natural mismatch between Windows Forms and WPF. For example, ColorDialog returns a color as a System.Drawing.Color, but WPF uses System.Windows.Media.Color. This requires mapping a GDI+ color (WinForms) to WPF's color, which is cumbersome at best.

The second option of doing your own wrapping is a non-trivial undertaking and requires good interop knowledge. The other downside is that the default color and font common dialogs are pretty old (especially the color dialog), so there's much room for improvement.

The third option is probably the best one. There are more than a few good candidates for color and font pickers. For a color dialog, for example, you can use the ColorPicker or ColorCanvas provided with the Extended WPF Toolkit library on CodePlex (http://wpftoolkit.codeplex.com/).

The Windows API Code Pack

The Windows API Code Pack is a Microsoft project on CodePlex (http://archive.msdn.microsoft.com/WindowsAPICodePack) that provides many .NET wrappers to native Windows features, in various areas, such as shell, networking, Windows 7 features (this is less important now, as WPF 4 added first-class support for Windows 7), power management, and DirectX. One of the Shell features in the library is a wrapper for the Open dialog box that allows selecting a folder instead of a file. This has no dependency on the WinForms assembly.
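A sketch of that folder-picker feature, assuming the Code Pack's Microsoft.WindowsAPICodePack.Dialogs assembly is referenced (the dialog title is an arbitrary choice):

```csharp
using Microsoft.WindowsAPICodePack.Dialogs;

var dlg = new CommonOpenFileDialog {
    IsFolderPicker = true,   // select a folder rather than a file
    Title = "Select a folder"
};
if (dlg.ShowDialog() == CommonFileDialogResult.Ok) {
    string folder = dlg.FileName;  // the chosen folder's full path
    // use the folder path...
}
```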

Getting Started with RapidWeaver

Packt
11 Sep 2012
7 min read
In this article by Joe Workman, the author of RapidWeaver Beginner's Guide, we will learn the basics of RapidWeaver. Mainly, we will cover the following topics:

- What is RapidWeaver?
- Installing RapidWeaver
- Creating our first web page
- Publishing our website on the Internet

So strap your seat belts on and let's have some fun!

What is RapidWeaver?

RapidWeaver is a web development and design application for Mac that was developed by Realmac Software. It allows you to build stunning, professional websites very easily. RapidWeaver has both the novice and professional web designer covered. If you don't know (or don't want to know) how to code, RapidWeaver supports full code-free creation of your website; from blogs to site maps, photo albums to contact forms, you can build your entire website without a single line of code! Without a doubt, RapidWeaver appeals to the aspiring novice web designer. However, it does not forget about the geeky, code loving, power users! And in case you were wondering…yeah, that includes me!

RapidWeaver gives us geeks full access to peek under the hood. You can effortlessly add your own HTML or PHP file to any page. You can customize the look and feel with your own CSS file. For example, maybe you would like to add your own JavaScript for the latest and greatest animations out there; not a problem, RapidWeaver has got you covered. We even have full access to the amazing WebKit Developer Tools from directly inside the application. As RapidWeaver has all of these advanced features, it really serves as a catalyst to help an aspiring, novice web designer become a geeky, code loving, power user.

RapidWeaver's theme engine is a godsend for those users who are design challenged. However, it's also for those who don't want to spend time developing a site theme, as they can leverage the work that some amazing theme developers have already done. Yeah, this includes me too! RapidWeaver ships with over 45 stunning themes built in. This means that you can have a website that was designed by some world-class web designers. Each theme can be customized to your liking with just a few clicks. If you ever get tired of how your website looks, you can change your theme as often as you like, and your website content will remain 100 percent intact.

Once you have your website fully constructed, RapidWeaver makes it very simple to publish your website online. It is able to publish to pretty much every web host around through its native support for both FTP and SFTP. You will be able to publish your website for the world to see with a single click.

iWeb versus RapidWeaver versus Dreamweaver

RapidWeaver is most commonly compared with both iWeb and Dreamweaver. While there are definitely direct feature comparisons, we are trying to compare apples with oranges. RapidWeaver is a great tool that falls somewhere between iWeb at one end of the scale and Dreamweaver at the other end.

Apple's iWeb was their first foray into personal web development software. In true Apple fashion, the application was extremely user friendly and produced beautiful websites. However, the application was really geared towards users who wanted to create a small website to share family photos and maybe have a blog. iWeb was not very extensible at all. If you ever wanted to steer outside the bounds of the default templates, you had to dive directly into full custom HTML. One of the biggest downsides that I came across was that once you chose the look and feel of your site, there was no going back.
If you wanted to change the theme of your website, you had to redo every single page manually! For those of you who love the drag-and-drop abilities of iWeb, look no further than the RapidWeaver Stacks plugin from YourHead Software. Apple has acknowledged iWeb's shortcomings by pretty much removing iWeb from its lineup. You cannot purchase iWeb from Apple's Mac App Store. Furthermore, if you look at Apple's iLife page on their website, all traces of iWeb have been removed—if this is not a clear sign of iWeb's future, I don't know what is.

Now, let's jump to the opposite end of the spectrum with Adobe Dreamweaver. Dreamweaver has a much steeper learning curve than RapidWeaver (not to mention a much steeper price tag). Dreamweaver has a lot of capability for site management, can be used collaboratively on projects, and is designed to play well with Adobe's other design software. The Adobe Creative Suite with Dreamweaver is the package of choice for very large organizational websites being developed and managed by a team, or for complex dynamic sites. I am talking about websites such as http://www.apple.com or http://www.nytimes.com. For individual and small to mid-sized business websites, I can't think of a reason why one would prefer Dreamweaver to RapidWeaver. So, as I stated at the beginning, RapidWeaver provides a perfect middle ground for novice web designers and geeky code lovers!

It's more than an app

So far, I have talked about the RapidWeaver application itself. However, RapidWeaver is so much more than just an application. The user community that has been built around the RapidWeaver product is like nothing I have seen with any other application. The RapidWeaver forums hosted by Realmac are by far the most active and useful forums that I have seen. Users and developers spend countless hours helping each other with tips and tricks on design, code, and product support. It's a worldwide community that is truly active 24/7. You can find the forums at http://forums.realmacsoftware.com.

A part of the success of the strong RapidWeaver community is the strong third-party developers that exist. RapidWeaver provides a strong and flexible platform for developers to extend the application beyond its default feature set. There are currently three primary ways to extend your RapidWeaver application: Themes, Plugins, and Stacks. As you may guess, third-party theme developers design custom themes that go above and beyond the themes that ship out of the box with RapidWeaver. With the number of amazing theme developers out there, it would be impossible not to develop a site that fits your style and looks amazing.

RapidWeaver ships with 11 page styles out of the box:

- Blog
- Contact Form
- File Sharing
- HTML Code
- iFrame
- Movie Album
- Offsite Page
- Photo Album
- QuickTime
- Sitemap
- Styled Text

However, RapidWeaver plugins can create even more page styles for you. There are a plethora of different page plugins, from calendars to file uploads, and shopping carts to image galleries. To illustrate the power of RapidWeaver's platform, YourHead Software developed the Stacks plugin for fluid page layout. The Stacks plugin created an entirely new class of third-party RapidWeaver developer: the stack developer! A stack is simply a widget that can be used as a building block to construct your web page. There are stacks for just about anything: animated banners, menu systems, photo galleries, or even full-blown blog integrations. If you can dream it up, there is probably a stack for it!
If you have visited my website, then you should know that my origins in the RapidWeaver community are as a Stacks developer. I think that Stacks is amazing, and it should probably be the first plugin that you consider acquiring. Realmac Software has added a section on their website to make it easier for users to explore and locate useful third-party add-ons. So make sure that you go check it out and peruse all the great themes, plugins, and stacks! You can browse the add-ons at http://www.realmacsoftware.com/addons.


Ruby with MongoDB for Web Development

Packt
23 Jul 2012
13 min read
Creating documents

Let's first see how we can create documents in MongoDB. As we have briefly seen, MongoDB deals with collections and documents instead of tables and rows.

Time for action – creating our first document

Suppose we want to create the book object having the following schema:

```javascript
book = {
  name: "Oliver Twist",
  author: "Charles Dickens",
  publisher: "Dover Publications",
  published_on: "December 30, 2002",
  category: ['Classics', 'Drama']
}
```

On the Mongo CLI, we can add this book object to our collection using the following command:

```javascript
> db.books.insert(book)
```

Suppose we also add the shelf collection (for example, the floor, the row, and the column the shelf is in, the book indexes it maintains, and so on are part of the shelf object), which has the following structure:

```javascript
shelf : {
  name : 'Fiction',
  location : { row : 10, column : 3 },
  floor : 1,
  lex : { start : 'O', end : 'P' }
}
```

Remember, it's quite possible that a few years down the line, some shelf instances may become obsolete and we might want to maintain their record. Maybe we could have another shelf instance containing only books that are to be recycled or donated. What can we do? We can approach this as follows:

- The SQL way: Add additional columns to the table and ensure that there is a default value set in them. This adds a lot of redundancy to the data. This also reduces the performance a little and considerably increases the storage. Sad but true!
- The NoSQL way: Add the additional fields whenever you want.

The following are the MongoDB schemaless object model instances:

```javascript
> db.book.shelf.find()
{ "_id" : ObjectId("4e81e0c3eeef2ac76347a01c"), "name" : "Fiction",
  "location" : { "row" : 10, "column" : 3 }, "floor" : 1 }
{ "_id" : ObjectId("4e81e0fdeeef2ac76347a01d"), "name" : "Romance",
  "location" : { "row" : 8, "column" : 5 }, "state" : "window broken",
  "comments" : "keep away from children" }
```

What just happened?

You will notice that the second object has more fields, namely comments and state. When fetching objects, it's fine if you get extra data. That is the beauty of NoSQL. When the first document is fetched (the one with the name Fiction), it will not contain the state and comments fields, but the second document (the one with the name Romance) will have them.

Are you worried about what will happen if we try to access non-existing data from an object, for example, accessing comments from the first object fetched? This can be logically resolved—we can check the existence of a key, or default to a value in case it's not there, or ignore its absence. This is typically done anyway in code when we access objects.
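As a small illustration in the mongo shell (a sketch; the fallback string is arbitrary):

```javascript
> var doc = db.book.shelf.findOne({ name : 'Fiction' })
> // Guard against a missing field, or fall back to a default value
> var comments = ('comments' in doc) ? doc.comments : 'no comments yet'
```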
Notice that when the schema changed we did not have to add fields in every object with default values, like we do when using a SQL database. So there is no redundant information in our database. This ensures that the storage is minimal and, in turn, the object information fetched will have concise data. So there was no redundancy and no compromise on storage or performance. But wait! There's more.

NoSQL scores over SQL databases

The way many-to-many relations are managed tells us how we can do more with MongoDB that just cannot be done simply in a relational database. The following is an example: each book can have reviews and votes given by customers. We should be able to see these reviews and votes, and also maintain a list of top-voted books. If we had to do this in a relational database, the vote_count and review_count fields would sit inside the books table and would need to be updated every time a user votes up/down a book or writes a review. So, to fetch a book along with its votes and reviews, we would need to fire three queries to fetch the information:

```sql
SELECT * from book where id = 3;
SELECT * from reviews where book_id = 3;
SELECT * from votes where book_id = 3;
```

We could also use a join for this:

```sql
SELECT * FROM books
  JOIN reviews ON reviews.book_id = books.id
  JOIN votes ON votes.book_id = books.id;
```

In MongoDB, we can do this directly using embedded documents or relational documents.

Using MongoDB embedded documents

Embedded documents, as the name suggests, are documents that are embedded in other documents. This is one of the features of MongoDB, and this cannot be done in relational databases. Ever heard of a table embedded inside another table? Instead of four tables and a complex many-to-many relationship, we can say that reviews and votes are part of a book. So, when we fetch a book, the reviews and the votes automatically come along with it. Embedded documents are analogous to chapters inside a book: chapters cannot be read unless you open the book. Similarly, embedded documents cannot be accessed unless you access the document. For the UML savvy, embedded documents are similar to the contains or composition relationship.

Time for action – embedding reviews and votes

In MongoDB, the embedded object physically resides inside the parent. So, if we had to maintain reviews and votes, we could model the object as follows:

```javascript
book : {
  name: "Oliver Twist",
  reviews : [
    { user: "Gautam", comment: "Very interesting read" },
    { user: "Harry", comment: "Who is Oliver Twist?" }
  ],
  votes: [ "Gautam", "Tom", "Dick" ]
}
```

What just happened?

We now have reviews and votes inside the book. They cannot exist on their own. Did you notice that they look similar to JSON hashes and arrays? Indeed, they are an array of hashes. Embedded documents are just like hashes inside another object. There is a subtle difference between hashes and embedded objects, as we shall see later on in the book.

Have a go hero – adding more embedded objects to the book

Try to add more embedded objects, such as orders, inside the book document. It works!

```javascript
order = {
  name: "Toby Jones",
  type: "lease",
  units: 1,
  cost: 40
}
```

Fetching embedded objects

We can fetch a book along with the reviews and the votes with it. This can be done by executing the following command:

```javascript
> var book = db.books.findOne({name : 'Oliver Twist'})
> book.reviews.length
2
> book.votes.length
3
> book.reviews
[
  { user: "Gautam", comment: "Very interesting read" },
  { user: "Harry", comment: "Who is Oliver Twist?" }
]
> book.votes
[ "Gautam", "Tom", "Dick" ]
```

This does indeed look simple, doesn't it? By fetching a single object, we are able to get the review and vote count along with the data.

Use embedded documents only if you really have to! Embedded documents increase the size of the object. So, if we have a large number of embedded documents, it could adversely impact performance. Even to get the name of the book, the reviews and the votes are fetched.
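Note also that, since reviews and votes live inside the book document, adding one later is an update on the parent. A sketch using the $push update operator (the new review and voter are arbitrary sample values):

```javascript
> db.books.update(
    { name : 'Oliver Twist' },
    { $push : { reviews : { user : 'Tom', comment : 'A classic!' },
                votes   : 'Harry' } }
  )
```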
Using MongoDB document relationships

Just like we have embedded documents, we can also set up relationships between different documents.

Time for action – creating document relations

The following is another way to create the same relationship between books, users, reviews, and votes. This is more like the SQL way:

```javascript
book: {
  _id: ObjectId("4e81b95ffed0eb0c23000002"),
  name: "Oliver Twist",
  author: "Charles Dickens",
  publisher: "Dover Publications",
  published_on: "December 30, 2002",
  category: ['Classics', 'Drama']
}
```

Every document that is created in MongoDB has an object ID associated with it. In the next chapter, we shall learn about object IDs in MongoDB. By using these object IDs we can easily identify different documents. They can be considered as primary keys. So, we can also create the users, reviews, and votes collections as follows:

```javascript
users: [
  { _id: ObjectId("8d83b612fed0eb0bee000702"), name: "Gautam" },
  { _id: ObjectId("ab93b612fed0eb0bee000883"), name: "Harry" }
]

reviews: [
  { _id: ObjectId("5e85b612fed0eb0bee000001"),
    user_id: ObjectId("8d83b612fed0eb0bee000702"),
    book_id: ObjectId("4e81b95ffed0eb0c23000002"),
    comment: "Very interesting read" },
  { _id: ObjectId("4585b612fed0eb0bee000003"),
    user_id: ObjectId("ab93b612fed0eb0bee000883"),
    book_id: ObjectId("4e81b95ffed0eb0c23000002"),
    comment: "Who is Oliver Twist?" }
]

votes: [
  { _id: ObjectId("6e95b612fed0eb0bee000123"),
    user_id: ObjectId("8d83b612fed0eb0bee000702"),
    book_id: ObjectId("4e81b95ffed0eb0c23000002") },
  { _id: ObjectId("4585b612fed0eb0bee000003"),
    user_id: ObjectId("ab93b612fed0eb0bee000883") }
]
```

What just happened?

Hmm!! Not very interesting, is it? It doesn't even seem right. That's because it isn't the right choice in this context. It's very important to know how to choose between nesting documents and relating them. In your object model, if you will never search by the nested document (that is, look up the parent from the child), embed it. Just in case you are not sure about whether you would need to search by an embedded document, don't worry too much – it does not mean that you cannot search among embedded objects. You can use Map/Reduce to gather the information.

Comparing MongoDB versus SQL syntax

This is a good time to sit back and evaluate the similarities and dissimilarities between the MongoDB syntax and the SQL syntax. Let's map them together:

| SQL command | NoSQL (MongoDB) equivalent |
| --- | --- |
| SELECT * FROM books | db.books.find() |
| SELECT * FROM books WHERE id = 3; | db.books.find( { id : 3 } ) |
| SELECT * FROM books WHERE name LIKE 'Oliver%' | db.books.find( { name : /^Oliver/ } ) |
| SELECT * FROM books WHERE name LIKE '%Oliver%' | db.books.find( { name : /Oliver/ } ) |
| SELECT * FROM books WHERE publisher = 'Dover Publications' AND published_date = "2011-8-01" | db.books.find( { publisher : "Dover Publications", published_date : ISODate("2011-8-01") } ) |
| SELECT * FROM books WHERE published_date > "2011-8-01" | db.books.find( { published_date : { $gt : ISODate("2011-8-01") } } ) |
| SELECT name FROM books ORDER BY published_date | db.books.find( {}, { name : 1 } ).sort( { published_date : 1 } ) |
| SELECT name FROM books ORDER BY published_date DESC | db.books.find( {}, { name : 1 } ).sort( { published_date : -1 } ) |
| SELECT votes.name FROM books JOIN votes WHERE votes.book_id = books.id | db.books.find( { votes : { $exists : 1 } }, { "votes.name" : 1 } ) |

Some more notable comparisons between MongoDB and relational databases are:

- MongoDB does not support joins. Instead it fires multiple queries or uses Map/Reduce (a two-query sketch follows this list). We shall soon see why the NoSQL faction does not favor joins.
- SQL has stored procedures. MongoDB supports JavaScript functions.
- MongoDB has indexes, similar to SQL.
- MongoDB also supports Map/Reduce functionality.
- MongoDB supports atomic updates like SQL databases.
- Embedded or related objects are used sometimes instead of a SQL join.
- MongoDB collections are analogous to SQL tables.
- MongoDB documents are analogous to SQL rows.
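For instance, the no-join lookup pattern from the first bullet can be sketched in the shell like this (two round trips instead of one join; the collection and field names follow the running example):

```javascript
> var book = db.books.findOne({ name : 'Oliver Twist' })
> // The second query replaces the SQL join, matching on the stored object ID
> var reviews = db.reviews.find({ book_id : book._id }).toArray()
```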
Using Map/Reduce instead of join

We have seen this mentioned a few times earlier—it's worth jumping into it, at least briefly. Map/Reduce is a concept that was introduced by Google in 2004. It's a way of distributed task processing. We "map" tasks to workers and then "reduce" the results.

Understanding functional programming

Functional programming is a programming paradigm that has its roots in lambda calculus. If that sounds intimidating, remember that JavaScript could be considered a functional language. The following is a snippet of functional programming:

```javascript
$(document).ready( function () {
  $('#element').click( function () {
    // do something here
  });
  $('#element2').change( function () {
    // do something here
  });
});
```

We can have functions inside functions. Higher-level languages (such as Java and Ruby) support anonymous functions and closures, but are still procedural. Functional programs rely on results of a function being chained to other functions.

Building the map function

The map function processes a chunk of data. Data that is fed to this function could be accessed across a distributed filesystem, multiple databases, the Internet, or even any mathematical computation series!

function map(void) -> void

The map function "emits" information that is collected by the "mystical super gigantic computer program" and feeds it to the reducer functions as input. MongoDB as a database supports this paradigm, making it "the all powerful" (of course I am joking, but it does indeed make MongoDB very powerful).

Time for action – writing the map function for calculating vote statistics

Let's assume we have a document structure as follows:

```javascript
{
  name: "Oliver Twist",
  votes: ['Gautam', 'Harry'],
  published_on: "December 30, 2002"
}
```

The map function for such a structure could be as follows:

```javascript
function() {
  // emit the vote count for this book, keyed by its name
  emit( this.name, { votes : this.votes.length } );
}
```

What just happened?

The emit function emits the data. Notice that the data is emitted as a (key, value) structure.

- Key: This is the parameter over which we want to gather information. Typically it would be some primary key, or some key that helps identify the information. For the SQL savvy, the key is typically the field we use in the GROUP BY clause.
- Value: This is a JSON object. This can have multiple values, and this is the data that is processed by the reduce function.

We can call emit more than once in the map function. This would mean we are processing data multiple times for the same object.

Building the reduce function

The reduce functions are the consumer functions that process the information emitted from the map functions and emit the results to be aggregated. For each emitted datum from the map function, a reduce function emits the result. MongoDB collects and collates the results. This makes the system of collection and processing massively parallel, giving the almighty power to MongoDB. The reduce functions have the following signature:

function reduce(key, values_array) -> value

Time for action – writing the reduce function to process emitted information

This could be the reduce function for the previous example:

```javascript
function(key, values) {
  var result = { votes: 0 };
  values.forEach(function(value) {
    result.votes += value.votes;
  });
  return result;
}
```

What just happened?

reduce takes an array of values – so it is important to process an array every time. There are various options to Map/Reduce that help us process data.
Let's analyze this function in more detail:

```javascript
function(key, values) {
  var result = { votes: 0 };
  values.forEach(function(value) {
    result.votes += value.votes;
  });
  return result;
}
```

The variable result has a structure similar to what was emitted from the map function. This is important, as we want the results from every document in the same format. If we need to process more results, we can use the finalize function (more on that later). The values are always passed as an array. It's important that we iterate the array, as there could be multiple values emitted from different map functions with the same key. So, we process the array to ensure that we don't overwrite the results, and collate them.
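To tie the two functions together, here's a sketch of running Map/Reduce from the shell; the output collection name vote_stats is an arbitrary choice:

```javascript
> var map = function() {
    emit( this.name, { votes : this.votes.length } );
  }
> var reduce = function(key, values) {
    var result = { votes: 0 };
    values.forEach(function(value) { result.votes += value.votes; });
    return result;
  }
> db.books.mapReduce(map, reduce, { out : "vote_stats" })
> db.vote_stats.find()   // one { _id, value } document per book name
```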

Common API in Liferay Portal Systems Development

Packt
01 Feb 2012
11 min read
User management

The portal has defined user management with a set of entities, such as User, Contact, Address, EmailAddress, Phone, Website, and Ticket, at /portal/service.xml. In the following section, we're going to address the User entity, its associations, and relationships.

Models and services

The entity User has a one-to-one association with the entity Contact, which may have many contacts as children. The entity Contact, in turn, has a one-to-one association with the entity Account, which may have many accounts as children. The entity Contact can have a many-to-many association with the entities Address, EmailAddress, Phone, Website, and Ticket. Logically, the entities Address, EmailAddress, Phone, Website, and Ticket may have a many-to-many association with other entities as well, such as Group, Organization, and UserGroup.

Services

The following table shows user-related service interfaces, extensions, utilities, wrappers, and their main methods:

| Interface | Extension | Utility/Wrapper | Main methods |
| --- | --- | --- | --- |
| UserService, UserLocalService | PersistedModelLocalService | User(Local)ServiceUtil, User(Local)ServiceWrapper | add*, authenticate*, check*, decrypt*, delete*, get*, has*, search, unset*, update*, and so on |
| ContactService, ContactLocalService | PersistedModelLocalService | Contact(Local)ServiceUtil, Contact(Local)ServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on |
| AccountService, AccountLocalService | | Account(Local)ServiceUtil, Account(Local)ServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on |
| AddressService, AddressLocalService | | Address(Local)ServiceUtil, Address(Local)ServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on |
| EmailAddressService, EmailAddressLocalService | PersistedModelLocalService | EmailAddress(Local)ServiceUtil, EmailAddress(Local)ServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on |
| PhoneService, PhoneLocalService | PersistedModelLocalService | Phone(Local)ServiceUtil, Phone(Local)ServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on |
| WebsiteService, WebsiteLocalService | PersistedModelLocalService | Website(Local)ServiceUtil, Website(Local)ServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on |
| TicketLocalService | PersistedModelLocalService | TicketLocalServiceUtil, TicketLocalServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on |

Relationships

The portal also defines many-to-many relationships between User and Group, User and Organization, User and Team, and User and UserGroup, as shown in the following code:

```xml
<column name="groups" type="Collection" entity="Group"
        mapping-table="Users_Groups" />
<column name="userGroups" type="Collection" entity="UserGroup"
        mapping-table="Users_UserGroups" />
```

You will be able to find similar definitions at /portal/service.xml.

Sample portal service portlet

The portal provides a sample portal service plugin called sample-portal-service-portlet (refer to the plugin details at /portlets/sample-portal-service-portlet). The following is the code snippet:

```java
List<Organization> organizations =
    OrganizationServiceUtil.getUserOrganizations(
        themeDisplay.getUserId());
// add your logic
```

The previous code shows how to consume Liferay services through regular Java calls.
These services include com.liferay.portal.service.OrganizationServiceUtil, and the model involved is com.liferay.portal.model.Organization. Similarly, you can use other services, for example, com.liferay.portal.service.UserServiceUtil and com.liferay.portal.service.GroupServiceUtil, and models, for example, com.liferay.portal.model.User and com.liferay.portal.model.Group. You will find the services located in the com.liferay.portal.service package in the /portal-service/src folder. In the same way, you will find the models located in the com.liferay.portal.model package in the /portal-service/src folder.

What's the difference between *LocalServiceUtil and *ServiceUtil? The sign * represents models, for example, Organization, User, Group, and so on. Generally speaking, *Service is the remote service interface that defines the service methods available to remote code. *ServiceUtil has an additional permission check, since this method might be called as a remote service. *ServiceUtil is a facade class that combines the service locator with the actual call to the service *Service. *LocalService is the internal service interface, and *LocalServiceUtil is a facade class that combines the service locator with the actual call to the service *LocalService. *Service has a PermissionChecker in each method, and *LocalService usually doesn't.

Authorization

Authorization is the process of finding out if the user, once identified, is permitted to have access to a resource. The portal implements authorization by assigning permissions via roles and checking permissions; this is called Role-Based Access Control (RBAC). A user can be a member of Group, UserGroup, Organization, or Team. And a user, or a group of users (a Group, UserGroup, or Organization), can be a member of Role. The entity Role can have many ResourcePermission entities associated with it, while the entity ResourcePermission may contain many ResourceAction entities.

The following table shows the entities Role, ResourcePermission, and ResourceAction:

| Interface | Extension | Wrapper/SOAP | Main methods |
| --- | --- | --- | --- |
| Role | RoleModel, PersistedModel | RoleWrapper, RoleSoap | clone, compareTo, get*, set*, toCacheModel, toEscapedModel, and so on |
| ResourceAction | ResourceActionModel, PersistedModel | ResourceActionWrapper, ResourceActionSoap | clone, compareTo, get*, set*, toCacheModel, toEscapedModel, and so on |
| ResourcePermission | ResourcePermissionModel, PersistedModel | ResourcePermissionWrapper, ResourcePermissionSoap | clone, compareTo, get*, set*, toCacheModel, toEscapedModel, and so on |

In addition, the portal specifies role constants in the class RoleConstants.
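As a sketch of how these pieces surface in code, a plugin could look up a user's roles through the service facades. This is a minimal sketch; RoleLocalServiceUtil.getUserRoles is assumed to be available as in Liferay 6.x:

```java
import java.util.List;

import com.liferay.portal.model.Role;
import com.liferay.portal.model.User;
import com.liferay.portal.service.RoleLocalServiceUtil;
import com.liferay.portal.service.UserLocalServiceUtil;

public class RoleLookup {

    // Prints the names of the roles assigned to the given user
    public static void printRoles(long userId) throws Exception {
        User user = UserLocalServiceUtil.getUser(userId);
        List<Role> roles = RoleLocalServiceUtil.getUserRoles(user.getUserId());
        for (Role role : roles) {
            System.out.println(role.getName());
        }
    }
}
```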
The entity ResourceAction gets specified with the columns name, actionId, and bitwiseValue as follows:

```xml
<column name="name" type="String" />
<column name="actionId" type="String" />
<column name="bitwiseValue" type="long" />
```

The entity ResourcePermission gets specified with the columns name, scope, primKey, roleId, ownerId, and actionIds as follows:

```xml
<column name="name" type="String" />
<column name="scope" type="int" />
<column name="primKey" type="String" />
<column name="roleId" type="long" />
<column name="ownerId" type="long" />
<column name="actionIds" type="long" />
```

In addition, the portal specifies resource permission constants in the class ResourcePermissionConstants.

Password policy

The portal implements enterprise password policies and user account lockout using the entities PasswordPolicy and PasswordPolicyRel, as shown in the following table:

| Interface | Extension | Wrapper/SOAP | Description |
| --- | --- | --- | --- |
| PasswordPolicy | PasswordPolicyModel, PersistedModel | PasswordPolicyWrapper, PasswordPolicySoap | Columns: name, description, minAge, minAlphanumeric, minLength, minLowerCase, minNumbers, minSymbols, minUpperCase, lockout, maxFailure, lockoutDuration, and so on |
| PasswordPolicyRel | PasswordPolicyRelModel, PersistedModel | PasswordPolicyRelWrapper, PasswordPolicyRelSoap | Columns: passwordPolicyId, classNameId, and classPK. Ability to associate the entity PasswordPolicy with other entities |

Passwords toolkit

The portal has defined the following properties related to the passwords toolkit in portal.properties:

```
passwords.toolkit=com.liferay.portal.security.pwd.PasswordPolicyToolkit
passwords.passwordpolicytoolkit.generator=dynamic
passwords.passwordpolicytoolkit.static=iheartliferay
```

The property passwords.toolkit defines a class name that extends com.liferay.portal.security.pwd.BasicToolkit, which is called to generate and validate passwords. If you choose to use com.liferay.portal.security.pwd.PasswordPolicyToolkit as your password toolkit, you can choose either static or dynamic password generation. Static is set through the property passwords.passwordpolicytoolkit.static, and dynamic uses the class com.liferay.util.PwdGenerator to generate the password. If you are using LDAP password syntax checking, you will also have to use the static generator, so that you can guarantee that passwords obey its rules.

The passwords toolkits are addressed in detail in the following table:

| Class | Extension | Involved properties | Main methods |
| --- | --- | --- | --- |
| PasswordPolicyToolkit | BasicToolkit | passwords.passwordpolicytoolkit.charset.lowercase, passwords.passwordpolicytoolkit.charset.numbers, passwords.passwordpolicytoolkit.charset.symbols, passwords.passwordpolicytoolkit.charset.uppercase, passwords.passwordpolicytoolkit.generator, passwords.passwordpolicytoolkit.static | generate, validate |
| RegExpToolkit | BasicToolkit | passwords.regexptoolkit.pattern, passwords.regexptoolkit.charset, passwords.regexptoolkit.length | generate, validate |
| PwdToolkitUtil | None | passwords.toolkit | generate, validate |
| PwdGenerator | None | None | getPassword, getPinNumber |

The related password digest and encryption utilities are shown in the following table:

| Class | Interface | Utility | Property | Main methods |
| --- | --- | --- | --- | --- |
| DigesterImpl | Digester | DigesterUtil | passwords.digest.encoding | digest, digestBase64, digestHex, digestRaw, and so on |
| Base64 | None | None | None | decode, encode, fromURLSafe, objectToString, stringToObject, toURLSafe, and so on |
| PwdEncryptor | None | None | passwords.encryption.algorithm | encrypt; default types: MD2, MD5, NONE, SHA, SHA-256, SHA-384, SSHA, UFC-CRYPT, and so on |

Authentication

Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. The portal defines the class PwdAuthenticator for authentication, as shown in the following code:

```java
public static boolean authenticate(
    String login, String clearTextPassword,
    String currentEncryptedPassword) {

    String encryptedPassword = PwdEncryptor.encrypt(
        clearTextPassword, currentEncryptedPassword);

    if (currentEncryptedPassword.equals(encryptedPassword)) {
        return true;
    }

    return false;
}
```

As you can see, it first encrypts the clear text password into the variable encryptedPassword. It then tests whether the variable currentEncryptedPassword has the same value as that of the variable encryptedPassword or not.
The classes UserLocalServiceImpl (the method authenticate) and EditUserAction (the method updateUser) call the class PwdAuthenticator for authentication.

A Message Authentication Code (MAC) is a short piece of information used to authenticate a message. The portal supports MAC through the following properties:

auth.mac.allow=false
auth.mac.algorithm=MD5
auth.mac.shared.key=

To use authentication with MAC, post to the portal login URL, passing the MAC in the password field. Make sure that the MAC gets URL-encoded, since it might contain characters not allowed in a URL. Authentication with MAC also requires that you set the following property in system-ext.properties:

com.liferay.util.servlet.SessionParameters=false

This encrypts session parameters, so that browsers can't remember them.

Authentication pipeline

The portal provides the authentication pipeline framework for authentication, as shown in the following code:

auth.pipeline.pre=com.liferay.portal.security.auth.LDAPAuth
auth.pipeline.post=
auth.pipeline.enable.liferay.check=true

As you can see, the property auth.pipeline.enable.liferay.check is set to true to enable password checking by the internal portal authentication. If it is set to false, password checking is essentially delegated to the authenticators configured in the auth.pipeline.pre and auth.pipeline.post settings.

The interface com.liferay.portal.security.auth.Authenticator defines the constant values that should be used as return codes by the classes implementing the interface: if authentication is successful, it returns SUCCESS; if the user exists but the password doesn't match, it returns FAILURE; and if the user doesn't exist in the system, it returns DNE. These constants are defined in the interface Authenticator. As shown in the following table, the available authenticator is com.liferay.portal.security.auth.LDAPAuth:

Class | Interface | Properties | Main methods
LDAPAuth | Authenticator | ldap.auth.method, ldap.referral, ldap.auth.password.encryption.algorithm, ldap.base.dn, ldap.error.user.lockout, ldap.error.password.expired, ldap.import.user.password.enabled, ldap.base.provider.url, auth.pipeline.enable.liferay.check, ldap.auth.required | authenticateByEmailAddress, authenticateByScreenName, authenticateByUserId

Authentication token

The portal provides the interface com.liferay.portal.security.auth.AuthToken for the authentication token as follows:

auth.token.check.enabled=true
auth.token.impl=com.liferay.portal.security.auth.SessionAuthToken

As shown in the previous code, the property auth.token.check.enabled is set to true to enable authentication token security checks. The checks can be disabled for specific actions via the property auth.token.ignore.actions, or for specific portlets via the init parameter check-auth-token in portlet.xml. The property auth.token.impl is set to the authentication token class. This class must implement the interface AuthToken. The class SessionAuthToken is used to prevent CSRF (Cross-Site Request Forgery) attacks.
The following table shows the interface AuthToken and its implementations:

Class | Interface | Properties | Main methods
AuthTokenImpl | AuthToken | auth.token.impl | check, getToken
AuthTokenWrapper | AuthToken | None | check, getToken
AuthTokenUtil | None | None | check, getToken
SessionAuthToken | AuthToken | auth.token.shared.secret | check, getToken

JAAS

Java Authentication and Authorization Service (JAAS) is a Java security framework for user-centric security that augments the Java code-based security. The portal has specified a set of properties for JAAS as follows:

portal.jaas.enable=false
portal.jaas.auth.type=userId
portal.impersonation.enable=true

The property portal.jaas.enable is set to false to disable JAAS security checks. Disabling JAAS speeds up login. Note that JAAS must be disabled if administrators are to be able to impersonate other users. JAAS can authenticate users based on their e-mail address, screen name, user ID, or login, as determined by the property company.security.auth.type. By default, the class com.liferay.portal.security.jaas.PortalLoginModule loads the correct JAAS login module, based on what application server or servlet container the portal is deployed on. You can set a JAAS implementation class to override this behavior. The following table shows this class and its associations:

Class | Interface/Extension | Package | Main methods
ProtectedPrincipal | Principal | com.liferay.portal.kernel.servlet | getName, equals, hashCode, toString
PortalPrincipal | ProtectedPrincipal | com.liferay.portal.kernel.security.jaas | PortalPrincipal
PortalRole | PortalPrincipal | com.liferay.portal.kernel.security.jaas | PortalRole
PortalGroup | PortalPrincipal, java.security.acl.Group | com.liferay.portal.kernel.security.jaas | addMember, isMember, members, removeMember
PortalLoginModule | javax.security.auth.spi.LoginModule | com.liferay.portal.kernel.security.jaas, com.liferay.portal.security.jaas | abort, commit, initialize, login, logout

As you have noticed, the classes com.liferay.portal.kernel.security.jaas.PortalLoginModule and com.liferay.portal.security.jaas.PortalLoginModule implement the interface LoginModule, configured by the property portal.jaas.impl. The portal also provides different login module implementations for different application servers and servlet containers.
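Custom authenticators hook into this pipeline by implementing the Authenticator interface and being listed in auth.pipeline.pre or auth.pipeline.post. The following is a hedged sketch only: the class name and the checks are hypothetical, while the method signatures follow the interface as described above.

import java.util.Map;

import com.liferay.portal.security.auth.AuthException;
import com.liferay.portal.security.auth.Authenticator;

// Hypothetical example; it would be wired in via
// auth.pipeline.pre=com.example.auth.ExampleAuthenticator
public class ExampleAuthenticator implements Authenticator {

    public int authenticateByEmailAddress(
            long companyId, String emailAddress, String password,
            Map<String, String[]> headerMap,
            Map<String, String[]> parameterMap)
        throws AuthException {

        return authenticate(emailAddress, password);
    }

    public int authenticateByScreenName(
            long companyId, String screenName, String password,
            Map<String, String[]> headerMap,
            Map<String, String[]> parameterMap)
        throws AuthException {

        return authenticate(screenName, password);
    }

    public int authenticateByUserId(
            long companyId, long userId, String password,
            Map<String, String[]> headerMap,
            Map<String, String[]> parameterMap)
        throws AuthException {

        return authenticate(String.valueOf(userId), password);
    }

    protected int authenticate(String login, String password) {
        // Replace with a real check against an external system.
        // SUCCESS, FAILURE, and DNE are the constants defined on the
        // Authenticator interface, as described above.
        if (login == null) {
            return DNE;
        }
        if ((password == null) || password.isEmpty()) {
            return FAILURE;
        }
        return SUCCESS;
    }
}

Returning DNE rather than FAILURE matters here: it tells the pipeline that the account is unknown, rather than known with a bad password.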

Ajax: Basic Utilities

Packt
20 Dec 2011
8 min read
(For more resources on PHP Ajax, see here.)

Validating a form using Ajax

The main idea of Ajax is to get data from the server in real time without reloading the whole page. In this task we will build a simple form with validation using Ajax.

Getting ready

As a JavaScript library is used in this task, we will choose jQuery. We will download it (if we haven't done so already) and include it in our page. We need to prepare some dummy PHP code to retrieve the validation results. In this example, let's name it inputValidation.php. We are just checking for the existence of a param variable. If this variable is present in the GET request, we confirm the validation and send an OK status back to the page:

<?php
$result = array();
if(isset($_GET["param"])){
  $result["status"] = "OK";
  $result["message"] = "Input is valid!";
} else {
  $result["status"] = "ERROR";
  $result["message"] = "Input IS NOT valid!";
}
echo json_encode($result);
?>

How to do it...

Let's start with the basic HTML structure. We will define a form with three input boxes and one text area. Of course, it is placed inside the <body> tag:

<body>
<h1>Validating form using Ajax</h1>
<form class="simpleValidation">
  <div class="fieldRow">
    <label>Title *</label>
    <input type="text" id="title" name="title" class="required" />
  </div>
  <div class="fieldRow">
    <label>Url</label>
    <input type="text" id="url" name="url" value="http://" />
  </div>
  <div class="fieldRow">
    <label>Labels</label>
    <input type="text" id="labels" name="labels" />
  </div>
  <div class="fieldRow">
    <label>Text *</label>
    <textarea id="textarea" name="textarea" class="required"></textarea>
  </div>
  <div class="fieldRow">
    <input type="submit" id="formSubmitter" value="Submit" disabled="disabled" />
  </div>
</form>
</body>

For visual confirmation of the valid input, we will define CSS styles:

<style>
label{ width:70px; float:left; }
form{ width:320px; }
input, textarea{ width:200px; border:1px solid black; float:right; padding:5px; }
input[type=submit] { cursor:pointer; background-color:green; color:#FFF; }
input[disabled=disabled], input[disabled] { background-color:#d1d1d1; }
.fieldRow { margin:10px 10px; overflow:hidden; }
.failed { border: 1px solid red; }
</style>

Now, it is time to include jQuery and its functionality:

<script src="js/jquery-1.4.4.js"></script>
<script>
var ajaxValidation = function(object){
  var $this = $(object);
  var param = $this.attr('name');
  var value = $this.val();
  $.get("ajax/inputValidation.php",
    {'param':param, 'value':value }, function(data) {
      if(data.status=="OK") validateRequiredInputs();
      else $this.addClass('failed');
  }, "json");
}
var validateRequiredInputs = function(){
  var numberOfMissingInputs = 0;
  $('.required').each(function(index){
    var $item = $(this);
    var itemValue = $item.val();
    if(itemValue.length) {
      $item.removeClass('failed');
    } else {
      $item.addClass('failed');
      numberOfMissingInputs++;
    }
  });
  var $submitButton = $('#formSubmitter');
  if(numberOfMissingInputs > 0){
    $submitButton.attr("disabled", true);
  } else {
    $submitButton.removeAttr('disabled');
  }
}
</script>

We will also initialize the document ready function:

<script>
$(document).ready(function(){
  var timerId = 0;
  $('.required').keyup(function() {
    // capture the field here; inside setTimeout, "this" would refer to window
    var input = this;
    clearTimeout(timerId);
    timerId = setTimeout(function(){
      ajaxValidation(input);
    }, 200);
  });
});
</script>

When everything is ready, the result is a form whose Submit button stays disabled until every required field validates.

How it works...

We created a simple form with three input boxes and one text area. Objects with the class required are automatically validated on the keyup event by calling the ajaxValidation function.
Our keyup functionality also includes the setTimeout function to prevent unnecessary calls while the user is still typing. The validation is based on two steps:

Validation of the actual input box: We pass the inserted text to ajax/inputValidation.php via Ajax. If the response from the server is not OK, we mark this input box as failed. If the response is OK, we proceed to the second step.

Checking the other required fields in our form: When there is no failed input box left in the form, we enable the submit button.

There's more...

Validation in this example is really basic; we were just checking whether the response status from the server is OK. We will probably never meet a required-field validation quite like this one. In such a case, it's better to use the length property directly on the client side instead of bothering the server with a lot of requests simply to check whether the required field is empty or filled. This task was just a demonstration of the basic validation method. It would be nice to extend it with regular expressions on the server side, to directly check whether the URL or the title already exists in our database, and let the user know what the problem is and how he/she can fix it.

Creating an autosuggest control

This recipe will show us how to create an autosuggest control. This functionality is very useful when we need to search within huge amounts of data. The basic functionality is to display the list of suggested data based on the text in the input box.

Getting ready

We can start with the dummy PHP page which will serve as a data source. When we call this script with the GET method and a string variable, it will return the list of records (names) which include the selected string:

<?php
$string = $_GET["string"];
$arr = array(
  "Adam", "Eva", "Milan", "Rajesh", "Roshan",
  // ...
  "Michael", "Romeo"
);
function filter($var){
  global $string;
  if(!empty($string))
    return strstr($var, $string);
}
$filteredArray = array_filter($arr, "filter");
$result = "";
foreach ($filteredArray as $key => $value){
  $row = "<li>".str_replace($string,
    "<strong>".$string."</strong>", $value)."</li>";
  $result .= $row;
}
echo $result;
?>

How to do it...

As always, we will start with HTML. We will define the form with one input box and an unordered list with the class datalistPlaceHolder:

<h1>Dynamic Dropdown</h1>
<form class="simpleValidation">
  <div class="fieldRow">
    <label>Skype name:</label>
    <div class="ajaxDropdownPlaceHolder">
      <input type="text" id="name" name="name" class="ajaxDropdown" autocomplete="OFF" />
      <ul class="datalistPlaceHolder"></ul>
    </div>
  </div>
</form>

When the HTML is ready, we will play with CSS:

<style>
label { width:80px; float:left; padding:4px; }
form{ width:320px; }
input, textarea{ width:200px; border:1px solid black; border-radius: 5px; float:right; padding:5px; }
input[type=submit] { cursor:pointer; background-color:green; color:#FFF; }
input[disabled=disabled] { background-color:#d1d1d1; }
.fieldRow { margin:10px 10px; overflow:hidden; }
.validationFailed { border: 1px solid red; }
.validationPassed { border: 1px solid green; }
.datalistPlaceHolder { width:200px; border:1px solid black; border-radius: 5px; float:right; padding:5px; display:none; }
ul.datalistPlaceHolder li { list-style: none; cursor:pointer; padding:4px; }
ul.datalistPlaceHolder li:hover { color:#FFF; background-color:#000; }
</style>

Now the real fun begins.
We will include the jQuery library and define our keyup events:

<script src="js/jquery-1.4.4.js"></script>
<script>
var timerId;
var ajaxDropdownInit = function(){
  $('.ajaxDropdown').keyup(function() {
    var string = $(this).val();
    clearTimeout(timerId);
    timerId = setTimeout(function(){
      $.get("ajax/dropDownList.php", {'string':string}, function(data) {
        if(data)
          $('.datalistPlaceHolder').show().html(data);
        else
          $('.datalistPlaceHolder').hide();
      });
    }, 500);
  });
}
</script>

When everything is set, we will call the ajaxDropdownInit function within the document ready function:

<script>
$(document).ready(function(){
  ajaxDropdownInit();
});
</script>

Our autosuggest control is ready: typing into the input box now displays the list of matching names below it.

How it works...

The autosuggest control in this recipe is based on the input box and the list of items in datalistPlaceHolder. After each keyup event of the input box, datalistPlaceHolder loads the list of items from ajax/dropDownList.php via the Ajax function defined in ajaxDropdownInit.

A good feature of this recipe is the timerId variable that, when used with the setTimeout method, allows us to send the request to the server only when we stop typing (in our case, for 500 milliseconds). It may not look so important, but it saves a lot of resources. We do not want to wait for the response to "M" typed in the input box when we have already typed in "Milan". Instead of 5 requests (150 milliseconds each), we have just one. Multiply that by, for example, 10,000 users per day, and the effect is huge.

There's more...

We always need to remember the format of the response from the server. As plain JSON, it might look like this:

[{ 'id':'1', 'contactName':'Milan' },...,{ 'id':'99', 'contactName':'Milan (office)' }]

Using JSON objects in JavaScript is not always ideal from the performance point of view. Let's imagine we have 5000 contacts in one JSON file; it may take a while to build the HTML from 5000 objects on the client. But if we build the markup on the server and wrap it in a JSON object, the response looks as follows:

[{
  "status": "100",
  "responseMessage": "Everything is ok! :)",
  "data": "<li><h2><a href=\"#1\">Milan</a></h2></li><li><h2><a href=\"#2\">Milan2</a></h2></li><li><h2><a href=\"#3\">Milan3</a></h2></li>"
}]

In this case, we receive the complete data as ready-made HTML and there is no need to write any client-side logic to build a simple list of items.
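To make the server-built variant concrete, here is a minimal sketch of the PHP side. The wrapper fields (status, responseMessage, data) mirror the JSON shape above, while the $contacts array and the URL fragments are illustrative assumptions rather than code from the recipe:

<?php
// Hypothetical contact list standing in for a real database query
$contacts = array(1 => "Milan", 2 => "Milan2", 3 => "Milan3");

$html = "";
foreach ($contacts as $id => $name) {
    // Build the final markup on the server so the client only inserts it
    $html .= '<li><h2><a href="#' . $id . '">'
           . htmlspecialchars($name) . '</a></h2></li>';
}

echo json_encode(array(
    "status"          => "100",
    "responseMessage" => "Everything is ok! :)",
    "data"            => $html
));
?>

On the client side, consuming this becomes a single call such as $('.datalistPlaceHolder').show().html(response.data); inside the $.get() callback, with "json" passed as the data type.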

Load, Validate, and Submit Forms using Ext JS 3.0: Part 1

Packt
17 Oct 2011
6 min read
Specifying the required fields in a form

This recipe uses a login form as an example to explain how to create required fields in a form.

How to do it...

Initialize the global QuickTips instance:

Ext.QuickTips.init();

Create the login form:

var loginForm = {
  xtype: 'form',
  id: 'login-form',
  bodyStyle: 'padding:15px; background:transparent',
  border: false,
  url: 'login.php',
  items: [{
    xtype: 'box',
    autoEl: { tag: 'div',
      html: '<div class="app-msg"><img src="img/magic-wand.png" class="app-img" /> Log in to The Magic Forum</div>' }
  }, {
    xtype: 'textfield', id: 'login-user',
    fieldLabel: 'Username', allowBlank: false
  }, {
    xtype: 'textfield', id: 'login-pwd',
    fieldLabel: 'Password', inputType: 'password', allowBlank: false
  }],
  buttons: [{
    text: 'Login',
    handler: function() {
      Ext.getCmp('login-form').getForm().submit();
    }
  }, {
    text: 'Cancel',
    handler: function() { win.hide(); }
  }]
}

Create the window that will host the login form:

Ext.onReady(function() {
  win = new Ext.Window({
    layout: 'form',
    width: 340,
    autoHeight: true,
    closeAction: 'hide',
    items: [loginForm]
  });
  win.show();
});

How it works...

Initializing the QuickTips singleton allows the form's validation errors to be shown as tool tips. When the form is created, each required field needs to have the allowBlank configuration option set to false:

{ xtype: 'textfield', id: 'login-user', fieldLabel: 'Username', allowBlank: false },
{ xtype: 'textfield', id: 'login-pwd', fieldLabel: 'Password', inputType: 'password', allowBlank: false }

Setting allowBlank to false activates a validation rule that requires the length of the field's value to be greater than zero.

There's more...

Use the blankText configuration option to change the error text when the blank validation fails. For example, the username field definition in the previous code snippet can be changed as shown here:

{ xtype: 'textfield', id: 'login-user', fieldLabel: 'Username', allowBlank: false, blankText: 'Enter your username' }

The resulting error tooltip now reads "Enter your username". Validation rules can be combined and even customized. Other recipes in this article explain how to range-check a field's length, as well as how to specify the valid format of the field's value.

See also...

The next recipe titled Setting the minimum and maximum length allowed for a field's value explains how to restrict the number of characters entered in a field
The Changing the location where validation errors are displayed recipe, covered later in this article, shows how to relocate a field's error icon
Refer to the Deferring field validation until form submission recipe, covered later in this article, to learn how to validate all fields at once upon form submission, instead of using the default automatic field validation
The Creating validation functions for URLs, email addresses, and other types of data recipe, covered later in this article, explains the validation functions available in Ext JS
The Confirming passwords and validating dates using relational field validation recipe, covered later in this article, explains how to perform validation when the value of one field depends on the value of another field
The Rounding up your validation strategy with server-side validation of form fields recipe, covered later in this article, explains how to perform server-side validation

Setting the minimum and maximum length allowed for a field's value

This recipe shows how to set the minimum and maximum number of characters allowed for a text field.
The way to specify a custom error message for this type of validation is also explained. The login form built in this recipe has username and password fields whose lengths are restricted.

How to do it...

The first thing is to initialize the QuickTips singleton:

Ext.QuickTips.init();

Create the login form:

var loginForm = {
  xtype: 'form',
  id: 'login-form',
  bodyStyle: 'padding:15px;background:transparent',
  border: false,
  url: 'login.php',
  items: [{
    xtype: 'box',
    autoEl: { tag: 'div',
      html: '<div class="app-msg"><img src="img/magic-wand.png" class="app-img" /> Log in to The Magic Forum</div>' }
  }, {
    xtype: 'textfield', id: 'login-user', fieldLabel: 'Username',
    allowBlank: false, minLength: 3, maxLength: 32
  }, {
    xtype: 'textfield', id: 'login-pwd', fieldLabel: 'Password',
    inputType: 'password', allowBlank: false, minLength: 6, maxLength: 32,
    minLengthText: 'Password must be at least 6 characters long.'
  }],
  buttons: [{
    text: 'Login',
    handler: function() {
      Ext.getCmp('login-form').getForm().submit();
    }
  }, {
    text: 'Cancel',
    handler: function() { win.hide(); }
  }]
}

Create the window that will host the login form:

Ext.onReady(function() {
  win = new Ext.Window({
    layout: 'form',
    width: 340,
    autoHeight: true,
    closeAction: 'hide',
    items: [loginForm]
  });
  win.show();
});

How it works...

After initializing the QuickTips singleton, which allows the form's validation errors to be shown as tool tips, the form is built. The form is an instance of Ext.form.FormPanel. The username and password fields have their lengths restricted by way of the minLength and maxLength configuration options:

{ xtype: 'textfield', id: 'login-user', fieldLabel: 'Username', allowBlank: false, minLength: 3, maxLength: 32 },
{ xtype: 'textfield', id: 'login-pwd', fieldLabel: 'Password', inputType: 'password', allowBlank: false, minLength: 6, maxLength: 32, minLengthText: 'Password must be at least 6 characters long.' }

Notice how the minLengthText option is used to customize the error message that is displayed when the minimum length validation fails. As a last step, the window that will host the form is created and displayed.

There's more...

You can also use the maxLengthText configuration option to specify the error message when the maximum length validation fails; a sketch combining these options follows the list below.

See also...

The previous recipe, Specifying the required fields in a form, explains how to make some form fields required
The next recipe, Changing the location where validation errors are displayed, shows how to relocate a field's error icon
Refer to the Deferring field validation until form submission recipe (covered later in this article) to learn how to validate all fields at once upon form submission, instead of using the default automatic field validation
Refer to the Creating validation functions for URLs, email addresses, and other types of data recipe (covered later in this article) for an explanation of the validation functions available in Ext JS
The Confirming passwords and validating dates using relational field validation recipe (covered later in this article) explains how to perform validation when the value of one field depends on the value of another field
The Rounding up your validation strategy with server-side validation of form fields recipe (covered later in this article) explains how to perform server-side validation
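Pulling the last two recipes together, here is a hedged sketch of a single field definition combining the required-field and length checks with a custom message for each. blankText, minLengthText, and maxLengthText are all standard Ext JS 3 text field options, though the message wording here is illustrative:

{
    xtype: 'textfield',
    id: 'login-user',
    fieldLabel: 'Username',
    allowBlank: false,
    blankText: 'Enter your username',            // blank validation message
    minLength: 3,
    minLengthText: 'Usernames have at least 3 characters.',
    maxLength: 32,
    maxLengthText: 'Usernames have at most 32 characters.'
}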

Learning jQuery

Packt
27 Sep 2011
9 min read
  (For more resources on jQuery, see here.) Custom events The events that are triggered naturally by the DOM implementations of browsers are crucial to any interactive web application. However, we are not limited to this set of events in our jQuery code. We can freely add our own custom events to the repertoire. Custom events must be triggered manually by our code. In a sense, they are like regular functions that we define, in that we can cause a block of code to be executed when we invoke it from another place in the script. The .bind() call corresponds to a function definition and the .trigger() call to a function invocation. However, event handlers are decoupled from the code that triggers them. This means that we can trigger events at any time, without knowing in advance what will happen when we do. We might cause a single bound event handler to execute, as with a regular function. We also might cause multiple handlers to run or even none at all. In order to illustrate this, we can revise our Ajax loading feature to use a custom event. We will trigger a nextPage event whenever the user requests more photos and bind handlers that watch for this event and perform the work previously done by the .click() handler as follows: $(document).ready(function() { $('#more-photos').click(function() { $(this).trigger('nextPage'); return false; }); }); The .click() handler now does very little work itself. After triggering the custom event, it prevents the default behavior by returning false. The heavy lifting is transferred to the new event handlers for the nextPage event as follows: (function($) { $(document).bind('nextPage', function() { var url = $('#more-photos').attr('href'); if (url) { $.get(url, function(data) { $('#gallery').append(data); }); } }); var pageNum = 1; $(document).bind('nextPage', function() { pageNum++; if (pageNum < 20) { $('#more-photos') .attr('href', 'pages/' + pageNum + '.html'); } else { $('#more-photos').remove(); } }); })(jQuery); The largest difference is that we have split what was once a single function into two. This is simply to illustrate that a single event trigger can cause multiple bound handlers to fire. The other point to note is that we are illustrating another application of event bubbling here. Our nextPage handlers could be bound to the link that triggers the event, but we would need to wait to do this until the DOM was ready. Instead, we are binding the handlers to the document itself, which is available immediately, so we can do the binding outside of $(document).ready(). The event bubbles up and, so long as another handler does not stop the event propagation, our handlers will be fired. Infinite scrolling Just as multiple event handlers can react to the same triggered event, the same event can be triggered in multiple ways. We can demonstrate this by adding an infinite scrolling feature to our page. This popular technique lets the user's scroll bar manage the loading of content, fetching additional content whenever the user reaches the end of what has been loaded thus far. We will begin with a simple implementation, and then improve it in successive examples. 
The basic idea is to observe the scroll event, measure the current scroll bar position when scrolling occurs, and load the new content if needed, as follows:

(function($) {
  var $window = $(window);

  function checkScrollPosition() {
    var distance = $window.scrollTop() + $window.height();
    if ($('#container').height() <= distance) {
      $(document).trigger('nextPage');
    }
  }

  $(document).ready(function() {
    $window.scroll(checkScrollPosition).scroll();
  });
})(jQuery);

The new checkScrollPosition() function is set as a handler for the window's scroll event. This function computes the distance from the top of the document to the bottom of the window, and then compares this distance to the total height of the main container in the document. As soon as these reach equality, we need to fill the page with additional photos, so we trigger the nextPage event. As soon as we bind the scroll handler, we immediately trigger it with a call to .scroll(). This kick-starts the process, so that if the page is not initially filled with photos, an Ajax request is made right away.

Custom event parameters

When we define functions, we can set up any number of parameters to be filled with argument values when we actually call the function. Similarly, when triggering a custom event, we may want to pass along additional information to any registered event handlers. We can accomplish this by using custom event parameters. The first parameter defined for any event handler, as we have seen, is the DOM event object as enhanced and extended by jQuery. Any additional parameters we define are available for our discretionary use. To see this in action, we will add a new option to the nextPage event allowing us to scroll the page down to display the newly added content as follows:

(function($) {
  $(document).bind('nextPage', function(event, scrollToVisible) {
    var url = $('#more-photos').attr('href');
    if (url) {
      $.get(url, function(data) {
        var $data = $(data).appendTo('#gallery');
        if (scrollToVisible) {
          var newTop = $data.offset().top;
          $(window).scrollTop(newTop);
        }
        checkScrollPosition();
      });
    }
  });
})(jQuery);

We have now added a scrollToVisible parameter to the event callback. The value of this parameter determines whether we perform the new functionality, which entails measuring the position of the new content and scrolling to it. Measurement is easy using the .offset() method, which returns the top and left coordinates of the new content. In order to move down the page, we call the .scrollTop() method. Now we need to pass an argument into the new parameter. All that is required is providing an extra value when invoking the event using .trigger(). When nextPage is triggered through scrolling, we don't want the new behavior to occur, as the user is already manipulating the scroll position directly. When the More Photos link is clicked, on the other hand, we want the newly added photos to be displayed on the screen, so we will pass a value of true to the handler as follows:

$(document).ready(function() {
  $('#more-photos').click(function() {
    $(this).trigger('nextPage', [true]);
    return false;
  });
  $window.scroll(checkScrollPosition).scroll();
});

In the call to .trigger(), we are now providing an array of values to pass to event handlers. In this case, the value of true will be given to the scrollToVisible parameter of the event handler. Note that custom event parameters are optional on both sides of the transaction.
We have two calls to .trigger() in our code, only one of which provides argument values; when the other is called, this does not result in an error, but rather the value of null is passed to each parameter. Similarly, the lack of a scrollToVisible parameter in one of our .bind('nextPage') calls is not an error; if a parameter does not exist when an argument is passed, that argument is simply ignored.

Throttling events

A major issue with the infinite scrolling feature as we have implemented it is its performance impact. While our code is brief, the checkScrollPosition() function does need to do some work to measure the dimensions of the page and window. This effort can accumulate rapidly, because in some browsers the scroll event is triggered repeatedly during the scrolling of the window. The result of this combination could be choppy or sluggish performance.

Several native events have the potential for frequent triggering. Common culprits include scroll, resize, and mousemove. To account for this, we need to limit our expensive calculations, so that they only occur after some of the event instances, rather than each one. This technique is known as event throttling.

$(document).ready(function() {
  var timer = 0;
  $window.scroll(function() {
    if (!timer) {
      timer = setTimeout(function() {
        checkScrollPosition();
        timer = 0;
      }, 250);
    }
  }).scroll();
});

Rather than setting checkScrollPosition() directly as the scroll event handler, we are using the JavaScript setTimeout function to defer the call by 250 milliseconds. More importantly, we are checking for a currently running timer first before performing any work. As checking the value of a simple variable is extremely fast, most of the calls to our event handler will return almost immediately. The checkScrollPosition() call will only happen when a timer completes, which will at most be every 250 milliseconds. We can easily adjust the setTimeout() value to a comfortable number that strikes a reasonable compromise between instant feedback and low performance impact. Our script is now a good web citizen.

Other ways to perform throttling

The throttling technique we have implemented is efficient and simple, but it is not the only solution. Depending on the performance characteristics of the action being throttled and typical interaction with the page, we may for instance want to institute a single timer for the page rather than create one when an event begins:

$(document).ready(function() {
  var scrolled = false;
  $window.scroll(function() {
    scrolled = true;
  });
  setInterval(function() {
    if (scrolled) {
      checkScrollPosition();
      scrolled = false;
    }
  }, 250);
  checkScrollPosition();
});

Unlike our previous throttling code, this polling solution uses a single setInterval() call to begin checking the state of the scrolled variable every 250 milliseconds. Any time a scroll event occurs, scrolled is set to true, ensuring that the next time the interval passes, checkScrollPosition() will be called.

A third solution for limiting the amount of processing performed during frequently repeated events is debouncing. This technique, named after the post-processing required to handle repeated signals sent by electrical switches, ensures that only a single, final event is acted upon even when many have occurred.
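The text stops short of showing a debounce implementation, so here is a minimal sketch (an illustration, not code from the original): unlike the throttling handler above, every scroll event resets the pending timer, so the work runs only once scrolling has been idle for the full delay.

$(document).ready(function() {
  var debounceTimer = 0;
  $window.scroll(function() {
    // Each new scroll event cancels the pending call;
    // checkScrollPosition() runs only after 250ms of quiet.
    clearTimeout(debounceTimer);
    debounceTimer = setTimeout(checkScrollPosition, 250);
  });
});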
Deferred objects

In jQuery 1.5, a concept known as a deferred object was introduced to the library. A deferred object encapsulates an operation that takes some time to complete. These objects allow us to easily handle situations in which we want to act when a process completes, but we don't necessarily know how long the process will take or even if it will be successful. A new deferred object can be created at any time by calling the $.Deferred() constructor. Once we have such an object, we can perform long-lasting operations and then call the .resolve() or .reject() methods on the object to indicate the operation was successful or unsuccessful. It is somewhat unusual to do this manually, however. Typically, rather than creating our own deferred objects by hand, jQuery or its plugins will create the object and take care of resolving or rejecting it. We just need to learn how to use the object that is created. Creating deferred objects is a very advanced topic. Rather than detailing how the $.Deferred() constructor operates, we will focus here on how jQuery effects take advantage of deferred objects.
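As a quick, hedged illustration of consuming a deferred (available from jQuery 1.5), the object returned by $.get() exposes .done() and .fail() callbacks; the URL below reuses the pages/ convention from the earlier examples:

var request = $.get('pages/' + pageNum + '.html');

request.done(function(data) {
  // Runs only when the request succeeds
  $('#gallery').append(data);
});

request.fail(function() {
  // Runs when the request fails, however long that takes to find out
  $('#more-photos').remove();
});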

Plone 4 Development: Creating a Custom Workflow

Packt
30 Aug 2011
7 min read
Keeping control with workflow

As we have alluded to before, managing permissions directly anywhere other than the site root is usually a bad idea. Every content object in a Plone site is subject to security, and will in most cases inherit permission settings from its parent. If we start making special settings in particular folders, we will quickly lose control. However, if settings are always acquired, how can we restrict access to particular folders or prevent authors from editing published content whilst still giving them rights to work on items in a draft state?

The answer to both of these problems is workflow. Workflows are managed by the portal_workflow tool. This controls a mapping of content types to workflow definitions, and sets a default workflow for types not explicitly mapped. The workflow tool allows a workflow chain of multiple workflows to be assigned to a content type. Each workflow is given its own state variable. Multiple workflows can manage permissions concurrently. Plone's user interface does not explicitly support more than one workflow, but can be used in combination with custom user interface elements to address complex security and workflow requirements.

The workflow definitions themselves are objects found inside the portal_workflow tool, under the Contents tab. Each definition consists of states, such as private or published, and transitions between them. Transitions can be protected by permissions or restricted to particular roles. Although it is fairly common to protect workflow transitions by role, this is not actually a very good use of the security system. It would be much more sensible to use an appropriate permission. The exception is when custom roles are used solely for the purpose of defining roles in a workflow.

Some transitions are automatic, which means that they will be invoked as soon as an object enters a state that has this transition as a possible exit (that is, provided the relevant guard conditions are met). More commonly, transitions are invoked following some user action, normally through the State drop-down menu in Plone's user interface. It is possible to execute code immediately before or after a transition is executed.

States may be used simply for information purposes. For example, it is useful to be able to mark a content object as "published" and be able to search for all published content. More commonly, states are also used to control content item security. When an object enters a particular state, either its initial state, when it is first created, or a state that is the result of a workflow transition, the workflow tool can set a number of permissions according to a predefined permissions map associated with the target state. The permissions that are managed by a particular workflow are listed under the Permissions tab on the workflow definition, and each state defines a permission map for those permissions.

If you change workflow security settings, your changes will not take effect immediately, since permissions are only modified upon workflow transitions. To synchronize permissions with the current workflow definitions, use the Update security settings button at the bottom of the Workflows tab of the portal_workflow tool. Note that this can take a long time on large sites, because it needs to find all content items using the old settings and update their permissions.
If you use the Types control panel in Plone's Site Setup to change workflows, this reindexing happens automatically. Workflows can also be used to manage role-to-group assignments in the same way they can be used to manage role-to-permission assignments. This feature is rarely used in Plone, however.

All workflows manage a number of workflow variables, whose values can change with transitions and be queried through the workflow tool. These are rarely changed, however, and Plone relies on a number of the default ones. These include the previous transition (action), the user ID of the person who performed that transition (actor), any associated comments (comments), the date/time of the last transition (time), and the full transition history (review_history).

Finally, workflows can define work lists, which are used by Plone's Review list portlet to show pending tasks for the current user. A work list in effect performs a catalog search using the workflow's state variable. In Plone, the state variable is always called review_state.

The workflow system is very powerful, and can be used to solve many kinds of problems where objects of the same type need to be in different states. Learning to use it effectively can pay off greatly in the long run.

Interacting with workflow in code

Interacting with workflow from our own code is usually straightforward. To get the workflow state of a particular object, we can do:

from Products.CMFCore.utils import getToolByName
wftool = getToolByName(context, 'portal_workflow')
review_state = wftool.getInfoFor(context, 'review_state')

However, if we are doing a search using the portal_catalog tool, the results it returns already include the review state as metadata:

from Products.CMFCore.utils import getToolByName
catalog = getToolByName(context, 'portal_catalog')
for result in catalog(dict(
        portal_type=('Document', 'News Item',),
        review_state=('published', 'public', 'visible',),
        )):
    review_state = result.review_state
    # do something with the review_state

To change the workflow state of an object, we can use the following line of code:

wftool.doActionFor(context, action='publish')

The action here is the name of a transition, which must be available to the current user from the current state of context. There is no (easy) way to directly specify the target state. This is by design: recall that transitions form the paths between states, and may involve additional security restrictions or the triggering of scripts. Again, the Doc tab for the portal_workflow tool and its sub-objects (the workflow definitions and their states and transitions) should be your first port of call if you need more detail. The workflow code can be found in Products.CMFCore.WorkflowTool and Products.DCWorkflow.

Installing a custom workflow

It is fairly common to create custom workflows when building a Plone website. Plone ships with several useful workflows, but security and approvals processes tend to differ from site to site, so we will often find ourselves creating our own workflows. Workflows are a form of customization. We should ensure they are installable using GenericSetup. However, the workflow XML syntax is quite verbose, so it is often easier to start from the ZMI and export the workflow definition to the filesystem.

Designing a workflow for Optilux Cinemas

It is important to get the design of a workflow policy right, considering the different roles that need to interact with the objects, and the permissions they should have in the various states.
Draft content should be visible to cinema staff, but not customers, and should go through review before being published.

This workflow will be made the default, and should therefore apply to most content. However, we will keep the standard Plone policy of omitting workflow for the File and Image types. This means that permissions for content items of these types will be acquired from the Folder in which they are contained, making them simpler to manage. In particular, this means it is not necessary to separately publish linked files and embedded images when publishing a Page.

Because we need to distinguish between logged-in customers and staff members, we will introduce a new role called StaffMember. This role will be granted the View permission by default for all items in the site, much like a Manager or Site Administrator user is by default (although workflow may override this); a sketch of registering such a role via GenericSetup follows below. We will let the Site Administrator role represent site administrators, and the Reviewer role represent content reviewers, as they do in a default Plone installation. We will also create a new group, Staff, which is given the StaffMember role. Among other things, this will allow us to easily grant the Reader, Editor, and Contributor roles in particular folders to all staff from the Sharing screen.

The preceding workflow is designed for content production and review. This is probably the most common use for workflow in Plone, but it is by no means the only use case. For example, the author once used workflows to control the payment status on an Invoice content type. As you become more proficient with the workflow engine, you will find that it is useful in a number of scenarios.
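As a hedged sketch of how the StaffMember role might be registered through a GenericSetup rolemap.xml import step (the role name comes from the text above; the exact set of roles granted View at the site root is illustrative, and the workflow's own permission maps override this in each state):

<?xml version="1.0"?>
<rolemap>
  <roles>
    <!-- Create the new site-wide role -->
    <role name="StaffMember" />
  </roles>
  <permissions>
    <!-- Grant View by default; workflow state permission maps
         take precedence on workflowed content -->
    <permission name="View" acquire="True">
      <role name="Manager" />
      <role name="Site Administrator" />
      <role name="Owner" />
      <role name="StaffMember" />
    </permission>
  </permissions>
</rolemap>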

Plone 4 Development: Understanding Zope Security

Packt
30 Aug 2011
6 min read
Security primitives Zope's security is declarative: views, actions, and attributes on content objects are declared to be protected by permissions. Zope takes care of verifying that the current user has the appropriate access rights for a resource. If not, an AccessControl.Unauthorized exception will be raised. This is caught by an error handler which will either redirect the user to a login screen or show an access denied error page. Permissions are not granted to users directly. Instead, they are assigned to roles. Users can be given any number of roles, either site-wide, or in the context of a particular folder, in which case they are referred to as local roles. Global and local roles can also be assigned to groups, in which case all users in that group will have the particular role. (In fact, Zope considers users and groups largely interchangeable, and refers to them more generally as principals.) This makes security settings much more flexible than if they were assigned to individual users. Users and groups Users and groups are kept in user folders, which are found in the ZMI with the name acl_users. There is one user folder at the root of the Zope instance, typically containing only the default Zope-wide administrator that is created by our development buildout the first time it is run. There is also an acl_users folder inside Plone, which manages Plone's users and groups. Plone employs the Pluggable Authentication Service (PAS), a particularly flexible kind of user folder. In PAS, users, groups, their roles, their properties, and other security-related policy are constructed using various interchangeable plugins. For example, an LDAP plugin could allow users to authenticate against an LDAP repository. In day-to-day administration, users and groups are normally managed from Plone's Users and Groups control panel. Permissions Plone relies on a large number of permissions to control various aspects of its functionality. Permissions can be viewed from the Security tab in the ZMI, which lets us assign permissions to roles at a particular object. Note that most permissions are set to Acquire—the default—meaning that they cascade down from the parent folder. Role assignments are additive when permissions are set to acquire. Sometimes, it is appropriate to change permission settings at the root of the Plone site (which can be done using the rolemap.xml GenericSetup import step—more on that follows), but managing permissions from the Security tab anywhere else is almost never a good idea. Keeping track of which security settings are made where in a complex site can be a nightmare. Permissions are the most granular piece of the security puzzle, and can be seen as a consequence of a user's roles in a particular context. Security-aware code should almost always check permissions, rather than roles, because roles can change depending on the current folder and security policy of the site, or even based on an external source such as an LDAP or Active Directory repository. Permissions can be logically divided into three main categories: Those that relate to basic content operations, such as View and Modify portal content. These are used by almost all content types, and defined as constants in the module Products.CMFCore.permissions. Core permissions are normally managed by workflow. Those that control the creation of particular types of content, such as ATContentTypes: Add Image. These are usually set at the Plone site root to apply to the whole site, but they may be managed by workflow on folders. 
Those that control site-wide policy. For example, the Portlets: Manage portlets permission is usually given to the Manager and Site Administrator roles, because this is typically an operation that only the site's administrator will need to perform. These permissions are usually set at the site root and acquired everywhere else. Occasionally, it may be appropriate to change them here. For example, the Add portal member permission controls whether anonymous users can add themselves (that is, "join" the site) or not. Note that there is a control panel setting for this, under Security in Site Setup.

Developers can create new permissions when necessary, although they are encouraged to reuse the ones in Products.CMFCore.permissions if possible. The most commonly used permissions are:

Permission | Constant | Zope Toolkit name | Controls
Access contents information | AccessContentsInformation | zope2.AccessContentsInformation | Low-level Zope permission controlling access to objects
View | View | zope2.View | Access to the main view of a content object
List folder contents | ListFolderContents | cmf.ListFolderContents | Ability to view folder listings
Modify portal content | ModifyPortalContent | cmf.ModifyPortalContent | Edit operations on content
Change portal events | N/A | N/A | Modification of the Event content type (largely a historical accident)
Manage portal | ManagePortal | cmf.ManagePortal | Operations typically restricted to the Manager role
Request review | RequestReview | cmf.RequestReview | Ability to submit content for review in many workflows
Review portal content | ReviewPortalContent | cmf.ReviewPortalContent | Ability to approve or reject items submitted for review in many workflows
Add portal content | AddPortalContent | cmf.AddPortalContent | Ability to add new content in a folder. Note that most content types have their own "add" permissions. In this case, both this permission and the type-specific permission are required.

The Constant column in the preceding table refers to constants defined in Products.CMFCore.permissions. The Zope Toolkit name column lists the equivalent names found in ZCML files in packages such as Products.CMFCore, Products.Five, and (at least from Zope 2.13) AccessControl. They contain directives such as:

<permission id="zope2.View" title="View" />

This is how permissions are defined in the Zope Toolkit. Custom permissions can also be created in this way. Sometimes, we will use ZCML directives which expect a permission attribute, such as:

<browser:page name="some-view" class=".someview.SomeView" for="*" permission="zope2.View" />

The permission attribute here must be a Zope Toolkit permission ID. The title of the <permission /> directive is used to map the Zope 2-style permissions (which are really just strings) to Zope Toolkit permission IDs. To declare that a particular view or other resource defined in ZCML should not be subject to security checks, we can use the special permission zope.Public.
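To tie the two directives together, here is a hedged sketch that declares a custom permission and uses it to protect a view; the permission id, title, and view names are invented for illustration:

<configure
    xmlns="http://namespaces.zope.org/zope"
    xmlns:browser="http://namespaces.zope.org/browser">

  <!-- The title maps the Zope Toolkit id to the
       Zope 2-style permission string -->
  <permission
      id="optilux.ViewReports"
      title="Optilux: View reports"
      />

  <!-- Only principals with a role granting "Optilux: View reports"
       may access @@staff-reports -->
  <browser:page
      name="staff-reports"
      class=".reports.ReportsView"
      for="*"
      permission="optilux.ViewReports"
      />

</configure>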
Professional Plone 4 Development: Developing a Site Strategy

Packt
26 Aug 2011
9 min read
Creating a policy package

Our policy package is just a package that can be installed as a Plone add-on. We will use a GenericSetup extension profile in this package to turn a standard Plone installation into one that is configured to our client's needs. We could have used a full-site GenericSetup base profile instead, but by using an extension profile we avoid replicating the majority of the configuration that is done by Plone.

We will use ZopeSkel to create an initial skeleton for the package, which we will call optilux.policy, adopting the optilux.* namespace for all Optilux-specific packages. In your own code, you should of course use a different namespace. It is usually a good idea to base this on the owning organization's name, as we have done here. Note that package names should be all lowercase, without spaces, underscores, or other special characters.

If you intend to release your code into the Plone Collective, you can use the collective.* namespace, although other namespaces are allowed too. The plone.* namespace is reserved for packages in the core Plone repository, where the copyright has been transferred to the Plone Foundation. You should normally not use it without first coordinating with the Plone Framework Team.

We go into the src/ directory of the buildout and run the following command:

$ ../bin/zopeskel plone optilux.policy

This uses the plone ZopeSkel template to create a new package called optilux.policy, and will ask us a few questions. We will stick with "easy" mode for now, and answer True when asked whether to register a GenericSetup profile.

Note that ZopeSkel will download some packages used by its local command support. This may mean the initial bin/zopeskel command takes a little while to complete, and it assumes that we are currently connected to the internet.

A local command is a feature of PasteScript, upon which ZopeSkel is built. ZopeSkel registers an addcontent command, which can be used to insert additional snippets of code, such as view registrations or new content types, into the initial skeleton generated by ZopeSkel. We will not use this feature, preferring instead to retain full control over the code we write and to avoid the potential pitfalls of code generation. If you wish to use this feature, you will either need to install ZopeSkel and PasteScript into the global Python environment, or add PasteScript to the ${zopeskel:eggs} option in buildout.cfg, so that you get access to the bin/paster command.

Run bin/zopeskel --help from the buildout root directory for more information about ZopeSkel and its options.

Distribution details

Let us now take a closer look at what ZopeSkel has generated for us. We will also consider which files should be added to version control, and which files should be ignored.

| Item | Version control | Purpose |
| --- | --- | --- |
| setup.py | Yes | Contains instructions for how Setuptools/Distribute (and thus Buildout) should manage the package's distribution. We will make a few modifications to this file later. |
| optilux.policy.egg-info/ | Yes | Contains additional distribution configuration. In this case, ZopeSkel keeps track of which template was used to generate the initial skeleton using this file. |
| *.egg | No | ZopeSkel downloads a few eggs that are used for its local command support (Paste, PasteScript, and PasteDeploy) into the distribution directory root. If you do not intend to use the local command support, you can delete these. You should not add them to version control. |
| README.txt | Yes | If you intend to release your package to the public, you should document it here. PyPI requires that this file be present in the root of a distribution. It is also read into the long_description variable in setup.py. PyPI will attempt to render this as reStructuredText markup (see http://docutils.sourceforge.net/rst.html). |
| docs/ | Yes | Contains additional documentation, including the software license (which should be the GNU General Public License, version 2, for any packages that import directly from any of Plone's GPL-licensed packages) and a change log. |

Changes to setup.py

Before we can progress, we will make a few modifications to setup.py. Our revised file looks similar to the following code (the changes are described below):

```python
from setuptools import setup, find_packages
import os

version = '2.0'

setup(name='optilux.policy',
      version=version,
      description="Policy package for the Optilux Cinemas project",
      long_description=open("README.txt").read() + "\n" +
                       open(os.path.join("docs", "HISTORY.txt")).read(),
      # Get more strings from
      # http://pypi.python.org/pypi?%3Aaction=list_classifiers
      classifiers=[
          "Framework :: Plone",
          "Programming Language :: Python",
      ],
      keywords='',
      author='Martin Aspeli',
      author_email='[email protected]',
      url='http://optilux-cinemas.com',
      license='GPL',
      packages=find_packages(exclude=['ez_setup']),
      namespace_packages=['optilux'],
      include_package_data=True,
      zip_safe=False,
      install_requires=[
          'setuptools',
          'Plone',
      ],
      extras_require={
          'test': ['plone.app.testing'],
      },
      entry_points="""
      # -*- Entry points: -*-
      [z3c.autoinclude.plugin]
      target = plone
      """,
      # setup_requires=["PasteScript"],
      # paster_plugins=["ZopeSkel"],
      )
```

The changes are as follows:

- We have added an author name, e-mail address, and updated project URL. These are used as metadata if the distribution is ever uploaded to PyPI. For internal projects, they are less important.
- We have declared an explicit dependency on the Plone distribution, that is, on Plone itself. This ensures that when our package is installed, so is Plone. We will shortly update our main working set to contain only the optilux.policy distribution; this dependency ensures that Plone is installed as part of our application policy.
- We have then added a [test] extra, which adds a dependency on plone.app.testing. We will install this extra as part of the test working set described below, making plone.app.testing available in the test runner (but not in the Zope runtime).
- Finally, we have commented out the setup_requires and paster_plugins options. These are used to support ZopeSkel local commands, which we have decided not to use. The main reason to comment them out is to avoid having Buildout download these additional dependencies into the distribution root directory, saving time and reducing the number of files in the build. Also note that, unlike distributions downloaded by Buildout in general, there is no "offline" support for these options.

Changes to configure.zcml

We will also make a minor change to the generated configure.zcml file, removing the line:

```xml
<five:registerPackage package="." initialize=".initialize" />
```

This directive is used to register the package as an old-style Zope 2 product. The main reason to do this is to ensure that the initialize() function is called on Zope startup. This may be a useful hook, but most of the time it is superfluous, and it requires additional test setup that can make tests more brittle.
We can also remove the (empty) initialize() function itself from the optilux/policy/__init__.py file, effectively leaving the file blank. Do not delete __init__.py, however, as it is needed to make this directory into a Python package.

Updating the buildout

Before we can use our new distribution, we need to add it to our development buildout. We will consider two scenarios:

- The distribution is under version control in a repository module separate from the development buildout itself. This is the recommended approach.
- The distribution is not under version control, or is kept inside the version control module of the buildout itself. The example source code that comes with this article is distributed as a simple archive, so it uses this approach.

Given the approach we have taken to separating out our buildout configuration into multiple files, we must first update packages.cfg to add the new package. Under the [sources] section, we could add:

```ini
[sources]
optilux.policy = svn https://some-svn-server/optilux.policy/trunk
```

Or, for distributions without a separate version control URL:

```ini
[sources]
optilux.policy = fs optilux.policy
```

We must also update the main and test working sets in the same file:

```ini
[eggs]
main =
    optilux.policy
test =
    optilux.policy [test]
```

Finally, we must tell Buildout to automatically add this distribution as a develop egg when running the development buildout. This is done near the top of buildout.cfg:

```ini
auto-checkout =
    optilux.policy
```

We must rerun buildout to let the changes take effect:

$ bin/buildout

We can test that the package is now available for import using the zopepy interpreter:

```
$ bin/zopepy
>>> from optilux import policy
>>>
```

The absence of an ImportError tells us that this package is now known to the Zope instance in the buildout. To be absolutely sure, you can also open the bin/instance script in a text editor (bin/instance-script.py on Windows) and look for a line in the sys.path mangling referencing the package.

Working sets and component configuration

It is worth deliberating a little more on how Plone and our new policy package are loaded and configured.

At build time:

1. Buildout installs the [instance] part, which will generate the bin/instance script.
2. The plone.recipe.zope2instance recipe calculates a working set from its eggs option, which in our buildout references ${eggs:main}. This contains exactly one distribution: optilux.policy.
3. optilux.policy in turn depends on the Plone distribution, which causes Buildout to install all of Plone.

Here, we have made a policy decision to depend on a "big" Plone distribution that includes some optional add-ons. We could also have depended on the smaller Products.CMFPlone distribution (which works for Plone 4.0.2 onwards), which includes only the core of Plone, perhaps adding specific dependencies for the add-ons we are interested in.

When declaring actual dependencies used by distributions that contain reusable code instead of just policy, you should always depend on the packages you import from or otherwise depend on, and no more. That is, if you import from Products.CMFPlone, you should depend on this, and not on the Plone meta-egg (which itself contains no code, but only declares dependencies on other distributions, including Products.CMFPlone). To learn more about the rationale behind the Products.CMFPlone distribution, see http://dev.plone.org/plone/ticket/10877.

At runtime:

1. The bin/instance script starts Zope.
2. Zope loads the site.zcml file (parts/instance/etc/site.zcml) as part of its startup process.
3. This automatically includes the ZCML configuration for packages in the Products.* namespace, including Products.CMFPlone, Plone's main package.
4. Plone uses z3c.autoinclude to automatically load the ZCML configuration of packages that opt in via the z3c.autoinclude.plugin entry point with target = plone. The optilux.policy distribution contains such an entry point, so it will be configured, along with any packages or files it explicitly includes from its own configure.zcml file.
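For the extension profile itself to show up as an installable add-on, the policy package's configure.zcml must register it with GenericSetup. ZopeSkel generates such a registration when we answer True to the profile question; the following is a minimal sketch of what it typically looks like, using the standard genericsetup:registerProfile directive from Products.GenericSetup. The title and description strings here are illustrative placeholders, not the exact generated values.

```xml
<configure
    xmlns="http://namespaces.zope.org/zope"
    xmlns:genericsetup="http://namespaces.zope.org/genericsetup"
    i18n_domain="optilux.policy">

  <!-- Register the extension profile so it appears in the add-on installer;
       the actual import steps (e.g. rolemap.xml) live in profiles/default/. -->
  <genericsetup:registerProfile
      name="default"
      title="Optilux site policy"
      description="Configures a standard Plone site for Optilux Cinemas"
      directory="profiles/default"
      provides="Products.GenericSetup.interfaces.EXTENSION"
      />

</configure>
```

Because the profile is an extension (provides="...EXTENSION") rather than a base profile, installing it layers our configuration on top of Plone's own, which is exactly the approach described at the start of this article.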
Creating an Enterprise Portal with Oracle WebCenter 11g PS3

Packt
01 Aug 2011
9 min read
Introduction

An enterprise portal is a framework that allows users to interact with different applications in a secure way. There is a single point of entry, and the security of the composite applications is transparent to the user. Each user should be able to create their own view of the portal. A portal is highly customizable, which means that most of the work will be done at runtime. An administrator should be able to create and manage pages, users, roles, and so on. Users can choose whatever content they want to see on their pages, so they can personalize the portal to their needs.

In this article, you will learn some basics about the WebCenter Portal application. Later chapters will go into further detail on most of the subjects covered here. It is intended as an introduction to the WebCenter Portal.

Preparing JDeveloper for WebCenter

When you want to build WebCenter portals, JDeveloper is the preferred IDE. JDeveloper has a lot of built-in features that will help us build rich enterprise applications, including many wizards that help with building the complex configuration files.

Getting ready

You will need to install JDeveloper before you can start with this recipe. JDeveloper is the IDE from Oracle and can be downloaded from the following link: http://www.oracle.com/technetwork/developer-tools/jdev/downloads/index.html.

You will need to download JDeveloper 11.1.1.5 Studio Edition, not JDeveloper 11.1.2, because that version is not yet compatible with WebCenter. The Studio Edition is the full-blown edition with all the bells and whistles. It has all the libraries for building an ADF application, which is the basis for a WebCenter application.

How to do it...

1. Open the JDeveloper that was installed. Choose Default Role.
2. From JDeveloper, open the Help menu and select Check for updates.
3. Click Next on the welcome screen.
4. Make sure all the Update Centers are selected and press Next.
5. In the available updates, enter WebCenter and select all the found updates. Press Next to start the download.
6. After the download is finished, you will need to restart JDeveloper.
7. You can check that the updates have been installed by opening the About window from the Help menu. Select the Extensions tab and scroll down to the WebCenter extensions.

How it works...

When you first open JDeveloper, you need to select a role. The role determines the functionality you have in JDeveloper. When you select the default role, all the functionality will be available.

By installing the WebCenter extensions, you are installing all the necessary JAR files containing the libraries for the WebCenter framework. JDeveloper will have three additional application templates:

- Portlet Producer Application: This template allows you to create a producer based upon the new JSR 286 standard.
- WebCenter Portal Application: A template that creates a preconfigured portal with ADF and WebCenter technology.
- WebCenter Spaces Taskflow Customizations: This application is configured for customizing the application and service taskflows used with the WebCenter Spaces application.

The extensions also include the taskflows and data controls for each of the WebCenter services that we will be integrating into our portal.
Creating a WebCenter portal

In this release of WebCenter, we can easily build enterprise portals by using the WebCenter Portal application template in JDeveloper. This template contains a preconfigured portal that we can modify to our needs. It has basic administration pages and security.

Getting ready

For this recipe, you need the latest version of JDeveloper with the WebCenter extensions installed, as described in the previous recipe.

How to do it...

1. Select New from the File menu.
2. Select Application in the General section on the left-hand side.
3. Select WebCenter Portal Application from the list on the right. Press OK.
4. The Create WebCenter Portal Application dialog will open. In the dialog, you will need to complete a few steps in order to create the portal application:
   - Application Name: Specify the application name, directory, and application package prefix.
   - Project Name: Specify the name and directory of the portal project. At this stage, you can also add additional libraries to the project.
   - Project Java Settings: Specify the default package, Java source directory, and output directory.
   - Project WebCenter Settings: In this step, you can request to build a default portal environment. When you disable the "Configure the application with standard Portal features" checkbox, you will have an empty project with only the reference to the WebCenter libraries, but no default portal will be configured. You can also let JDeveloper create a special test-role, so you can test your application.
5. Press the Finish button to create the application.

You can test the portal without needing to develop anything. Just start the integrated WebLogic server, right-click the portal project, and select Run from the context menu. When you start the WebLogic server for the first time, it can take a few minutes. This is because JDeveloper will create the WebLogic domain for the integrated WebLogic server. Because we have installed the WebCenter extensions, JDeveloper will also extend the domain with the WebCenter libraries.

How it works...

When the portal has started, you will see a single page, the Home page, which contains a login form at the top right corner. When you log in with the default WebLogic user, you should have complete administration rights. The default user of the integrated WebLogic server is weblogic with password weblogic1.

When logged in, you should see an Administration link. This links to the Administration Console, where you can manage the resources of your portal, such as pages, resource catalogs, and navigations. In the Administration Console you have five tabs:

- Resources: In this tab, you manage all the resources of your portal. The resources are divided into three parts:
  - Structure: Resources concerning the structure of your portal, such as pages, templates, navigations, and resource catalogs.
  - Look and Layout: Skins, styles, templates for the content presenter, and mashup styles.
  - Mashups: Mashups are taskflows created at runtime. You can also manage data controls in the mashup section.
- Services: In this tab, you can manage the services that are configured for your portal.
- Security: In this tab, you can add users or roles and define their access to the portal application.
- Configuration: In this tab, you can configure default settings for the portal, such as the default page template, default navigation, default resource catalog, and default skin.
- Propagation: This tab is only visible when you create a specific URL connection. From this tab, you can propagate changes from your staging environment to your production environment.

There's more...

The WebCenter Portal application creates a preconfigured portal for us, with a basic structure and page navigation for building complex portals. JDeveloper has created a lot of files for us. Here is an overview of the most important ones:

Templates

The default portal has two page templates. They can be found in the Web Content/oracle/Webcenter/portalapp/pagetemplates folder:

- pageTemplate_globe.jspx: This is the default template used for a page.
- pageTemplate_swooshy.jspx: This is the same template as the globe template, but with another header image.

You can of course create your own templates.

Pages

JDeveloper will create four pages for us. These can be found in the Web Content/oracle/Webcenter/portalapp/pages folder:

- error.jspx: This page looks like the login page and is designed to show error messages upon login.
- home.jspx: This is an empty page that uses the globe template.
- login.jspx: This is the login page. It is also based upon the globe template.

Resource catalogs

By default, JDeveloper will create a default resource catalog. This can be found in the Web Content/oracle/Webcenter/portalapp/catalogs folder. In this folder, you will find the default-catalog.xml file, which represents the resource catalog. When you open this file, you will notice that JDeveloper has a design view for it, making it easier to manage and edit the catalog without knowing the underlying XML. Another file in the catalogs folder is catalog-registry.xml. This is the set of components that the user can use when creating a resource catalog at runtime.

Navigations

By using navigations, you allow users to find content on different pages, taskflows, or even external pages. By defining different navigations, you allow users to have a personalized navigation that fits their needs. By default, you will find one navigation model in the Web Content/oracle/Webcenter/portalapp/navigations folder: default-navigation-model.xml. It contains the page hierarchy and a link to the administration page. This model is not used in the template, but it is there as an example. You can use this model and modify it, or you can create your own models. You will also find navigation-registry.xml. This file contains the items that can be used to create a navigation model at runtime.

Page hierarchy

With the page hierarchy, you can create parent-child relationships between pages. It allows you to create multi-level navigation from existing pages. Within the page hierarchy, you can set the security of each node: you can define whether a child node inherits the security from its parent or has its own security. By default, JDeveloper will create the pages.xml page hierarchy in the Web Content/oracle/Webcenter/portalapp/pagehierarchy folder. This hierarchy has only one node, the Home page.
MooTool: Understanding the Foundational Basics

Packt
01 Aug 2011
9 min read
MooTroduction

MooTools was conceived by Valerio Proietti and copyrighted under the MIT License in 2006. We send a great round of roaring applause to Valerio for creating the Moo.FX (My Object Oriented Effects) plugin for Prototype, a JavaScript abstraction library. That work gave life to an arguably more effects-oriented (and highly extensible) abstraction layer of its own: MooTools (My Object Oriented Tools).

Knowing our MooTools version

This recipe is an introduction to the different MooTools versions and how to be sure we are coding against the right one.

Getting ready

Not all versions are equal, nor are they backwards compatible! The biggest switch in compatibility came between MooTools 1.1 and MooTools 1.2. This minor version change caused clamor in the community, given the rather major changes included. In our experience, 1.2 and 1.3 MooTools scripts play well together, while 1.0 and 1.1 scripts tend to be agreeable as well. However, Moo's popularity spiked with version 1.1, and well-used scripts written with 1.0, like MooTabs, were upgraded to 1.1 when released. The exact note in Google Libraries for the version difference between 1.1 and 1.2 reads:

Since 1.1 versions are not compatible with 1.2 versions, specifying version "1" will map to the latest 1.1 version (currently 1.1.2).

MooTools 1.1.1 has inline comments, which make the uncompressed version about 180% larger than version 1.2.5 and 130% larger than the 1.3.0 release. When compressed with YUI compression, 1.1 and 1.2 weigh in at about 65K, while 1.3.0 with the CSS3 selectors is a modest 85K. In the code snippets, the compressed versions are denoted with a c.js file ending.

Two great additions in 1.3.0 that account for most of the difference in size from 1.2.5 are Slick.Parser and Slick.Finder. We may not need CSS3 parsing, so we may download the MooTools Core with only the particular class or classes we need. Browse http://mootools.net/core/ and pick and choose the classes needed for the project. Note that the best practice is to download all modules during development and pare down to what is needed when taking an application into production.

When we are more concerned with functionality than with performance and have routines that require backwards compatibility with MooTools 1.1, we can download the 1.2.5 version with the 1.1 classes from the MooTools download page at http://mootools.net/download.

The latest MooTools version as of authoring is 1.3.0. All scripts within this cookbook are built and tested using MooTools version 1.3.0 as hosted by Google Libraries.

How to do it...

This is the basic HTML framework within which all recipes will be launched:

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<title>MooTools Recipes</title>
<meta http-equiv="content-type" content="text/html;charset=utf-8"/>
```

Note that the portion above is necessary but is not repeated in the other recipes to save space. Please do always include a DOCTYPE and opening HTML, HEAD, TITLE, and META tags for the HTTP-EQUIV and CONTENT.

```html
<script type="text/javascript" src="mootools-1.3.0.js"></script>
</head>
<body>
<noscript>Your browser has JavaScript disabled. Please use industry
best practices for coding in JavaScript; letting users know they are
missing out is crucial!</noscript>
<script type="text/javascript">
// best practice: ALWAYS include a NOSCRIPT tag!
var mooversion = MooTools.version;
var msg = 'version: ' + mooversion;
document.write(msg);
// just for fun:
var question = 'Use MooTools version ' + msg + '?';
var yes = 'It is as you have requested!';
var no = "Please change the mootools source attribute in HTML->head->script.";
// give 'em ham
alert((confirm(question) ? yes : no));
</script>
</body>
</html>
```

How it works...

Inclusion of external libraries like MooTools is usually handled within the HEAD element of the HTML document. The NOSCRIPT tag will only be read by browsers that have JavaScript disabled. The SCRIPT tag may be placed directly within the layout of the page.

There's more...

Using the XHTML doctype (or any doctype, for that matter) allows your HTML to validate, helps browsers parse your pages faster, and helps the Document Object Model (DOM) behave consistently. When our HTML does not validate, our JavaScript errors will be more random and difficult to solve.

Many seasoned developers have settled upon a favorite doctype. This allows them to become familiar with the ad nauseam of cross-browser oddities associated with that particular doctype. To delve further into doctypes, quirks mode, and other HTML specification esoterica, the heavily trafficked http://www.quirksmode.org/css/quirksmode.html provides an easy-to-follow and complete discourse.

Finding MooTools documentation both new and old

Browsing http://mootools.net/docs/core will afford us the opportunity to use the version of our choice. The 1.2/1.3 demonstrations at the time of writing are expanding nicely. Tabs in the demonstrations at http://mootools.net/demos display each of the important elements of the demonstration.

MooTools had a major split at the minor revision number of 1.1. If working on a legacy project that still implements the deprecated MooTools version 1.1, take a shortcut to http://docs111.mootools.net.

Copying the demonstrations line for line, without studying them to see how they work, may afford our project the opportunity to become malicious code.

Using Google Library's MooTools scripts

Let Google maintain the core files and provide the bandwidth to serve them.

Getting ready

Google is leading the way in helping MooTools developers save time in the arenas of development, maintenance, and hosting by working together with the MooTools developers to host and deliver compressed and uncompressed versions of MooTools to our website visitors. Hosting on their servers eliminates the resources required to host, the bandwidth required to deliver, and the developer time required to maintain a requested, fully patched, and up-to-date version.

Usually we link to a minor version of a library to prevent major version changes that could cause unexpected behavior in our production code.

The Google API keys that the documentation requires for using Google Library can be obtained easily and quickly at: http://code.google.com/apis/libraries/devguide.html#sign_up_for_an_api_key.

How to do it...

Once you have the API key, use the script tag method to include MooTools. For more information on loading the JavaScript API, see http://code.google.com/apis/libraries/devguide.html#load_the_javascript_api_and_ajax_search_module.

```html
<!--script type="text/javascript" src="mootools-1.3.0.js"></script-->
<!-- we've got ours commented out so that we can use google's here: -->
<script src="https://www.google.com/jsapi?key=OUR-KEY-HERE"
        type="text/javascript"></script>
<!-- the full src path is truncated for display here -->
<script src="https://ajax.googleapis.com/.../mootools-yui-compressed.js"
        type="text/javascript"></script>
</head>
<body>
<noscript>JavaScript is disabled.</noscript>
<script type="text/javascript">
var mooversion = MooTools.version;
var msg = 'MooTools version: ' + mooversion + ' from Google';
// show the msg in two different ways (just because)
document.write(msg);
alert(msg);
</script>
```

Using google.load(), which is available to us when we include the Google Library API, we can make the inclusion code a bit more readable. See the line below that includes the string jsapi?key=. We replace OUR-KEY-HERE with our API key, which is tied to our domain name so that Google can contact us if they detect a problem:

```html
<!--script type="text/javascript" src="mootools-1.3.0.js"></script-->
<!-- we've got ours commented out so that we can use google's here: -->
<script src="https://www.google.com/jsapi?key=OUR-KEY-HERE"
        type="text/javascript"></script>
<script type="text/javascript">
google.load("mootools", "1.2.5");
</script>
</head>
<body>
<noscript>JavaScript is disabled.</noscript>
<script type="text/javascript">
var mooversion = MooTools.version;
var msg = 'MooTools version: ' + mooversion + ' from Google';
// show the msg in two different ways (just because)
document.write(msg);
alert(msg);
</script>
```

How it works...

There are several competing factors that go into the decision to use a direct load or a dynamic load via google.load():

- Are we loading more than one library?
- Are our visitors using other sites that include this dynamic load?
- Can our page benefit from parallel loading?
- Do we need to provide a secure environment?

There's more...

If we are only loading one library, a direct load or local load will almost assuredly benchmark faster than a dynamic load. However, this can be untrue when browser accelerator techniques, most specifically browser caching, come into play. If our web server is sending no-cache headers, then a dynamic load, or even a direct load, as opposed to a local load, will allow the browser to cache the Google code and reduce our page load time.

If our page is making a number of requests to our web server, it may be possible to have the browser waiting on a response from the server. In this instance, parallel loading from another website can allow those requests that the browser can handle in parallel to continue during such a delay.

We also need to look at how secure websites function with non-secure, external includes. Many of us are familiar with the errors that can occur when a secure website is loaded with an external (or internal) resource that is not provided via HTTPS. The browser can pop up an alert message that can be very concerning and lose the confidence of our visitors. Also, it is common to have some sort of negative indicator in the address bar or in the status bar that alerts visitors that not all resources on the page are secure. Avoid mixing HTTP and HTTPS resources; if using a secure site, opt for a local load of MooTools or use Google Library over HTTPS.
haXe 2: The Dynamic Type and Properties

Packt
28 Jul 2011
7 min read
Freeing yourself from the typing system

The goal of the Dynamic type is to allow one to free oneself from the typing system. In fact, when you define a variable as being of the Dynamic type, the compiler won't make any kind of type checking on this variable.

Time for action – Assigning to Dynamic variables

When you declare a variable as Dynamic, you will be able to assign any value to it at compile time. So you can actually compile this code:

```haxe
class DynamicTest
{
    public static function main()
    {
        var dynamicVar : Dynamic;
        dynamicVar = "Hello";
        dynamicVar = 123;
        dynamicVar = {name: "John", lastName: "Doe"};
        dynamicVar = new Array<String>();
    }
}
```

The compiler won't mind, even though you are assigning values of different types to the same variable!

Time for action – Assigning from Dynamic variables

You can assign the content of any Dynamic variable to a variable of any type. Indeed, we generally say that the Dynamic type can be used in place of any type, and that a variable of type Dynamic is indeed of any type. So, with that in mind, you can now see that you can write and compile this code:

```haxe
class DynamicTest
{
    public static function main()
    {
        var dynamicVar : Dynamic;
        var year : Int;
        dynamicVar = "Hello";
        year = dynamicVar;
    }
}
```

So here, even though we are indeed assigning a String to a variable typed as Int, the compiler won't complain. But you should keep in mind that this is only at compile time! If you abuse this possibility, you may get some strange behavior!

Field access

A Dynamic variable has an infinite number of fields, all of Dynamic type. That means you can write the following:

```haxe
class DynamicTest
{
    public static function main()
    {
        var dynamicVar : Dynamic;
        dynamicVar = {};
        dynamicVar.age = 12;          // age is Dynamic
        dynamicVar.name = "Benjamin"; // name is Dynamic
    }
}
```

Note that whether this code will work at runtime is highly dependent on the runtime you're targeting.

Functions in Dynamic variables

It is also possible to store functions in Dynamic variables and to call them:

```haxe
class DynamicTest
{
    public static function main()
    {
        var dynamicVar : Dynamic;
        dynamicVar = function (name : String)
        {
            trace("Hello " + name);
        };
        dynamicVar("Benjamin");

        var dynamicVar2 : Dynamic = {};
        dynamicVar2.sayBye = function (name : String)
        {
            trace("Bye " + name);
        };
        dynamicVar2.sayBye("Benjamin");
    }
}
```

As you can see, it is possible to assign functions to a Dynamic variable, or even to one of its fields, and then to call them as you would any function. Again, even though this code will compile, its success at runtime depends on your target.

Parameterized Dynamic class

You can parameterize the Dynamic class to slightly modify its behavior. When parameterized, every field of a Dynamic variable will be of the given type. Let's see an example:

```haxe
class DynamicTest
{
    public static function main()
    {
        var dynamicVar : Dynamic<String>;
        dynamicVar = {};
        dynamicVar.name = "Benjamin"; // name is a String
        dynamicVar.age = 12;          // Won't compile since age is a String
    }
}
```

In this example, dynamicVar.name and dynamicVar.age are of type String; therefore, this example will fail to compile on the line assigning 12 to age, because we are trying to assign an Int to a String.

Classes implementing Dynamic

A class can implement Dynamic, parameterized or not.

Time for action – Implementing a non-parameterized Dynamic

When a class implements a non-parameterized Dynamic, one will be able to access an infinite number of fields on an instance. All fields that are not declared in the class will be of type Dynamic. So, for example:

```haxe
class User implements Dynamic
{
    public var name : String;
    public var age : Int;
    //...
}

//...

var u = new User();        // u is of type User
u.name = "Benjamin";       // String
u.age = 22;                // Int
u.functionrole = "Author"; // Dynamic
```

What just happened?

As you can see, the functionrole field is not declared in the User class, so it is of type Dynamic. In fact, when you try to access a field that's not declared in the class, a function named resolve will be called with the name of the accessed property. You can then return the value you want. This can be very useful for implementing some magic things.

Time for action – Implementing a parameterized Dynamic

When implementing a parameterized Dynamic, you get the same behavior as with a non-parameterized Dynamic, except that the fields that are not declared in the class will be of the type given as a parameter. Let's take almost the same example, but with a parameterized Dynamic:

```haxe
class User implements Dynamic<String>
{
    public var name : String;
    public var age : Int;
    //...
}

//...

var u = new User();        // u is of type User
u.name = "Benjamin";       // String
u.age = 22;                // Int
u.functionrole = "Author"; // String because of the type parameter
```

What just happened?

As you can see here, fields that are not declared in the class are of type String, because we gave String as the type parameter.

Using a resolve function when implementing Dynamic

Now we are going to use what we've just learned. We are going to implement a Component class that will be instantiated from a configuration file. A component will have properties and metadata. Such properties and metadata are not pre-determined, which means that the properties' names and values will be read from the configuration file.

Each line of the configuration file will hold the name of the property or metadata, its value, and a 0 if it's a property (otherwise it will be a metadata). Each of these fields will be separated by a space. The last constraint is that we should be able to read the value of a property or metadata by using dot-notation.

Time for action – Writing our Component class

As you may have guessed, we will begin with a very simple Component class. All it has to do at first is hold two Hashes: one for metadata, the other one for properties.

```haxe
class Component
{
    public var properties : Hash<String>;
    public var metadata : Hash<String>;

    public function new()
    {
        properties = new Hash<String>();
        metadata = new Hash<String>();
    }
}
```

It is that simple at the moment. As you can see, we do not implement access via dot-notation yet. We will do it later, but the class won't be very complicated even with support for this notation.

Time for action – Parsing the configuration file

We are now going to parse our configuration file to create a new instance of the Component class. In order to do that, we are going to create a ComponentParser class. It will contain two functions:

- parseConfigurationFile, to parse a configuration file and return an instance of Component.
- writeConfigurationFile, which will take an instance of Component and write data to a file.

Let's see how our class should look at the moment (this example will only work on neko):

```haxe
class ComponentParser
{
    /**
     * This function takes a path to a configuration file
     * and returns an instance of Component.
     */
    public static function parseConfigurationFile(path : String)
    {
        var stream = neko.io.File.read(path, false); // open our file for reading in character mode
        var comp = new Component();                  // create a new instance of Component

        while (!stream.eof())                        // while we're not at the end of the file
        {
            var str = stream.readLine();             // read one line from the file
            var fields = str.split(" ");             // split the string using space as delimiter

            if (fields[2] == "0")
            {
                comp.properties.set(fields[0], fields[1]); // set the key<->value in the properties Hash
            }
            else
            {
                comp.metadata.set(fields[0], fields[1]);   // set the key<->value in the metadata Hash
            }
        }

        stream.close();
        return comp;
    }
}
```

It's not that complicated, and you would use the same kind of method if you were going to use an XML file.

Time for action – Testing our parser

Before continuing any further, we should test our parser in order to be sure that it works as expected. To do this, we can use the following configuration file (name, value, and the property/metadata flag, separated by spaces):

```
name MyComponent 1
text HelloWorld 0
```

If everything works as expected, we should get a name metadata with the value MyComponent, and a property named text with the value HelloWorld. Let's write a simple test class:

```haxe
class ComponentImpl
{
    public static function main() : Void
    {
        var comp = ComponentParser.parseConfigurationFile("conf.txt");
        trace(comp.properties.get("text"));
        trace(comp.metadata.get("name"));
    }
}
```

If everything went well, running this program should produce the following output:

```
ComponentImpl.hx:6: HelloWorld
ComponentImpl.hx:7: MyComponent
```
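The dot-notation access promised earlier hinges on the resolve function described above. As a preview, here is one possible sketch of how Component could support it, assuming we make the class implement Dynamic<String> and decide that properties win over metadata on a name clash; the article may well structure the final version differently.

```haxe
class Component implements Dynamic<String>
{
    public var properties : Hash<String>;
    public var metadata : Hash<String>;

    public function new()
    {
        properties = new Hash<String>();
        metadata = new Hash<String>();
    }

    // Called for any field access not declared on the class,
    // so comp.text and comp.name work via dot-notation.
    public function resolve(field : String) : String
    {
        if (properties.exists(field))
            return properties.get(field);
        return metadata.get(field); // null if the field is unknown
    }
}
```

With this in place, the test class could simply read trace(comp.text) and trace(comp.name) instead of going through the Hashes directly, and would print the same output as before.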