
How-To Tutorials - CMS & E-Commerce

jQuery UI Themes: Using the ThemeRoller

Packt
04 Aug 2011
8 min read
jQuery UI Themes Beginner's Guide: create new themes for your jQuery site with this step-by-step guide.

ThemeRoller basics

Before we start using the ThemeRoller application to design and build our own themes, we'll take a quick look at what makes it such a handy tool. There is a lot more to the ThemeRoller than simply changing themes; we also use it to build them. You can think of it as an IDE for jQuery UI themes.

Instant feedback

What makes the ThemeRoller such a powerful development tool is the speed with which you get feedback on changes made to the theme design. Any change made in the ThemeRoller is instantly reflected in the sample widgets provided on the page. For instance, if I were to change a font setting, that change would show up immediately in the sample widgets. There is no need to update the application you're building to see the results of small adjustments to theme style settings.

The same is true of prepackaged themes in the ThemeRoller gallery. Selecting a theme applies it to the same sample widgets, so you get immediate feedback, which is very helpful when deciding on a prepackaged theme. Seeing right away how a theme looks on jQuery UI widgets may dissuade you from using it, or it may close the deal.

The idea behind this feedback mechanism is a sped-up development cycle: eliminating several steps when developing anything, themes included, is a welcome feature.

The dev tool

The ThemeRoller dev tool is a simple bookmarklet for Firefox that brings the entire ThemeRoller application into any page with jQuery UI widgets. The benefit of the dev tool is that it lets you see theme changes immediately in the context of the application you're building. If you use the ThemeRoller application from the jQuery UI website, you can only see changes as they apply to the sample widgets provided.
Using the dev tool on your own pages gives you a better idea of what theme changes will look like on a finished product. There are some limitations, though: if you're developing your application locally rather than on a development server, you can't use the dev tool due to security restrictions. The dev tool is better suited to viewing changes to themes, or entirely different themes, on a deployed user interface. That said, if you're designing a user interface with several collaborators, you probably have a remote development server, and in that scenario the dev tool lives up to its name.

Portability

The ThemeRoller application is portable in more ways than one. The dev tool for Firefox allows us to use the application within any jQuery UI application, which means we can design and tweak our themes as we build the widgets. This portability between applications means we can build a single theme that works for a suite of applications, or a product line, if we're so inclined.

We can also use the ThemeRoller directly from the jQuery UI website. This is handy if we don't have any widgets built yet, or if we're trying jQuery UI for the first time and just want to browse the wide selection of prepackaged themes. Whichever approach you take, the application is the same and always consistent, as it is a hosted application. You don't need to concern yourself with installing an IDE for theme authors to collaborate with; the ThemeRoller is available wherever they are.

ThemeRoller gallery

It is nice to have a wide variety of prepackaged themes to choose from, but it isn't all that helpful if you can't see how they look. The ThemeRoller application has a gallery where we can not only browse prepackaged themes but also take them for a test drive. This section is about using the ThemeRoller gallery to view themes and get a feel for the variety available to us.
Viewing themes

The ThemeRoller application doesn't hide anything about the prepackaged themes in the gallery. When we preview a theme, we get to see how it looks when applied to widgets. The gallery even gives us a thumbnail of each theme, a bird's-eye view of its palette. So if you see a lot of black and you're looking for something bright, you don't need to bother selecting the theme to see how the widgets look with it.

Time for action - previewing a theme

It's time for us to preview a jQuery UI theme before we actually download it, to get an idea of what a theme in the ThemeRoller gallery will look like when applied to widgets:

1. Point your web browser to http://jqueryui.com/themeroller/.
2. Select the Gallery tab in the ThemeRoller section on the right-hand side.
3. Move your mouse pointer over any theme in the gallery. A visual indicator will be displayed.
4. Select the theme thumbnail.

What just happened?

We've just selected a theme to preview from the ThemeRoller gallery. You'll notice that all the sample widgets to the right are instantly updated with the new theme. If we changed our theme selection, the sample widgets would once again be updated with the change.

You'll also notice that once you make a theme selection, the URL in your address bar becomes long and ugly. These are the individual theme settings for the chosen theme, being passed to the ThemeRoller page with the sample widgets. Note that the theme selection on the left-hand side of the page isn't preserved. This is because we're passing individual theme settings, not the name of the theme itself (for instance, theme=darkness). We'll see why this distinction is important in a little bit.

Downloading themes

Once you've selected a theme from the gallery and you're happy with how it looks, it is time to download it and use it with your jQuery UI project.
Downloading a theme is easy: each prepackaged theme has a download button that takes you to the jQuery UI download page. If we wanted to, we could download all the themes in a single package to experiment with locally. That would also eliminate the need for the ThemeRoller application, which you probably don't want to do.

Time for action - downloading a theme

The gallery is a nice way to preview a theme, but now we want to use it in our application. To do that, we need to download it, which is similar to downloading the jQuery UI toolkit:

1. Point your web browser to http://jqueryui.com/themeroller/.
2. Select the Gallery tab in the ThemeRoller section on the left-hand side.
3. Find a theme you wish to download.
4. Click on the Download button underneath the theme thumbnail. This will bring you to the jQuery UI download page. Notice that your chosen theme is selected on the right-hand side of the page.
5. Click on the Download button to download your theme.

What just happened?

We've just selected a prepackaged theme from the ThemeRoller gallery and downloaded it. In fact, you just downloaded jQuery UI again; the difference is that the downloaded ZIP archive contains the theme you selected from the gallery. The same principles apply for extracting the archive and using the theme with your jQuery UI application.

The downside is that if you're downloading a theme, chances are you already have a jQuery UI application under development, in which case downloading the jQuery UI JavaScript files again is redundant. There is no easy way around this; it is one of the drawbacks of having such a useful tool available to us, and a minor one at that. If you're only interested in the theme, you simply need to extract the theme folder from the ZIP archive and copy it to your jQuery UI application directory. You then need to update the path in your HTML to include the appropriate CSS file.
You'll also notice that after clicking on the Download button from the theme gallery, you're brought to the download page with an ugly URL; that is, you'll see something like /download/?themeParams=%3FffDefault instead of just /download. This is a requirement of the ThemeRoller application that allows developers to edit existing themes or roll their own. Without these parameters, we wouldn't be able to download themes we have made changes to.

The jQuery UI download page also includes an Advanced Settings section that is hidden by default, because you rarely need to use it. It allows you to set the CSS scope for your theme, which is useful if you're using multiple themes in a single user interface. This isn't a recommended practice, though; the key idea behind jQuery UI themes is consistency. The advanced settings also let you change the name of the downloaded theme folder. This can be useful if you plan on changing your theme later, but you can always rename the folder after downloading it.
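Updating the path simply means pointing the theme stylesheet link in your page at the copied folder. The folder and file names below are assumptions for illustration ("smoothness" stands in for whichever theme you downloaded, and the exact file name depends on your jQuery UI version):

```html
<!-- Hypothetical paths: replace "smoothness" and the version number with
     the theme folder you extracted from the downloaded ZIP archive. -->
<link rel="stylesheet" type="text/css"
      href="smoothness/jquery-ui-1.8.x.custom.css" />
```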


Play Framework: Binding and Validating Objects and Rendering JSON Output

Packt
03 Aug 2011
4 min read
Binding and validating objects using custom binders

Read the Play documentation about binding and validating objects. Validation is extremely important in any application, and it has to fulfill several tasks. First, it should not allow the user to enter wrong data: after filling in a form, the user should get positive or negative feedback on whether the entered content was valid. The same goes for storing data: before persisting anything, you should make sure it will not pose problems later, so the model and the view layer must ensure that only valid data is stored or shown in the application. The perfect place to put such validation is the controller.

As an HTTP request is basically composed of a list of keys and values, the web framework needs a certain logic to create real objects out of these arguments, so that the application developer does not have to do this tedious task. You can find the source code of this example in the chapter2/binder directory.

How to do it...
Create or reuse a class you want to have created from the submitted item, as shown in the following code snippet:

```java
public class OrderItem {
    @Required public String itemId;
    public Boolean hazardous;
    public Boolean bulk;
    public Boolean toxic;
    public Integer piecesIncluded;

    public String toString() {
        return MessageFormat.format("{0}/{1}/{2}/{3}/{4}",
                itemId, piecesIncluded, bulk, toxic, hazardous);
    }
}
```

Create an appropriate form snippet for the index.xml template:

```
#{form @Application.createOrder()}
<input type="text" name="item" /><br />
<input type="submit" value="Create Order">
#{/form}
```

Create the controller:

```java
public static void createOrder(@Valid OrderItem item) {
    if (validation.hasErrors()) {
        render("@index");
    }
    renderText(item.toString());
}
```

Create the type binder that does this magic:

```java
@Global
public class OrderItemBinder implements TypeBinder<OrderItem> {

    @Override
    public Object bind(String name, Annotation[] annotations, String value,
            Class actualClass) throws Exception {
        OrderItem item = new OrderItem();

        List<String> identifier = Arrays.asList(value.split("-", 3));

        if (identifier.size() >= 3) {
            item.piecesIncluded = Integer.parseInt(identifier.get(2));
        }

        if (identifier.size() >= 2) {
            int c = Integer.parseInt(identifier.get(1));
            item.bulk = (c & 4) == 4;
            item.hazardous = (c & 2) == 2;
            item.toxic = (c & 1) == 1;
        }

        if (identifier.size() >= 1) {
            item.itemId = identifier.get(0);
        }

        return item;
    }
}
```

How it works...

With the exception of the binder definition, all of the preceding code has been seen earlier. By working with the Play samples you already know how to handle objects as arguments in controllers. This specific example creates a complete object out of a simple String. The mapping is done by naming the form value (`<input ... name="item" />`) the same as the controller argument (`createOrder(@Valid OrderItem item)`) and using the controller argument's class type in the binder definition (`OrderItemBinder implements TypeBinder<OrderItem>`).
The binder splits the string by hyphens, uses the first value for the item ID and the last for piecesIncluded, and checks certain bits of the middle value in order to set the Boolean properties. Using curl you can verify the behavior very easily:

```
curl -v -X POST --data "item=Foo-3-5" localhost:9000/order
Foo/5/false/true/true
```

Here Foo is the item ID and 5 is the piecesIncluded property; in the flags value 3, the lowest two bits are set, so the hazardous and toxic properties are set while bulk is not.

There's more...

The TypeBinder feature was introduced in Play 1.1 and is documented at http://www.playframework.org/documentation/1.2/controllers#custombinding.

Using type binders on objects

Currently, it is only possible to create objects out of one single string with a TypeBinder. If you want to create one object out of several submitted form values, you will have to create your own plugin as a workaround. You can read more about this at http://groups.google.com/group/play-framework/browse_thread/thread/62e7fbeac2c9e42d.

Be careful with JPA when using model classes

As soon as you try to use model classes with a type binder, you will stumble upon strange behavior: your objects will only ever have null or default values when freshly instanced. The JPA plugin already performs a binding and overwrites any binding you are doing.
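The split-and-bit-check logic of the binder can be exercised on its own. The following is a minimal sketch in plain Java, outside Play; the class and method names are made up for illustration, and it assumes the full three-part "itemId-flags-pieces" form of the input:

```java
public class OrderItemDecoder {

    // Mirrors the binder above: "itemId-flags-pieces" becomes the same
    // itemId/piecesIncluded/bulk/toxic/hazardous string that
    // OrderItem.toString() produces.
    public static String decode(String value) {
        String[] identifier = value.split("-", 3);
        String itemId = identifier[0];
        int c = Integer.parseInt(identifier[1]);
        int piecesIncluded = Integer.parseInt(identifier[2]);
        boolean bulk = (c & 4) == 4;       // third-lowest bit
        boolean hazardous = (c & 2) == 2;  // second-lowest bit
        boolean toxic = (c & 1) == 1;      // lowest bit
        return itemId + "/" + piecesIncluded + "/" + bulk + "/"
                + toxic + "/" + hazardous;
    }

    public static void main(String[] args) {
        // Same input as the curl call above.
        System.out.println(decode("Foo-3-5")); // Foo/5/false/true/true
    }
}
```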


Play Framework: Introduction to Writing Modules

Packt
28 Jul 2011
11 min read
Play Framework Cookbook

In order to get to know more modules, you should not hesitate to take a closer look at the steadily increasing number of modules available at the Play framework modules page at http://www.playframework.org/modules. When beginning to understand modules, you should not start with modules implementing a persistence layer, as they are often the more complex ones.

In order to clear up some confusion, you should be aware of the definition of two terms used throughout the article, as these two words with an almost identical meaning come up most of the time. The first word is module and the second is plugin. Module means the little application which serves your main application, whereas plugin represents a piece of Java code which hooks into the plugin mechanism inside Play.

Creating and using your own module

Before you can implement your own functionality in a module, you should know how to create and build one. This recipe takes a look at the module's structure and should give you a good start. The source code of the example is available at examples/chapter5/module-intro.

How to do it...

It is pretty easy to create a new module. Go into any directory and enter the following:

```
play new-module firstmodule
```

This creates a directory called firstmodule and copies a set of predefined files into it. These files are the skeleton from which you create a package ready for use in other Play applications. Now you can run play build-module and your module is built. The build step compiles your Java code, creates a JAR file from it, and packs a complete ZIP archive of all the data in the module: Java libraries, documentation, and all configuration files. This archive can be found in the dist/ directory of the module after building. You can just press Return on the command line when you are asked for the required Play framework version for this module.
Now it is simple to include the created module in any Play framework application. Just put this in the conf/dependencies.yml file of your application (do not put this in your module!):

```
require:
    - play
    - customModules -> firstmodule

repositories:
    - playCustomModules:
        type:     local
        artifact: "/absolute/path/to/firstmodule/"
        contains:
            - customModules -> *
```

The next step is to run play deps. This should show you the inclusion of your module. You can check whether the modules/ directory of your application now includes a file modules/firstmodule, whose content is the absolute path of your module directory; in this example it would be /path/to/firstmodule. To check whether you are able to use your module, enter the following:

```
play firstmodule:hello
```

This should return Hello in the last line. In case you are wondering where this comes from, it is part of the commands.py file in your module, which was automatically created when you created the module via play new-module. Alternatively, just start your Play application and check for output such as the following during application startup:

```
INFO ~ Module firstmodule is available (/path/to/firstmodule)
```

The next step is to fill the currently non-functional module with a real Java plugin, so create src/play/modules/firstmodule/MyPlugin.java:

```java
public class MyPlugin extends PlayPlugin {
    public void onApplicationStart() {
        Logger.info("Yeeha, firstmodule started");
    }
}
```

You also need to create the file src/play.plugins:

```
1000:play.modules.firstmodule.MyPlugin
```

Now you need to compile the module and create a JAR from it, by entering play build-module as shown earlier. After this step there will be a lib/play-firstmodule.jar file available, which will be loaded automatically when you include the module in your application configuration file. Furthermore, when starting your application now, you will see the following entry in the application log file.
If you are running in development mode, do not forget to issue a first request to make sure all parts of the application are loaded:

```
INFO ~ Yeeha, firstmodule started
```

How it works...

After getting the most basic module to work, it is time to get to know the structure of a module. After the module has been created, the filesystem layout looks like this:

```
app/controllers/firstmodule
app/models/firstmodule
app/views/firstmodule
app/views/tags/firstmodule
build.xml
commands.py
conf/messages
conf/routes
lib
src/play/modules/firstmodule/MyPlugin.java
src/play.plugins
```

As you can see, a module basically resembles a normal Play application. There are directories for models, views, tags, and controllers, as well as a configuration directory, which can include translations or routes. Note that there should never be an application.conf file in a module.

There are two more files in the root directory of the module. The build.xml file is an Ant file which helps to compile the module source and create a JAR file out of the compiled classes; the JAR is put into the lib/ directory and named after the module. The commands.py file is a Python file which allows you to add special command-line directives, such as the play firstmodule:hello command we just saw when executing the Play command-line tool. The lib/ directory should also be used for additional JARs, as all JAR files in this directory are automatically added to the classpath when the module is loaded.

Now the only missing piece is the src/ directory. It includes the source of your module, most likely the logic and the plugin source. Furthermore, it features a very important file called play.plugins, which is empty right after creating the module. For each plugin you write in the src/ directory, it should have one line consisting of two entries: one is the class to load as a plugin, the other is a priority. The priority defines the order in which to load all the plugins of an application.
The lower the priority, the earlier the plugin gets loaded.

If you take a closer look at the PlayPlugin class, which MyPlugin inherits from, you will see a lot of methods that you can override. Here is a list of some of them, with a short description of each:

- onLoad(): Executed directly after the plugin has been loaded. Note that this does not mean the whole application is ready!
- bind(): There are two bind() methods with different parameters. These methods allow a plugin to create a real object out of arbitrary HTTP request parameters or even the body of a request. If you return anything other than null from this method, the returned value is used as a controller parameter whenever a controller is executed.
- getStatus(), getJsonStatus(): Allow you to return an arbitrary string representing the status of the plugin or statistics about its usage. You should always implement this for production-ready plugins in order to simplify monitoring.
- enhance(): Performs bytecode enhancement.
- rawInvocation(): Can be used to intercept any incoming request and change its logic. This is used in the CorePlugin to intercept the @kill and @status URLs, and in the DocViewerPlugin to provide all the existing documentation when in test mode.
- serveStatic(): Allows programmatic interception of the serving of static resources. A common example can be found in the SASS module, where access to a .sass file is intercepted and the file is precompiled.
- loadTemplate(): Can be used to inject arbitrary templates into the template loader, for example to load templates from a database instead of the filesystem.
- detectChange(): Only active in development mode. If you throw an exception in this method, the application will be reloaded.
- onApplicationStart(): Executed on application start and, in development mode, on every reload of your application. You should initialize stateful things here, such as connections to databases or expensive object creation. Be aware that you have to take care of thread-safe objects and method invocations yourself. For an example, check the DBPlugin, which initializes the database connection and its connection pool. Other examples are the JPAPlugin, which initializes the persistence manager, and the JobPlugin, which uses this hook to start jobs on application start.
- onApplicationReady(): Executed after all plugins are loaded, all classes are precompiled, and every initialization is finished. The application is now ready to serve requests.
- afterApplicationStart(): Currently almost identical to onApplicationReady().
- onApplicationStop(): Executed during a graceful shutdown. This should be used to free resources which were opened while the plugin was starting. A standard example is to close network connections to the database, remove stale filesystem entries, or clear caches.
- onInvocationException(): Executed when an uncaught exception is thrown during controller invocation. The ValidationPlugin uses this method to inject an error cookie into the current request.
- invocationFinally(): Executed after a controller invocation, regardless of whether an exception was thrown or not. This should be used to close request-specific resources, such as a connection which is only active during request processing.
- beforeActionInvocation(): Executed before controller invocation. Useful for validation, where it is used by Play as well. You could also put additional objects into the render arguments here. Several plugins also set up variables inside thread locals to make sure they are thread-safe.
- onActionInvocationResult(): Executed when the controller action throws a result. It allows inspecting or changing the result afterwards. You can also change the headers of a response at this point, as no data has been sent to the client yet.
- onInvocationSuccess(): Executed upon successful execution of a complete controller method.
- onRoutesLoaded(): Executed when routes are loaded from the routes files. If you want to add some routes programmatically, do it in this method.
- onEvent(): A poor man's listener for events, which can be sent using the postEvent() method.
- onClassesChange(): Only relevant in testing or development mode. The argument of this method is a list of freshly changed classes after a recompilation. This allows a plugin to detect whether certain resources need to be refreshed or restarted. If your application is a complete shared-nothing architecture, you should not have any problems. Test first, before implementing this method.
- addTemplateExtensions(): Allows you to add further TemplateExtension classes, which do not inherit from JavaExtensions, as those are added automatically. At the time of this writing, neither a plugin nor anything in the core Play framework made use of this, with the exception of the Scala module.
- compileAll(): If the standard compiler inside Play is not sufficient to compile application classes, you can override this method. This is currently only done inside the Scala plugin and should not be necessary in regular applications.
- routeRequest(): Can be used to redirect requests programmatically. You could, for example, redirect any URL with a certain prefix or treat POST requests differently. You have to render some result if you decide to override this method.
- modelFactory(): Allows returning a factory object to create different model classes. This is needed primarily inside the different persistence layers. It was introduced in Play 1.1 and is currently only used by the JPA plugin and by the Morphia plugin.
The model factory returned here implements a basic and generic interface for getting data, which is meant to be independent from the persistence layer. It is also used to provide more generic fixtures support.

- afterFixtureLoad(): Executed after a Fixtures.load() method has been run. It could be used to free or check some resources after adding batch data via fixtures.

Cleaning up after creating your module

When creating a module via play new-module, you should remove any unnecessary cruft from your new module, as most of it is often not needed. Remove all unneeded directories and files to make understanding the module as easy as possible.

Supporting the Eclipse IDE

As play eclipsify does not currently work for modules, you need to set things up manually. A trick to get around this is to create and eclipsify a normal Play application, then configure the build path and use "Link source" to add the src/ directory of the plugin.


Apache Solr: Spellchecker, Statistics, and Grouping Mechanism

Packt
27 Jul 2011
5 min read
Computing statistics for the search results

Imagine a situation where you want to compute some basic statistics about the documents in the results list. For example, in an e-commerce shop you may want to show the minimum and the maximum price of the documents that were found for a given query. Of course, you could fetch all the documents and compute it yourself, but why not let Solr do it for you? It can, and this recipe will show you how to use that functionality.

How to do it...

Let's start with the index structure (just add this to the fields section of your schema.xml file):

```xml
<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="name" type="text" indexed="true" stored="true" />
<field name="price" type="float" indexed="true" stored="true" />
```

The example data file looks like this:

```xml
<add>
  <doc>
    <field name="id">1</field>
    <field name="name">Book 1</field>
    <field name="price">39.99</field>
  </doc>
  <doc>
    <field name="id">2</field>
    <field name="name">Book 2</field>
    <field name="price">30.11</field>
  </doc>
  <doc>
    <field name="id">3</field>
    <field name="name">Book 3</field>
    <field name="price">27.77</field>
  </doc>
</add>
```

Let's assume that we want our statistics to be computed for the price field. To do that, we send the following query to Solr:

```
http://localhost:8983/solr/select?q=name:book&stats=true&stats.field=price
```

The response Solr returns should look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">0</int>
    <lst name="params">
      <str name="q">name:book</str>
      <str name="stats">true</str>
      <str name="stats.field">price</str>
    </lst>
  </lst>
  <result name="response" numFound="3" start="0">
    <doc>
      <str name="id">1</str>
      <str name="name">Book 1</str>
      <float name="price">39.99</float>
    </doc>
    <doc>
      <str name="id">2</str>
      <str name="name">Book 2</str>
      <float name="price">30.11</float>
    </doc>
    <doc>
      <str name="id">3</str>
      <str name="name">Book 3</str>
      <float name="price">27.77</float>
    </doc>
  </result>
  <lst name="stats">
    <lst name="stats_fields">
      <lst name="price">
        <double name="min">27.77</double>
        <double name="max">39.99</double>
        <double name="sum">97.86999999999999</double>
        <long name="count">3</long>
        <long name="missing">0</long>
        <double name="sumOfSquares">3276.9851000000003</double>
        <double name="mean">32.62333333333333</double>
        <double name="stddev">6.486118510583508</double>
      </lst>
    </lst>
  </lst>
</response>
```

As you can see, in addition to the standard results list, there is an additional section. Now let's see how it works.

How it works...

The index structure is pretty straightforward: it contains three fields, one holding the unique identifier (the id field), one holding the name (the name field), and one holding the price (the price field). The file that contains the example data is simple too, so I'll skip discussing it.

The query is the interesting part. In addition to the q parameter, we have two new parameters. The first one, stats=true, tells Solr that we want to use the StatsComponent, the component which calculates the statistics for us. The second parameter, stats.field=price, tells the StatsComponent which field to use for the calculation. In our case, we told Solr to use the price field. Now let's look at the result returned by Solr.
As you can see, the StatsComponent added an additional section to the results. This section contains the statistics generated for the field we asked about. The following statistics are available:

- min: The minimum value found in the field for the documents that matched the query
- max: The maximum value found in the field for the documents that matched the query
- sum: The sum of all values in the field for the documents that matched the query
- count: How many non-null values were found in the field for the documents that matched the query
- missing: How many documents that matched the query didn't have any value in the specified field
- sumOfSquares: The sum of all values squared in the field for the documents that matched the query
- mean: The average of the values in the field for the documents that matched the query
- stddev: The standard deviation of the values in the field for the documents that matched the query

You should also remember that you can specify multiple stats.field parameters to calculate statistics for different fields in a single query. Please be careful when using this component on multi-valued fields; it can sometimes be a performance bottleneck.
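To see where the numbers in the response come from, the statistics for the three example prices can be recomputed by hand. This is an illustrative sketch, not Solr code; the class name is made up, and note that the stddev in the response matches the sample standard deviation (dividing by n - 1):

```java
import java.util.Arrays;

public class PriceStats {

    // Returns {min, max, sum, mean, stddev} for the given values,
    // matching what Solr's StatsComponent reported above.
    public static double[] stats(double[] values) {
        double min = Arrays.stream(values).min().getAsDouble();
        double max = Arrays.stream(values).max().getAsDouble();
        double sum = Arrays.stream(values).sum();
        double mean = sum / values.length;
        // Sample standard deviation: divide by (n - 1), not n.
        double squaredDiffs = Arrays.stream(values)
                .map(v -> (v - mean) * (v - mean)).sum();
        double stddev = Math.sqrt(squaredDiffs / (values.length - 1));
        return new double[] { min, max, sum, mean, stddev };
    }

    public static void main(String[] args) {
        // The three book prices from the example data above.
        double[] s = stats(new double[] { 39.99, 30.11, 27.77 });
        System.out.printf("min=%.2f max=%.2f sum=%.2f mean=%.4f stddev=%.4f%n",
                s[0], s[1], s[2], s[3], s[4]);
        // min=27.77 max=39.99 sum=97.87 mean=32.6233 stddev=6.4861
    }
}
```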


jQuery UI Themes: Theme icons, Standalone Icons, and Icon States

Packt
27 Jul 2011
9 min read
jQuery UI Themes Beginner's Guide: create new themes for your jQuery site with this step-by-step guide.

What are theme icons?

In any user interface, we see icons all over the place. On your desktop, you see icons representing the various application shortcuts as well as any files you've placed there. The window containing your web browser has icons for the maximize, minimize, and close actions. The benefit of using icons is that they're incredibly space-efficient, as long as they're descriptive. Using icons out of context defeats their purpose: you don't want a button with a "down arrow" icon in your toolbar, because it doesn't mean anything to the user. A button with a "trashcan" icon in the toolbar does make sense; it means "delete what I'm looking at". Another potentially harmful use is placing icons where a text description would better inform the user. For instance, displaying a "trashcan" button in the toolbar might confuse the user if there are several things displayed on the same page, even if they've selected something. In these scenarios, we're often better off using a combination of text and an icon.

The jQuery UI theming framework provides a large selection of icons we can use in our user interfaces. Some of these icons are already used in certain widgets; for instance, the accordion uses arrow icons by default. Not only are the icon graphics provided to us (we can even choose icon colors in the ThemeRoller application), but we also have powerful CSS classes with which to apply the icons. Using these classes, we can give existing jQuery UI widgets new icons, or we can place icons strategically in our application user interface where they prove helpful. Sometimes the provided icon set will only go so far; you'll find that at one point or another you need new icons that better reflect the concepts of your application domain.
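As a quick sketch of what those CSS classes look like in markup (this snippet is illustrative rather than from the original text; ui-icon-trash is one of the names in the standard jQuery UI icon set):

```html
<!-- A standalone theme icon: combine the generic ui-icon class
     with a specific ui-icon-* name from the framework's icon set. -->
<span class="ui-icon ui-icon-trash" title="Delete"></span>
```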
Time for action - preparing the example

It's time to set up an environment for the examples throughout the remainder of this article.

If you haven't already, download and extract the jQuery UI package into a directory called jqueryui from http://jqueryui.com/download.

At the same level as the jqueryui directory, create a new index.html file with the following content:

    <html>
    <head>
        <title>Creating Theme Icons</title>
        <link href="jqueryui/development-bundle/themes/base/jquery.ui.all.css" rel="stylesheet" type="text/css" />
        <script src="jqueryui/js/jquery-1.5.x.min.js" type="text/javascript"></script>
        <script src="jqueryui/js/jquery-ui-1.8.x.custom.min.js" type="text/javascript"></script>
        <script src="index.js" type="text/javascript"></script>
    </head>
    <body style="font-size: 10px;">
        <button id="my_button">Click Me</button>
    </body>
    </html>

At the same level as the jqueryui directory, create a new index.js file with the following content:

    $(document).ready(function(){
        $("#my_button").button();
    });

Open index.html in your web browser; you should see something similar to the following:

Icons in widgets

Several jQuery UI widgets have icons from the theming framework embedded inside them. We use icons inside widgets to decorate them and to add meaning. Icons are similar to interaction cues; they help guide the user through the application workflow by giving subtle hints. Before we start modifying the icons used in our theme, we need to take a closer look at the role they play in widgets.
Time for action - default widget icons

Let's take a look at some of the icons displayed in jQuery UI widgets by default:

Edit the index.html file created earlier and replace the content with the following:

    <html>
    <head>
        <title>Creating Theme Icons</title>
        <link href="jqueryui/development-bundle/themes/base/jquery.ui.all.css" rel="stylesheet" type="text/css" />
        <script src="jqueryui/js/jquery-1.5.x.min.js" type="text/javascript"></script>
        <script src="jqueryui/js/jquery-ui-1.8.x.custom.min.js" type="text/javascript"></script>
        <script src="index.js" type="text/javascript"></script>
    </head>
    <body style="font-size: 10px;">
        <input id="my_datepicker" type="text" style="margin-bottom: 170px;"/>
        <div style="width: 40%;">
            <div id="my_accordion">
                <h3><a href="#">First</a></h3>
                <div>
                    <p>First paragraph</p>
                    <p>Second paragraph</p>
                    <p>Third paragraph</p>
                </div>
                <h3><a href="#">Second</a></h3>
                <div></div>
                <h3><a href="#">Third</a></h3>
                <div></div>
            </div>
        </div>
    </body>
    </html>

Edit the index.js file created earlier and replace the content with the following:

    $(document).ready(function(){
        $("#my_accordion").accordion();
        $("#my_datepicker").datepicker();
    });

Reload index.html in your web browser. You should see something similar to the following:

What just happened?

We've just created two widgets - a date-picker and an accordion. In index.html, we've created the markup for both widgets, and in index.js, we construct the jQuery UI components once the page has finished loading.

You'll notice that both widgets have icons in them by default. The date-picker widget has two arrows beside the month and year. The accordion widget has an arrow in each accordion section header. These widgets have icons by default because the icons help bring meaning to the widget succinctly. As a user, I can easily deduce the meaning of the arrows in the date-picker: move to the next or previous month. Additionally, the text "Next" and "Previous" are added to their respective icons as titles.
An alternate presentation of these controls is a text link or button: "next month", "previous month". This doesn't add any value; it only takes away from the space inside the widget.

The arrow icons' role in the accordion widget is even more obvious. The down arrow represents the currently expanded accordion section. The right arrows represent collapsed sections. Without these arrows, the user would eventually figure out how to work the accordion controls; however, the icons make it much more obvious in a non-intrusive way.

Time for action - setting widget icons

In addition to using the default icons in widgets, we have the option of setting the icons in certain widgets. Let's see how this is done:

Edit the index.html file created earlier and replace the content with the following:

    <html>
    <head>
        <title>Creating Theme Icons</title>
        <link href="jqueryui/development-bundle/themes/base/jquery.ui.all.css" rel="stylesheet" type="text/css" />
        <script src="jqueryui/js/jquery-1.5.x.min.js" type="text/javascript"></script>
        <script src="jqueryui/js/jquery-ui-1.8.x.custom.min.js" type="text/javascript"></script>
        <script src="index.js" type="text/javascript"></script>
    </head>
    <body style="font-size: 10px;">
        <button id="my_button" style="margin-bottom: 10px;">View</button>
        <div style="width: 40%;">
            <div id="my_accordion">
                <h3><a href="#">First</a></h3>
                <div>
                    <p>First paragraph</p>
                    <p>Second paragraph</p>
                    <p>Third paragraph</p>
                </div>
                <h3><a href="#">Second</a></h3>
                <div></div>
                <h3><a href="#">Third</a></h3>
                <div></div>
            </div>
        </div>
    </body>
    </html>

Edit the index.js file created earlier and replace the content with the following:

    $(document).ready(function(){
        $("#my_button").button({icons: {primary: "ui-icon-video"}});
        $("#my_accordion").accordion({icons: {header: "ui-icon-circle-triangle-e", headerSelected: "ui-icon-circle-triangle-s"}});
    });

Reload index.html in your web browser. You should see something similar to the following:

What just happened?
In index.html, we've created a button and an accordion widget. In index.js, we build the jQuery UI components of these widgets when the page has finished loading.

In the constructor of the button widget, we pass an object to the icons parameter. This object has a primary value of ui-icon-video, which gives our button a small video icon to the left of the text. Likewise, we pass an object to the icons parameter in the accordion constructor. This object has two values - header has a value of ui-icon-circle-triangle-e and headerSelected has a value of ui-icon-circle-triangle-s.

The jQuery UI theming framework has several arrow icons to choose from. The framework uses "compass notation" for arrow icon classes. Say you want an arrow that points up: you could use ui-icon-circle-triangle-n, as this arrow points "north".

The button widget has built-in support for pairing an icon with the button text in order to provide additional meaning. In our example, the text "View" isn't very meaningful to the user on its own. With the video icon beside it, it becomes very obvious what the button does.

What we've done with the accordion widget is slightly different. The accordion widget displays icons by default; we've just specified different ones. This is a pure embellishment of the accordion - we've found icons that we'd like to use and replaced the default ones. We might even want to replace them with our own icons that we create.
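The compass notation can be sketched as a tiny helper. This is purely illustrative - the function name compassIcon is our own invention - but the class names it produces are the framework's real arrow-icon classes:

```javascript
// Build a jQuery UI arrow-icon class name from a compass direction.
// Valid directions include "n", "e", "s", "w", "ne", "nw", "se", "sw".
function compassIcon(direction) {
    return "ui-icon-circle-triangle-" + direction;
}

// For example, an "up" (north) arrow:
console.log(compassIcon("n")); // "ui-icon-circle-triangle-n"
```

A widget constructor could then take, say, {icons: {primary: compassIcon("s")}} to show a down arrow.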
Alfresco 3: Writing and Executing Scripts

Packt
27 Jul 2011
4 min read
Alfresco 3 Cookbook: Over 70 recipes for implementing the most important functionalities of Alfresco

The reader can benefit from the previous article on Implementing Alfresco JavaScript API Functionalities.

Introduction

Alfresco, like any other enterprise open source framework, exposes a number of APIs, including the Alfresco SDK (Software Development Kit - a set of development tools that allows the creation of applications for a certain software package or framework) and the JavaScript API.

Available JavaScript APIs

The Alfresco JavaScript API exposes all important repository objects as JavaScript objects that can be used in a script file. The API follows the object-oriented programming model for well-known Alfresco concepts such as Nodes, Properties, Associations, and Aspects.

The JavaScript API is capable of performing several essential functions for the script developer, such as:

Create Node, Update Node: You can create, upload, or update files using these.
Check In/Check Out: You can programmatically check out and check in your content.
Access Rights Management/Permissioning: You can manage your content's security aspects.
Transformation: You can transform your content - for example, generating a PDF version of an MS Office document.
Tagging: Tagging APIs will help you tag your contents.
Classifying: You can categorize or classify your contents.
People: Using these APIs, you can handle all user- and group-related operations in your script, such as creating a new user, changing the password of a user, and so on.
Searching: One of the most important and powerful APIs exposed. You can perform Lucene-based or XPath-based search operations over your contents using these APIs.
Workflow: You can manage the tasks and workflows in your system using these APIs and services.
Thumbnail: Exposes APIs to manage thumbnail operations for various content items.
Node operations: You use these APIs to perform several node-related functions such as managing properties, managing aspects, copying, deleting, moving, and so on.

Thus, as you can see, pretty much everything can be done in a JavaScript file using these APIs. One thing is important, however: you should not confuse these with the usual JavaScript code you write for your HTML or JSP web pages. Those scripts are executed by your browser (that is, on the client side). The scripts you write using the Alfresco JavaScript API are not client-side JavaScript files - they do not get executed by your browser. Instead, they get executed on your server, and the browser has nothing to do with them. It is called a JavaScript API because the APIs are exposed using the ECMAScript model and syntax, and the programs you develop using them are written in the JavaScript language.

The JavaScript API model

Alfresco provides a number of objects in the JavaScript API - these are usually referred to as Root Scope Objects. These objects are your entry point into the repository. Each root-level object refers to a particular entity or functional point in the repository. For example, the userhome object refers to the home space node of the current user. Each of these objects presents a number of properties and functions, enabling the script writer to implement several different requirements. For example, the statement userhome.name will return the name of the home folder of the current user.

Some important and most frequently used root scope objects are:

companyhome: Returns the company home script node object
userhome: Returns the home folder node of the current user
person: Represents the current user's person object
space: Stands for the current space object
document: Returns the currently selected document
search: Offers fully functional search APIs
people: Encapsulates all functionality related to users, groups, roles, permissions, and so on
sites: Exposes the site service functionality
actions: Provides invocation methods for registered actions
workflow: Handles all functionality related to workflow implementation within the repository

Among these, companyhome, userhome, person, space, and document represent Alfresco node objects and allow access to the properties and aspects of the corresponding node. Each node object provides a number of APIs, collectively termed the ScriptNode API. The others - search, people, sites, workflow, and actions - expose several methods that help you implement specific business requirements. For example, if you want to write a script that searches for documents and content, you would use the search API. If you want to create a new user, the people API will help you.
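As a minimal sketch of how these root scope objects combine, consider the following server-side Alfresco JavaScript. It only runs inside the repository (for example, via Run Action), and the folder path used here is just an assumption borrowed from the examples that follow:

```javascript
// Server-side Alfresco JavaScript: companyhome, search, and logger are
// root scope objects provided by the repository, not browser globals.
var folder = companyhome.childByNamePath("InfoAxon/Chapter 8");
if (folder != null) {
    // Lucene search scoped to the children of the folder's path
    var results = search.luceneSearch("PATH:\"" + folder.qnamePath + "/*\"");
    logger.log("Found " + results.length + " items in " + folder.name);
}
```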
Implementing Alfresco JavaScript API Functionalities

Packt
27 Jul 2011
6 min read
Alfresco 3 Cookbook: Over 70 recipes for implementing the most important functionalities of Alfresco

The reader can benefit from the previous article on Alfresco 3: Writing and Executing Scripts.

Add/Change contents of a document

Let's explore some example JavaScript. In the following example scripts, you will be able to see the APIs and functionality in action.

Getting ready

We will store the JavaScript files in the Company Home>Data Dictionary>Scripts>Cookbook folder (this folder does not exist in your repository by default - create it). We will run the sample scripts against a document - Test_JS_API.txt - in the folder Company Home>InfoAxon>Chapter 8. I have uploaded this text file with a simple line of text: "A sample Document created to investigate in JavaScript API." and used our custom content type iabook:Product.

    if (document.hasPermission("Write")) {
        if (document.mimetype == "text/plain") {
            if (!document.hasAspect("cm:versionable"))
                document.addAspect("cm:versionable");
            var wcopy = document.checkout();
            var cnt = wcopy.content;
            cnt += "\r\nThis line is added using the JavaScript.";
            wcopy.content = cnt;
            wcopy.checkin("Sample Line added via JS");
        }
    }

How to do it...

Create a new script file in the Company Home>Data Dictionary>Scripts>Cookbook folder and save this code; let's say the file is named changecontent.js.

Execute the script using Run Action on the document Test_JS_API.txt in the Chapter 8 folder.

After running the script, a new version of the document will be created and a new line will be added to the document. Thus, each time you run the script for this document, a line will be appended at the end of the content and a new version will be created.

How it works...

The document object here automatically refers to the current document - in our case, Test_JS_API.txt, since we have executed the script against this document. First we have checked whether we have proper permission to perform the write operation on the document.
If the permission is there, we check the mimetype of the document, since the textual content writing operation is possible only for a few mimetypes, such as text, HTML, and so on. After that, we check whether the document is versionable; by default, any content you upload to the repository is not versionable, so we add the cm:versionable aspect in case it is not there already. Then we check out the document and append the line of text we want to the working copy. After updating the content, we check in the working copy with a commit comment. This comment is visible in the Version History of the document.

Though it is not always mandatory to check for the required permissions, it is good practice to confirm the relevant permissions; otherwise Alfresco may throw runtime errors in case the required permissions are not available.

Creating a backup copy of a document

In this recipe, we will write a script to create a backup copy of a particular document.

How to do it...

Create a new script file in the Company Home>Data Dictionary>Scripts>Cookbook folder and add the following code. Let's say the file is named createbackup.js.

    var back = space.childByNamePath("Backup");
    if (back == null && space.hasPermission("CreateChildren")) {
        back = space.createFolder("Backup");
    }
    if (back != null && back.hasPermission("CreateChildren")) {
        var copied = document.copy(back);
        if (copied != null) {
            var backName = "Backup of " + copied.name;
            copied.name = backName;
            copied.properties.description = "This is a Backup copy created by JS";
            copied.save();
        }
    }

Execute the script using Run Action on the document Test_JS_API.txt in the Chapter 8 folder.

After executing the script, a new folder named Backup will be created (if it does not exist already) and a copy of the document (named Backup of Test_JS_API.txt) will be created in the Backup folder.

How it works...

The space object here automatically refers to the current space.
In our case, it is Chapter 8, since we have executed the script against a document from this folder. The document object automatically refers to the current document - in our case, Test_JS_API.txt.

First we check whether a space named Backup already exists under Chapter 8. If not, we create it. This is the space where we intend to create our backup copy. After that, we check whether we have the proper permission to create a new document in the Backup folder; we do this by checking the CreateChildren permission. If we have the required permission, we create a copy of the document in the Backup folder. Then we change a few properties of the copied document - the name and description, for instance. After changing the properties, we save the changes.

Note that you do not need to save after changing the content of a document. However, you do need to save in case you change any property of the content item.

Adding a tag to a document

In this recipe, we will write a script that can be used to tag a document.

How to do it...

Create a new script file in the Company Home>Data Dictionary>Scripts>Cookbook folder and add the following code; let's say the file is named addtag.js.

    if (!document.hasAspect("cm:taggable"))
        document.addAspect("cm:taggable");
    document.addTag("test");

Execute the script using Run Action on the document Test_JS_API.txt in the Chapter 8 folder.

The document will now be taggable, and a new tag - test - has been added to the document. This is reflected in the property sheet of the document. Now, you can also add more tags using the property editor dialog.

How it works...

The code in this case is rather simple. As usual, the document object automatically refers to the current document - Test_JS_API.txt, since we have executed the script against this document.
First we check whether the document already has the cm:taggable aspect associated with it; if not, we add the aspect. Then it is just a matter of adding a tag - we added the tag test. You can also add multiple tags at a time using the addTags method (we used the addTag method to add a single tag in our example).
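A minimal sketch of the multi-tag variant mentioned above - again server-side Alfresco JavaScript, runnable only inside the repository, and the tag names here are made up for illustration:

```javascript
// Ensure the document can hold tags, then apply several at once.
if (!document.hasAspect("cm:taggable"))
    document.addAspect("cm:taggable");
document.addTags(["test", "cookbook", "sample"]);
```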
Using Additional Solr Functionalities

Packt
26 Jul 2011
9 min read
Apache Solr 3.1 Cookbook: Over 100 recipes to discover new ways to work with Apache's Enterprise Search Server

Getting more documents similar to those returned in the results list

Let's imagine a situation where you have an e-commerce library shop and you want to show users books similar to the ones they found while using your application. This recipe will show you how to do that.

How to do it...

Let's assume that we have the following index structure (just add this to your schema.xml file's fields section):

    <field name="id" type="string" indexed="true" stored="true" required="true" />
    <field name="name" type="text" indexed="true" stored="true" termVectors="true" />

The test data looks like this:

    <add>
     <doc>
      <field name="id">1</field>
      <field name="name">Solr Cookbook first edition</field>
     </doc>
     <doc>
      <field name="id">2</field>
      <field name="name">Solr Cookbook second edition</field>
     </doc>
     <doc>
      <field name="id">3</field>
      <field name="name">Solr by example first edition</field>
     </doc>
     <doc>
      <field name="id">4</field>
      <field name="name">My book second edition</field>
     </doc>
    </add>

Let's assume that our hypothetical user wants to find books that have "first" in their names. However, we also want to show him the similar books. To do that, we send the following query:

    http://localhost:8983/solr/select?q=name:edition&mlt=true&mlt.fl=name&mlt.mintf=1&mlt.mindf=1

The results returned by Solr are as follows:

    <?xml version="1.0" encoding="UTF-8"?>
    <response>
     <lst name="responseHeader">
      <int name="status">0</int>
      <int name="QTime">1</int>
      <lst name="params">
       <str name="mlt.mindf">1</str>
       <str name="mlt.fl">name</str>
       <str name="q">name:edition</str>
       <str name="mlt.mintf">1</str>
       <str name="mlt">true</str>
      </lst>
     </lst>
     <result name="response" numFound="1" start="0">
      <doc>
       <str name="id">3</str>
       <str name="name">Solr by example first edition</str>
      </doc>
     </result>
     <lst name="moreLikeThis">
      <result name="3" numFound="3" start="0">
       <doc>
        <str name="id">1</str>
        <str name="name">Solr Cookbook first edition</str>
       </doc>
       <doc>
        <str name="id">2</str>
        <str name="name">Solr Cookbook second edition</str>
       </doc>
       <doc>
        <str name="id">4</str>
        <str name="name">My book second edition</str>
       </doc>
      </result>
     </lst>
    </response>

Now let's see how it works.

How it works...

As you can see, the index structure and the data are really simple. One thing to notice is that the termVectors attribute is set to true in the name field definition. It is a nice thing to have when using the more like this component, and it should be used when possible in the fields on which we plan to use the component.

Now let's take a look at the query. As you can see, we added some additional parameters besides the standard q parameter. The parameter mlt=true says that we want to add the more like this component to the result processing. Next, the mlt.fl parameter specifies which fields we want to use with the more like this component; in our case, the name field. The mlt.mintf parameter tells Solr to ignore terms from the source document (the ones from the original result list) with a term frequency below the given value. In our case, we don't want to include terms with a frequency lower than 1.
The last parameter, mlt.mindf, tells Solr that terms appearing in fewer documents than the given value should be ignored. In our case, we want to consider words that appear in at least one document.

Finally, let's take a look at the search results. As you can see, there is an additional section (<lst name="moreLikeThis">) that is responsible for showing us the more like this component results. For each document in the results, one more similar section is added to the response. In our case, Solr added a section for the document with the unique identifier 3 (<result name="3" numFound="3" start="0">), and there were three similar documents found. The value of the name attribute is the unique identifier of the document that the similar documents were calculated for.

Presenting search results in a fast and easy way

Imagine a situation where you have to show a prototype of your brilliant search algorithm made with Solr to the client. The client doesn't want to wait another four weeks to see the potential of the algorithm; he/she wants to see it very soon. On the other hand, you don't want to show the raw XML results page. What to do then? This recipe will show you how you can use the Velocity response writer (a.k.a. Solritas) to present a prototype fast.

How to do it...
Let's assume that we have the following index structure (just add this to your schema.xml file's fields section):

    <field name="id" type="string" indexed="true" stored="true" required="true" />
    <field name="name" type="text" indexed="true" stored="true" />

The test data looks like this:

    <add>
     <doc>
      <field name="id">1</field>
      <field name="name">Solr Cookbook first edition</field>
     </doc>
     <doc>
      <field name="id">2</field>
      <field name="name">Solr Cookbook second edition</field>
     </doc>
     <doc>
      <field name="id">3</field>
      <field name="name">Solr by example first edition</field>
     </doc>
     <doc>
      <field name="id">4</field>
      <field name="name">My book second edition</field>
     </doc>
    </add>

We need to add the response writer definition. To do this, you should add the following to your solrconfig.xml file (actually, this should already be in the configuration file):

    <queryResponseWriter name="velocity" class="org.apache.solr.request.VelocityResponseWriter"/>

Now let's set up the Velocity response writer. To do that, we add the following section to the solrconfig.xml file (actually, this should already be in the configuration file):

    <requestHandler name="/browse" class="solr.SearchHandler">
     <lst name="defaults">
      <str name="wt">velocity</str>
      <str name="v.template">browse</str>
      <str name="v.layout">layout</str>
      <str name="title">Solr cookbook example</str>
      <str name="defType">dismax</str>
      <str name="q.alt">*:*</str>
      <str name="rows">10</str>
      <str name="fl">*,score</str>
      <str name="qf">name</str>
     </lst>
    </requestHandler>

Now you can run Solr and type the following URL address:

    http://localhost:8983/solr/browse

You should see the following page:

How it works...

As you can see, the index structure and the data are really simple, so I'll skip discussing this part of the recipe. The first thing in configuring the solrconfig.xml file is adding the Velocity Response Writer definition.
By adding it, we tell Solr that we will be using Velocity templates to render the view. Next, we add a search handler that uses the Velocity Response Writer. Of course, we could pass the parameters with every query, but we don't want to do that; we want Solr to add them automatically. Let's go through the parameters:

wt: The response writer type; in our case, the Velocity Response Writer.
v.template: The template that will be used for rendering the view; in our case, the template in the browse.vm file (the vm postfix is added by Velocity automatically). This parameter tells Velocity which file is responsible for rendering the actual page contents.
v.layout: The layout that will be used for rendering the view; in our case, the layout in the layout.vm file (the vm postfix is added by Velocity automatically). This parameter specifies how all the web pages rendered by Solritas will look.
title: The title of the page.
defType: The query parser that we want to use.
q.alt: The alternate query for the dismax parser, used in case the q parameter is not defined.
rows: The maximum number of documents that should be returned.
fl: The fields that should be listed in the results.
qf: The fields that should be searched.

Of course, the page generated by the Velocity Response Writer is just an example. To modify the page, you should modify the Velocity files, but this is beyond the scope of this article.

There's more...

If you are still using Solr 1.4.1 or 1.4, there is one more thing that can be useful.

Running Solritas on Solr 1.4.1 or 1.4

Because the Velocity Response Writer is a contrib module in Solr 1.4.1, we need to do the following operations to use it.
Copy the following libraries from the /contrib/velocity/src/main/solr/lib directory to the /lib directory of your Solr instance:

    apache-solr-velocity-1.4.dev.jar
    commons-beanutils-1.7.0.jar
    commons-collections-3.2.1.jar
    velocity-1.6.1.jar
    velocity-tools-2.0-beta3.jar

Then copy the /velocity directory (the directory itself, with its contents) from the code examples to your Solr configuration directory.
Apache Solr: Analyzing your Text Data

Packt
22 Jul 2011
13 min read
Apache Solr 3.1 Cookbook

Introduction

A type's behavior can be defined in the context of the indexing process, the context of the query process, or both. Furthermore, a type definition is composed of tokenizers and filters (both token filters and character filters). The tokenizer specifies how your data will be preprocessed after it is sent to the appropriate field; the analyzer operates on the whole of the data that is sent to the field. Types can have only one tokenizer. The result of the tokenizer's work is a stream of objects called tokens.

Next in the analysis chain are the filters. They operate on the tokens in the token stream, and they can do anything with the tokens - changing them, removing them, or, for example, making them lowercase. Types can have multiple filters.

One additional type of filter is the character filter. Character filters do not operate on tokens from the token stream; they operate on the data that is sent to the field, and they are invoked before the data is sent to the tokenizer.

This article will focus on data analysis and how to handle common day-to-day analysis questions and problems.

Storing additional information using payloads

Imagine that you have a powerful preprocessing tool that can extract information about all the words in the text. Your boss would like you to use it with Solr, or at least store the information it returns in Solr. So what can you do? We can use something called a payload to store that data. This recipe will show you how to do it.

How to do it...

I assume that we already have an application that takes care of recognizing the parts of speech in our text data. Now we need to add that information to the Solr index. To do that we will use payloads - metadata that can be stored with each occurrence of a term.

First of all, you need to modify the index structure.
For this, we will add the new field type to the schema.xml file:

    <fieldtype name="partofspeech" class="solr.TextField">
     <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="integer" delimiter="|"/>
     </analyzer>
    </fieldtype>

Now add the field definition part to the schema.xml file:

    <field name="id" type="string" indexed="true" stored="true" required="true" />
    <field name="text" type="text" indexed="true" stored="true" />
    <field name="speech" type="partofspeech" indexed="true" stored="true" multivalued="true" />

Now let's look at what the example data looks like (I named it ch3_payload.xml):

    <add>
     <doc>
      <field name="id">1</field>
      <field name="text">ugly human</field>
      <field name="speech">ugly|3 human|6</field>
     </doc>
     <doc>
      <field name="id">2</field>
      <field name="text">big book example</field>
      <field name="speech">big|3 book|6 example|1</field>
     </doc>
    </add>

Let's index our data. To do that, we run the following command from the exampledocs directory (put the ch3_payload.xml file there):

    java -jar post.jar ch3_payload.xml

How it works...

What information can a payload hold? It may hold any information that is compatible with the encoder type you define for the solr.DelimitedPayloadTokenFilterFactory filter. In our case, we don't need to write our own encoder - we will use the supplied one to store integers. We will use it to store the boost of the term: for example, nouns will be given a token boost value of 6, while adjectives will be given a boost value of 3.

First we have the type definition. We defined a new type in the schema.xml file, named partofspeech, based on the Solr text field (attribute class="solr.TextField"). Our tokenizer splits the given text on whitespace characters. Then we have a new filter which handles our payloads. The filter defines an encoder, which in our case is an integer (attribute encoder="integer").
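To make the delimited format concrete, here is a small illustrative helper - our own invention, not part of Solr - that turns tagged tokens into the term|payload pairs the speech field expects:

```javascript
// Build the value of a delimited-payload field: each token becomes
// "term|boost", and pairs are separated by a space (the character the
// whitespace tokenizer in the partofspeech type splits on).
function toPayloadValue(tokens) {
    return tokens
        .map(function (t) { return t.term + "|" + t.boost; })
        .join(" ");
}

console.log(toPayloadValue([
    { term: "ugly", boost: 3 },  // adjective
    { term: "human", boost: 6 }  // noun
])); // "ugly|3 human|6"
```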
Furthermore, the filter defines a delimiter which separates the term from the payload. In our case, the separator is the pipe character |.

Next we have the field definitions. In our example, we only define three fields:

Identifier
Text
Recognized speech part with payload

Now let's take a look at the example data. We have two simple fields: id and text. The one that we are interested in is the speech field. Look at how it is defined: it contains pairs made of a term, a delimiter, and a boost value - for example, book|6. In the example, I decided to boost nouns with a value of 6 and adjectives with a value of 3. I also decided that words that cannot be identified by my part-of-speech application will be given a boost of 1. Pairs are separated with a space character, which in our case is used to split those pairs; this is the task of the tokenizer which we defined earlier.

To index the documents, we use the simple post tools provided with the example deployment of Solr. To use them, we invoke the command shown in the example. The post tools will send the data to the default update handler found at the address http://localhost:8983/solr/update. The parameter is the file that is going to be sent to Solr. You can also post a list of files, not just a single one.

That is how you index payloads in Solr. In the 1.4.1 version of Solr, there is no further support for payloads - hopefully this will change. For now, you need to write your own query parser and similarity class (or extend the ones present in Solr) to use them.

Eliminating XML and HTML tags from the text

There are many real-life situations when you have to clean your data. Let's assume that you want to index web pages that your client sends you. You don't know anything about the structure of the pages - the one thing you do know is that you must provide a search mechanism that enables searching through the content of the pages.
Of course, you could index the whole page by splitting it on whitespace, but then you would probably hear the clients complain about the HTML tags being searchable, and so on. So, before we enable searching on the contents of the page, we need to clean the data. In this example, we need to remove the HTML tags. This recipe will show you how to do it with Solr.

How to do it...

Let's suppose our data looks like this (the ch3_html.xml file):

<add>
 <doc>
  <field name="id">1</field>
  <field name="html"><![CDATA[<html><head><title>My page</title></head><body><p>This is a <b>my</b> <i>sample</i> page</body></html>]]></field>
 </doc>
</add>

Now let's take care of the schema.xml file. First add the type definition to the schema.xml file:

<fieldType name="html_strip" class="solr.TextField">
 <analyzer>
  <charFilter class="solr.HTMLStripCharFilterFactory"/>
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
 </analyzer>
</fieldType>

And now, add the following to the field definition part of the schema.xml file:

<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="html" type="html_strip" indexed="true" stored="false"/>

Let's index our data. To do that, we run the following command from the exampledocs directory (put the ch3_html.xml file there):

java -jar post.jar ch3_html.xml

If there were no errors, you should see a response like this:

SimplePostTool: version 1.2
SimplePostTool: WARNING: Make sure your XML documents are encoded in UTF-8, other encodings are not currently supported
SimplePostTool: POSTing files to http://localhost:8983/solr/update..
SimplePostTool: POSTing file ch3_html.xml
SimplePostTool: COMMITting Solr index changes..

How it works...

First of all, we have the data example. In the example, we see one file with two fields; the identifier and some HTML data nested in the CDATA section.
You must remember to surround the HTML data with a CDATA section if it is a full page starting with an <html> tag, as in our example; otherwise Solr will have problems parsing the data. However, if you only have some tags present in the data, you shouldn't worry.

Next, we have the html_strip type definition. It is based on solr.TextField to enable full-text searching. Following that, we have a character filter which handles the HTML and XML tag stripping. This is something new in Solr 1.4. Character filters are invoked before the data is sent to the tokenizer; this way they operate on untokenized data. In our case, the character filter strips the HTML and XML tags, attributes, and so on. Then it sends the data to the tokenizer, which splits the data by whitespace characters. The one and only filter defined in our type makes the tokens lowercase to simplify the search.

To index the documents, we use the simple post tools provided with the example deployment of Solr. To use them, we invoke the command shown in the example. The post tools will send the data to the default update handler found under the address http://localhost:8983/solr/update. The parameter of the command execution is the file that is going to be sent to Solr. You can also post a list of files, not just a single one.

As you can see, the sample response from the post tools is rather informative. It provides information about the update handler address, the files that were sent, and the commits being performed.

If you want to check how your data was indexed, remember that storing the field contents (attribute stored="true") can be misleading: the stored value is the original one sent to Solr, so you won't be able to see the filters in action there. If you wish to check the actual data structures, please take a look at the Luke utility (a utility that lets you see the index structure and field values, and operate on the index).
Luke can be found at the following address: http://code.google.com/p/luke

Solr also provides a tool that lets you see how your data is analyzed. That tool is a part of the Solr administration pages.

Copying the contents of one field to another

Imagine that you have many big XML files that hold information about the books that are stored on library shelves. There is not much data, just the unique identifier, the name of the book, and the name of the author. One day your boss comes to you and says: "Hey, we want to facet and sort on the basis of the book author". You can change your XML and add two fields, but why do that when you can use Solr to do that for you? Well, Solr won't modify your data, but it can copy the data from one field to another. This recipe will show you how to do that.

How to do it...

Let's assume that our data looks like this:

<add>
 <doc>
  <field name="id">1</field>
  <field name="name">Solr Cookbook</field>
  <field name="author">John Kowalsky</field>
 </doc>
 <doc>
  <field name="id">2</field>
  <field name="name">Some other book</field>
  <field name="author">Jane Kowalsky</field>
 </doc>
</add>

We want the contents of the author field to be present in the fields named author, author_facet, and author_sort. So let's define the copy fields in the schema.xml file (place the following right after the fields section):

<copyField source="author" dest="author_facet"/>
<copyField source="author" dest="author_sort"/>

And that's all. Solr will take care of the rest. The field definition part of the schema.xml file could look like this:

<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="author" type="text" indexed="true" stored="true" multiValued="true"/>
<field name="name" type="text" indexed="true" stored="true"/>
<field name="author_facet" type="string" indexed="true" stored="false"/>
<field name="author_sort" type="alphaOnlySort" indexed="true" stored="false"/>

Let's index our data.
To do that, we run the following command from the exampledocs directory (put the data.xml file there):

java -jar post.jar data.xml

How it works...

As you can see in the example, we have only three fields defined in our sample data XML file. There are two fields which we are not particularly interested in: id and name. The field that interests us the most is the author field. As I have mentioned earlier, we want to place the contents of that field in three fields:

- author (the actual field that will be holding the data)
- author_sort
- author_facet

To do that we use copy fields. Those instructions are defined in the schema.xml file, right after the field definitions, that is, after the </fields> tag. To define a copy field, we need to specify a source field (attribute source) and a destination field (attribute dest).

After definitions like those in the example, Solr will copy the contents of the source fields to the destination fields during the indexing process. There is one thing that you have to be aware of: the content is copied before the analysis process takes place. This means that the data is copied as it is stored in the source.

There's more...

There are a few things worth noting when talking about copying the contents of one field to another.

Copying the contents of dynamic fields to one field

You can also copy multiple field contents to one field. To do that, you should define a copy field like this:

<copyField source="*_author" dest="authors"/>

A definition like the one above would copy all of the fields that end with _author to one field named authors. Remember that if you copy multiple fields to one field, the destination field should be defined as multiValued.

Limiting the number of characters copied

There may be situations where you only need to copy a defined number of characters from one field to another. To do that we add the maxChars attribute to the copy field definition.
It can look like this:

<copyField source="author" dest="author_facet" maxChars="200"/>

The above definition tells Solr to copy up to 200 characters from the author field to the author_facet field. This attribute can be very useful when copying the content of multiple fields to one field.
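Once the author_facet and author_sort fields are populated by the copy field definitions above, the faceting and sorting requirement from the boss can be served with a single request. The following query is a hypothetical example against the standard select handler (facet, facet.field, and sort are standard Solr query parameters):

```
http://localhost:8983/solr/select?q=*:*&facet=true&facet.field=author_facet&sort=author_sort+asc
```

Note that facet.field points at the unanalyzed string field (author_facet), and the sort target is the single-valued author_sort field rather than the multiValued author field, which cannot be sorted on.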
Packt
21 Jul 2011
6 min read

Alfresco 3: Web Scripts

Alfresco 3 Cookbook Over 70 recipes for implementing the most important functionalities of Alfresco

Introduction

You all know about Web Services, which took the web development world by storm a few years ago. Web Services have been instrumental in constructing Web APIs (Application Programming Interfaces) and making web applications work as a Service-Oriented Architecture. In the new Web 2.0 world, however, many criticisms arose around traditional Web Services, and thus RESTful services came into the picture. REST (Representational State Transfer) attempts to expose APIs using HTTP or a similar protocol, with interfaces that use well-known, lightweight, and standard methods such as GET, POST, PUT, DELETE, and so on.

Alfresco Web Scripts provide RESTful APIs for the repository services and functions. Traditionally, ECM systems have exposed their interfaces using RPC (Remote Procedure Call), but gradually it turned out that RPC-based APIs are not particularly suitable in the wide Internet arena, where multiple environments and technologies reside together and talk seamlessly. In the case of Web Scripts, the RESTful services overcome all these problems, and integration with an ECM repository has never been so easy and secure. Alfresco Web Scripts were introduced in 2006, and since then they have been quite popular with the developer and system integrator community for implementing services on top of the Alfresco repository and for amalgamating Alfresco with other systems.

What is a Web Script?

A Web Script is simply a URI bound to a service using standard HTTP methods such as GET, POST, PUT, or DELETE. Web Scripts can be written using just the Alfresco JavaScript API and Freemarker templates, and optionally the Java API as well, with or without a Freemarker template. For example, the URL http://localhost:8080/alfresco/service/api/search/person.html?q=admin&p=1&c=10 will invoke the search service and return the output in HTML.
Internally, a script has been written using the JavaScript API (or Java API) that performs the search, and a Freemarker template is written to render the search output in a structured HTML format.

All Web Scripts are exposed as services and are generally prefixed with http://<<server-url>>/<<context-path>>/<<service-path>>. In a standard scenario, this is http://localhost:8080/alfresco/service

Web Script architecture

Alfresco Web Scripts strictly follow the MVC architecture.

- Controller: Written using the Alfresco Java or JavaScript API, you implement your business requirements for the Web Script in this layer. You also prepare the data model that is returned to the view layer. The controller code interacts with the repository via the APIs and other services and processes the business implementations.
- View: Written using Freemarker templates, you implement exactly what you want to return from your Web Script. For data Web Scripts you construct your JSON or XML data using the template, and for presentation Web Scripts you build your output HTML. The view can be implemented using Freemarker templates, or using Java-backed Web Script classes.
- Model: Normally constructed in the controller layer (in Java or JavaScript), these values are automatically available in the view layer.

Types of Web Scripts

Depending on their purpose and output, Web Scripts can be categorized into two types:

- Data Web Scripts: These Web Scripts mostly return data after processing the business requirements. Such Web Scripts are mostly used to retrieve, update, and create content in the repository, or to query the repository.
- Presentation Web Scripts: When you want to build a user interface using Web Scripts, you use these Web Scripts. They mostly return HTML output. Such Web Scripts are mostly used for creating dashlets in Alfresco Explorer or Alfresco Share, or for creating JSR-168 portlets.

Note that this categorization of Web Scripts is not a technical one; it is just a logical separation.
This means data Web Scripts and presentation Web Scripts are not technically dissimilar; only their usage and purpose differ.

Web Script files

Defining and creating a Web Script in Alfresco requires creating certain files in particular folders. These files are:

- Web Script Descriptor: The descriptor is an XML file used to define the Web Script: the name of the script, the URL(s) on which the script can be invoked, the authentication mechanism of the script, and so on. The name of the descriptor file should be of the form <<service-id>>.<<http-method>>.desc.xml; for example, helloworld.get.desc.xml.
- Freemarker Template Response file(s) (optional): The Freemarker template output file is the FTL file which is returned as the result of the Web Script. The names of the template files should be of the form <<service-id>>.<<http-method>>.<<response-format>>.ftl; for example, helloworld.get.html.ftl and helloworld.get.json.ftl.
- Controller JavaScript file (optional): The controller JavaScript file is the business layer of your Web Script. The name of the JavaScript file should be of the form <<service-id>>.<<http-method>>.js; for example, helloworld.get.js.
- Controller Java file (optional): You can write your business implementations in Java classes as well, instead of using the JavaScript API.
- Configuration file (optional): You can optionally include a configuration XML file. The name of the file should be of the form <<service-id>>.<<http-method>>.config.xml; for example, helloworld.get.config.xml.
- Resource Bundle file (optional): These are standard message bundle files that can be used to localize Web Script responses. The names of the message files should be of the form <<service-id>>.<<http-method>>.properties; for example, helloworld.get.properties.

The naming conventions of Web Script files are fixed; they follow particular semantics.
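As a concrete, hypothetical illustration of these naming conventions, a minimal hello-world Web Script could consist of a descriptor named helloworld.get.desc.xml and a template named helloworld.get.html.ftl. The descriptor might look like the following sketch (the URL and authentication values here are made up for the example):

```xml
<!-- helloworld.get.desc.xml (hypothetical example) -->
<webscript>
  <shortname>Hello World</shortname>
  <description>Returns a simple greeting</description>
  <url>/sample/helloworld?name={name}</url>
  <format default="html">argument</format>
  <authentication>guest</authentication>
</webscript>
```

With this descriptor in place, the matching helloworld.get.html.ftl template renders the response, and the script would be reachable under the service prefix, for example http://localhost:8080/alfresco/service/sample/helloworld.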
Alfresco, by default, provides quite a rich list of built-in Web Scripts, which can be found in the tomcat/webapps/alfresco/WEB-INF/classes/alfresco/templates/webscripts/org/alfresco folder.

There are a few locations where you can store your Web Scripts:

- Classpath folder: tomcat/webapps/alfresco/WEB-INF/classes/alfresco/templates/webscripts
- Classpath folder (extension): tomcat/webapps/alfresco/WEB-INF/classes/alfresco/extension/templates/webscripts
- Repository folder: /Company Home/Data Dictionary/Web Scripts
- Repository folder (extension): /Company Home/Data Dictionary/Web Scripts Extensions

It is not advised to keep your Web Scripts in the org/alfresco folder; this folder is reserved for Alfresco's default Web Scripts. Create your own folders instead. Better yet, create your Web Scripts in the extension folders.

Web Script parameters

You will, of course, need to pass some parameters to your Web Script and execute your business implementations around them. You can pass parameters in the query string for GET Web Scripts. For example:

http://localhost:8080/alfresco/service/api/search/person.html?q=admin&p=1&c=10

In this script, we have passed three parameters: q (the search query), p (the page index), and c (the number of items per page). You can also pass parameters bound in HTML form data in the case of POST Web Scripts. One example of such a Web Script is uploading a file using a Web Script.
Packt
21 Jul 2011
15 min read

Play Framework: Data Validation Using Controllers

Play Framework Cookbook Over 60 incredibly effective recipes to take you under the hood and leverage advanced concepts of the Play framework

Introduction

This article will help you to keep your controllers as clean as possible, with a well-defined boundary to your model classes. Always remember that controllers are really only a thin layer to ensure that your data from the outside world is valid before handing it over to your models, or that something is specifically adapted to HTTP.

URL routing using annotation-based configuration

If you do not like the routes file, you can also describe your routes programmatically by adding annotations to your controllers. This has the advantage of not having any additional config file, but also poses the problem of your URLs being dispersed in your code. You can find the source code of this example in the examples/chapter2/annotationcontroller directory.

How to do it...

Go to your project and install the router module via conf/dependencies.yml:

require:
    - play
    - play -> router head

Then run play deps and the router module should be installed in the modules/ directory of your application. Change your controller like this:

@StaticRoutes({
    @ServeStatic(value="/public/", directory="public")
})
public class Application extends Controller {

    @Any(value="/", priority=100)
    public static void index() {
        forbidden("Reserved for administrator");
    }

    @Put(value="/", priority=2, accept="application/json")
    public static void hiddenIndex() {
        renderText("Secret news here");
    }

    @Post("/ticket")
    public static void getTicket(String username, String password) {
        String uuid = UUID.randomUUID().toString();
        renderJSON(uuid);
    }
}

How it works...

Installing and enabling the module should not leave any open questions for you at this point. As you can see in the controller, it is now filled with annotations that resemble the entries in the routes.conf file, which you could possibly have deleted by now for this example.
However, then your application will not start, so you have to have at least an empty file.

The @ServeStatic annotation replaces the static command in the routes file. The @StaticRoutes annotation is just used for grouping several @ServeStatic annotations and could be left out in this example. Each controller call now has to have an annotation in order to be reachable. The name of the annotation is the HTTP method, or @Any if it should match all HTTP methods. Its only mandatory parameter is the value, which resembles the URI, the second field in routes.conf. All other parameters are optional. Especially interesting is the priority parameter, which can be used to give certain methods precedence. This allows a lower-prioritized catch-all controller like in the preceding example, while a special handling is applied if the URI is called with the PUT method.

You can easily check the correct behavior by using curl, a very practical command-line HTTP client:

curl -v localhost:9000/

This command should give you a result similar to this:

> GET / HTTP/1.1
> User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: localhost:9000
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< Server: Play! Framework;1.1;dev
< Content-Type: text/html; charset=utf-8
< Set-Cookie: PLAY_FLASH=;Path=/
< Set-Cookie: PLAY_ERRORS=;Path=/
< Set-Cookie: PLAY_SESSION=0c7df945a5375480993f51914804284a3bbca726-%00___ID%3A70963572-b0fc-4c8c-b8d5-871cb842c5a2%00;Path=/
< Cache-Control: no-cache
< Content-Length: 32
<
<h1>Reserved for administrator</h1>

You can see the HTTP error message and the content returned. You can trigger a PUT request in a similar fashion:

curl -X PUT -v localhost:9000/

> PUT / HTTP/1.1
> User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: localhost:9000
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: Play! Framework;1.1;dev
< Content-Type: text/plain; charset=utf-8
< Set-Cookie: PLAY_FLASH=;Path=/
< Set-Cookie: PLAY_ERRORS=;Path=/
< Set-Cookie: PLAY_SESSION=f0cb6762afa7c860dde3fe1907e88473476e2564-%00___ID%3A6cc88736-20bb-43c1-9d43-42af47728132%00;Path=/
< Cache-Control: no-cache
< Content-Length: 16

Secret news here

As you can now see, the priority parameter decides that the hiddenIndex() method is chosen for the PUT request, and its response is returned.

There's more...

The router module is a small but handy module, which is perfectly suited for taking a first look at modules and understanding how the routing mechanism of the Play framework works at its core. You should take a look at the source if you need to implement custom mechanisms of URL routing.

Mixing the configuration file and annotations is possible

You can use the router module and the routes file together; this is needed when using modules, as they cannot be specified in annotations. However, keep in mind that this can be pretty confusing. You can check out more info about the router module at http://www.playframework.org/modules/router.

Basics of caching

Caching is quite a complex and multi-faceted technique, when implemented correctly. However, implementing caching in your application should not be complex; rather, the mindwork beforehand, where you think about what and when to cache, should be. There are many different aspects, layers, and types (and their combinations) of caching in any web application. This recipe will give a short overview of the different types of caching and how to use them. You can find the source code of this example in the chapter2/caching-general directory.

Getting ready

First, it is important that you understand where caching can happen: inside and outside of your Play application. So let's start by looking at the caching possibilities of the HTTP protocol. HTTP sometimes looks like a simple protocol, but is tricky in the details.
However, it is one of the most proven protocols on the Internet, and thus it is always useful to rely on its functionality. HTTP allows the caching of contents by setting specific headers in the response. There are several headers which can be set:

- Cache-Control: This is a header which must be parsed and used by the client and also by all the proxies in between.
- Last-Modified: This adds a timestamp explaining when the requested resource was changed the last time. On the next request the client may send an If-Modified-Since header with this date. Now the server may just return an HTTP 304 code without sending any data back.
- ETag: An ETag is basically the same as a Last-Modified header, except that it has a semantic meaning. It is actually a calculated hash value resembling the resource behind the requested URL, instead of a timestamp. This means the server can decide when a resource has changed and when it has not. This could also be used for some type of optimistic locking.

So, this is a type of caching over which the requesting client has some influence. There are also other forms of caching which are purely on the server side. In most other Java web frameworks, the HttpSession object is a classic example of this case. Play has a cache mechanism on the server side. It should be used to store big session data, in this case any data exceeding the 4KB maximum cookie size. Be aware that there is a semantic difference between a cache and a session. You should not rely on the data being in the cache, and thus you need to handle cache misses.

You can use the Cache class in your controller and model code. The great thing about it is that it is an abstraction of a concrete cache implementation. If you only use one node for your application, you can use the built-in ehCache for caching. As soon as your application needs more than one node, you can configure memcached in your application.conf, and there is no need to change any of your code.
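The cache-miss handling mentioned above follows a simple get-or-compute pattern. The sketch below illustrates it with a plain map standing in for the cache; this is not Play's Cache class (in a real Play 1.x application you would call Cache.get and Cache.set with an expiry string such as "10mn" instead), and the class and method names are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrates cache-miss handling with a plain map standing in for the
// cache. A real cache additionally evicts entries after an expiry, which
// is why the caller must always be prepared for a miss.
public class CacheMissSketch {
    private final Map<String, String> cache = new HashMap<String, String>();
    public int computations = 0; // counts the expensive recomputations

    public String getOrCompute(String key) {
        String value = cache.get(key);
        if (value == null) {               // cache miss: recompute and store
            value = expensiveComputation(key);
            cache.put(key, value);
        }
        return value;                      // cache hit: no recomputation
    }

    private String expensiveComputation(String key) {
        computations++;
        return "value-for-" + key;
    }
}
```

The second lookup for the same key is served from the map without touching the expensive computation, which is exactly the saving a server-side cache buys you.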
Furthermore, you can also cache snippets of your templates. For example, there is no need to reload the portal page of a user on every request when you can cache it for 10 minutes.

This also leads to a very simple truth. Caching gives you a lot of speed and might even lower your database load in some cases, but it is not free. Caching means you need RAM, lots of RAM in most cases. So make sure the system you are caching on never needs to swap; otherwise you could just as well read the data from disk. This can be a special problem in cloud deployments, as there are often limitations on available RAM.

The following examples show how to utilize the different caching techniques. We will show four different use cases of caching in the accompanying test:

public class CachingTest extends FunctionalTest {

    @Test
    public void testThatCachingPagePartsWork() {
        Response response = GET("/");
        String cachedTime = getCachedTime(response);
        assertEquals(getUncachedTime(response), cachedTime);
        response = GET("/");
        String newCachedTime = getCachedTime(response);
        assertNotSame(getUncachedTime(response), newCachedTime);
        assertEquals(cachedTime, newCachedTime);
    }

    @Test
    public void testThatCachingWholePageWorks() throws Exception {
        Response response = GET("/cacheFor");
        String content = getContent(response);
        response = GET("/cacheFor");
        assertEquals(content, getContent(response));
        Thread.sleep(6000);
        response = GET("/cacheFor");
        assertNotSame(content, getContent(response));
    }

    @Test
    public void testThatCachingHeadersAreSet() {
        Response response = GET("/proxyCache");
        assertIsOk(response);
        assertHeaderEquals("Cache-Control", "max-age=3600", response);
    }

    @Test
    public void testThatEtagCachingWorks() {
        Response response = GET("/etagCache/123");
        assertIsOk(response);
        assertContentEquals("Learn to use etags, dumbass!", response);

        Request request = newRequest();
        String etag = String.valueOf("123".hashCode());
        Header noneMatchHeader = new Header("if-none-match", etag);
        request.headers.put("if-none-match", noneMatchHeader);
        DateTime ago = new DateTime().minusHours(12);
        String agoStr = Utils.getHttpDateFormatter().format(ago.toDate());
        Header modifiedHeader = new Header("if-modified-since", agoStr);
        request.headers.put("if-modified-since", modifiedHeader);
        response = GET(request, "/etagCache/123");
        assertStatus(304, response);
    }

    private String getUncachedTime(Response response) {
        return getTime(response, 0);
    }

    private String getCachedTime(Response response) {
        return getTime(response, 1);
    }

    private String getTime(Response response, int pos) {
        assertIsOk(response);
        String content = getContent(response);
        return content.split("\n")[pos];
    }
}

The first test checks for a very nice feature. Since Play 1.1, you can cache parts of a page, or more exactly, parts of a template. This test opens a URL and the page returns the current date and the date of such a cached template part, which is cached for about 10 seconds. In the first request, when the cache is empty, both dates are equal. If you repeat the request, the first date is current while the second date is the cached one.

The second test puts the whole response in the cache for 5 seconds. In order to ensure that expiration works as well, this test waits for six seconds and retries the request.

The third test ensures that the correct headers for proxy-based caching are set.

The fourth test uses an HTTP ETag for caching. If the If-Modified-Since and If-None-Match headers are not supplied, it returns a string. On adding these headers with the correct ETag (in this case the hashCode of the string 123) and a date from 12 hours before, a 304 Not Modified response should be returned.

How to do it...
Add four simple routes to the configuration as shown in the following code:

GET     /                  Application.index
GET     /cacheFor          Application.indexCacheFor
GET     /proxyCache        Application.proxyCache
GET     /etagCache/{name}  Application.etagCache

The application class features the following controllers:

public class Application extends Controller {

    public static void index() {
        Date date = new Date();
        render(date);
    }

    @CacheFor("5s")
    public static void indexCacheFor() {
        Date date = new Date();
        renderText("Current time is: " + date);
    }

    public static void proxyCache() {
        response.cacheFor("1h");
        renderText("Foo");
    }

    @Inject
    private static EtagCacheCalculator calculator;

    public static void etagCache(String name) {
        Date lastModified = new DateTime().minusDays(1).toDate();
        String etag = calculator.calculate(name);
        if (!request.isModified(etag, lastModified.getTime())) {
            throw new NotModified();
        }
        response.cacheFor(etag, "3h", lastModified.getTime());
        renderText("Learn to use etags, dumbass!");
    }
}

As you can see in the controller, the class to calculate ETags is injected into the controller. This is done on startup with a small job, as shown in the following code:

@OnApplicationStart
public class InjectionJob extends Job implements BeanSource {

    private Map<Class, Object> clazzMap = new HashMap<Class, Object>();

    public void doJob() {
        clazzMap.put(EtagCacheCalculator.class, new EtagCacheCalculator());
        Injector.inject(this);
    }

    public <T> T getBeanOfType(Class<T> clazz) {
        return (T) clazzMap.get(clazz);
    }
}

The calculator itself is as simple as possible:

public class EtagCacheCalculator implements ControllerSupport {

    public String calculate(String str) {
        return String.valueOf(str.hashCode());
    }
}

The last piece needed is the template of the index() controller, which looks like this:

Current time is: ${date}
#{cache 'mainPage', for:'5s'}
Current time is: ${date}
#{/cache}

How it works...

Let's check the functionality per controller call.
The index() controller has no special treatment inside the controller. The current date is put into the template, and that's it. However, the caching logic is in the template here, because not the whole page but only a part of the returned data should be cached, and for that the #{cache} tag is used. The tag requires two arguments to be passed. The for parameter allows you to set the expiry of the cache entry, while the first parameter defines the key used inside the cache. This allows pretty interesting things. Whenever you are on a page where something is rendered exclusively for one user (like his portal entry page), you could cache it with a key which includes the user name or the session ID, like this:

#{cache 'home-' + connectedUser.email, for:'15min'}
${user.name}
#{/cache}

This kind of caching is completely transparent to the user, as it happens exclusively on the server side.

The same applies to the indexCacheFor() controller. Here, the whole page gets cached instead of parts inside the template. This is a pretty good fit for non-personalized, high-performance delivery of pages, which often make up only a very small portion of your application. However, you already have to think about caching beforehand. If you do a time-consuming JPA calculation and then reuse the cached result in the template, you have still wasted CPU cycles and just saved some rendering time.

The third controller call, proxyCache(), is actually the simplest of all. It just sets the proxy expiry header called Cache-Control. Setting it in your code is optional, because Play is configured to set it as well when the http.cacheControl parameter in your application.conf is set. Be aware that this works only in production, and not in development mode.

The most complex controller is the last one. The first action is to find out the last modified date of the data you want to return. In this case it is 24 hours ago. Then the ETag needs to be created somehow.
In this case, the calculator gets a String passed. In a real-world application you would more likely pass the entity, and the service would extract some of its properties, which are used to calculate the ETag using a pretty much collision-safe hash algorithm. After both values have been calculated, you can check in the request whether the client needs to get new data or may use the old data. This is what happens in the request.isModified() method. If the client either did not send all required headers or an older timestamp was used, real data is returned; in this case, a simple string advising you to use an ETag the next time. Furthermore, the calculated ETag and a maximum expiry time are also added to the response via response.cacheFor().

One last specialty in the etagCache() controller is the use of the EtagCacheCalculator. The implementation does not matter in this case, except that it must implement the ControllerSupport interface. However, the initialization of the injected class is still worth a mention. If you take a look at the InjectionJob class, you will see the creation of the class in the doJob() method on startup, where it is put into a local map. The Injector.inject() call then does the magic of injecting the EtagCacheCalculator instance into the controllers. As a result of implementing the BeanSource interface, the getBeanOfType() method tries to get the corresponding class out of the map. The map should ensure that only one instance of this class exists.

There's more...

Caching is deeply integrated into the Play framework, as it is built with the HTTP protocol in mind. If you want to find out more about it, you will have to examine the core classes of the framework.

More information in the ActionInvoker

If you want to know more details about how the @CacheFor annotation works in Play, you should take a look at the ActionInvoker class.
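To see the conditional-request decision in isolation, here is a rough, framework-free sketch of the kind of check that request.isModified() performs. This is a deliberately simplified assumption (it ignores weak ETags and real HTTP date parsing), and the class name is invented for the example:

```java
// Framework-free sketch of a conditional-request check: compare the
// client's If-None-Match header with the current ETag, and its
// If-Modified-Since timestamp with the resource's last-modified time.
// Simplified illustration only; not Play's actual implementation.
public class ConditionalRequestSketch {
    public static boolean isModified(String currentEtag, long lastModifiedMillis,
                                     String ifNoneMatch, Long ifModifiedSinceMillis) {
        if (ifNoneMatch == null || ifModifiedSinceMillis == null) {
            return true; // missing validators: send the full response
        }
        boolean etagChanged = !ifNoneMatch.equals(currentEtag);
        boolean changedSince = lastModifiedMillis > ifModifiedSinceMillis;
        // if neither validator indicates a change, the caller can reply
        // with 304 Not Modified instead of the full body
        return etagChanged || changedSince;
    }
}
```

With the test data from the recipe (a matching ETag and an If-Modified-Since stamp from 12 hours ago, against a resource last modified 24 hours ago), this check returns false, and the controller answers with 304 Not Modified.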
Be thoughtful with ETag calculation

ETag calculation is costly, especially if you are calculating more than the last-modified timestamp, so you should think about performance here. It might be useful to calculate the ETag right after saving the entity and to store it directly on the entity in the database. If you rely on ETags to ensure high performance, it is worth running some tests to confirm they actually help. If you want to know more about how ETags work, you should read RFC 2616. You can also disable the creation of ETags entirely by setting http.useETag=false in your application.conf.

Use a plugin instead of a job

The job that implements the BeanSource interface is not a very clean solution to the problem of calling Injector.inject() on application startup. It would be better to use a plugin in this case.
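For reference, the two application.conf switches mentioned in this recipe look like the following; the cacheControl value here is only an example (it is interpreted as a duration in seconds in Play 1.x, if memory serves):

```ini
; Emit a Cache-Control header in production responses
http.cacheControl=3600

; Disable ETag generation entirely
http.useETag=false
```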
Packt
15 Jul 2011
9 min read

Drupal 7: Customizing an Existing Theme

Drupal 7 Themes

Create new themes for your Drupal 7 site with a clean layout and powerful CSS styling

With the arrival of Drupal 6, sub-theming really came to the forefront of theme design. While previously many people copied themes and then re-worked them to achieve their goals, that process became less attractive as sub-themes came into favor. This article focuses on sub-theming and how it should be used to customize an existing theme. We'll start by looking at how to set up a workspace for Drupal theming.

Setting up the workspace

Before you get too far into attempting to modify your theme files, you should put some thought into your tools. Several software applications can make your work modifying themes more efficient. Though no specific tools are required to work with Drupal themes (you could do it all with just a text editor), there are a couple of applications that you might want to consider adding to your tool kit.

The first item to consider is browser selection. Firefox has a variety of extensions that make working with themes easier. The Web Developer extension, for example, is hugely helpful when dealing with CSS and related issues. We recommend the combination of Firefox and the Web Developer extension to anyone working with Drupal themes. Another extension popular with many developers is Firebug, which is very similar to the Web Developer extension, and is indeed more powerful in several regards. Pick up Web Developer, Firebug, and other popular Firefox add-ons at https://addons.mozilla.org/en-US/firefox/.

There are also certain utilities you can add to your Drupal installation that will assist with theming the site. Two modules you will definitely want to install are Devel and Theme Developer. Theme Developer can save you untold hours of digging around trying to find the right function or template.
When the module is active, all you need to do is click on an element and the Theme Developer pop-up window will show you what is generating that element, along with other useful information such as potential template suggestions. The Devel module performs a number of functions and is a prerequisite for running Theme Developer. Download Devel from http://drupal.org/project/devel. You can find Theme Developer at http://drupal.org/project/devel_themer. Note that neither Devel nor Theme Developer is suitable for use in a production environment; you don't want these installed and enabled on a client's public site, as they can present a security risk.

When it comes to working with PHP files and the various theme files, you will also need a good code editor. There is a whole world of options out there, and the right choice for you is really a personal decision. Suffice it to say: as long as you are comfortable with it, it's probably the right choice.

Setting up a local development server

Another key component of your workspace is the ability to preview your work, preferably locally. As a practical matter, previewing Drupal themes requires the use of a server; themes are difficult to preview with any accuracy without a server to execute the PHP code. While you can work on a remote server at your webhost, this is often undesirable due to latency or simple lack of availability. A quick solution to this problem is to set up a local server using something like the XAMPP package (or the MAMP package for Mac OS X). XAMPP provides a one-step installer containing everything you need to set up a server environment on your local machine (Apache, MySQL, PHP, phpMyAdmin, and more). Visit http://www.apachefriends.org to download XAMPP, and you can have your own dev server set up on your local machine in no time at all.
Follow these steps to acquire the XAMPP installation package and get it set up on your local machine:

1. Connect to the Internet and direct your browser to http://www.apachefriends.org.
2. Select XAMPP from the main menu.
3. Click the link labeled XAMPP for Windows.
4. Click the .zip option under the heading XAMPP for Windows. Note that you will be redirected to the SourceForge site for the actual download.
5. When the pop-up prompts you to save the file, click OK and the installer will download to your computer.
6. Locate the downloaded archive (.zip) package on your local machine and double-click it.
7. Double-click the extracted file to start the installer.
8. Follow the steps in the installer and then click Finish to close the installer.

That's all there is to it. You now have all the elements you need for your own local development server. To begin, simply open the XAMPP application and you will see buttons that allow you to start the servers. To create a new website, simply copy the files into a directory placed inside the /htdocs directory. You can then access your new site by opening the URL in your browser, as follows: http://localhost/sitedirectoryname.

As a final note, you may also want access to a graphics program to handle editing any image files that might be part of your theme. Again, there is a world of options out there and the right choice is up to you.

Planning the modifications

A proper dissertation on site planning and usability is beyond the scope of this article. Similarly, this article is neither an HTML nor a CSS tutorial; accordingly, we are going to focus on identifying the issues and delineating the process involved in customizing an existing theme, rather than on design techniques or coding-specific changes. Any time you set off down the path of transforming an existing theme into something new, you need to spend some time planning. The principle here is the same as in many other areas.
A little time spent planning at the front end of a project can pay off big in savings later. When it comes to planning your theming efforts, the very first question you have to answer is whether you are going to customize an existing theme or create a new one. In either event, it is recommended that you work with sub-themes. The key difference lies in the nature of the base theme you select, that is, the theme you choose as your starting point.

In sub-theming, the base theme is the starting point. Sub-themes inherit the parent theme's resources; hence, the base theme you select will shape your theme building. Some base themes are extremely simple, designed to impose the fewest restrictions on the themer; others are designed to give you the widest range of resources to assist your efforts. However, since you can use any theme as a base theme, the reality is that most themes fall somewhere in between, at least in terms of their suitability for use as a base theme.

Another way to think of the relationship between a base theme and a sub-theme is in terms of a parent-child relationship: the child (sub-theme) inherits its characteristics from its parent (the base theme). There is no limit to the ability to chain together multiple parent-child relationships; a sub-theme can be the child of another sub-theme.

When it comes to customizing an existing theme, the selection of the base theme is often dictated by the theme's default appearance and feature set; in other words, you are likely to select the theme that is already closest to what you want. That said, don't limit yourself to a shallow surface examination of the theme. In order to make the best decision, you need to look carefully at the underlying theme's files and structures and see if it truly is the best choice. While the original theme may be fairly close to what you want, it may also have limitations that require work to overcome.
Sometimes it is actually faster to start with a more generic theme that you already know and can work with easily. Learning someone else's code is always a bit of a chore, and themes are like any other code: some are great, some are poor, most are simply okay. A best-practices theme makes your life easier.

In simplest terms, the process of customizing an existing theme can be broken into three steps:

1. Select your base theme.
2. Create a sub-theme from your base theme.
3. Make the changes to your new sub-theme.

Why is it not recommended to simply modify the theme directly? There are two reasons. First, best practices say not to touch the original files; leave them intact so you can upgrade them without losing customizations. Second, as a matter of theming philosophy, it's better to leave the things you don't need to change in the base theme and focus your sub-theme on only the things you want to change. This approach to theming is more manageable and makes for much easier testing as you go.

Selecting a base theme

For the sake of simplicity, in this article we are going to work with the default Bartik theme. We'll take Bartik, create a new sub-theme, and then modify the sub-theme to create the customized theme. Let's call the new theme "JeanB". Note that while we've named the theme "JeanB", when it comes to naming the theme's directory we will use "jeanb", as the system only supports lowercase letters and underscores.

In order to make the example easier to follow and to avoid the need to install a variety of third-party extensions, the modifications we make in this article will be done using only the default components. Arguably, when you are building a site like this for deployment in the real world (rather than simply for skills development), you might wish to consider implementing one or more specialized third-party extensions to handle certain tasks.
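As a sketch of the sub-theme setup described above, a minimal jeanb.info file placed in the theme's directory could look like the following. The stylesheet line is an illustrative assumption; declare only the files you actually add:

```ini
name = JeanB
description = A sub-theme of Bartik.
core = 7.x
base theme = bartik

; Optional: a stylesheet of our own, loaded after Bartik's.
stylesheets[all][] = css/jeanb.css
```

The base theme line is what makes JeanB a child of Bartik: everything not overridden here is inherited from the parent theme.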
Packt
12 Jul 2011
6 min read

Drupal 7 Fields/CCK: Field Display Management

Drupal 7 Fields/CCK Beginner's Guide

Explore Drupal 7 fields/CCK and master their use

Field display

The purpose of managing the field display is not only to beautify the visual representation of fields, but also to affect how people read the information on a web page and the usability of a website. The design of a field display has to seem logical to users and be easy to understand. Consider an online application form where the first name field is positioned between the state and country fields. Although the application can gather the information just fine, this would be very illogical and bothersome to our users. It goes without saying that the first name should be in the personal details section, while the state and country should go in the personal address section of the form.

Time for action – a first look at the field display settings

In this section, we will learn where to find the field display settings in Drupal. Now, let's take a look at them:

1. Click on the Structure link on the administration menu at the top of the page.
2. Click on the Content types link on this page.
3. On the right of the table, click on the manage display link to go to the manage display administration page, where you can adjust the order and positioning of the field labels.
4. Click on the manage display link to adjust the field display for the Recipe content type. This page lists all of the field display settings that are related to the content type we selected.

If we click on the select list for any of the labels, there are three options that we can select: Above, Inline, and <Hidden>. If we click on the select list for any of the formats, there are five options that we can select from, namely Default, Plain text, Trimmed, Summary or trimmed, and <Hidden>. However, the options vary with field types.
As in the case of the Difficulty field, a multiple values field, clicking on the select list for Format will show three options: Default, Key, and <Hidden>.

What just happened?

We have learned where to find the field display settings in Drupal, and we have taken a look at the options for the field display. When we click on the select list for labels, there are three options that we can use to control the display of the field label:

- Above: The label will be positioned above the field widget
- Inline: The label will be positioned to the left of the field widget
- <Hidden>: The label will not be displayed on the page

When we click on the select list for formats, the options shown will differ depending on the field type we select. For the Body field, we have five options that we can choose from to control the body field display:

- Default: The field content will be displayed as we specified when we created the field
- Plain text: The field content will be displayed as plain text, ignoring any HTML tags the content contains
- Trimmed: The field content will be truncated to a specified number of characters
- Summary or trimmed: The summary of the field will be displayed; if no summary was entered, the content of the field will be trimmed to a specified number of characters
- <Hidden>: The field content will not be displayed

Formatting field display in the Teaser view

The teaser view of content is usually the first piece of information people see on a homepage or a landing page, so it is useful to have options that control the display in teaser view. For example, for the yummy recipe website, the client would like the number of characters displayed in teaser view limited to 300 characters, because they do not want to display too much text for each post on the homepage.
Time for action – formatting the Body field display in teaser view

In this section, we will format the Body field of the Recipe content type in teaser view:

1. Click on the Structure link on the administration menu at the top of the page.
2. Click on the Content types link on the following page.
3. Click on the manage display link to adjust the field display for the Recipe content type.
4. At the top-right of the page there are two buttons; the first one is Default, the second one is Teaser. Click on the Teaser button. This page lists all the available fields for the teaser view of the Recipe content type.
5. Click on the gear icon to open the Trim length setting.
6. The default value of Trim length is 600; change it to 300, and then click on the Update button to confirm the entered value.
7. Click on the Save button at the bottom of the page to store the value in Drupal.

If we go back to the homepage, we will see the recipe content in teaser view. It is now truncated to 300 characters.

What just happened?

We have formatted the Body field of the Recipe content type in Teaser view. Currently there are two view modes: one is the Default view mode, and the other is the Teaser view mode. When we need to format the field content in Teaser view, we have to switch to the Teaser view mode on the Manage display administration page to modify these settings. Moreover, when entering data or updating the field display settings, we have to remember to click on the Save button at the bottom of the page to permanently store the values in Drupal. Clicking the Update button alone does not store the value in Drupal; it only confirms the value we entered. Therefore, we always need to remember to click on the Save button after updating any settings.
Furthermore, there are other fields positioned in the hidden section at the bottom of the page, which means those fields will not be shown in Teaser view. In our case, only the Body field is shown in Teaser view. We can easily drag and drop a field into the hidden section to hide it, or drag and drop a field above the hidden section to show it on the screen.
Packt
11 Jul 2011
6 min read

Drupal 7 Fields/CCK: Using the Image Field Modules

Drupal 7 Fields/CCK Beginner's Guide

Explore Drupal 7 fields/CCK and master their use

Adding image fields to content types

We have learned how to add file fields to content types. In this section, we will learn how to add image fields to content types so that we can attach images to our content.

Time for action – adding an image field to the Recipe content type

In this section, we will add an image field to the Recipe content type. Follow these steps:

1. Click on the Structure link in the administration menu at the top of the page.
2. Click on the Content types link to go to the content types administration page.
3. Click on the manage fields link in the Recipe row, because we would like to add an image field to the Recipe content type.
4. Locate the Add new field section. In the Label field enter "Image", and in the Field name field enter "image".
5. In the field type select list, select Image as the field type; the field widget will automatically switch to Image.
6. After the values are entered, click on Save.

What just happened?

We added an image field to the Recipe content type. The process is similar to adding a file field to the Recipe content type, except that we selected Image as the field type and Image as the field widget. We will configure the image field in the next section.

Configuring image field settings

We have already added the image field. In this section, we will configure it, learn how to configure the image field settings, and understand how those settings affect image output.

Time for action – configuring an image field for the Recipe content type

In this section, we will configure the image field settings in the Recipe content type.
Follow these steps:

1. After clicking on the Save button, Drupal directs us to the next page, which provides the field settings for the image field. The Upload destination option is the same as in the file field settings, and lets us decide whether image files should be public or private. In our case, we select Public files.
2. The last option is the Default image field. We will leave this option for now, and click on the Save field settings button to go to the next step.
3. This page contains all the settings for the image field. The most common field settings are the Label field, the Required field, and the Help text field. We will leave these fields at their defaults.
4. The Allowed file extensions section is similar to the one in the file field we have already learned about. We will use the default value, so we don't need to enter anything in this field.
5. The File directory section is also the same as the setting in the file field. Enter "image_files" in this field.
6. Enter "640" x "480" in the Maximum image resolution field and the Minimum image resolution field, and enter "2MB" in the Maximum upload size field.
7. Check the Enable Alt field and the Enable Title field checkboxes.
8. Select thumbnail in the Preview image style select list, and select Throbber in the Progress indicator section.
9. The bottom part of this page, the image field settings section, is the same as the previous page we just saved, so we don't need to re-enter the values. Click on the Save settings button at the bottom of the page to store all the values we entered on this page.
After clicking on the Save settings button, Drupal sends us back to the Manage fields administration page. Now the image field is added to the Recipe content type.

What just happened?

We have added and configured an image field for the Recipe content type. We left the default values in the Label field, the Required field, and the Help text field; they are the most common settings in fields.

The Allowed file extensions section is similar to the one in the file field, and lets us enter the extensions of the image files that are allowed to be uploaded. The File directory field is the same as the one in the file field, and lets us save the uploaded files to a directory other than the default file directory location.

The Maximum image resolution field allows us to specify the maximum width and height of images that may be uploaded. If an uploaded image is bigger than the resolution we specified, it will be resized to that size. If we do not specify a size, no restriction is applied to images. The Minimum image resolution field is the opposite: we specify the minimum width and height of images allowed to be uploaded. If we upload an image smaller than the minimum size we specified, Drupal will show an error message and reject the upload.

The Enable Alt field and the Enable Title field checkboxes allow site administrators to enter the Alt and Title attributes of the img tag in XHTML, which can improve the accessibility and usability of a website when using images.

The Preview image style select list allows us to select which image style will be used for display while editing content. Currently it provides three image styles: thumbnail, medium, and large.
The thumbnail image style is used by default. We will learn how to create a custom image style in the next section.

Have a go hero – adding an image field to the Cooking Tip content type

It's time for another challenge. We have added an image field to the Recipe content type. We can use the same method we have learned here to add and configure an image field for the Cooking Tip content type. Apply the same steps used to create the image field for the Recipe content type, and try to understand the differences between the settings on the image field settings administration page.
Packt
08 Jul 2011
4 min read

Drupal 7 fields/CCK: Using the file field modules

Adding and configuring file fields to content types

There are many cases where we need to attach files to website content. For instance, a restaurant owner might like to upload their latest menu in PDF format to their website, or a financial institution might like to upload a new product catalog so customers can download and print it if they need to. The File module is built into the Drupal 7 core. It provides us with the ability to easily attach files to content, to decide the attachment display format, and to manage file locations. Furthermore, the File module is integrated with fields and provides a file field type, so we can attach files to content using the field system already discussed, making the process of managing files much more streamlined.

Time for action – adding and configuring a file field to the Recipe content type

In this section, we will add a file field to the Recipe content type, which will allow files to be attached to Recipe content. Follow these steps:

1. Click on the Structure link in the administration menu at the top of the page. The following page will display a list of options.
2. Click on the Content types link to go to the Content types administration page.
3. Since we want to add a file field to the Recipe content type, click on the manage fields link in the Recipe row.
4. This page displays the existing fields of the Recipe content type. In the Label field enter "File", and in the Field name field enter "file".
5. In the field type select list, select File as the field type; the field widget will automatically switch to File.
6. After the values are entered, click on Save.
7. A new window will open, providing the field settings for the file field we are creating. There are two checkboxes, and we will enable both. The last radio button option is selected by default.
Then click on the Save field settings button at the bottom of the page to store the values for the file field settings we selected. Drupal then directs us to the file field settings administration page.

We can leave the Label field as is, since it is filled automatically with the value we entered previously. We will also leave the Required field at its default, because we do not want to force users to attach files to every recipe. In the Help text field, we can enter "Attach files to this recipe".

In the Allowed file extensions section, we can enter the file extensions that are allowed to be uploaded; in this case, we will enter "txt, pdf, zip". In the File directory section, we can enter the name of a subdirectory that will store the uploaded files; in this case, we will enter "recipe_files". In the Maximum upload size section, we can enter a value to limit the file size when uploading files; we will enter "2MB" in this field.

The Enable Description field checkbox allows users to enter a description of the uploaded files. We will enable this option, because we would like users to describe the files they upload. In the Progress indicator section, we can select which indicator will be used when uploading files; we select Throbber as the progress indicator for this field.

You will notice the bottom part of the page is exactly the same as in the previous section. We can ignore it and click on the Save settings button to store all the values we have entered. Drupal will direct us back to the manage fields administration page with a message saying we have successfully saved the configuration for the file field. After creating the file field, a file field row is added to the table.
This table displays the details of the file field we just created.