How-To Tutorials - Web Development

haXe 2: Using Templates

Packt
25 Jul 2011
10 min read
haXe 2 Beginner's Guide: Develop exciting applications with this multi-platform programming language

Introduction to the haxe.Template class

As developers, our job is to create programs that allow the manipulation of data. That's the basis of our job, but beyond that, we must also be able to present that data to the user. Programs without a user interface exist, but since you are reading this article about haXe, there is a good chance that you are mostly interested in web applications, and almost all web applications have a user interface of some kind. Templates can also be used to create XML documents, for example.

The haXe library comes with the haxe.Template class. This class allows for basic, yet quite powerful, templating: as we will see, it is not only possible to pass some data to it, but also possible to call some code from a template. Templates are particularly useful when you have to present data. You can, for example, define a template to display data about a user, and then iterate over a list of users, displaying this template for each one. We will see how this is possible during this article, and we will see what else you can do with templates. We will also see that it is possible to change what is displayed depending on the data, and that it is easy to do some quite common things, such as having a different style for one row out of two in a table.

The haxe.Template class is really easy to use: you just create an instance of it, passing a String that contains your template's code as a parameter. Then it is as easy as calling the execute method and giving it some data to display. Let's see a simple example:

    class TestTemplate
    {
        public static function main() : Void
        {
            var myTemplate = new haxe.Template("Hi. ::user::");
            neko.Lib.println(myTemplate.execute({user : "Benjamin"}));
        }
    }

This simple code will output "Hi. Benjamin". This is because we have passed an anonymous object as a context, with a user property that has "Benjamin" as its value. Obviously, you can pass objects with several properties and, as we will see, it is even possible to pass complex structures and use them. In addition, we certainly won't be hard-coding our templates into our haXe code. Most of the time, you will load them from a resource compiled into your executable by calling haxe.Resource.getString, or load them directly from the filesystem or from a database.

Printing a value

As we've seen in the preceding sample, we have to surround an expression with :: in order to print its value. Expressions can take several forms:

- ::variableName:: prints the value of the variable.
- ::(123):: the integer 123. Note that only integers are allowed.
- ::e1 operator e2:: applies the operator to e1 and e2 and prints the resulting value. The syntax doesn't manage operator precedence, so you should wrap expressions inside parentheses.
- ::e.field:: accesses the field and prints its value. Be warned that this doesn't work with properties' getters and setters, as these are a compile-time-only feature.

Branching

The syntax offers if, else, and elseif:

    class TestTemplate
    {
        public static function main() : Void
        {
            var templateCode = "::if (sex==0):: Male ::elseif (sex==1):: Female ::else:: Unknown ::end::";
            var myTemplate = new haxe.Template(templateCode);
            neko.Lib.print(myTemplate.execute({user : "Benjamin", sex : 0}));
        }
    }

Here the output will be Male.
But if the sex property of the context was set to 1, it would print Female; if it is anything else, it will print Unknown. Note that the keywords are surrounded by :: (so the interpreter won't think they are just raw text to be printed). Also note that an end keyword is needed, since we do not use braces.

Using lists, arrays, and other iterables

The template engine allows you to iterate over an iterable and repeat a part of the template for each object in it. This is done using the ::foreach:: keyword. When iterating, the context changes to the object currently selected in the iterable. It is also possible to access this object (that is, the context's value) through the __current__ variable. Let's see an example:

    class Main
    {
        public static function main()
        {
            //Let's create two departments:
            var itDep = new Department("Information Technologies Dept.");
            var financeDep = new Department("Finance Dept.");

            //Create some users and add them to their department
            var it1 = new Person();
            it1.lastName = "Par";
            it1.firstName = "John";
            it1.age = 22;

            var it2 = new Person();
            it2.lastName = "Bear";
            it2.firstName = "Caroline";
            it2.age = 40;

            itDep.workers.add(it1);
            itDep.workers.add(it2);

            var fin1 = new Person();
            fin1.lastName = "Ha";
            fin1.firstName = "Trevis";
            fin1.age = 43;

            var fin2 = new Person();
            fin2.lastName = "Camille";
            fin2.firstName = "Unprobable";
            fin2.age = 70;

            financeDep.workers.add(fin1);
            financeDep.workers.add(fin2);

            //Put our departments inside a List:
            var depts = new List<Department>();
            depts.add(itDep);
            depts.add(financeDep);

            //Load our template from Resource:
            var templateCode = haxe.Resource.getString("DeptsList");

            //Execute it
            var template = new haxe.Template(templateCode);
            neko.Lib.print(template.execute({depts : depts}));
        }
    }

    class Person
    {
        public var lastName : String;
        public var firstName : String;
        public var age : Int;

        public function new()
        {
        }
    }

    class Department
    {
        public var name : String;
        public var workers : List<Person>;

        public function new(name : String)
        {
            workers = new List<Person>();
            this.name = name;
        }
    }

In this part of the code, we simply create two departments and some persons, and add those persons to the departments. Now, we want to display the list of departments and all of the employees who work in them. So, let's write a simple template (you can save this file as DeptsList.template):

    <html>
        <head>
            <title>Workers</title>
        </head>
        <body>
            ::foreach depts::
            <h1>::name::</h1>
            <table>
            ::foreach workers::
                <tr>
                    <td>::firstName::</td>
                    <td>::lastName::</td>
                    <td>::if (age < 35)::Junior::elseif (age < 58)::Senior::else::Retired::end::</td>
                </tr>
            ::end::
            </table>
            ::end::
        </body>
    </html>

When compiling your code, you should add the following directive:

    -resource DeptsList.template@DeptsList

The following is the output you will get:

    <html>
        <head>
            <title>Workers</title>
        </head>
        <body>
            <h1>Information Technologies Dept.</h1>
            <table>
                <tr>
                    <td>John</td>
                    <td>Par</td>
                    <td>Junior</td>
                </tr>
                <tr>
                    <td>Caroline</td>
                    <td>Bear</td>
                    <td>Senior</td>
                </tr>
            </table>
            <h1>Finance Dept.</h1>
            <table>
                <tr>
                    <td>Trevis</td>
                    <td>Ha</td>
                    <td>Senior</td>
                </tr>
                <tr>
                    <td>Unprobable</td>
                    <td>Camille</td>
                    <td>Retired</td>
                </tr>
            </table>
        </body>
    </html>

As you can see, this is indeed pretty simple once you have your data structure in place.

Time for action – Executing code from a template

Even though templates can't contain haXe code, they can make calls to so-called "template macros".
Macros are defined by the developer and, just like data, they are passed to the template's execute function; in fact, they are passed in exactly the same way, but as the second parameter. Calling them is quite easy: instead of surrounding them with ::, we simply prefix them with $$, and we can pass them parameters inside parentheses. So, let's take our preceding sample and add a macro to display the number of workers in a department. First, let's add the function to our Main class:

    public static function displayNumberOfWorkers(resolve : String->Dynamic, department : Department)
    {
        return department.workers.length + " workers";
    }

Note that the first argument the macro receives is a function that takes a String and returns a Dynamic. This function allows you to retrieve the value of an expression in the context from which the macro has been called. The other parameters are simply the parameters that the template passes to the macro. So, let's add a call to our macro (note that this template assumes the Person class has been extended with a sex field):

    <html>
        <head>
        </head>
        <body>
            ::foreach depts::
            <h1>::name:: ($$displayNumberOfWorkers(::__current__::))</h1>
            <table>
            ::foreach workers::
                <tr>
                    <td>::firstName::</td>
                    <td>::lastName::</td>
                    <td>::if (sex==0)::M::elseif (sex==1)::F::else::?::end::</td>
                </tr>
            ::end::
            </table>
            ::end::
        </body>
    </html>

As you can see, we pass the current department to the macro when calling it to display the number of workers. So, here is what you get:

    <html>
        <head>
        </head>
        <body>
            <h1>Information Technologies Dept. (2 workers)</h1>
            <table>
                <tr>
                    <td>John</td>
                    <td>Par</td>
                    <td>M</td>
                </tr>
                <tr>
                    <td>Caroline</td>
                    <td>Bear</td>
                    <td>F</td>
                </tr>
            </table>
            <h1>Finance Dept. (2 workers)</h1>
            <table>
                <tr>
                    <td>Trevis</td>
                    <td>Ha</td>
                    <td>M</td>
                </tr>
                <tr>
                    <td>Unprobable</td>
                    <td>Camille</td>
                    <td>?</td>
                </tr>
            </table>
        </body>
    </html>

What just happened?

We have written the displayNumberOfWorkers macro and added a call to it in the template. As a result, we've been able to display the number of workers in a department.

Integrating subtemplates

Sub-templates do not exist as such in the templating system, but you can include sub-templates in a main template, which is a fairly common need; some frameworks, and not only in haXe, have even made this standard behavior. There are two ways of doing it:

- Execute the sub-template, store its return value, and pass it as a property to the main template when executing it.
- Create a macro that executes the sub-template and returns its value. This way, you just call the macro wherever you want to include your sub-template in the main template (see the sketch at the end of this excerpt).

Creating a blog's front page

In this section, we are going to create a front page for a blog by using the haxe.Template class. We will also use the SPOD system to retrieve posts from the database.
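
Before moving on, here is a minimal sketch of the macro approach to sub-templates described above. The template strings and names (headerCode, includeHeader, pageTitle) are illustrative, not from the book:

    class SubTemplateDemo
    {
        static var headerCode = "<h1>::title::</h1>";

        //Macro: executes the sub-template and returns its output to the main template
        public static function includeHeader(resolve : String->Dynamic, title : String) : String
        {
            var sub = new haxe.Template(headerCode);
            return sub.execute({title : title});
        }

        public static function main() : Void
        {
            var tpl = new haxe.Template("$$includeHeader(::pageTitle::)<p>Some body text.</p>");
            //Macros are passed as the second parameter of execute
            neko.Lib.println(tpl.execute({pageTitle : "Welcome"}, {includeHeader : includeHeader}));
        }
    }

Running this would print "<h1>Welcome</h1><p>Some body text.</p>": the macro renders the sub-template and its return value is spliced into the main template's output.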

Oracle WebCenter 11g PS3: Working with Navigation Models and Page Hierarchies

Packt
25 Jul 2011
3 min read
Oracle WebCenter 11g PS3 Administration Cookbook: Over 100 advanced recipes to secure, support, manage, and administer Oracle WebCenter 11g

(For more resources on this subject, see here.)

Creating a navigation model at runtime

Lots of administrators will not have access to JDeveloper, but they will still need to manage navigation models. In WebCenter, you can easily create and manage navigation models at runtime. In this recipe, we will show how you can add navigation models at runtime.

Getting ready

For this recipe, you need a WebCenter Portal application.

How to do it...

1. Run your portal application.
2. Log in as an administrator.
3. Go to the administration page.
4. Select Navigations from the Resource tab.
5. Press the Create button.
6. Specify a name, for example, hr.
7. Specify a description, for example, Navigation model for HR users.
8. Leave Copy From empty. In this list, you can select an existing navigation model so that the newly created model copies the content of the selected model.
9. Press the Create button.

The navigation model is now created and you can add components to it.

How it works...

When you add a navigation model at runtime, an XML file is generated in the background, and the navigation model is stored in the MDS. You can request the path to the actual XML file by selecting Edit properties from the Edit menu when you select a navigation model. In the properties window, you will find a field called Metadata file. This is the complete directory path to the actual XML file.

There's more...

Even at runtime, you can modify the actual XML representation of the navigation model. This allows you to be completely flexible. Not everything is possible at runtime, but when you know what XML to add, you can do so by modifying the XML of the navigation model. This can be done by selecting Edit Source from the Edit menu. This way, you get the same XML representation of a navigation model as in JDeveloper.

Adding a folder to a navigation model

A folder is the simplest resource you can add to your navigation model. It does not link to a specific resource; a folder is only intended to organize your navigation model in a logical way. In this recipe, we will add a folder for the HR resources.

Getting ready

We will add the folder to the default navigation model, so you only need the default WebCenter Portal application for this recipe.

How to do it...

1. Open default-navigation-model.xml from Web Content/oracle/webcenter/portalapp/navigations.
2. Press the Add button and select Folder from the context menu.
3. Specify an id for the folder. The id should be unique for each resource in the navigation model.
4. Specify an expression language value for the Visible attribute.

How it works...

Adding a folder to a navigation model adds a folder tag to the XML with the metadata specified:

    <folder visible="#{true}" id="hr">
      <attributes>
        <attribute isKey="false" value="folder" attributeId="Title"/>
      </attributes>
      <contents/>
    </folder>

The folder tag has a contents tag as a child. This means that when you add resources to a folder, they are added as children of the contents tag (see the sketch at the end of this recipe).

There's more...

You can also add a folder to a navigation model at runtime. This is done by selecting your navigation model and selecting Edit from the Edit menu. From the Add menu, you can select Folder. You are able to add the id, description, visible attribute, and iconUrl.
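
As a hypothetical sketch of the point about the contents tag, here is what the XML could look like after nesting a second folder inside the hr folder shown above (the inner id and title are illustrative, not from the recipe):

    <folder visible="#{true}" id="hr">
      <attributes>
        <attribute isKey="false" value="folder" attributeId="Title"/>
      </attributes>
      <contents>
        <!-- Resources added to the hr folder become children of its contents tag -->
        <folder visible="#{true}" id="hr-benefits">
          <attributes>
            <attribute isKey="false" value="Benefits" attributeId="Title"/>
          </attributes>
          <contents/>
        </folder>
      </contents>
    </folder>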

Apache Solr: Analyzing your Text Data

Packt
22 Jul 2011
13 min read
Apache Solr 3.1 Cookbook

Introduction

A type's behavior can be defined in the context of the indexing process, the context of the query process, or both. Furthermore, a type definition is composed of a tokenizer and filters (both token filters and character filters). The tokenizer specifies how your data will be preprocessed after it is sent to the appropriate field; the analyzer operates on the whole of the data sent to the field, and a type can have only one tokenizer. The result of the tokenizer's work is a stream of objects called tokens.

Next in the analysis chain are the filters. They operate on the tokens in the token stream, and they can do anything with them: change them, remove them, or, for example, make them lowercase. A type can have multiple filters.

One additional kind of filter is the character filter. Character filters do not operate on tokens from the token stream; they operate on the raw data sent to the field and are invoked before the data reaches the analyzer. This article will focus on data analysis and how to handle common day-to-day analysis questions and problems.

Storing additional information using payloads

Imagine that you have a powerful preprocessing tool that can extract information about all the words in the text. Your boss would like you to use it with Solr, or at least store the information it returns in Solr. So what can you do? We can use something called a payload to store that data. This recipe will show you how to do it.

How to do it...

I assume that we already have an application that takes care of recognizing the part of speech in our text data. Now we need to add that information to the Solr index. To do that, we will use payloads, metadata that can be stored with each occurrence of a term. First of all, you need to modify the index structure by adding a new field type to the schema.xml file:

    <fieldtype name="partofspeech" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="integer" delimiter="|"/>
      </analyzer>
    </fieldtype>

Now add the field definition part to the schema.xml file:

    <field name="id" type="string" indexed="true" stored="true" required="true" />
    <field name="text" type="text" indexed="true" stored="true" />
    <field name="speech" type="partofspeech" indexed="true" stored="true" multiValued="true" />

Now let's look at what the example data looks like (I named it ch3_payload.xml):

    <add>
      <doc>
        <field name="id">1</field>
        <field name="text">ugly human</field>
        <field name="speech">ugly|3 human|6</field>
      </doc>
      <doc>
        <field name="id">2</field>
        <field name="text">big book example</field>
        <field name="speech">big|3 book|6 example|1</field>
      </doc>
    </add>

Let's index our data. To do that, we run the following command from the exampledocs directory (put the ch3_payload.xml file there):

    java -jar post.jar ch3_payload.xml

How it works...

What information can a payload hold? It may hold any information that is compatible with the encoder type you define for the solr.DelimitedPayloadTokenFilterFactory filter. In our case, we don't need to write our own encoder; we will use the supplied one to store integers. We will use it to store the boost of the term: for example, nouns will be given a token boost value of 6, while adjectives will be given a boost value of 3.

First we have the type definition.
We defined a new type in the schema.xml file, named partofspeech, based on the Solr text field (attribute class="solr.TextField"). Our tokenizer splits the given text on whitespace characters. Then we have a new filter which handles our payloads. The filter defines an encoder, which in our case is an integer (attribute encoder="integer"). Furthermore, it defines a delimiter which separates the term from the payload; in our case, the separator is the pipe character |.

Next we have the field definitions. In our example, we only define three fields:

- Identifier
- Text
- Recognized speech part with payload

Now let's take a look at the example data. We have two simple fields: id and text. The one that we are interested in is the speech field. Look how it is defined: it contains pairs made of a term, a delimiter, and a boost value, for example, book|6. In the example, I decided to boost the nouns with a boost value of 6 and the adjectives with a boost value of 3. I also decided that words that cannot be identified by my part-of-speech application will be given a boost of 1. Pairs are separated with a space character, which in our case is used to split those pairs; this is the task of the tokenizer which we defined earlier.

To index the documents, we use the simple post tools provided with the example deployment of Solr. To use them, we invoke the command shown in the example. The post tools send the data to the default update handler found at the address http://localhost:8983/solr/update. The parameter is the file that is going to be sent to Solr; you can also post a list of files, not just a single one.

That is how you index payloads in Solr. In the 1.4.1 version of Solr, there is no further support for payloads. Hopefully this will change, but for now, you need to write your own query parser and similarity class (or extend the ones present in Solr) to use them.

Eliminating XML and HTML tags from the text

There are many real-life situations when you have to clean your data. Let's assume that you want to index web pages that your client sends you. You don't know anything about the structure of those pages; the one thing you know is that you must provide a search mechanism that covers the content of the pages. Of course, you could index the whole page by splitting it on whitespace, but then you would probably hear the clients complain about the HTML tags being searchable and so on. So, before we enable searching on the contents of a page, we need to clean the data. In this example, we need to remove the HTML tags. This recipe will show you how to do it with Solr.

How to do it...

Let's suppose our data looks like this (the ch3_html.xml file):

    <add>
      <doc>
        <field name="id">1</field>
        <field name="html"><![CDATA[<html><head><title>My page</title></head><body><p>This is a <b>my</b> <i>sample</i> page</body></html>]]></field>
      </doc>
    </add>

Now let's take care of the schema.xml file. First add the type definition to the schema.xml file:

    <fieldType name="html_strip" class="solr.TextField">
      <analyzer>
        <charFilter class="solr.HTMLStripCharFilterFactory"/>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

And now, add the following to the field definition part of the schema.xml file:

    <field name="id" type="string" indexed="true" stored="true" required="true" />
    <field name="html" type="html_strip" indexed="true" stored="false"/>

Let's index our data.
To do that, we run the following command from the exampledocs directory (put the ch3_html.xml file there):

    java -jar post.jar ch3_html.xml

If there were no errors, you should see a response like this:

    SimplePostTool: version 1.2
    SimplePostTool: WARNING: Make sure your XML documents are encoded in UTF-8, other encodings are not currently supported
    SimplePostTool: POSTing files to http://localhost:8983/solr/update..
    SimplePostTool: POSTing file ch3_html.xml
    SimplePostTool: COMMITting Solr index changes..

How it works...

First of all, we have the example data. In it, we see one document with two fields: the identifier and some HTML data nested in a CDATA section. You must remember to surround the HTML data with CDATA tags if they are full pages that start with HTML tags, like our example; otherwise Solr will have problems parsing the data. However, if you only have a few tags present in the data, you shouldn't worry.

Next, we have the html_strip type definition. It is based on solr.TextField to enable full-text searching. Following that, we have a character filter which handles the stripping of HTML and XML tags. This is something new in Solr 1.4: character filters are invoked before the data is sent to the tokenizer, so they operate on untokenized data. In our case, the character filter strips the HTML and XML tags, attributes, and so on, and then sends the data to the tokenizer, which splits the data on whitespace characters. The one and only filter defined in our type makes the tokens lowercase to simplify searching.

To index the documents, we use the simple post tools provided with the example deployment of Solr. To use them, we invoke the command shown in the example. The post tools send the data to the default update handler found at the address http://localhost:8983/solr/update. The parameter of the command is the file that is going to be sent to Solr; you can also post a list of files, not just a single one.

As you can see, the sample response from the post tools is rather informative. It provides information about the update handler address, the files that were sent, and the commits being performed.

If you want to check how your data was indexed, remember not to be misled when you choose to store the field contents (attribute stored="true"). The stored value is the original one sent to Solr, so you won't be able to see the filters in action there. If you wish to check the actual data structures, please take a look at the Luke utility (a utility that lets you see the index structure and field values, and operate on the index). Luke can be found at the following address: http://code.google.com/p/luke

Solr also provides a tool that lets you see how your data is analyzed. That tool is a part of the Solr administration pages.

Copying the contents of one field to another

Imagine that you have many big XML files that hold information about the books that are stored on library shelves. There is not much data: just the unique identifier, the name of the book, and the name of the author. One day your boss comes to you and says: "Hey, we want to facet and sort on the basis of the book author". You could change your XML and add two fields, but why do that when you can use Solr to do it for you? Well, Solr won't modify your data, but it can copy the data from one field to another. This recipe will show you how to do that.

How to do it...
Let's assume that our data looks like this:

    <add>
      <doc>
        <field name="id">1</field>
        <field name="name">Solr Cookbook</field>
        <field name="author">John Kowalsky</field>
      </doc>
      <doc>
        <field name="id">2</field>
        <field name="name">Some other book</field>
        <field name="author">Jane Kowalsky</field>
      </doc>
    </add>

We want the contents of the author field to be present in the fields named author, author_facet, and author_sort. So let's define the copy fields in the schema.xml file (place the following right after the fields section):

    <copyField source="author" dest="author_facet"/>
    <copyField source="author" dest="author_sort"/>

And that's all. Solr will take care of the rest. The field definition part of the schema.xml file could look like this:

    <field name="id" type="string" indexed="true" stored="true" required="true"/>
    <field name="author" type="text" indexed="true" stored="true" multiValued="true"/>
    <field name="name" type="text" indexed="true" stored="true"/>
    <field name="author_facet" type="string" indexed="true" stored="false"/>
    <field name="author_sort" type="alphaOnlySort" indexed="true" stored="false"/>

Let's index our data. To do that, we run the following command from the exampledocs directory (put the data.xml file there):

    java -jar post.jar data.xml

How it works...

As you can see in the example, we have only three fields defined in our sample data XML file. There are two fields which we are not particularly interested in: id and name. The field that interests us the most is the author field. As mentioned earlier, we want to place the contents of that field in three fields:

- author (the actual field that will be holding the data)
- author_sort
- author_facet

To do that, we use copy fields. Those instructions are defined in the schema.xml file, right after the field definitions. To define a copy field, we need to specify a source field (attribute source) and a destination field (attribute dest). After definitions like those in the example, Solr will copy the contents of the source fields to the destination fields during the indexing process. There is one thing that you have to be aware of: the content is copied before the analysis process takes place. This means that the data is copied as it was sent to the source field.

There's more...

There are a few things worth noting when talking about copying the contents of one field to another.

Copying the contents of dynamic fields to one field

You can also copy the content of multiple fields to one field. To do that, you define a copy field like this:

    <copyField source="*_author" dest="authors"/>

A definition like the one above copies all of the fields that end with _author into one field named authors. Remember that if you copy multiple fields to one field, the destination field should be defined as multivalued (see the sketch at the end of this recipe).

Limiting the number of characters copied

There may be situations where you only need to copy a defined number of characters from one field to another. To do that, we add the maxChars attribute to the copy field definition. It can look like this:

    <copyField source="author" dest="author_facet" maxChars="200"/>

The above definition tells Solr to copy up to 200 characters from the author field to the author_facet field. This attribute can be very useful when copying the content of multiple fields to one field.
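
For the dynamic-field example above to work, schema.xml also needs a matching dynamic field pattern and a multivalued destination. A minimal sketch follows; the field names are illustrative, not from the recipe:

    <!-- Matches source fields such as main_author or second_author -->
    <dynamicField name="*_author" type="text" indexed="true" stored="true"/>
    <!-- Destination must be multiValued because several fields may be copied into it -->
    <field name="authors" type="text" indexed="true" stored="false" multiValued="true"/>
    <copyField source="*_author" dest="authors"/>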

Play Framework: Data Validation Using Controllers

Packt
21 Jul 2011
15 min read
Play Framework Cookbook: Over 60 incredibly effective recipes to take you under the hood and leverage advanced concepts of the Play framework

Introduction

This article will help you to keep your controllers as clean as possible, with a well-defined boundary to your model classes. Always remember that controllers are really only a thin layer to ensure that data from the outside world is valid before handing it over to your models, or to adapt something specifically to HTTP.

URL routing using annotation-based configuration

If you do not like the routes file, you can also describe your routes programmatically by adding annotations to your controllers. This has the advantage of not having any additional config file, but also poses the problem of your URLs being dispersed across your code. You can find the source code of this example in the examples/chapter2/annotationcontroller directory.

How to do it...

Go to your project and install the router module via conf/dependencies.yml:

    require:
        - play
        - play -> router head

Then run play deps, and the router module should be installed in the modules/ directory of your application. Change your controller like this:

    @StaticRoutes({
        @ServeStatic(value="/public/", directory="public")
    })
    public class Application extends Controller {

        @Any(value="/", priority=100)
        public static void index() {
            forbidden("Reserved for administrator");
        }

        @Put(value="/", priority=2, accept="application/json")
        public static void hiddenIndex() {
            renderText("Secret news here");
        }

        @Post("/ticket")
        public static void getTicket(String username, String password) {
            String uuid = UUID.randomUUID().toString();
            renderJSON(uuid);
        }
    }

How it works...

Installing and enabling the module should not leave any open questions for you at this point. As you can see, the controller is now filled with annotations that resemble the entries in the routes.conf file, which you could possibly have deleted by now for this example. However, your application will not start without the file, so you have to keep at least an empty one.

The @ServeStatic annotation replaces the static command in the routes file. The @StaticRoutes annotation is just used for grouping several @ServeStatic annotations and could be left out in this example. Each controller call now has to have an annotation in order to be reachable. The name of the annotation is the HTTP method, or @Any if it should match all HTTP methods. Its only mandatory parameter is the value, which resembles the URI (the second field in routes.conf); all other parameters are optional. Especially interesting is the priority parameter, which can be used to give certain methods precedence. This allows a lower-prioritized catch-all controller like in the preceding example, while special handling applies if the URI is called with the PUT method. You can easily check the correct behavior by using curl, a very practical command-line HTTP client:

    curl -v localhost:9000/

This command should give you a result similar to this:

    > GET / HTTP/1.1
    > User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
    > Host: localhost:9000
    > Accept: */*
    >
    < HTTP/1.1 403 Forbidden
    < Server: Play! Framework;1.1;dev
    < Content-Type: text/html; charset=utf-8
    < Set-Cookie: PLAY_FLASH=;Path=/
    < Set-Cookie: PLAY_ERRORS=;Path=/
    < Set-Cookie: PLAY_SESSION=0c7df945a5375480993f51914804284a3bbca726-%00___ID%3A70963572-b0fc-4c8c-b8d5-871cb842c5a2%00;Path=/
    < Cache-Control: no-cache
    < Content-Length: 32
    <
    <h1>Reserved for administrator</h1>

You can see the HTTP error message and the content returned. You can trigger a PUT request in a similar fashion:

    curl -X PUT -v localhost:9000/

    > PUT / HTTP/1.1
    > User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
    > Host: localhost:9000
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < Server: Play! Framework;1.1;dev
    < Content-Type: text/plain; charset=utf-8
    < Set-Cookie: PLAY_FLASH=;Path=/
    < Set-Cookie: PLAY_ERRORS=;Path=/
    < Set-Cookie: PLAY_SESSION=f0cb6762afa7c860dde3fe1907e884734 76e2564-%00___ID%3A6cc88736-20bb-43c1-9d43-42af47728132%00;Path=/
    < Cache-Control: no-cache
    < Content-Length: 16
    <
    Secret news here

As you can now see, the priority parameter has selected the controller method for the PUT request, which is chosen and returned.

There's more...

The router module is a small but handy module, which is perfectly suited to take a first look at modules and to understand how the routing mechanism of the Play framework works at its core. You should take a look at the source if you need to implement custom mechanisms of URL routing.

Mixing the configuration file and annotations is possible

You can use the router module and the routes file together; this is needed when using modules, as they cannot be specified in annotations. However, keep in mind that this can be pretty confusing. You can find more information about the router module at http://www.playframework.org/modules/router.

Basics of caching

Caching is quite a complex and multi-faceted technique, when implemented correctly. However, implementing caching in your application should not be complex; rather, the mindwork beforehand, where you think about what and when to cache, should be. There are many different aspects, layers, and types (and their combinations) of caching in any web application. This recipe will give a short overview of the different types of caching and how to use them. You can find the source code of this example in the chapter2/caching-general directory.

Getting ready

First, it is important that you understand where caching can happen: inside and outside of your Play application. So let's start by looking at the caching possibilities of the HTTP protocol. HTTP sometimes looks like a simple protocol, but it is tricky in the details. However, it is one of the most proven protocols on the Internet, and thus it is always useful to rely on its functionality. HTTP allows the caching of contents by setting specific headers in the response. There are several headers which can be set:

- Cache-Control: This is a header which must be parsed and used by the client and also by all the proxies in between.
- Last-Modified: This adds a timestamp explaining when the requested resource was changed the last time. On the next request, the client may send an If-Modified-Since header with this date. The server may then just return an HTTP 304 code without sending any data back.
- ETag: An ETag is basically the same as a Last-Modified header, except it has a semantic meaning. It is a calculated hash value resembling the resource behind the requested URL rather than a timestamp. This means the server can decide when a resource has changed and when it has not.
This could also be used for some type of optimistic locking. So, this is a type of caching over which the requesting client has some influence.

There are also other forms of caching which are purely on the server side. In most other Java web frameworks, the HttpSession object is a classic example. Play has a cache mechanism on the server side. It should be used to store big session data, in this case any data exceeding the 4KB maximum cookie size. Be aware that there is a semantic difference between a cache and a session: you should not rely on the data being in the cache, and thus you need to handle cache misses.

You can use the Cache class in your controller and model code. The great thing about it is that it is an abstraction over a concrete cache implementation. If you only use one node for your application, you can use the built-in ehCache for caching. As soon as your application needs more than one node, you can configure memcached in your application.conf, and there is no need to change any of your code.

Furthermore, you can also cache snippets of your templates. For example, there is no need to re-render the portal page of a user on every request when you can cache it for 10 minutes.

This also leads to a very simple truth. Caching gives you a lot of speed and might even lower your database load in some cases, but it is not free. Caching means you need RAM, lots of RAM in most cases. So make sure the system you are caching on never needs to swap; otherwise you could be reading the data from disk anyway. This can be a special problem in cloud deployments, as there are often limitations on available RAM.

The following examples show how to utilize the different caching techniques. We will show four different use cases of caching in the accompanying test:

    public class CachingTest extends FunctionalTest {

        @Test
        public void testThatCachingPagePartsWork() {
            Response response = GET("/");
            String cachedTime = getCachedTime(response);
            assertEquals(getUncachedTime(response), cachedTime);
            response = GET("/");
            String newCachedTime = getCachedTime(response);
            assertNotSame(getUncachedTime(response), newCachedTime);
            assertEquals(cachedTime, newCachedTime);
        }

        @Test
        public void testThatCachingWholePageWorks() throws Exception {
            Response response = GET("/cacheFor");
            String content = getContent(response);
            response = GET("/cacheFor");
            assertEquals(content, getContent(response));
            Thread.sleep(6000);
            response = GET("/cacheFor");
            assertNotSame(content, getContent(response));
        }

        @Test
        public void testThatCachingHeadersAreSet() {
            Response response = GET("/proxyCache");
            assertIsOk(response);
            assertHeaderEquals("Cache-Control", "max-age=3600", response);
        }

        @Test
        public void testThatEtagCachingWorks() {
            Response response = GET("/etagCache/123");
            assertIsOk(response);
            assertContentEquals("Learn to use etags, dumbass!", response);

            Request request = newRequest();
            String etag = String.valueOf("123".hashCode());
            Header noneMatchHeader = new Header("if-none-match", etag);
            request.headers.put("if-none-match", noneMatchHeader);
            DateTime ago = new DateTime().minusHours(12);
            String agoStr = Utils.getHttpDateFormatter().format(ago.toDate());
            Header modifiedHeader = new Header("if-modified-since", agoStr);
            request.headers.put("if-modified-since", modifiedHeader);
            response = GET(request, "/etagCache/123");
            assertStatus(304, response);
        }

        private String getUncachedTime(Response response) {
            return getTime(response, 0);
        }

        private String getCachedTime(Response response) {
            return getTime(response, 1);
        }

        private String getTime(Response response, int pos) {
            assertIsOk(response);
            String content = getContent(response);
            return content.split("\n")[pos];
        }
    }

The first test checks a very nice feature: since Play 1.1, you can cache parts of a page, or more exactly, parts of a template. This test opens a URL, and the page returns the current date and the date of a cached template part, which is cached for about 10 seconds. On the first request, when the cache is empty, both dates are equal. If you repeat the request, the first date is current while the second date is the cached one.

The second test puts the whole response in the cache for 5 seconds. In order to ensure that expiration works as well, this test waits for six seconds and retries the request.

The third test ensures that the correct headers for proxy-based caching are set.

The fourth test uses an HTTP ETag for caching. If the If-Modified-Since and If-None-Match headers are not supplied, it returns a string. On adding these headers with the correct ETag (in this case the hashCode of the string 123) and a date from 12 hours before, a 304 Not-Modified response should be returned.

How to do it...

Add four simple routes to the configuration as shown in the following code:

    GET     /                  Application.index
    GET     /cacheFor          Application.indexCacheFor
    GET     /proxyCache        Application.proxyCache
    GET     /etagCache/{name}  Application.etagCache

The application class features the following controllers:

    public class Application extends Controller {

        public static void index() {
            Date date = new Date();
            render(date);
        }

        @CacheFor("5s")
        public static void indexCacheFor() {
            Date date = new Date();
            renderText("Current time is: " + date);
        }

        public static void proxyCache() {
            response.cacheFor("1h");
            renderText("Foo");
        }

        @Inject
        private static EtagCacheCalculator calculator;

        public static void etagCache(String name) {
            Date lastModified = new DateTime().minusDays(1).toDate();
            String etag = calculator.calculate(name);
            if (!request.isModified(etag, lastModified.getTime())) {
                throw new NotModified();
            }
            response.cacheFor(etag, "3h", lastModified.getTime());
            renderText("Learn to use etags, dumbass!");
        }
    }

As you can see in the controller, the class to calculate ETags is injected into the controller. This is done on startup with a small job, as shown in the following code:

    @OnApplicationStart
    public class InjectionJob extends Job implements BeanSource {

        private Map<Class, Object> clazzMap = new HashMap<Class, Object>();

        public void doJob() {
            clazzMap.put(EtagCacheCalculator.class, new EtagCacheCalculator());
            Injector.inject(this);
        }

        public <T> T getBeanOfType(Class<T> clazz) {
            return (T) clazzMap.get(clazz);
        }
    }

The calculator itself is as simple as possible:

    public class EtagCacheCalculator implements ControllerSupport {

        public String calculate(String str) {
            return String.valueOf(str.hashCode());
        }
    }

The last piece needed is the template of the index() controller, which looks like this:

    Current time is: ${date}
    #{cache 'mainPage', for:'5s'}
        Current time is: ${date}
    #{/cache}

How it works...

Let's check the functionality per controller call. The index() controller has no special treatment inside the controller.
The current date is put into the template, and that's it. The caching logic lives in the template here, because not the whole response but only a part of the returned data should be cached, and for that the #{cache} tag is used. The tag requires two arguments: the for parameter sets the expiry of the cache entry, while the first parameter defines the key used inside the cache. This allows pretty interesting things. Whenever you are on a page where something is rendered exclusively for one user (like his portal entry page), you could cache it with a key which includes the user name or the session ID, like this:

    #{cache 'home-' + connectedUser.email, for:'15min'}
        ${user.name}
    #{/cache}

This kind of caching is completely transparent to the user, as it happens exclusively on the server side.

The same applies to the indexCacheFor() controller. Here, the whole page gets cached instead of parts inside the template. This is a pretty good fit for non-personalized, high-performance delivery of pages, which often make up only a very small portion of your application. However, you still have to think about caching beforehand: if you do a time-consuming JPA calculation and then reuse the cached result in the template, you have still wasted CPU cycles and only saved some rendering time.

The third controller call, proxyCache(), is actually the simplest of all. It just sets the proxy expiry header, Cache-Control. Setting this in your code is optional, because Play is configured to set it as well when the http.cacheControl parameter in your application.conf is set. Be aware that this works only in production, and not in development mode.

The most complex controller is the last one. The first action is to find out the last-modified date of the data you want to return; in this case, it is 24 hours ago. Then the ETag needs to be created somehow. In this case, the calculator gets a String passed. In a real-world application, you would more likely pass an entity, and the service would extract some of its properties, which are used to calculate the ETag using a pretty much collision-safe hash algorithm. After both values have been calculated, you can check in the request whether the client needs new data or may use the old data. This is what happens in the request.isModified() method. If the client either did not send all required headers or an older timestamp was used, real data is returned; in this case, a simple string advising you to use an ETag next time. Furthermore, the calculated ETag and a maximum expiry time are also added to the response via response.cacheFor().

A last specialty in the etagCache() controller is the use of the EtagCacheCalculator. The implementation does not matter in this case, except that it must implement the ControllerSupport interface. However, the initialization of the injected class is still worth a mention. If you take a look at the InjectionJob class, you will see the creation of the instance in the doJob() method on startup, where it is put into a local map. The Injector.inject() call does the magic of injecting the EtagCacheCalculator instance into the controllers. As a result of implementing the BeanSource interface, the getBeanOfType() method tries to get the corresponding instance out of the map. The map actually ensures that only one instance of the class exists.

There's more...

Caching is deeply integrated into the Play framework, as it is built with the HTTP protocol in mind.
If you want to find out more about it, you will have to examine the core classes of the framework.

More information in the ActionInvoker

If you want to know more details about how the @CacheFor annotation works in Play, you should take a look at the ActionInvoker class.

Be thoughtful with ETag calculation

ETag calculation is costly, especially if you are calculating more than the last-modified stamp. You should think about performance here. Perhaps it would be useful to calculate the ETag after saving the entity and to store it directly on the entity in the database. It is useful to run some tests if you are using the ETag to ensure high performance. In case you want to know more about ETag functionality, you should read RFC 2616. You can also disable the creation of ETags entirely by setting http.useETag=false in your application.conf.

Use a plugin instead of a job

The job that implements the BeanSource interface is not a very clean solution to the problem of calling Injector.inject() on application start. It would be better to use a plugin in this case.
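
As a rough illustration of that last suggestion, the same wiring could live in a plugin tied to the application lifecycle. This is a minimal sketch, assuming Play 1.x and that the plugin is registered in a conf/play.plugins file; the class name is illustrative:

    import java.util.HashMap;
    import java.util.Map;
    import play.PlayPlugin;
    import play.inject.BeanSource;
    import play.inject.Injector;

    public class InjectionPlugin extends PlayPlugin implements BeanSource {

        private final Map<Class, Object> clazzMap = new HashMap<Class, Object>();

        @Override
        public void onApplicationStart() {
            // Same wiring as the InjectionJob above, but bound to the plugin lifecycle
            clazzMap.put(EtagCacheCalculator.class, new EtagCacheCalculator());
            Injector.inject(this);
        }

        public <T> T getBeanOfType(Class<T> clazz) {
            return (T) clazzMap.get(clazz);
        }
    }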

Alfresco 3: Web Scripts

Packt
21 Jul 2011
6 min read
Alfresco 3 Cookbook: Over 70 recipes for implementing the most important functionalities of Alfresco

Introduction

You all know about Web Services, which took the web development world by storm a few years ago. Web Services have been instrumental in constructing web APIs (Application Programming Interfaces) and making web applications work in a Service-Oriented Architecture. In the new Web 2.0 world, however, many criticisms arose around traditional Web Services, and thus RESTful services came into the picture. REST (Representational State Transfer) attempts to expose APIs using HTTP or similar protocols and interfaces using well-known, lightweight, standard methods such as GET, POST, PUT, DELETE, and so on.

Alfresco Web Scripts provide RESTful APIs for the repository services and functions. Traditionally, ECM systems have exposed their interfaces using RPC (Remote Procedure Call), but it gradually turned out that RPC-based APIs are not particularly suitable in the wide Internet arena, where multiple environments and technologies reside together and talk seamlessly. With Web Scripts, RESTful services overcome these problems, and integration with an ECM repository has never been so easy and secure. Alfresco Web Scripts were introduced in 2006, and since then they have been quite popular with the developer and system integrator community for implementing services on top of the Alfresco repository and for combining Alfresco with other systems.

What is a Web Script?

A Web Script is simply a URI bound to a service using standard HTTP methods such as GET, POST, PUT, or DELETE. Web Scripts can be written using just the Alfresco JavaScript API and FreeMarker templates, and optionally the Java API as well, with or without a FreeMarker template. For example, the URL http://localhost:8080/alfresco/service/api/search/person.html?q=admin&p=1&c=10 invokes the search service and returns the output in HTML. Internally, a script has been written using the JavaScript API (or Java API) that performs the search, and a FreeMarker template renders the search output in a structured HTML format.

All Web Scripts are exposed as services and are generally prefixed with http://<<server-url>>/<<context-path>>/<<servicepath>>. In a standard scenario, this is http://localhost:8080/alfresco/service

Web Script architecture

Alfresco Web Scripts strictly follow the MVC architecture:

- Controller: Written using the Alfresco Java or JavaScript API, this is where you implement the business requirements of your Web Script. You also prepare the data model that is returned to the view layer. The controller code interacts with the repository via the APIs and other services and processes the business logic.
- View: Written using FreeMarker templates, this is where you implement exactly what your Web Script returns. For data Web Scripts, you construct your JSON or XML output using the template; for presentation Web Scripts, you build your output HTML. The view can be implemented using FreeMarker templates, or using Java-backed Web Script classes.
- Model: Normally constructed in the controller layer (in Java or JavaScript), these values are automatically available in the view layer.

Types of Web Scripts

Depending on their purpose and output, Web Scripts fall into two categories:

- Data Web Scripts: These Web Scripts mostly return data after processing the business requirements.
Such Web Scripts are mostly used to retrieve, update, and create content in the repository, or to query the repository.
- Presentation Web Scripts: When you want to build a user interface using Web Scripts, you use these. They mostly return HTML output. Such Web Scripts are typically used for creating dashlets in Alfresco Explorer or Alfresco Share, or for creating JSR-168 portlets.

Note that this categorization of Web Scripts is not a technical one; it is just a logical separation. Data Web Scripts and presentation Web Scripts are not technically dissimilar; only their usage and purpose differ.

Web Script files

Defining and creating a Web Script in Alfresco requires creating certain files in particular folders. These files are:

- Web Script Descriptor: The descriptor is an XML file used to define the Web Script: the name of the script, the URL(s) on which the script can be invoked, the authentication mechanism of the script, and so on. The name of the descriptor file should be of the form <<service-id>>.<<http-method>>.desc.xml; for example, helloworld.get.desc.xml.
- FreeMarker Template Response file(s) (optional): The FreeMarker template output file(s) is the FTL file returned as the result of the Web Script. The name of the template files should be of the form <<service-id>>.<<http-method>>.<<response-format>>.ftl; for example, helloworld.get.html.ftl and helloworld.get.json.ftl.
- Controller JavaScript file (optional): The controller JavaScript file is the business layer of your Web Script. The name of the JavaScript file should be of the form <<service-id>>.<<http-method>>.js; for example, helloworld.get.js.
- Controller Java file (optional): You can also write your business implementation in Java classes instead of using the JavaScript API.
- Configuration file (optional): You can optionally include a configuration XML file. The name of the file should be of the form <<service-id>>.<<http-method>>.config.xml; for example, helloworld.get.config.xml.
- Resource Bundle file (optional): These are standard message bundle files that can be used to localize Web Script responses. The name of the message files should be of the form <<service-id>>.<<http-method>>.properties; for example, helloworld.get.properties.

The naming conventions of Web Script files are fixed; they follow particular semantics. Alfresco, by default, provides quite a rich list of built-in Web Scripts, which can be found in the tomcat/webapps/alfresco/WEB-INF/classes/alfresco/templates/webscripts/org/alfresco folder. There are a few locations where you can store your Web Scripts:

- Classpath folder: tomcat/webapps/alfresco/WEB-INF/classes/alfresco/templates/webscripts
- Classpath folder (extension): tomcat/webapps/alfresco/WEB-INF/classes/alfresco/extension/templates/webscripts
- Repository folder: /Company Home/Data Dictionary/Web Scripts
- Repository folder (extension): /Company Home/Data Dictionary/Web Scripts Extensions

It is not advisable to keep your Web Scripts in the org/alfresco folder; this folder is reserved for Alfresco's default Web Scripts. Create your own folders instead, or better, create your Web Scripts in the extension folders.

Web Script parameters

You will of course need to pass some parameters to your Web Script and build your business logic around them. You can pass parameters via the query string for GET Web Scripts.
For example:

    http://localhost:8080/alfresco/service/api/search/person.html?q=admin&p=1&c=10

In this script, we have passed three parameters: q (the search query), p (the page index), and c (the number of items per page). You can also pass parameters bound in HTML form data in the case of POST Web Scripts; one example of such a Web Script is uploading a file.
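
To tie the file-naming conventions together, here is a minimal hello-world sketch of the three core files, placed in one of the Web Script folders listed above. The URL, greeting text, and folder are illustrative, not from the article.

helloworld.get.desc.xml, the descriptor:

    <webscript>
      <shortname>Hello World</shortname>
      <description>Minimal sample data Web Script</description>
      <url>/sample/helloworld</url>
      <authentication>user</authentication>
    </webscript>

helloworld.get.js, the controller, which only fills the model:

    // The model map is handed to the FreeMarker view automatically
    model.greeting = "Hello from the Alfresco repository!";

helloworld.get.html.ftl, the view, which renders the model value:

    <html>
      <body>
        <p>${greeting}</p>
      </body>
    </html>

Once deployed and the Web Scripts are refreshed, the script would be reachable at http://localhost:8080/alfresco/service/sample/helloworld.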

Tips for Deploying Sakai

Packt
19 Jul 2011
10 min read
Sakai CLE Courseware Management: The Official Guide

The benefits of knowing that frameworks exist

Sakai is built on top of numerous third-party open source libraries and frameworks. Why write code for converting XML text files to Java objects, or for connecting to and managing databases, when others have specialized in these technical problems and found appropriate, consistent solutions? This reuse of code saves effort and decreases the complexity of creating new functionality. Using third-party frameworks has other benefits as well: you can choose the best from a series of external libraries, increasing the quality of your own product. The external frameworks have their own communities who test them actively. Outsourcing generic requirements, such as the rudiments of generating indexes for searching, allows the Sakai community to concentrate on higher-level goals, such as building new tools.

For developers, but also for course instructors and system administrators, it is useful background to know, roughly, what the underlying frameworks do:

- For a developer, it makes sense to look at reuse first. Why re-invent the wheel? Why write your own framework for manipulating XML files when other developers have already extensively tried and tested, and are running, an existing one? Knowing what others have done saves time. This knowledge is especially handy for new-to-Sakai developers who might be tempted to write from scratch.
- For the system administrator, each framework has its own strengths, weaknesses, and terminology. Understanding the terminology and technologies gives you a head start in debugging glitches and communicating with the developers.
- For a manager, knowing that Sakai has chosen solid and well-respected open source libraries should help influence buying decisions in favor of this platform.
- For the course instructor, knowing which frameworks exist and what their potential is helps inform the debate about adding interesting new features. Knowing what Sakai uses and what is possible sharpens the instructor's focus and the ability to define realistic requirements.
- For the software engineering student, Sakai represents a collection of best practices and frameworks that will make the student more saleable in the labor market.

Using the third-party frameworks

This section details frameworks that Sakai depends on heavily: Spring (http://www.springsource.org/), Hibernate (http://www.hibernate.org/), and numerous Apache projects (http://www.apache.org/). Generally, Java application builders understand these frameworks, which makes it relatively easy to hire programmers with experience. All of these projects are open source, and their use does not clash with Sakai's open source license (http://www.opensource.org/licenses/ecl2.php).

The benefit of using Spring

Spring is a tightly architected set of frameworks designed to support the main goals of building modern business applications. Spring has a broad set of abilities, from connecting to databases, through transaction management, to managing business logic, validation, security, and remote access. It fully supports the most modern architectural design patterns. The framework takes away a lot of drudgery for a programmer and enables pieces of code to be plugged in or removed by editing XML configuration files rather than refactoring the raw code base itself. You can see this for yourself in the user provider within Sakai.
A good example is the user provider within Sakai. When you log in, you may want to validate the user credentials using a piece of code that connects to a directory service such as LDAP, or replace that code with another piece of code that gets credentials from an external database, or even reads them from a text file. Because Sakai's services rely on Spring, you can hand (the technical term is inject) the wanted code to a service manager, which then calls the code when needed. In Sakai terminology, within a running application a service manager manages services for a particular type of data. For example, a course service manager allows programmers to add, modify, or delete courses. A user service manager does the same for users. Spring is responsible for deciding which pieces of code it injects into which service manager; developers do not need to program the heavy lifting, only the configuration. The advantage is that later, as part of adapting Sakai to a specific organization, system administrators can reconfigure authentication or many other services to match local preferences, without recompilation.

Spring also abstracts away the underlying differences between databases. This allows you to program once and run against MySQL, Oracle, and so on, without taking the databases' differences into account. Spring can sit on top of Hibernate and lower-level frameworks, such as JDBC (yet another standard for connecting to databases). This adaptability gives architects more freedom to change and refactor (the process of changing the structure of the code to improve it) without affecting other parts of the code. As Sakai grows in code size, Spring and good architectural design patterns diminish the chance of breaking older code.

To sum up, the Spring framework makes programming more efficient, and Sakai relies heavily on it. Many tasks that programmers would previously have hard coded are now delegated to XML configuration files.
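To make this concrete, here is a minimal sketch of the kind of Spring XML wiring involved. The bean id matches Sakai's UserDirectoryProvider interface, but the implementing class and its properties are invented for illustration; real Sakai configuration files differ in their details:

<!-- Hypothetical wiring: swap in an LDAP-backed provider by editing XML,
     without recompiling Sakai. Class name and properties are illustrative. -->
<bean id="org.sakaiproject.user.api.UserDirectoryProvider"
      class="com.example.provider.LdapUserDirectoryProvider">
    <property name="ldapHost" value="ldap.example.edu"/>
    <property name="ldapPort" value="389"/>
</bean>

Replacing this bean definition with one that points at a database-backed or file-backed provider class changes the authentication behavior without touching the service code that depends on the provider.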
Hibernate for database coupling

Hibernate is all about coupling databases to the code. Hibernate is a powerful, high-performance object/relational persistence and query service. That is to say, a designer describes Java objects in a specific structure within XML files. After reading these files, Hibernate gains the ability to save or load instances of the object from the database. Hibernate supports complex data structures, such as Java collections and arrays of objects. Again, it is a choice of an external framework that takes the drudgery away from the programmer, mostly via XML configuration.

The many Apache frameworks

Sakai is rightfully biased towards projects associated with the Apache Software Foundation (ASF) (http://www.apache.org/). Sakai instances run within a Tomcat server, and many institutes place an Apache web server in front of the Tomcat server to deal with dishing out static content (content that does not change, such as an ordinary web page), SSL/TLS, ease of configuration, and log parsing. Further, individual internal and external frameworks make use of the Apache Commons frameworks (http://commons.apache.org/), which have reusable libraries for all kinds of specific needs, such as validation, encoding, e-mailing, uploading files, and so on. Even if a developer does not use the Commons libraries directly, they are often called by other frameworks and have a significant impact on the well-being (for example, the security) of a Sakai instance.

To ensure look and feel consistency, designers used common technologies, such as Apache Velocity, Apache Wicket, Apache MyFaces (an implementation of Java Server Faces), Reasonable Server Faces (RSF), and plain old Java Server Pages (JSP). Apache Velocity places much of the look and feel in text templates that non-programmers can manipulate with text editors. The use of Velocity has mostly been superseded by JSF. However, as Sakai moves forward, technologies such as RSF and Wicket (http://wicket.apache.org/) are playing a predominant role.

Sakai uses XML as the format of choice to support much of its functionality, from configuration files, to the backing up of sites, to the storage of internal data representations, RSS feeds, and so on. There is a lot of runtime effort in converting to and from XML and translating XML into other formats. Here are the gory technical details. There are two main methods for parsing XML:

- You can parse (another word for process) XML into a Document Object Model (DOM) in memory that you can later traverse and manipulate programmatically.
- XML can also be parsed via an event-driven mechanism, where Java methods are called, for example, when an XML tag begins or ends, or there is a body to the tag. The Simple API for XML (SAX) libraries support this second approach in Java.

Generally, it is easier to program with DOM than SAX, but because you need a model of the XML in memory, DOM, by its nature, is more memory intensive. Why would that matter? In large-scale deployments, the amount of memory tends to limit a Sakai instance's performance, rather than Sakai being limited by the computational power of the servers. Therefore, as Sakai heavily uses XML, whenever possible a developer should consider using SAX and avoid keeping the whole model of the XML document in memory.
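As a minimal sketch of the event-driven approach, the following Java class counts elements with a given tag name without ever holding the whole document in memory; the site element name is an assumed example, not a fixed Sakai format:

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Event-driven parsing: the parser calls startElement once per tag, so
// memory use stays flat no matter how large the XML document grows.
public class SiteCounter extends DefaultHandler {
    private int count;

    @Override
    public void startElement(String uri, String localName,
                             String qName, Attributes attributes) {
        if ("site".equals(qName)) {   // "site" is an assumed element name
            count++;
        }
    }

    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        SiteCounter handler = new SiteCounter();
        parser.parse(args[0], handler);   // path to an XML file
        System.out.println(handler.count + " site elements found");
    }
}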
Looking at dependencies

As Sakai adapts and expands its feature set, expect the range of external libraries to expand. The following list mentions the libraries used, links to the relevant home pages, and a very brief description of their functionality:

- Apache Axis (http://ws.apache.org/axis/): SOAP web services.
- Apache Axis2 (http://ws.apache.org/axis2): SOAP and REST web services; a total rewrite of Apache Axis. Not currently used within Entity Broker, a Sakai-specific component.
- Apache Commons (http://commons.apache.org): Lower-level utilities.
- Batik (http://xmlgraphics.apache.org/batik/): A Java-based toolkit for applications or applets that want to use images in the Scalable Vector Graphics (SVG) format.
- Commons-beanutils (http://commons.apache.org/beanutils/): Methods for Java bean manipulation.
- Commons-codec (http://commons.apache.org/codec): Implementations of common encoders and decoders, such as Base64, Hex, Phonetic, and URLs.
- Commons-digester (http://commons.apache.org/digester): Common methods for initializing objects from XML configuration.
- Commons-httpclient (http://hc.apache.org/httpcomponents-client): Supports HTTP-based standards with the client side in mind.
- Commons-logging (http://commons.apache.org/logging/): Logging support.
- Commons-validator (http://commons.apache.org/validator): Support for verifying the integrity of received data.
- Excalibur (http://excalibur.apache.org): Utilities.
- FOP (http://xmlgraphics.apache.org/fop): Print formatting ready for conversion to PDF and a number of other formats.
- Hibernate (http://www.hibernate.org): ORM database framework.
- Log4j (http://logging.apache.org/log4j): Logging.
- Jackrabbit (http://jackrabbit.apache.org, http://jcp.org/en/jsr/detail?id=170): Content repository; a hierarchical content store with support for structured and unstructured content, full-text search, versioning, transactions, observation, and more.
- James (http://james.apache.org): A mail server.
- Java Server Faces (http://java.sun.com/javaee/javaserverfaces): Simplifies building user interfaces for JavaServer applications.
- Lucene (http://lucene.apache.org): Indexing.
- MyFaces (http://myfaces.apache.org): A JSF implementation with implementation-specific widgets.
- Pluto (http://portals.apache.org/pluto): The reference implementation of the Java Portlet Specification.
- Quartz (http://www.opensymphony.com/quartz): Scheduling.
- Reasonable Server Faces (RSF) (http://www2.caret.cam.ac.uk/rsfwiki): Built on the Spring framework; simplifies the building of views via XHTML.
- ROME (https://rome.dev.java.net): A set of open source Java tools for parsing, generating, and publishing RSS and Atom feeds.
- SAX (http://www.saxproject.org): Event-based XML parser.
- Struts (http://struts.apache.org/): A heavyweight MVC framework; not used in the core of Sakai, but some components are used as part of the occasional tool.
- Spring (http://www.springsource.org): Used extensively within the code base of Sakai; a broad framework designed to make building business applications simpler.
- Tomcat (http://tomcat.apache.org): Servlet container.
- Velocity (http://velocity.apache.org): Templating.
- Wicket (http://wicket.apache.org): Web application development framework.
- Xalan (http://xml.apache.org/xalan-j): An XSLT (Extensible Stylesheet Language Transformations) processor for transforming XML documents into HTML, text, or other XML document types.
- Xerces (http://xerces.apache.org/xerces-j): XML parser.

For the reader who has downloaded and built Sakai from source code: you can automatically generate a list of the current external dependencies via Maven. First, you will need to build the binary version, and then print out the dependency report. To achieve this from within the top-level directory of the source code, run the following commands:

mvn -Ppack-demo install
mvn dependency:list

The list above is based on an abbreviated version of the dependency report, generated from the source code in March 2009. For those of you wishing to dive into the depths of Sakai, you can search the home pages mentioned above. In summary, Spring is the most important underlying third-party framework, and Sakai spends a lot of its time manipulating XML.

Alice 3: Controlling the Behavior of Animations

Packt
18 Jul 2011
11 min read
Alice 3 Cookbook

79 recipes to harness the power of Alice 3 for teaching students to build attractive and interactive 3D scenes and videos

Introduction

You need to organize the statements that request the different actors to perform actions. Alice 3 provides blocks that allow us to configure the order in which many statements should be executed. This article provides many tasks that will allow us to start controlling the behavior of animations with many actors performing different actions. We will execute many actions in a specific order. We will use counters to run one or more statements many times. We will execute actions for many actors of the same class. We will run code for different actors at the same time to render complex animations.

Performing many statements in order

In this recipe, we will execute many statements for an actor in a specific order. We will add eight statements to control a sequence of movements for a bee.

Getting ready

We have to be working on a project with at least one actor. Therefore, we will create a new project and set up a simple scene with a few actors:

1. Select File | New... in the main menu to start a new project. A dialog box will display the six predefined templates with their thumbnail previews in the Templates tab.
2. Select GrassyProject.a3p as the desired template for the new project and click on OK. Alice will display a grassy ground with a light blue sky.
3. Click on Edit Scene, at the lower-right corner of the scene preview. Alice will show a bigger preview of the scene and will display the Model Gallery at the bottom.
4. Add an instance of the Bee class to the scene, and enter bee for the name of this new instance. First, Alice will create the MyBee class to extend Bee. Then, Alice will create an instance of MyBee named bee. Follow the steps explained in the Creating a new instance from a class in a gallery recipe, in the article Alice 3: Making Simple Animations with Actors.
5. Add an instance of the PurpleFlower class, and enter purpleFlower for the name of this new instance.
6. Add another instance of the PurpleFlower class, and enter purpleFlower2 for the name of this new instance. The additional flower may be placed on top of the previously added flower.
7. Add an instance of the ForestSky class to the scene.
8. Place the bee and the two flowers as shown in the next screenshot.

How to do it...

Follow these steps to execute many statements for the bee in a specific order:

1. Open an existing project with one actor added to the scene.
2. Click on Edit Code, at the lower-right corner of the big scene preview. Alice will show a smaller preview of the scene and will display the Code Editor on a panel located at the right-hand side of the main window.
3. Click on the class: MyScene drop-down list and the list of classes that are part of the scene will appear. Select MyScene | Edit run.
4. Select the desired actor in the instance drop-down list located at the left-hand side of the main window, below the small scene preview. For example, you can select bee. Make sure that part: none is selected in the drop-down list located at the right-hand side of the chosen instance.
5. Activate the Procedures tab. Alice will display the procedures for the previously selected actor.
6. Drag the pointAt procedure and drop it in the drop statement here area located below the do in order label, inside the run tab.
Because the instance name is bee, the pointAt statement contains the this.bee and pointAt labels followed by the target parameter and its question marks ???. A list with all the possible instances to pass to the first parameter will appear. Click on this.purpleFlower. The following code will be displayed, as shown in the next screenshot:

this.bee.pointAt(this.purpleFlower)

Drag the moveTo procedure and drop it below the previously dropped procedure call. A list with all the possible instances to pass to the first parameter will appear. Select this.purpleFlower getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal01, as shown in the following screenshot. Click on the more... drop-down menu button that appears at the right-hand side of the recently dropped statement. Click on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears. Click on style and then on BEGIN_AND_END_ABRUPTLY. The following code will be displayed as the second statement:

this.bee.moveTo(this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)

Drag the delay procedure and drop it below the previously dropped procedure call. A list with predefined values to pass to the first parameter will appear. Select 2.0 and the following code will be displayed as the third statement:

this.bee.delay(2.0)

Drag the moveAwayFrom procedure and drop it below the previously dropped procedure call. Select 0.25 for the first parameter. Click on the more... drop-down menu button that appears and select this.purpleFlower getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal01. Click on the additional more... drop-down menu button, on duration, and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style, and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the fourth statement:

this.bee.moveAwayFrom(0.25, this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

Drag the turnToFace procedure and drop it below the previously dropped procedure call. Select this.purpleFlower2 getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal05. Click on the additional more... drop-down menu button, on duration, and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style, and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the fifth statement:

this.bee.turnToFace(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

Drag the moveTo procedure and drop it below the previously dropped procedure call. Select this.purpleFlower2 getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal05. Click on the additional more... drop-down menu button, on duration, and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style, and then on BEGIN_AND_END_ABRUPTLY. The following code will be displayed as the sixth statement:

this.bee.moveTo(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)

Drag the delay procedure and drop it below the previously dropped procedure call. A list with predefined values to pass to the first parameter will appear.
Select 2.0 and the following code will be displayed as the seventh statement:

this.bee.delay(2.0)

Drag the move procedure and drop it below the previously dropped procedure call. Select FORWARD and then 10.0. Click on the more... drop-down menu button, on duration, and then on 10.0 in the cascade menu that appears. Click on the additional more... drop-down menu that appears, on asSeenBy, and then on this.bee. Click on the new more... drop-down menu that appears, on style, and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the eighth and final statement. The following screenshot shows the eight statements that compose the run procedure:

this.bee.move(FORWARD, 10.0, duration: 10.0, asSeenBy: this.bee, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

Select File | Save as... from Alice's main menu and give a new name to the project. Then you can make changes to the project according to your needs.

How it works...

When we run a project, Alice creates the scene instance, creates and initializes all the instances that compose the scene, and finally executes the run method defined in the MyScene class. By default, the statements we add to a procedure are included within the do in order block. We added eight statements to the do in order block, and therefore Alice will begin with the first statement:

this.bee.pointAt(this.purpleFlower)

Once the bee finishes executing the pointAt procedure, the execution flow goes on with the next statement specified in the do in order block. Thus, Alice will execute the following second statement after the first one finishes:

this.bee.moveTo(this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)

The do in order statement encapsulates a group of statements with a synchronous execution. Thus, when we add many statements within a do in order block, these statements will run one after the other. Each statement requires its previous statement to finish before starting its execution, and therefore we can use the do in order block to group statements that must run in a specific order.

The moveTo procedure moves the 3D model that represents the actor until it reaches the position of the other actor. The value for the target parameter is the instance of the other actor. We want the bee to move to one of the petals of the first flower, purpleFlower, and therefore we passed this value to the target parameter:

this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01)

We called the getPart function for purpleFlower with IStemMiddle_IStemTop_IHPistil_IHPetal01 as the name of the part to return. This function allows us to retrieve one petal from the flower as an instance. We used the resulting instance as the target parameter for the moveTo procedure, and so we could make the bee move to the specific petal of the flower. Once the bee finishes executing the moveTo procedure, the execution flow goes on with the next statement specified in the do in order block. Thus, Alice will execute the following third statement after the second one finishes:

this.bee.delay(2.0)

The delay procedure puts the actor to sleep in its current position for the specified number of seconds. The next statement specified in the do in order block will run after waiting for two seconds. The statements added to the run procedure will perform the following visible actions in the specified order:

1. Point the bee at purpleFlower.
2. Begin and end abruptly a movement for the bee from its position to the petal named IStemMiddle_IStemTop_IHPistil_IHPetal01 of purpleFlower. The total duration of the animation must be 1 second.
3. Make the bee stay in its position for 2 seconds.
4. Move the bee away 0.25 units from the position of the petal named IStemMiddle_IStemTop_IHPistil_IHPetal01 of purpleFlower. Begin the movement abruptly but end it gently. The total duration of the animation must be 1 second.
5. Turn the bee to face the petal named IStemMiddle_IStemTop_IHPistil_IHPetal05 of purpleFlower2. Begin the movement abruptly but end it gently. The total duration of the animation must be 1 second.
6. Begin and end abruptly a movement for the bee from its position to the petal named IStemMiddle_IStemTop_IHPistil_IHPetal05 of purpleFlower2. The total duration of the animation must be 1 second.
7. Make the bee stay in its position for 2 seconds.
8. Move the bee forward 10 units. Begin the movement abruptly but end it gently. The total duration of the animation must be 10 seconds. The bee will disappear from the scene.

The following screenshot shows six of the rendered frames.

There's more...

When you work with the Alice code editor, you can temporarily disable statements. Alice doesn't execute the disabled statements. However, you can enable them again later. It is useful to disable one or more statements when you want to test the results of running the project without these statements, but you might want to enable them back to compare the results. To disable a statement, right-click on it and deactivate the IsEnabled option, as shown in the following screenshot. The disabled statements will appear with diagonal lines, as shown in the next screenshot, and won't be considered at run-time. To enable a disabled statement, right-click on it and activate the IsEnabled option.
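For reference, the eight statements assembled in this recipe give a run procedure that looks roughly like the following; Alice displays the statements as draggable tiles, so this textual form is only an approximation:

this.bee.pointAt(this.purpleFlower)
this.bee.moveTo(this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)
this.bee.delay(2.0)
this.bee.moveAwayFrom(0.25, this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)
this.bee.turnToFace(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)
this.bee.moveTo(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)
this.bee.delay(2.0)
this.bee.move(FORWARD, 10.0, duration: 10.0, asSeenBy: this.bee, style: BEGIN_ABRUPTLY_AND_END_GENTLY)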


Drupal 7: Customizing an Existing Theme

Packt
15 Jul 2011
9 min read
Drupal 7 Themes

Create new themes for your Drupal 7 site with a clean layout and powerful CSS styling

With the arrival of Drupal 6, sub-theming really came to the forefront of theme design. While previously many people copied themes and then re-worked them to achieve their goals, that process became less attractive as sub-themes came into favor. This article focuses on sub-theming and how it should be used to customize an existing theme. We'll start by looking at how to set up a workspace for Drupal theming.

Setting up the workspace

Before you get too far into attempting to modify your theme files, you should put some thought into your tools. There are several software applications that can make your work modifying themes more efficient. Though no specific tools are required to work with Drupal themes (you could do it all with just a text editor), there are a couple of applications that you might want to consider adding to your tool kit.

The first item to consider is browser selection. Firefox has a variety of extensions that make working with themes easier. The Web Developer extension, for example, is hugely helpful when dealing with CSS and related issues. We recommend the combination of Firefox and the Web Developer extension to anyone working with Drupal themes. Another extension popular with many developers is Firebug, which is very similar to the Web Developer extension, and is indeed more powerful in several regards. Pick up Web Developer, Firebug, and other popular Firefox add-ons at https://addons.mozilla.org/en-US/firefox/.

There are also certain utilities you can add to your Drupal installation that will assist with theming the site. Two modules you will definitely want to install are Devel and Theme developer. Theme developer can save you untold hours of digging around trying to find the right function or template. When the module is active, all you need to do is click on an element and the Theme developer pop-up window will show you what is generating the element, along with other useful information such as potential template suggestions. The Devel module performs a number of functions and is a prerequisite for running Theme developer. Download Devel from http://drupal.org/project/devel. You can find Theme developer at http://drupal.org/project/devel_themer. Note that neither Devel nor Theme developer is suitable for use in a production environment; you don't want these installed and enabled on a client's public site, as they can present a security risk.

When it comes to working with PHP files and the various theme files, you will also need a good code editor. There's a whole world of options out there, and the right choice for you is really a personal decision. Suffice it to say: as long as you are comfortable with it, it's probably the right choice.

Setting up a local development server

Another key component of your workspace is the ability to preview your work, preferably locally. As a practical matter, previewing Drupal themes requires the use of a server; themes are difficult to preview with any accuracy without a server to execute the PHP code. While you can work on a remote server on your webhost, this is often undesirable due to latency or simple lack of availability. A quick solution to this problem is to set up a local server using something like the XAMPP package (or the MAMP package for Mac OS X). XAMPP provides a one-step installer containing everything you need to set up a server environment on your local machine (Apache, MySQL, PHP, phpMyAdmin, and more).
Visit http://www.apachefriends.org to download XAMPP, and you can have your own dev server set up on your local machine in no time at all. Follow these steps to acquire the XAMPP installation package and get it set up on your local machine:

1. Connect to the Internet and direct your browser to http://www.apachefriends.org.
2. Select XAMPP from the main menu.
3. Click the link labeled XAMPP for Windows.
4. Click the .zip option under the heading XAMPP for Windows. Note that you will be redirected to the SourceForge site for the actual download.
5. When the pop-up prompts you to save the file, click OK and the installer will download to your computer.
6. Locate the downloaded archive (.zip) package on your local machine, and double-click it.
7. Double-click the new file to start the installer.
8. Follow the steps in the installer and then click Finish to close the installer.

That's all there is to it. You now have all the elements you need for your own local development server. To begin, simply open the XAMPP application and you will see buttons that allow you to start the servers. To create a new website, simply copy the files into a directory placed inside the /htdocs directory. You can then access your new site by opening the URL in your browser, as follows: http://localhost/sitedirectoryname. As a final note, you may also want to have access to a graphics program to handle editing any image files that might be part of your theme. Again, there is a world of options out there and the right choice is up to you.

Planning the modifications

A proper dissertation on site planning and usability is beyond the scope of this article. Similarly, this article is neither an HTML nor a CSS tutorial; accordingly, we are going to focus on identifying the issues and delineating the process involved in the customization of an existing theme, rather than focusing on design techniques or coding-specific changes.

Any time you set off down the path of transforming an existing theme into something new, you need to spend some time planning. The principle here is the same as in many other areas: a little time spent planning at the front end of a project can pay off big in savings later. When it comes to planning your theming efforts, the very first question you have to answer is whether you are going to customize an existing theme or create a new theme. In either event, it is recommended that you work with sub-themes. The key difference is the nature of the base theme you select, that is, the theme you are going to choose as your starting point.

In sub-theming, the base theme is the starting point. Sub-themes inherit the parent theme's resources; hence, the base theme you select will shape your theme building. Some base themes are extremely simple, designed to impose the fewest restrictions on the themer; others are designed to give you the widest range of resources to assist your efforts. However, since you can use any theme as a base theme, the reality is that most themes fall in between, at least in terms of their suitability for use as a base theme. Another way to think of the relationship between a base theme and a sub-theme is in terms of a parent-child relationship. The child (sub-theme) inherits its characteristics from its parent (the base theme). There are no limits to the ability to chain together multiple parent-child relationships; a sub-theme can be the child of another sub-theme.
When it comes to customizing an existing theme, the reality is often that the selection of the base theme will be dictated by the theme's default appearance and feature set; in other words, you are likely to select the theme that is already the closest to what you want. That said, don't limit yourself to a shallow surface examination of the theme. In order to make the best decision, you need to look carefully at the underlying theme's files and structures and see if it truly is the best choice. While the original theme may be fairly close to what you want, it may also have limitations that require work to overcome. Sometimes it is actually faster to start with a more generic theme that you already know and can work with easily. Learning someone else's code is always a bit of a chore, and themes are like any other code: some are great, some are poor, most are simply okay. A best-practices theme makes your life easier.

In simplest terms, the process of customizing an existing theme can be broken into three steps (a sketch of the sub-theme declaration from step 2 appears after this section):

1. Select your base theme.
2. Create a sub-theme from your base theme.
3. Make the changes to your new sub-theme.

Why is it not recommended to simply modify the theme directly? There are two reasons. First, best practices say not to touch the original files; leave them intact so you can upgrade them without losing customizations. Second, as a matter of theming philosophy, it's better to leave the things you don't need to change in the base theme and focus your sub-theme on only the things you want to change. This approach to theming is more manageable and makes for much easier testing as you go.

Selecting a base theme

For the sake of simplicity, in this article we are going to work with the default Bartik theme. We'll take Bartik, create a sub-theme, and then modify the sub-theme to create the customized theme. Let's call the new theme "JeanB". Note that while we've named the theme "JeanB", when it comes to naming the theme's directory we will use "jeanb", as the system only supports lowercase letters and underscores. In order to make the example easier to follow and to avoid the need to install a variety of third-party extensions, the modifications we will make in this article will be done using only the default components. Arguably, when you are building a site like this for deployment in the real world (rather than simply for skills development), you might wish to consider implementing one or more specialized third-party extensions to handle certain tasks.
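To give a sense of what creating the sub-theme involves, a Drupal 7 sub-theme is declared with a .info file in its own directory, conventionally under sites/all/themes. A minimal sketch for our example might live at sites/all/themes/jeanb/jeanb.info; the stylesheet line is an assumed addition for the customizations to come:

; Hypothetical jeanb.info: declares JeanB as a sub-theme of Bartik.
name = JeanB
description = A sub-theme based on Bartik.
core = 7.x
base theme = bartik
; Registers an extra stylesheet for our own CSS overrides (assumed file).
stylesheets[all][] = css/jeanb.css

With this file in place and the theme enabled, everything not overridden in the jeanb directory continues to be inherited from Bartik, which is exactly the division of labor described above.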


Drupal 7 Fields/CCK: Field Display Management

Packt
12 Jul 2011
6 min read
Drupal 7 Fields/CCK Beginner's Guide

Explore Drupal 7 fields/CCK and master their use

Field display

The purpose of managing the field display is not only to beautify the visual representation of fields, but also to affect how people read the information on a web page and the usability of a website. The design of a field display has to seem logical to users and be easy to understand. Consider an online application form where the first name field is positioned between the state and country fields. Although the application can gather the information just fine, this would be very illogical and bothersome to our users. It goes without saying that the first name should be in the personal details section, while the state and country should go in the personal address section of the form.

Time for action – a first look at the field display settings

In this section, we will learn where to find the field display settings in Drupal. Now, let's take a look at the field display settings:

1. Click on the Structure link on the administration menu at the top of the page.
2. Click on the Content types link on this page.
3. On the right of the table, click on the manage display link to go to the manage display administration page to adjust the order and positioning of the field labels.
4. Click on the manage display link to adjust the field display for the Recipe content type. This page lists all of the field display settings that are related to the content type we selected.

If we click on the select list for any of the labels, there are three options that we can select: Above, Inline, and <Hidden>. If we click on the select list for any of the formats, there are five options that we can select from, namely, Default, Plain text, Trimmed, Summary or trimmed, and <Hidden>. However, the options will vary with field types. In the case of the Difficulty field, a multiple values field, clicking on the select list for Format will show three options: Default, Key, and <Hidden>.

What just happened?

We have learned where to find the field display settings in Drupal, and we have taken a look at the options for the field display. When we click on the select list for labels, there are three options that we can use to control the display of the field label:

- Above: The label will be positioned above the field widget.
- Inline: The label will be positioned to the left of the field widget.
- <Hidden>: The label will not be displayed on the page.

When we click on the select list for formats, the options shown depend on the field type we select. For the Body field, we have five options that we can choose from to control the body field display:

- Default: The field content will be displayed as we specified when we created the field.
- Plain text: The field content will be displayed as plain text, ignoring any HTML tags the content contains.
- Trimmed: The field content will be truncated to a specified number of characters.
- Summary or trimmed: The summary of the field will be displayed; if no summary has been entered, the content of the field will be trimmed to a specified number of characters.
- <Hidden>: The field content will not be displayed.

Formatting field display in the Teaser view

The teaser view of content is usually the first piece of information people will see on a homepage or a landing page, so it is useful to have options that control the display in teaser view.
For example, for the yummy recipe website, the client would like the number of characters displayed in teaser view limited to 300, because they do not want to display too much text for each post on the homepage.

Time for action – formatting the Body field display in teaser view

In this section, we will format the Body field of the Recipe content type in teaser view:

1. Click on the Structure link on the administration menu at the top of the page.
2. Click on the Content types link on the following page.
3. Click on the manage display link to adjust the field display for the Recipe content type.
4. At the top-right of the page there are two buttons: the first one is Default, the second one is Teaser. Click on the Teaser button. This page lists all the available fields for the teaser view of the Recipe content type.
5. Now click on the gear icon to update the trim length settings. Clicking on the gear icon will display the Trim length settings. The default value of Trim length is 600; change it to 300, and then click on the Update button to confirm the entered value.
6. Click on the Save button at the bottom of the page to store the value in Drupal.

If we go back to the homepage, we will see the recipe content in teaser view. It is now truncated to 300 characters.

What just happened?

We have formatted the Body field of the Recipe content type in Teaser view. Currently there are two view modes: one is the Default view mode, and the other is the Teaser view mode. When we need to format the field content in Teaser view, we have to switch to the Teaser view mode on the Manage display administration page to modify these settings. Moreover, when entering data or updating the field display settings, we have to remember to click on the Save button at the bottom of the page to permanently store the values in Drupal. If we just click on the Update button, it will not store the values in Drupal; it will only confirm the values we entered. Therefore, we always need to remember to click on the Save button after updating any settings. Furthermore, there are other fields positioned in the hidden section at the bottom of the page, which means those fields will not be shown in Teaser view. In our case, only the Body field is shown in Teaser view. We can easily drag and drop a field to the hidden section to hide the field, or drag and drop a field above the hidden section to show the field on the screen.


Drupal 7 Fields/CCK: Using the Image Field Modules

Packt
11 Jul 2011
6 min read
Drupal 7 Fields/CCK Beginner's Guide

Explore Drupal 7 fields/CCK and master their use

Adding image fields to content types

We have learned how to add file fields to content types. In this section, we will learn how to add image fields to content types so that we can attach images to our content.

Time for action – adding an image field to the Recipe content type

In this section, we will add an image field to the Recipe content type. Follow these steps:

1. Click on the Structure link in the administration menu at the top of the page.
2. Click on the Content types link to go to the content types administration page.
3. Click on the manage fields link on the Recipe row, because we would like to add an image field to the Recipe content type.
4. Locate the Add new field section. In the Label field enter "Image", and in the Field name field enter "image".
5. In the field type select list, select Image as the field type; the field widget will automatically switch to Image as the field widget. After the values are entered, click on Save.

What just happened?

We added an image field to the Recipe content type. The process of adding an image field to the Recipe content type is similar to the process of adding a file field, except that we selected Image as the field type and Image as the field widget. We will configure the image field in the next section.

Configuring image field settings

We have already added the image field. In this section, we will configure the image field, learn how to configure the image field settings, and understand how those settings are reflected in the image output.

Time for action – configuring an image field for the Recipe content type

In this section, we will configure the image field settings in the Recipe content type. Follow these steps:

1. After clicking on the Save button, Drupal will direct us to the next page, which provides the field settings for the image field. The Upload destination option is the same as in the file field settings, which gives us the option to decide whether image files should be public or private. In our case, we select Public files.
2. The last option is the Default image field. We will leave this option for now, and click on the Save field settings button to go to the next step.
3. The next page contains all the settings for the image field. The most common field settings are the Label field, the Required field, and the Help text field. We will leave these fields as default.
4. The Allowed file extensions section is similar to the one in the file field we have already learned about. We will use the default value, so we don't need to enter anything in this field.
5. The File directory section is also the same as the setting in the file field. Enter "image_files" in this field.
6. Enter "640" x "480" in the Maximum image resolution field and the Minimum image resolution field, and enter "2MB" in the Maximum upload size field.
7. Check the Enable Alt field and the Enable Title field checkboxes.
8. Select thumbnail in the Preview image style select list, and select Throbber in the Progress indicator section.
9. The bottom part of this page, the image field settings section, is the same as the previous page we just saved, so we don't need to re-enter the values. Click on the Save settings button at the bottom of the page to store all the values we entered on this page.
10. After clicking on the Save settings button, Drupal sends us back to the Manage fields administration page. Now the image field is added to the Recipe content type.

What just happened?

We have added and configured an image field for the Recipe content type. We left the default values in the Label field, the Required field, and the Help text field; they are the most common settings in fields. The Allowed file extensions section is similar to the one in the file field, and lets us enter the file extensions of the images that are allowed to be uploaded. The File directory field is the same as the one in the file field, giving us the option to save the uploaded files to a directory other than the default file directory location.

The Maximum image resolution field allows us to specify the maximum width and height of the image resolution that can be uploaded. If an uploaded image is bigger than the resolution we specified, Drupal will resize it to the size we specified. If we do not specify a size, there is no restriction on images. The Minimum image resolution field is the opposite of the maximum image resolution: we specify the minimum width and height of the image resolution that is allowed to be uploaded. If we upload an image with a resolution less than the minimum size we specified, Drupal will show an error message and reject the image upload.

The Enable Alt field and the Enable Title field can be enabled to allow site administrators to enter the alt and title attributes of the img tag in XHTML, which can improve the accessibility and usability of a website when using images. The Preview image style select list allows us to select which image style will be used for display while editing content. Currently it provides three image styles: thumbnail, medium, and large. The thumbnail image style is used by default. We will learn how to create a custom image style in the next section.

Have a go hero – adding an image field to the Cooking Tip content type

It's time for another challenge. We have added an image field to the Recipe content type. We can use the same method we have learned here to add and configure an image field for the Cooking Tip content type. You can apply the same steps used to create image fields for the Recipe content type and try to understand the differences between the settings on the image field settings administration page.

Drupal 7 fields/CCK: Using the file field modules

Packt
08 Jul 2011
4 min read
Adding and configuring file fields to content types

There are many cases where we need to attach files to website content. For instance, a restaurant owner might like to upload their latest menu in PDF format to their website, or a financial institution might like to upload a new product catalog so customers can download and print the catalog if they need it. The File module is built into the Drupal 7 core and provides us with the ability to attach files to content easily, to decide the attachment display format, and also to manage file locations. Furthermore, the File module is integrated with fields and provides a file field type, so we can easily attach files to content using the already discussed field system, making the process of managing files much more streamlined.

Time for action – adding and configuring a file field to the Recipe content type

In this section, we will add a file field to the Recipe content type, which will allow files to be attached to Recipe content. Follow these steps:

1. Click on the Structure link in the administration menu at the top of the page. The following page will display a list of options.
2. Click on the Content types link to go to the Content types administration page.
3. Since we want to add a file field to the Recipe content type, click on the manage fields link on the Recipe row.
4. This page will display the existing fields of the Recipe content type. In the Label field enter "File", and in the Field name field enter "file".
5. In the field type select list, select File as the field type; the field widget will automatically switch to File as the field widget. After the values are entered, click on Save.
6. A new page will display the field settings for the file field that we are creating. There are two checkboxes, and we will enable both of them. The last radio button option will be selected by default. Then click on the Save field settings button at the bottom of the page.
7. We clicked on the Save field settings button to store the values for the file field settings that we selected. After that, Drupal will direct us to the file field settings administration page.
8. We can leave the Label field as default, as it will be filled automatically with the value we entered previously. We will also leave the Required field as default, because we do not want to force users to attach files to every recipe. In the Help text field, we can enter "Attach files to this recipe".
9. In the Allowed file extensions section, we can enter the file extensions that are allowed to be uploaded. In this case, we will enter "txt, pdf, zip".
10. In the File directory section, we can enter the name of a subdirectory that will store the uploaded files; in this case, we will enter "recipe_files".
11. In the Maximum upload size section, we can enter a value to limit the file size when uploading files. We will enter "2MB" in this field.
12. The Enable Description field checkbox allows users to enter a description of the uploaded files. We will enable this option, because we would like users to enter a description of the uploaded files.
13. In the Progress indicator section, we can select which indicator will be used when uploading files. We select Throbber as the progress indicator for this field.
14. You will notice that the bottom part of the page is exactly the same as in the previous section. We can ignore the bottom part and click on the Save settings button to store all the values we have entered.
15. Drupal will direct us back to the manage fields administration page with a message saying we have successfully saved the configuration for the file field. After creating the file field, the file field row will be added to the table. This table displays the details about the file field we just created.


EJB 3.1: Introduction to Interceptors

Packt
06 Jul 2011
7 min read
EJB 3.1 Cookbook

Build real world EJB solutions with a collection of simple but incredibly effective recipes with this book and eBook

Introduction

Most applications have cross-cutting functions which must be performed. These cross-cutting functions may include logging, managing transactions, security, and other aspects of an application. Interceptors provide a way to achieve these cross-cutting activities. The use of interceptors provides a way of adding functionality to a business method without modifying the business method itself. The added functionality is not intermeshed with the business logic, resulting in a cleaner and easier to maintain application. Aspect Oriented Programming (AOP) is concerned with providing support for these cross-cutting functions in a transparent fashion. While interceptors do not provide as much support as other AOP languages, they do offer a good level of support. Interceptors can be:

- Used to keep business logic separate from non-business related activities
- Easily enabled/disabled
- Used to provide consistent behavior across an application

Interceptors are specific methods invoked around a method or methods of a target EJB. We will use the term target to refer to the class containing the method(s) an interceptor will be executing around. The interceptor's method will be executed before the EJB's method is executed. When the interceptor method executes, it is passed an InvocationContext object. This object provides information relating to the state of the interceptor and the target. Within the interceptor method, the InvocationContext's proceed method can be invoked, which will result in the target's business method being executed or, as we will see shortly, the next interceptor in the chain. When the business method returns, the interceptor continues execution. This permits execution of code before and after the execution of a business method.

Interceptors can be used with:

- Stateless session EJBs
- Stateful session EJBs
- Singleton session EJBs
- Message-driven beans

The @Interceptors annotation defines which interceptors will be executed for all or individual methods of a class. Interceptor classes share the lifecycle of the EJB they are applied to; in the case of stateful EJBs, this means the interceptor could be passivated and activated. In addition, they support the use of dependency injection. The injection is done using the EJB's naming context. More than one interceptor can be used at a time. The sequence of interceptor execution is referred to as an interceptor chain. For example, an application may need to start a transaction based on the privileges of a user. These actions should also be logged. An interceptor can be defined for each of these activities: validating the user, starting the transaction, and logging the event. The use of interceptor chaining is illustrated in the Using interceptors to handle application statistics recipe.

Lifecycle callbacks such as @PreDestroy and @PostConstruct can also be used within interceptors. They can access interceptor state information, as discussed in the Using lifecycle methods in interceptors recipe. Interceptors are useful for:

- Validating parameters and potentially changing them before they are sent to a method
- Performing security checks
- Performing logging
- Performing profiling
- Gathering statistics

An example of parameter validation can be found in the Using the InvocationContext to verify parameters recipe. Security checks are illustrated in the Using interceptors to enforce security recipe.
The use of interceptor chaining to record a method's hit count and the time spent in the method is discussed in the Using interceptors to handle application statistics recipe. Interceptors can also be used in conjunction with timer services. The recipes in this article are based largely around a conference registration application as developed in the first recipe. It will be necessary to create this application before the other recipes can be demonstrated.

Creating the Registration Application

A RegistrationApplication is developed in this recipe. It provides the ability for attendees to register for a conference. The application will record their personal information using an entity and other supporting EJBs. This recipe details how to create this application.

Getting ready

The RegistrationApplication consists of the following classes:

- Attendee – An entity representing a person attending the conference
- AbstractFacade – A facade-based class
- AttendeeFacade – The facade class for the Attendee class
- RegistrationManager – Used to control the registration process
- RegistrationServlet – The GUI interface for the application

The steps used to create this application include:

1. Creating the Attendee entity and its supporting classes
2. Creating a RegistrationManager EJB to control the registration process
3. Creating a RegistrationServlet to drive the application

The RegistrationManager will be the primary vehicle for the demonstration of interceptors.

How to do it...

Create a Java EE application called RegistrationApplication. Add a packt package to the EJB module and a servlet package in the application's WAR module. Next, add an Attendee entity to the packt package. This entity possesses four fields: name, title, company, and id. The id field should be auto-generated. Add getters and setters for the fields. Also add a default constructor and a three-argument constructor for the first three fields. The major components of the class are shown below without the getters and setters.

@Entity
public class Attendee implements Serializable {
    private String name;
    private String title;
    private String company;
    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    public Attendee() {
    }

    public Attendee(String name, String title, String company) {
        this.name = name;
        this.title = title;
        this.company = company;
    }
}

Next, add an AttendeeFacade stateless session bean which is derived from the AbstractFacade class. The AbstractFacade class is not shown here.

@Stateless
public class AttendeeFacade extends AbstractFacade<Attendee> {
    @PersistenceContext(unitName = "RegistrationApplication-ejbPU")
    private EntityManager em;

    protected EntityManager getEntityManager() {
        return em;
    }

    public AttendeeFacade() {
        super(Attendee.class);
    }
}

Add a RegistrationManager stateful session bean to the packt package. Add a single method, register, to the class. The method should be passed three strings for the name, title, and company of the attendee. It should return an Attendee reference. Use dependency injection to add a reference to the AttendeeFacade. In the register method, create a new Attendee and then use the AttendeeFacade class to create it. Next, return a reference to the Attendee.
@Stateful
public class RegistrationManager {
    @EJB
    AttendeeFacade attendeeFacade;
    Attendee attendee;

    public Attendee register(String name, String title, String company) {
        attendee = new Attendee(name, title, company);
        attendeeFacade.create(attendee);
        return attendee;
    }
}

In the servlet package of the WAR module, add a servlet called RegistrationServlet. Use dependency injection to add a reference to the RegistrationManager. In the try block of the processRequest method, use the register method to register an attendee and then display the attendee's name.

public class RegistrationServlet extends HttpServlet {
    @EJB
    RegistrationManager registrationManager;

    protected void processRequest(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            out.println("<html>");
            out.println("<head>");
            out.println("<title>Servlet RegistrationServlet</title>");
            out.println("</head>");
            out.println("<body>");
            Attendee attendee = registrationManager.register("Bill Schroder",
                    "Manager", "Acme Software");
            out.println("<h3>" + attendee.getName() + " has been registered</h3>");
            out.println("</body>");
            out.println("</html>");
        } finally {
            out.close();
        }
    }
    ...
}

Execute the servlet. The output should appear as shown in the following screenshot:

How it works...

The Attendee entity holds the registration information for each participant. The RegistrationManager session bean only has a single method at this time. In later recipes we will augment this class to add other capabilities. The RegistrationServlet is the client for the EJBs.
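The recipes that follow wrap interceptors around this application. As a preview, a minimal logging interceptor, sketched here from the general @AroundInvoke mechanism rather than copied from any particular recipe, might look like this:

import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

// A sketch of a pass-through logging interceptor.
public class SimpleInterceptor {

    @AroundInvoke
    public Object logMethod(InvocationContext context) throws Exception {
        System.out.println("SimpleInterceptor entered: " +
                context.getMethod().getName());
        try {
            // Continue with the next interceptor in the chain, or the target method
            return context.proceed();
        } finally {
            System.out.println("SimpleInterceptor exited: " +
                    context.getMethod().getName());
        }
    }
}

It would be attached to the target EJB with the @Interceptors annotation, for example @Interceptors(SimpleInterceptor.class) placed on the RegistrationManager class or on its register method.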
EJB 3.1: Working with Interceptors

Packt
06 Jul 2011
3 min read
EJB 3.1 Cookbook
Build real world EJB solutions with a collection of simple but incredibly effective recipes with this book and eBook

The recipes in this article are based largely around a conference registration application as developed in the first recipe of the previous article on Introduction to Interceptors. It will be necessary to create this application before the other recipes in this article can be demonstrated.

Using interceptors to enforce security

While security is an important aspect of many applications, the use of programmatic security can clutter up business logic. The use of declarative annotations has come a long way in making security easier to use and less intrusive. However, there are still times when programmatic security is necessary. When it is, the use of interceptors can help remove the security code from the business logic.

Getting ready

The process for using an interceptor to enforce security involves:

1. Configuring and enabling security for the application server
2. Adding a @DeclareRoles annotation to the target class and the interceptor class
3. Creating a security interceptor

How to do it...

Configure the application to handle security as detailed in the Configuring the server to handle security recipe. Add @DeclareRoles("employee") to the RegistrationManager class.

Add a SecurityInterceptor class to the packt package. Inject a SessionContext object into the class; we will use this object to perform programmatic security. Also use the @DeclareRoles annotation. Next, add an interceptor method, verifyAccess, to the class. Use the SessionContext object and its isCallerInRole method to determine whether the user is in the "employee" role. If so, invoke the proceed method and display a message to that effect. Otherwise, throw an EJBAccessException.

@DeclareRoles("employee")
public class SecurityInterceptor {
    @Resource
    private SessionContext sessionContext;

    @AroundInvoke
    public Object verifyAccess(InvocationContext context) throws Exception {
        System.out.println("SecurityInterceptor: Invoking method: " +
            context.getMethod().getName());
        if (sessionContext.isCallerInRole("employee")) {
            Object result = context.proceed();
            System.out.println("SecurityInterceptor: Returned from method: " +
                context.getMethod().getName());
            return result;
        } else {
            throw new EJBAccessException();
        }
    }
}

Execute the application. The user should be prompted for a username and password. Provide a user in the employee role, and the application should execute to completion. Depending on the interceptors in place, you will see console output similar to the following:

INFO: Default Interceptor: Invoking method: register
INFO: SimpleInterceptor entered: register
INFO: SecurityInterceptor: Invoking method: register
INFO: InternalMethod: Invoking method: register
INFO: register
INFO: Default Interceptor: Invoking method: create
INFO: Default Interceptor: Returned from method: create
INFO: InternalMethod: Returned from method: register
INFO: SecurityInterceptor: Returned from method: register
INFO: SimpleInterceptor exited: register
INFO: Default Interceptor: Returned from method: register

How it works...

The @DeclareRoles annotation was used to specify that users in the employee role are associated with the class. The isCallerInRole method checked whether the current user is in the employee role. When the target method is called, if the user is authorized, then the InvocationContext's proceed method is executed. If the user is not authorized, the target method is not invoked and an exception is thrown.
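Note that this recipe assumes the SecurityInterceptor has already been bound to the bean; the default and class-level interceptors visible in the console output were configured in earlier recipes. If you are wiring this recipe up on its own, one way to bind the interceptor (a sketch, not necessarily the book's exact configuration) is the standard @Interceptors annotation on the target bean:

import javax.ejb.Stateful;
import javax.interceptor.Interceptors;

// Class-level binding: every business method of RegistrationManager
// now passes through SecurityInterceptor's @AroundInvoke method.
@Stateful
@Interceptors(SecurityInterceptor.class)
public class RegistrationManager {
    // ... register method as shown in the previous article ...
}

A default interceptor declared in ejb-jar.xml would achieve a broader binding across all beans in the module.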
See also

EJB 3.1: Controlling Security Programmatically Using JAAS

An overview of web services in Sakai

Packt
06 Jul 2011
16 min read
Connecting to Sakai is straightforward, and simple tasks, such as automatic course creation, take only a few lines of programming effort. There are significant advantages to having web services in the enterprise. If a developer writes an application that calls a number of web services, the application does not need to know the hidden details behind the services; it just needs to agree on what data to send. This loosely couples the application to the services. Later, if you replace one web service with another, programmers do not need to change the code on the application side.

SOAP works well with most organizations' firewalls, as SOAP uses the same protocol as web browsers. System administrators tend to protect an organization's network by closing unused ports to the outside world, so most of the time no extra network configuration effort is required to enable web services.

Another simplifying factor is that a programmer does not need to know the details of SOAP or REST, as there are libraries and frameworks that hide the underlying magic. In the Sakai implementation of SOAP, adding a new service is as simple as writing a small amount of Java code within a text file, which is then compiled automatically and run the first time the service is called. This is great for rapid application development and deployment, as the system administrator does not need to restart Sakai for each change. Just as importantly, the Sakai services use the well-known libraries from the Apache Axis project.

SOAP is an XML message-passing protocol that, in the case of Sakai, sits on top of the Hyper Text Transfer Protocol (HTTP). HTTP is the protocol used by web browsers to obtain web pages from a server. The client sends messages in XML format to a service, including the information that the service needs, and the service returns a message with the results or an error message.

The architects introduced SOAP-based web services to Sakai first, adding RESTful services later. Unlike SOAP, instead of sending XML via HTTP posts to one URL that points to a service, REST sends requests to a URL that includes information about the entity, such as a user, with which the client wishes to interact. For example, a REST URL for viewing an address book item could look similar to http://host/direct/addressbook_item/15. Applying URLs in this way makes for understandable, human-readable address spaces. This more intuitive approach simplifies coding. Further, SOAP message passing requires that both the client and the server parse the XML, and at times the parsing effort is expensive in CPU cycles and response times.

The Entity Broker is an internal service that makes life easier for programmers and helps them manipulate entities. Entities in Sakai are managed pieces of data such as representations of courses, users, grade books, and so on. In the newer versions of Sakai, the Entity Broker has the power to expose entities as RESTful services. In contrast, for SOAP services, if you wanted a new service, you would need to write it yourself. Over time, the Entity Broker exposes more and more entities RESTfully, delivering more integration hooks with other enterprise systems for free. Both SOAP and REST services sit on top of the HTTP protocol.

Protocols

This section explains how web browsers talk to servers in order to gather web pages.
It explains how to use the telnet command and a visual tool called TCPMON (http://ws.apache.org/commons/tcpmon/tcpmontutorial.html) to gain insight into how web services and Web 2.0 technologies work.

Playing with Telnet

It turns out that message passing occurs via text commands between the browser and the server. Web browsers use HTTP to get web pages and their embedded content from the server and to send form information to the server. HTTP talks between the client and server via text (7-bit ASCII) commands. When humans talk with each other, they have a wide vocabulary; HTTP uses fewer than twenty words. You can experiment directly with HTTP by using a Telnet client to send your commands to a web server. For example, if your demonstration Sakai instance is running on port 8080, the following commands will get you the login page:

telnet localhost 8080
GET /portal/login

The GET command does what it sounds like and gets a web page. Forms can use the GET verb to send data at the end of the URL. For example, GET /portal/login?name=alan&age=15 sends the variables name=alan and age=15 to the server.

Installing TCPMON

You can use the TCPMON tool to view requests and responses from a web browser such as Firefox. TCPMON can act as an invisible man-in-the-middle, recording the messages between the web browser and the server. Once set up, the requests sent from the browser go to TCPMON, which passes each request on to the server. The server passes back a response, and then TCPMON, a transparent proxy, returns the response to the web browser. This allows us to look at all requests and responses graphically.

First, you set up TCPMON to listen on a given port number (by convention, normally port 8888) and then configure your web browser to send its requests through the proxy. When you type the address of a given page into the web browser, instead of going directly to the relevant server, the browser sends the request to the proxy, which passes it on and returns the response. TCPMON displays both the requests and the responses in a window.

You can download TCPMON from the tutorial page mentioned above. After downloading and unpacking it, you can run (from within the build directory) either tcpmon.bat for the Windows environment or tcpmon.sh for the UNIX/Linux environment.

To configure a proxy, click on the Admin tab, set the Listen Port to 8888, and select the Proxy radio button. After that, clicking on Add will create a new tab, where the requests and responses will be displayed later.

Your favorite web browser now has to recognize the newly set-up proxy. For Firefox 3, you can do this by selecting the menu option Edit/Preferences and then choosing the Advanced tab and the Network tab. You will need to set the HTTP proxy to 127.0.0.1 and the port number to 8888, and ensure that the No proxies text input is blank. Clicking on the OK button enables the new settings.

To use the proxy from within Internet Explorer 7 for a Local Area Network (LAN), edit the dialog box found under Tools | Internet Options | Connections | LAN settings. Once the proxy is working, typing http://localhost:8080/portal/login in the address bar will seamlessly return the login page of your local Sakai instance. Otherwise, you will see an error message similar to Proxy Server Refused Connection for Firefox or Internet Explorer cannot display the webpage.

To turn off the proxy settings, simply select the No Proxies radio box and click on OK for Firefox 3; for Internet Explorer 7, unselect the Use a proxy server for the LAN tick box and click on OK.
To turn off the proxy settings, simply select the No Proxies radio box and click on OK for Firefox 3, and for Internet Explorer 7, unselect the Use a proxy server for the LAN tick box and click on OK Requests and returned status codes When TCPMON is running a proxy on port 8888, it allows you to view the requests from the browser and the response in an extra tab, as shown in the following screenshot. Notice the extra information that the browser sends as part of the request. HTTP/1.1 defines the protocol and version level and the lines below GET are the header variables. The User-Agent defines which client sends the request. The Accept headers tell the server what the capabilities of the browser are, and the Cookie header defines the value stored in a cookie. HTTP is stateless, in principle; each response is based only on the current request. However, to get around this, persistent information can be stored in cookies. Web browsers normally store their representation of a cookie as a little text file or in a small database on the end users' computers. Sakai uses the supporting features of a servlet container, such as Tomcat, to maintain state in cookies. A cookie stores a session ID, and when the server sees the session ID, it can look up the request's server-side state. This state contains information such as whether the user is logged in, or what he or she has ordered. The web browser deletes the local representation of the cookie each time the browser closes. A cookie that is deleted when a web browser closes is known as a session cookie. The server response starts with the protocol followed by a status number. HTTP/1.1 200 OK tells the web browser that the server is using HTTP version 1.1 and was able to return the requested web page successfully. 2xx status codes imply success. 3xx status codes imply some form of redirection and tell the web browser where to try to pick up the requested resource. 4xx status codes are for client errors, such as malformed requests or lack of permission to obtain the resource. 4xx states are fertile grounds for security managers to look in log files for attempted hacking. 5xx status codes mostly have to do with a failure of the server itself and are mostly of interest to system administrators and programmers during the debugging cycle. In most cases, 5xx status numbers are about either high server load or a broken piece of code. Sakai is changing rapidly and even with the most vigorous testing, there are bound to be the occasional hiccups. You will find accurate details of the full range of status codes at: http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html. Another important part of the response is Content-Type, which tells the web browser which type of material the response is returning, so the browser knows how to handle it. For example, the web browser may want to run a plug-in for video types and display text natively. Content-Length in characters is normally also given. After the header information is finished, there is a newline followed by the content itself. Web browsers interpret any redirects that are returned by sending extra requests. Web browsers also interpret any HTML pages and make multiple requests for resources such as JavaScript files and images. Modern browsers do not wait until the server returns all the requests, but render the HTML page live as the server returns the parts. The GET verb is not very efficient for posting a large amount of data, as the URL has a length limit of around 2000 characters. 
The GET verb is not very efficient for posting a large amount of data, as the URL has a length limit of around 2000 characters. Further, the end user can see the form data, and the browser may encode entities such as spaces to make the URL unreadable. There is also a security aspect: if you are typing passwords in forms using GET, others may see your password or other details. This is not a good idea, especially at Internet Cafés, where the next user who logs on can see the password in the browsing history. The POST verb is a better choice.

Let us take as an example the Sakai demonstration login page (http://localhost:8080/portal/login). The login page itself contains a form tag that points to the relogin page with the POST method:

<form method="post" action="http://localhost:8080/portal/relogin" enctype="application/x-www-form-urlencoded">

Note that the HTML tag also defines the content type. Key features of the POST request compared to GET are:

The form values are stored as content after the header values
There is a newline between the end of the header and the data
The request states the amount of data by the use of the Content-Length header value

The essential POST values for a login form with user admin (eid=admin) and password admin (pw=admin) will look like:

POST http://localhost:8080/portal/relogin HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 31

eid=admin&pw=admin&submit=Login

POST requests can contain much more information than GET requests, and the requests hide the values from the address bar of the web browser. This is not secure: the header is just as visible as the URL, so POST values are neither hidden nor secure. The only viable solution is for your web browser to encrypt your transactions using SSL/TLS (http://www.ietf.org/rfc/rfc2246.txt) for security, and this occurs every time you connect to a server using an HTTPS URL.

SOAP

Sakai uses the Apache Axis framework, which the developers have configured to accept SOAP calls via POST. SOAP sends messages in a specific XML format with the Content-Type (otherwise known as MIME type) application/soap+xml. A programmer does not need to know much more than that, as the client libraries take care of the majority of the excruciating low-level details. An example SOAP message generated by the Perl module SOAP::Lite (http://www.soaplite.com/) for creating a login session in Sakai will look like the following POST data:

<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope
    soap:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <soap:Body>
    <login>
      <c-gensym3 xsi:type="xsd:string">admin</c-gensym3>
      <c-gensym5 xsi:type="xsd:string">admin</c-gensym5>
    </login>
  </soap:Body>
</soap:Envelope>

There is an envelope with a body containing data for the service to consume. The important point to remember is that both the client and the server have to be able to parse the specific XML schema. SOAP messages can include extra security features, but Sakai does not require these; the architects expect organizations to encrypt web services using SSL/TLS. The last extra SOAP-related complexity is the Web Services Description Language (http://www.w3.org/TR/wsdl). Web services may change location or exist in multiple locations for redundancy. The service writer can define the location of the services and the data types involved with those services in another file, in XML format.
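For a sense of how little client code is involved, the following Perl sketch uses SOAP::Lite to send a login message much like the one above. The endpoint path, sakai-axis/SakaiLogin.jws, is an assumption for illustration; check your own Sakai instance for the services it actually exposes.

#!/usr/bin/perl
use strict;
use warnings;
use SOAP::Lite;

# Point the client at the (assumed) login service endpoint.
my $soap = SOAP::Lite->proxy(
    'http://localhost:8080/sakai-axis/SakaiLogin.jws');

# SOAP::Lite serializes this call into an envelope much like the one
# shown above; result() unwraps the value the service returns.
my $session = $soap->call('login', 'admin', 'admin')->result;
print "Session: $session\n";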
JSON

Also worth mentioning is JavaScript Object Notation (JSON), which is another popular format passed using HTTP. When web developers realized that they could force browsers to load parts of a web page one piece at a time, it significantly improved the quality of the web browsing experience for the end user. This asynchronous loading enables all kinds of whiz-bang features, such as typing in a search term and choosing from a set of search term completions before pressing the Submit button. Asynchronous loading delivers more responsive and richer web pages that feel more like traditional desktop applications than plain old web pages.

JSON is one of the formats of choice for passing asynchronous requests and responses. The asynchronous communication normally occurs through HTTP GET or POST, but with a specific content structure that is designed to be human readable and script-language parser-friendly. JSON calls have the file extension .json as part of the URL. As mentioned in RFC 4627, an example image object communicated in JSON looks like:

{
  "Image": {
    "Width": 800,
    "Height": 600,
    "Title": "View from 15th Floor",
    "Thumbnail": {
      "Url": "http://www.example.com/image/481989943",
      "Height": 125,
      "Width": "100"
    },
    "IDs": [116, 943, 234, 38793]
  }
}

By blurring the boundaries between client and server, a lot of the presentation and business logic is locked on the client side in scripting languages such as JavaScript. The scripting language orchestrates the loading of parts of pages and the generation of widget sets. Frameworks such as jQuery (http://jquery.com/) and MyFaces (http://myfaces.apache.org/) significantly ease the client-side programming burden.

REST

To understand REST, you need to understand the other verbs in HTTP (http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html). The full HTTP set is OPTIONS, GET, HEAD, POST, PUT, DELETE, and TRACE. The HEAD verb returns only the headers of the response from the server, without the content, and is useful for clients that want to see if the content has changed since the last request. PUT requests that the content in the request be stored at the particular location mentioned in the request. DELETE is for deleting an entity.

REST uses the URL of the request to route to the resource and the HTTP verb to decide what to do with it: POST creates a new item, GET returns information about an item, PUT updates an item, and DELETE removes it. In SOAP, you point directly towards the service the client calls, or indirectly via the web service description. In REST, however, part of the URL describes the resource or resources you wish to work with. For example, a hypothetical address book application that lists all e-mail addresses in HTML format would look similar to the following:

GET /email

To list the addresses in XML format or JSON format:

GET /email.xml
GET /email.json

To get the first e-mail address in the list:

GET /email/1

To create a new e-mail address (remembering, of course, to send the rest of the e-mail details with the request):

POST /email

In addition, to delete address 5 from the list, use the following command:

DELETE /email/5

To obtain address 5 in other formats, such as JSON or XML, use file extensions at the end of the URL, for example:

GET /email/5.json
GET /email/5.xml

RESTful services are intuitively more descriptive than SOAP services, and they enable easy switching of the format from HTML to JSON to fuel the dynamic and asynchronous loading of websites.
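If you want to try these verbs without writing a client, curl can issue them from the command line. The calls below run against the imaginary /email address book described above, so treat them as an illustration of the verbs rather than a real Sakai endpoint:

# List all addresses, then fetch address 5 as JSON.
curl http://localhost:8080/email
curl http://localhost:8080/email/5.json

# Create a new address; the details travel in the POST body.
curl -X POST -d "address=alan@example.com" http://localhost:8080/email

# Delete address 5.
curl -X DELETE http://localhost:8080/email/5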
Due to the direct use of HTTP verbs by REST, this methodology also fits well with the most common application type: CRUD (Create, Read, Update, and Delete) applications, such as the site or user tools within Sakai. Now that we have discussed the theory, in the next section we shall discuss which Sakai-related SOAP services already exist.
Drupal 7 Themes: Creating Dynamic CSS Styling

Packt
05 Jul 2011
7 min read
Drupal 7 Themes
Create new themes for your Drupal 7 site with a clean layout and powerful CSS styling

The reader would benefit from referring to the previous article on Dynamic Theming.

In addition to creating templates that are displayed conditionally, the Drupal system also enables you to apply CSS selectively. Drupal creates unique identifiers for various elements of the system, and you can use those identifiers to create specific CSS selectors. As a result, you can provide styling that responds to the presence (or absence) of specific conditions on any given page.

Employing $classes for conditional styling

One of the most useful dynamic styling tools is $classes. This variable is intended specifically as an aid to dynamic CSS styling. It allows for the easy creation of CSS selectors that are responsive either to the layout of the page or to the status of the person viewing the page. This technique is typically used to control the styling where there may be one, two, or three columns displayed, or to trigger display for authenticated users.

Prior to Drupal 6, $layout was used to detect the page layout. With Drupal 6, we got $body_classes instead. Now, in Drupal 7, it's $classes. While each was intended to serve a similar purpose, do not try to implement the previous incarnations with Drupal 7, as they are no longer supported!

By default, $classes is included with the body tag in the system's html.tpl.php file; this means it is available to all themes without the necessity of any additional steps on your part. With the variable in place, the class associated with the body tag will change automatically in response to the conditions on the page at that time. All you need to do to take advantage of this is create the CSS selectors that you wish to see applied in the various situations. Drupal 7 provides a range of dynamic classes by default, reflecting, among other things, the type of page being viewed, the login status of the user, and the page's sidebar layout.

If you are not certain what this looks like and how it can be used, simply view the homepage of your site with the Bartik theme active. Use the view source option in your browser to examine the body tag of the page. You will see something like this:

<body class="html front not-logged-in one-sidebar sidebar-first page-node">

The class definition you see there is the result of $classes. By way of comparison, log in to your site and repeat this test. The body class will now look something like this:

<body class="html front logged-in one-sidebar sidebar-first page-node">

In this example, we see that the class has changed to reflect that the user viewing the page is now logged in. Additional statements may appear, depending on the status of the person viewing the page and the additional modules installed.

While the system implements this technique in relation to the body tag, its usage is not limited to just that scenario; you can use $classes with any template and in a variety of situations. If you'd like to see a variation of this technique in action (without having to create it from scratch), take a look at the Bartik theme. Open the node.tpl.php file and you can see the $classes variable added to the div at the top of the page; this allows this template to also employ the conditional classes tool.

Note that the placement of $classes is not critical; it does not have to be at the top of the file. You can call it at any point where it is needed. You could, for example, add it to a specific ordered list by printing out $classes in conjunction with the li tag, like this:

<li class="<?php print $classes; ?>">
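To see the payoff, consider a few selectors keyed to the default classes shown in the body tags above. The region and block names here are hypothetical; substitute the IDs and classes your own theme actually produces:

/* Apply a highlight only on the front page. */
.front #content { background-color: #f6f6f6; }

/* Hide a login prompt block from users who are already logged in. */
.logged-in .block-login-prompt { display: none; }

/* Widen the content column when only one sidebar is present. */
.one-sidebar #content { width: 75%; }

Because the body classes change automatically, no PHP logic is needed; the stylesheet alone decides which rules apply in each situation.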
$classes is, in short, a tremendously useful aid to creating dynamic theming. It becomes even more attractive if you master adding your own variables to the function, as discussed in the next section.

Adding new variables to $classes

To make things even more interesting (and useful), you can add new variables to $classes through the use of the variable process functions. This is implemented in similar fashion to other preprocess functions. Let's look at an example, in this case taken from Drupal.org. The purpose here is to add a striping class keyed to the zebra variable and to make it available through $classes. To set this up, follow these steps:

1. Access your theme's template.php file. If you don't have one, create it.
2. Add the following to the file:

<?php
function mythemename_preprocess_node(&$vars) {
  // Add a striping class.
  $vars['classes_array'][] = 'node-' . $vars['zebra'];
}
?>

3. Save the file.

The variable will now be available in any template in which you implement $classes.

Creating dynamic selectors for nodes

Another handy resource you can tap into for CSS styling purposes is Drupal's node ID system. By default, Drupal generates a unique ID for each node of the website. Node IDs are assigned at the time of node creation and remain stable for the life of the node. You can use the unique node identifier as a means of activating a unique selector. To make use of this resource, simply create a selector as follows:

#node-[nid] { }

For example, assume you wish to add a border to the node with the ID of 2. Simply create a new selector in your theme's stylesheet, as shown:

#node-2 { border: 1px solid #336600 }

As a result, the node with the ID of 2 will now be displayed with a 1-pixel wide solid border. The styling will only affect that specific node.

Creating browser-specific stylesheets

A common solution for managing the difficulties of achieving true cross-browser compatibility is to offer stylesheets that target specific browsers. Internet Explorer tends to be the biggest culprit in this area, with IE6 being particularly cringe-worthy. Ironically, Internet Explorer also provides us with one of the best tools for addressing this issue: a proprietary technology known as Conditional Comments. It is possible to easily add conditional stylesheets to your Drupal system through the use of this technology, but it requires the addition of a contributed module to your system, called Conditional Stylesheets. While it is possible to set up conditional stylesheets without the use of the module, it is more work, requiring you to add multiple lines of code to your template.php. With the module installed, you just add the stylesheet declarations to your .info file and then, using a simple syntax, set the conditions for their use.

Note also that the Conditional Stylesheets module is in the queue for inclusion in Drupal 8, so it is certainly worth looking at now. To learn more, visit the project site at http://drupal.org/project/conditional_styles. If, in contrast, you would like to do things manually by creating a preprocess function to add the stylesheet and target it by browser key, please see http://drupal.org/node/744328.
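With the module in place, the declarations live in your theme's .info file. The fragment below approximates the module's syntax as an illustration only; consult the project page for the authoritative form:

; Hypothetical conditional stylesheet declarations for the
; Conditional Stylesheets module (verify the exact syntax upstream).
stylesheets-conditional[lt IE 7][all][] = css/ie6-fixes.css
stylesheets-conditional[IE 7][all][] = css/ie7-fixes.css

Each declaration names the conditional-comment expression, the media type, and the stylesheet to serve when the expression matches.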
Summary

This article covers the basics needed to make your Drupal theme responsive to its contents and its users. By applying the techniques discussed in this article, you can control the theming of pages based on their content, their state, or the users viewing them. Taking the principles one step further, you can also make the theming of elements within a page conditional. The ability to control the templates used and the styling of the page and its elements is what we call dynamic theming.

Further resources on this subject:
Drupal 7: Customizing an Existing Theme [Article]
Drupal 7 Themes: Dynamic Theming [Article]
25 Useful Extensions for Drupal 7 Themers [Article]
Drupal and Ubercart 2.x: Install a Ready-made Drupal Theme [Article]
Building an Admin Interface in Drupal 7 Module Development [Article]
Content in Drupal: Frequently Asked Questions (FAQ) [Article]
Drupal Web Services: Twitter and Drupal [Article]