How-To Tutorials - Web Development

Accessing and using the RDF data in Stanbol

Packt
30 Jul 2013
6 min read
Getting ready

To start with, we need a Stanbol instance and Node.js. Additionally, we need the rdfstore-js library, which can be installed by executing the following command:

> npm install rdfstore

How to do it...

We create a file rdf-client.js with the following code:

```javascript
var rdfstore = require('rdfstore');
var request = require('request');
var fs = require('fs');

rdfstore.create(function(store) {
  function load(files, callback) {
    var filesToLoad = files.length;
    for (var i = 0; i < files.length; i++) {
      var file = files[i];
      fs.createReadStream(file).pipe(
        request.post(
          {
            url: 'http://localhost:8080/enhancer?uri=file:///' + file,
            headers: {accept: "text/turtle"}
          },
          function(error, response, body) {
            if (!error && response.statusCode == 200) {
              store.load(
                "text/turtle",
                body,
                function(success, results) {
                  console.log('loaded: ' + results + " triples from file " + file);
                  if (--filesToLoad === 0) {
                    callback();
                  }
                }
              );
            } else {
              console.log('Got status code: ' + response.statusCode);
            }
          }));
    }
  }
  load(['testdata.txt', 'testdata2.txt'], function() {
    store.execute(
      "PREFIX enhancer:<http://fise.iks-project.eu/ontology/> PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#> SELECT ?label ?source { ?a enhancer:extracted-from ?source. ?a enhancer:entity-reference ?e. ?e rdfs:label ?label. FILTER (lang(?label) = \"en\") }",
      function(success, results) {
        if (success) {
          console.log("*******************");
          for (var i = 0; i < results.length; i++) {
            console.log(results[i].label.value + " in " + results[i].source.value);
          }
        }
      });
  });
});
```

Create the data files: our client loads two files. We use a simple testdata.txt file with the content:

"The Stanbol enhancer can detect famous cities such as Paris and people such as Bob Marley."

And a second testdata2.txt file with the following content:

"Bob Marley never had a concert in Vatican City."

We execute the code using the Node.js command line:

> node rdf-client.js

The output is:

```
loaded: 159 triples from file testdata2.txt
loaded: 140 triples from file testdata.txt
*******************
Vatican City in file:///testdata2.txt
Bob Marley in file:///testdata2.txt
Bob Marley in file:///testdata.txt
Paris, Texas in file:///testdata.txt
Paris in file:///testdata.txt
```

This time we see the labels of the entities and the file in which they appear.

How it works...

Unlike the usual clients, this client no longer analyses the returned JavaScript Object Notation (JSON) but processes the returned data as RDF. An RDF document is a directed graph. The following screenshot shows some RDF rendered as a graph by the W3C RDF Validator. We can create such an image ourselves by selecting RDF/XML as the output format on localhost:8080/enhancer, running the engines on some text, and copying and pasting the generated XML into www.w3.org/RDF/Validator/, where we can request that triples and a graph be generated from it.

Triples are the other way to look at RDF. An RDF graph (or document) is a set of triples of the form subject-predicate-object, where subject and object are the nodes (vertices) and the predicate is the arc (edge). Every triple is a statement describing a property of its subject:

```
<urn:enhancement-f488d7ce-a1b7-faa6-0582-0826854eab5e> <http://fise.iks-project.eu/ontology/entity-reference> <http://dbpedia.org/resource/Bob_Marley> .
<http://dbpedia.org/resource/Bob_Marley> <http://www.w3.org/2000/01/rdf-schema#label> "Bob Marley"@en .
```

These two triples say that an enhancement referenced Bob Marley and that the English label for Bob Marley is "Bob Marley".
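Because the client asks the enhancer for text/turtle, statements like these reach it serialized as Turtle. Purely as an illustration, the same two triples could be written in Turtle roughly as follows; the prefix labels are our own choice, not something returned by Stanbol:

```turtle
@prefix enhancer: <http://fise.iks-project.eu/ontology/> .
@prefix rdfs:     <http://www.w3.org/2000/01/rdf-schema#> .
@prefix dbpedia:  <http://dbpedia.org/resource/> .

<urn:enhancement-f488d7ce-a1b7-faa6-0582-0826854eab5e>
    enhancer:entity-reference dbpedia:Bob_Marley .

dbpedia:Bob_Marley rdfs:label "Bob Marley"@en .
```

More on Turtle and the other serializations follows below.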
All the arcs and most of the nodes are labeled by an Internationalized Resource Identifier (IRI), a superset of the good old URL that also allows non-Latin characters. RDF can be serialized in many different formats. The two triples as first written out above use the N-TRIPLES syntax. RDF/XML expresses (serializes) RDF graphs as XML documents. Originally, RDF/XML was referred to as the canonical serialization for RDF. Unfortunately, this caused some people to believe RDF was somehow related to XML and would thus inherit its flaws. A serialization format designed specifically for RDF, rather than one that encodes RDF into an existing format, is Turtle. Turtle allows explicit listing of triples as in N-TRIPLES, but also supports various ways of expressing graphs in a more concise and readable fashion. JSON-LD expresses RDF graphs in JSON. As that specification is still a work in progress (see json-ld.org/), different implementations are incompatible, so for this example we switched the Accept header to text/turtle.

Another change in the code performing the request is that we added a uri query parameter to the requested URL:

'http://localhost:8080/enhancer?uri=file:///' + file,

This defines the IRI used to name the uploaded content in the result graph. If this parameter is not specified, the enhancer generates an IRI based on a hash of the content, and the corresponding line in the output would be less helpful:

Paris in urn:content-item-sha1-3b16820497aae806f289419d541c770bbf87a796

Roughly the first half of our code takes care of sending the files to Stanbol and storing the returned RDF. We define a function load that asynchronously enhances a bunch of files and invokes a callback function when all files have been loaded successfully. The second half of the code is the function that is executed once all files have been processed. At this point, we have all the triples loaded in the store. We could now programmatically access the triples one by one, but it is easier to just query for the data we are interested in. SPARQL is a query language somewhat similar to SQL, but designed to query triple stores rather than relational databases. In our program, we have the following query (slightly simplified here):

```
PREFIX enhancer:<http://fise.iks-project.eu/ontology/>
PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label ?source {
  ?a enhancer:extracted-from ?source.
  ?a enhancer:entity-reference ?e.
  ?e rdfs:label ?label.
}
```

The most important part is the section between the curly brackets. This is a graph pattern: it looks like a graph, but with variables in place of some values. On execution, the SPARQL engine checks for parts of the RDF matching this pattern and returns a table with a column for each selected variable and a row for every matching value combination. In our case, we iterate through the result and output the label of the entity and the document in which the entity was referenced.

There's more...

The advantage of RDF is that many tools can deal with the data, ranging from command-line tools such as rapper (librdf.org/raptor/rapper.html) for converting data between formats, to server applications that can store large amounts of RDF data and let you build applications on top of it.
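For example, rapper can convert between serializations on the command line. Assuming the tool is installed, a typical invocation to turn an RDF/XML file into Turtle might look like the following; the file name is just a placeholder, and the exact syntax names are listed in rapper's own help output:

```
> rapper -i rdfxml -o turtle enhancements.rdf
```

This prints the converted Turtle to standard output.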
Summary

In this recipe, we saw the advantage of working with RDF as a model rather than parsing one particular JSON syntax. We created a client, rdf-client.js, that sent two files, testdata.txt and testdata2.txt, to the enhancer and ran it from the Node.js command line. We rendered the returned RDF as a graph with the W3C validator and looked at it as triples, and we then used SPARQL to query those triples and extract the information we were interested in.

Digging into the Architecture

Packt
30 Jul 2013
31 min read
The big picture

A very short description of a WaveMaker application could be: a Spring MVC server running in a Java container, such as Tomcat, serving file and JSON requests for a Dojo Toolkit-based JavaScript browser client. Unfortunately, such "elevator" descriptions can create more questions than they answer.

For starters, although we will often refer to it as "the server," the WaveMaker server might be more aptly called an application server in most architectures. Sure, it is possible to have a useful application without additional servers or services beyond the WaveMaker server, but this is not typical. We could have a rich user interface reading against some in-memory data set, for example. Far more commonly, the Java services running in the WaveMaker server are calling off to other servers or services, such as relational databases and RESTful web services. This means the WaveMaker server is often the middle or application tier of a multi-tier application's architecture.

Yet at the same time, the WaveMaker server can be eliminated completely. Applications can be packaged for uploading to PhoneGap Build, http://build.phonegap.com/, directly from WaveMaker Studio. Both PhoneGap and the associated Apache project Cordova, http://cordova.apache.org, provide APIs that enable JavaScript to access native device functionality, such as capturing images with the camera and obtaining GPS location information. Packaged up and installed as a native application, the JavaScript files are loaded from the device's file system instead of being downloaded from a server via HTTP. This means there is no origin domain to be constrained by. If the application only uses web services, or otherwise doesn't need additional services such as database access, the WaveMaker server is neither used nor needed.

Just because an application isn't installed on a mobile device from an app store doesn't mean we can't run it on a mobile device. Browsers on mobile devices are more capable than ever before. This means our client could be any device with a modern browser.

You must also consider licensing in light of the bigger picture. WaveMaker, WaveMaker Studio, and the applications created with the Studio are released under the Apache 2.0 license, http://www.apache.org/licenses/LICENSE-2.0. The WaveMaker project was first released by WaveMaker Software in 2007. In March 2011, VMware (http://vmware.com) acquired the WaveMaker project, and it was under VMware that WaveMaker 6.5 was released. In April 2013, Pramati Technologies (http://pramati.com) acquired the assets of WaveMaker for its CloudJee (http://cloudjee.com) platform. WaveMaker continues to be developed and released by Pramati Technologies.

Now that we understand where our client and server sit in the larger world, we will be primarily focused within and between those two parts. The overall picture of the client and server is shown in the following diagram. We will examine each piece of this diagram in detail during the course of this book. We shall start with the JavaScript client.

Getting comfortable with the JavaScript client

The client is a JavaScript client that runs in a modern browser. This means that most of the client, specifically the HTML and DOM nodes that the browser interfaces with, is created by JavaScript at runtime. The application is styled using CSS, and we can use HTML in our applications. However, we don't use HTML to define buttons and forms.
Instead, we define components, such as widgets, and set their properties. These component class names and properties are used as arguments to functions that create the DOM nodes for us.

Dojo Toolkit

To do this, WaveMaker uses the Dojo Toolkit, http://dojotoolkit.org/. Dojo, as it is generally referred to, is a modular, cross-browser JavaScript framework with three sections. Dojo Core provides the base toolkit. On top of it are Dojo's visual widgets, called Dijits. Finally, DojoX contains additional extensions such as charts and a color picker. DojoCampus' Dojo Explorer, http://dojocampus.com/explorer/, has a good selection of single-unit demos across the toolkit, many with source code. Dojo allows developers to define widgets using HTML or JavaScript; WaveMaker users will better recognize the JavaScript approach. Specifically, WaveMaker 6.5.X uses version 1.6.1 of Dojo. Of the browsers supported by Dojo 1.6.1, http://dojotoolkit.org/reference-guide/1.8/releasenotes/1.6.html, Opera's "Dojo Core only" support prevents it from being supported by WaveMaker. This could change with Opera's move to WebKit.

Building on top of the Dojo Toolkit, WaveMaker provides its own collection of widgets and underlying components. Although both can be called components, the name component is generally used for the non-visible parts, such as service calls to the server and the event notification system. Widgets, like the Dijits, are visible components such as buttons and editors. Many, but not all, of the WaveMaker widgets extend functionality from Dojo widgets. When they do extend Dijits, WaveMaker widgets often add numerous functions and behaviors that are not part of Dojo. Examples include controlling the read-only state, formatting display values for currency, and merging components, such as buttons with icons in them. Combined with the WaveMaker runtime layers, these enhancements make it easy to assemble rich clients using only properties. WaveMaker's select editor (wm.SelectMenu), for example, extends the Dojo Toolkit ComboBox (dijit.form.ComboBox) or the FilteringSelect (dijit.form.FilteringSelect) as needed. By default, a select menu has the Dojo FilteringSelect as its editor, but it will use ComboBox instead if the user is on a mobile device or the developer has cleared the RestrictValues property tick box.

A required select menu editor

Let's consider the case of disabling a submit button when the user has not made a required list selection. In Dojo, this is done using JavaScript code, and for an experienced Dojo developer, this is not difficult. For those who may primarily consider a dojo a martial arts studio, however, it is likely another matter altogether. Using the widgets provided by the WaveMaker framework, no code is required to set up this interconnection. It is simply a matter of visually linking, or binding, the button's disabled property to the list's emptySelection property in the graphical binding dialog. Now the button will be disabled until the user has made a selection in the grid's list of items. Logically, we can think of this as setting the disabled property to the value of the grid's emptySelection property, where emptySelection is true unless and until a row has been selected.

Where WaveMaker most notably varies from the Dojo way of things is the layout engine. WaveMaker handles the layout of container widgets using its own engine. Containers are those widgets that contain other widgets, such as panels, tabs, and dialogs.
This makes it easier for developers to arrange widgets in WaveMaker Studio. A result of this is that border, padding, and margin are set using properties on the widgets, not by CSS.

Dojo made easy

Having the Dojo framework available to us makes web development easier, both when using the WaveMaker framework and when doing custom work. Dojo's modular and object-oriented functions, such as dojo.declare and dojo.inherited, simplify creating custom components. The key takeaway here is that Dojo itself is available to you as a developer if you wish to use it directly. Many developers never need this capability, but it is there if you ever wish to take advantage of it. Running the CRM Simple sample again, from either the console in the browser development tools or custom project page code, we could use Dojo's byId() function to get a div, for example the main title label:

> dojo.byId("main_labelTitle")

In practice, the WaveMaker style of getting a DOM node via the component name, for example main.labelTitle.domNode, is more practical and returns the same result. If a function or ability in Dojo is useful, the WaveMaker framework usually provides a wrapper of some sort for you. Just as often, the WaveMaker version is friendlier or otherwise easier to use in some way. For example, this.connect(), WaveMaker's version of dojo.connect(), tracks connections for you. This avoids the need to remember to call disconnect() to remove the reference added by every call to connect(). For more information about using Dojo functions in WaveMaker, see the Dojo framework page in the WaveMaker documentation at http://dev.wavemaker.com/wiki/bin/wmdoc_6.5/Dojo+Framework.

Binding and events

Two solid examples of WaveMaker taking a powerful feature of Dojo and providing a friendlier version are topic notifications and event handling. dojo.connect() enables you to register a method to be called when something happens; in other words, "when X happens, please also do Y". Studio provides visual tooling for this in the events section of a component's properties. Buttons have an event drop-down menu for their click event. Asynchronous server call components, live variables, and service variables have tooled events for reviewing data just before the call is made and for the successful, and not so successful, returns from the call. These menus are populated with listings of likely components and, if appropriate, functions. Invoking other service calls, particularly when a server call depends on data from the results of some previous call, and navigating to other layers and pages within the application, are easy examples of how WaveMaker's visual tooling of dojo.connect simplifies web development.

WaveMaker's binding dialog is a graphical interface on the topic subscription system. Here we are "binding" a live variable that returns rows from the lineitem table so that it is filtered by the data value of the orderid editor in the form on the new order page. The result of this binding is that when the value of the orderid editor changes, the value in the filter parameter of this live variable is updated. An event indicating that the value of the orderid editor has changed is published when the data value changes. This live variable's filter is subscribed to that topic and can now update its value accordingly.
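When you do want to wire an event by hand, the same connect mechanism is available from a page's JavaScript. The following is only a sketch with hypothetical component names (saveButton is assumed to exist on the page); the visually tooled events generate equivalent hookups without any code:

```javascript
// Sketch only: "Main" is the page class, "saveButton" a hypothetical button widget.
dojo.declare("Main", wm.Page, {
    start: function() {
        // this.connect() behaves like dojo.connect(), but WaveMaker tracks the
        // connection so we never have to call disconnect() ourselves.
        this.connect(this.saveButton, "onclick", this, "saveClicked");
    },
    saveClicked: function() {
        // React to the click; real code would invoke a service variable here.
        console.log("save was requested");
    }
});
```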
Loading the client

Web applications start from index.html, and a WaveMaker application is no different. If we examine the index.html of a WaveMaker application, we see the total content is less than 100 lines. We have some meta tags in the head, mostly for Internet Explorer (MSIE) and iOS support. In the body, there are more entries to help out with older versions of MSIE, including script tags to use Chrome Frame if we so choose. If we cut all that away, index.html is rather simple. In the head, we load the CSS containing the project's theme and define a few lines of style classes for wavemakerNode and _wm_loading:

```html
<script>var wmThemeUrl = "/wavemaker/lib/wm/base/widget/themes/wm_default/theme.css";</script>
<style type="text/css">
#wavemakerNode {
    height: 100%;
    overflow: hidden;
    position: relative;
}
#_wm_loading {
    text-align: center;
    margin: 25% 0px 25% 0px;
}
</style>
```

Next we load the file config.js which, as its name suggests, is about configuration. The following line of code loads the file:

```html
<script type="text/javascript" src="config.js"></script>
```

config.js defines the various settings, variables, and helper functions needed to initialize the application, such as the locale setting. Moving into the body tag of index.html, we find a div named wavemakerNode:

```html
<div id="wavemakerNode">
```

The next div tag is the loader gif, which is given in the following code:

```html
<div id="_wm_loading" style="z-index: 100;">
    <table style='width:100%;height: 100%;'><tr><td align='center'><img alt="Loading" src="/wavemaker/lib/boot/images/loader.gif" />&nbsp;&nbsp;Loading...</td></tr></table>
</div>
```

This is the standard spinner shown while the application is loading. With the loader gif now spinning, we begin the real work with runtimeLoader.js, as given in the following line of code:

```html
<script type="text/javascript" src="/wavemaker/lib/runtimeLoader.js"></script>
```

When running a project from Studio, the client runtime is loaded from Studio via the /wavemaker context. config.js and index.html are modified for deployment, while the client runtime is copied into the application's webapproot. runtimeLoader, as its name suggests, loads the WaveMaker runtime. With the runtime loaded, we can now load the top-level project.a.js file, which defines our application using the dojo.declare() method. The following line of code loads the file:

```html
<script type="text/javascript" src="project.a.js"></script>
```

Finally, with our application class defined, we set up an instance of our application in the wavemakerNode and run it.

There are two modes for loading a WaveMaker application: debug and gzip mode. The debug mode is useful for debugging, as you would expect; the gzip mode is the default. The Test mode of the Run, Test, or Compile button in Studio re-deploys the active project and opens it in debug mode. This is the only difference between using Test and Run in Studio: the Test button adds ?debug to the URL of the browser window; the Run button does not. Any WaveMaker application can be loaded in debug mode by adding debug to the URL parameters. For example, to load the CRM Simple application from within WaveMaker in debug mode, use the URL http://crm_simple.localhost:8094/?debug. Detecting debug in the URL sets the djConfig.debugBoot flag, which alters the path used in runtimeLoader:

djConfig.debugBoot = location.search.indexOf("debug") >= 0;

Like a compiled program, debug mode preserves variable names and all the other details that optimization removes and that we would want available when debugging.
However, JavaScript is not compiled into byte code or machine-specific instructions. In gzip mode, on the other hand, the browser loads a few optimized packages containing all the source code in merged files. This reduces the number of files needed to load our application, which significantly improves loading time. These optimized packages are also minified. Minification removes whitespace and replaces variable names with short names, further reducing the volume of code to be parsed by the browser and therefore further improving performance. The result is a significant reduction in the number of requests needed and the number of bytes transferred to load an application. A stock application in gzip mode requires 22 to 24 requests to load some 300 KB to 400 KB of content, depending on the application. In debug mode, the same app transfers over 1.5 MB in more than 500 requests.

The index.html file, and when security is enabled, login.html, are yours to edit. If you are comfortable doing so, you can customize these files, such as adding additional script tags. In practice, you shouldn't need to customize index.html, as you have full control of the application loaded into the wavemakerNode. Also, upgrade scripts in future versions of WaveMaker may need to programmatically update index.html and login.html. Changes to the X-UA-Compatible meta tag are often required when support for newer versions of Internet Explorer becomes available, for example. These scripts can't possibly know about every customization you may make. Customization of index.html may cause these scripts to fail, and may require you to manually update these files. If you do encounter such a situation, simply use the index.html file from a project newly created in the new version as a template.

Springing into the server side

The WaveMaker server is a Java application running in a Java Virtual Machine (JVM). Like the client, it builds upon proven frameworks and libraries. In the case of the server, the foundational block is the SpringSource framework, http://www.springsource.org/, better known simply as the Spring framework. The Spring framework is the most popular enterprise Java development framework today, and for good reason. The server of a WaveMaker application is a Spring application that includes the WaveMaker common, json, and runtime modules. More specifically, the WaveMaker server uses the Spring Web MVC framework to create a DispatcherServlet that delegates client requests to their handlers. WaveMaker uses only a handful of controllers, as we will see in the next section. The effective result is that it is the request URL that is used to direct a service call to the correct service. The method value of the request is the name of the client-exposed function within the service to be called. In the case of overloaded functions, the signature of the params value is used to find the matching method by signature. We will look at example requests and responses shortly. Behind this controller is not only the power of the Spring framework, but also a number of leading frameworks such as Hibernate and JAX-WS, and libraries such as log4j and Apache Commons. Here too, these libraries are available to you, both directly in any custom work you might do and indirectly as tooled features of Studio. As we are working with a Spring server, we will see Spring beans often as we examine the server-side configuration. One need not be familiar with Spring to reap its benefits when using custom Java in WaveMaker.
Spring makes it easy to get access to other beans from our Java code. For example, if our project has imported a database as MyDB, we could get access to the service and any exposed functions in that service using getServiceBean(). The following code illustrates the use of getServiceBean():

MyDB myDbSvc = (MyDB) RuntimeAccess.getInstance().getServiceBean("mydb");

We start by getting an instance of the WaveMaker runtime. From the returned runtime instance, we can use the getServiceBean() method to get a service bean for our mydb database service. There are other ways we could have got access to the service from our Java code; this one is pretty straightforward.

Starting from web.xml

Just as the client side starts with index.html, a Java servlet starts in WEB-INF with web.xml. A WaveMaker application's web.xml is a rather straightforward Spring MVC web.xml. You'll notice many servlet mappings, a few listeners, and filters. Unlike index.html, web.xml is managed directly by Studio. If you need to add elements to the web-app context, add them to user-web.xml. The content of user-web.xml is merged into web.xml when generating the deployment package.

The most interesting entry is probably the contextConfigLocation of /WEB-INF/project-springapp.xml. project-springapp.xml is a Spring beans file. Immediately after the schema declaration is a series of resource imports. These imports include the services and entities that we create in Studio as we import databases and otherwise add services to our project. If you open project-spring.xml in WEB-INF, near the top of the file you'll see a comment noting that project-spring.xml is yours to edit. For experienced Spring users, here is the entry point to add any additional imports you may need. An example can be found at http://dev.wavemaker.com/wiki/bin/Spring. In that example, an additional XML file, ServerFileProcessor.xml, is used to enable component scanning on a package and set some properties on those components; project-spring.xml is then used to import ServerFileProcessor.xml into the application context. Many users of WaveMaker still think of Spring as the season between winter and summer. Such users do not need to think about these XML files. However, for those who are experienced with Java, the full power of the Spring framework is accessible to them.

Also in project-springapp.xml is a list of URL mappings. These mappings specify request URLs that require handling by the file controller. Gzipped resources, for example, require the Content-Encoding header to be set to gzip. This informs the browser that the content is gzip encoded and must be uncompressed before being parsed.

There are a few names that use ag in the server. WaveMaker Software, the company, was formerly known as ActiveGrid, and had a previous web development tool by the same name. The use of ag and com.activegrid stems back to the project's roots, first put down when the company was still known as ActiveGrid.

Closing out web.xml is the Acegi filter mapping. Acegi is the security module used in WaveMaker 6.5. Even when security is not enabled in an application, the Acegi filter mapping is included in web.xml; when security is not enabled in the project, an empty project-security.xml is used.

Client and server communication

Now that we've examined the client and server, we need to better understand the communication between the two. WaveMaker almost exclusively uses the HTTP methods GET and POST.
In HTTP, GET is used, as you might suspect even without ever having heard of RFC 2616 (https://tools.ietf.org/html/rfc2616), to request, or get, a specific resource. Unless installed as a native application on a mobile device, a WaveMaker web application is loaded via GET. From index.html and runtimeLoader.js to the user-defined pages and any images used on those pages, the application itself is loaded into the browser using GET.

All service calls, database reads and writes, or otherwise any invocation of a Java service function, on the other hand, use POST. The URL of these POST requests is always the service name followed by .json. For example, calls to a Java service named userPrefSvc would always go to the URL /userPrefSvc.json. Inside the POST request payload are any required parameters, including the method of the service to be invoked. The response is whatever that call returns. PUT methods are not possible because we cannot, nor do we want to, know all possible WaveMaker server calls at "design time", while the project files are open for writing in the Studio. This pattern avoids any URL length constraints, enabling lengthy datasets to be transferred while freeing up the URL to pass parameters such as page state.

Let's take a look at an example. If you want to follow along in your browser's console, this is the third request of three when we select "Fog City Books" in the CRM Simple application with the console open. The following URL is the request URL:

http://crm_simple.localhost:8094/services/runtimeService.json

The following is the request payload:

```
{"params":["custpurchaseDB","com.custpurchasedb.data.Lineitem",null,{"properties":["id","item"],"filters":["id.orderid=9"],"matchMode":"start","ignoreCase":false},{"maxResults":500,"firstResult":0}],"method":"read","id":251422}
```

The response is as follows:

```
{"dataSetSize":2,"result":[{"id":{"itemid":2,"orderid":9},"item":{"itemid":2,"itemname":"Kidnapped","price":12.99},"quantity":2},{"id":{"itemid":10,"orderid":9},"item":{"itemid":10,"itemname":"Gravitys Rainbow","price":11.99},"quantity":1}]}
```

As we expect, the request URL is to a service (in this case named runtimeService) with the .json extension. The runtime service is the built-in WaveMaker service for reading and writing with the Hibernate (http://www.hibernate.org) data models generated by importing a database. The security service and the WaveMaker service are the other built-in services used at runtime. The security service is used for security functions such as getUserName() and logout(); note this does not include login, which is handled by Acegi. The WaveMaker service has functions such as getServerTimeOffset(), used to adjust for time zones, and remoteRESTCall(), used to proxy some web service calls.

How the runtime service functions is easy to understand by observation. Inside the request payload we have, as the URL suggested, a JavaScript Object Notation (JSON) structure. JSON (http://www.json.org/) is a lightweight data-interchange format regularly used in AJAX applications. Dissecting our example request from the top, the structure enclosed in the outermost {}'s looks like the following:

{"params":[…],"method":"read","id":251422}

We have three top-level name-value pairs in our request object: params, method, and id.
The id is 251422, the method is read, and the params value is an array, as indicated by the [] brackets:

["custpurchaseDB","com.custpurchasedb.data.Lineitem",null,{},{}]

In our case, we have an array of five values. The first is the database service name, custpurchaseDB. Next we have what appears to be the package and class name we will be reading from, not unlike the FROM clause of a SQL query. After that, we have a null and two objects. JSON is friendly to human reading, and we could continue to unwrap the two objects in this request in a similar fashion; we will return to them when we discuss database services. Now let's check out the response. At the top level, we have dataSetSize, the number of results, and the array of results:

{"dataSetSize":2,"result":[]}

Inside our result array we have two objects:

```
[{"id":{"itemid":2,"orderid":9},"item":{"itemid":2,"itemname":"Kidnapped","price":12.99},"quantity":2},{"id":{"itemid":10,"orderid":9},"item":{"itemid":10,"itemname":"Gravitys Rainbow","price":11.99},"quantity":1}]
```

Our first item has the compound key of itemid 2 with orderid 9. This is the item Kidnapped, a book costing $12.99. The other object in our result array also has orderid 9, as we expect when reading line items from the selected order. This one is also a book, the item Gravity's Rainbow.

Types

To be more precise about the com.custpurchasedb.data.Lineitem parameter in our read request, it is actually the type name of the read request. WaveMaker projects define types, from primitive types such as Boolean to custom complex types such as Lineitem. In our runtime read example, com.custpurchasedb.data.Lineitem is both the package and class name of the imported Hibernate entity and the type name for the line item entity in the project. Maintaining type information enables WaveMaker to ease a number of development issues. As the client knows the structure of the data it is getting from the server, it knows how to display that data with minimal developer configuration, if any. At design time, Studio uses type information in many areas to help us correctly configure our application. For example, when we set up a grid, type information enables Studio to present us with a list of possible column choices for the grid's dataset type. Likewise, when we add a form to the canvas for a database insert, it is type information that Studio uses to fill the form with appropriate editors.

Lineitem is a project-wide type as it is defined on the server side. In the process of compiling the project's Java service sources, WaveMaker defines system types for any type returned to the client by a client-facing function. To be added to the type system, a class must:

Be public
Define public getters and setters
Be returned by a client-exposed function
Have a service class that extends JavaServiceSuperClass or uses the @ExposeToClient annotation

WaveMaker 6.5.1 has a bug that prevents types from being generated as expected. Be certain to use 6.5.2 or a newer version to avoid this defect.

It is possible to create new project types by adding a Java service class to the project that only defines types. Following is an example that adds a new simple type called Record to the project. Our definition of Record consists of an integer ID and a string. Note that there are two classes here. MyCustomTypes is the service class containing a method returning the type Record. As we will not be calling it, the function getNewRecord() need not do anything other than return a Record; creating a new default instance is an easy way to do this.
The class Record is defined as an inner class; an inner class is a class defined within another class. In our case, Record is defined within MyCustomTypes:

```java
// Java service class MyCustomTypes
package com.myco.types;

import com.wavemaker.runtime.javaservice.JavaServiceSuperClass;
import com.wavemaker.runtime.service.annotations.ExposeToClient;

public class MyCustomTypes extends JavaServiceSuperClass {

    public Record getNewRecord() {
        return new Record();
    }

    // Inner class Record
    public class Record {
        private int id;
        private String name;

        public int getId() {
            return id;
        }

        public void setId(int id) {
            this.id = id;
        }

        public String getName() {
            return this.name;
        }

        public void setName(String name) {
            this.name = name;
        }
    }
}
```

To add the preceding code to our WaveMaker project, we would add a Java service to the project using the class name MyCustomTypes in the Package and Class Name editor of the New Java Service dialog. The preceding code extends JavaServiceSuperClass and uses the package com.myco.types.

A project can also have client-only types, using the type definition option from the advanced section of the Studio insert menu. Type definitions are useful when we want to be able to pass structured data around within the client but will not be sending or receiving that type to the server. For example, we may want an application-scoped wm.Variable storing a collection of current record selection information. This would enable us to keep track of a number of state items across all pages. Communication with the server is likely to use only a few of those types at a time, so no such structure exists on the server side. Using wm.Variable enables us to bind each Record ID without using code. The insert type definition menu brings up the Type Definition Generator dialog. The generator takes JSON input and is pre-populated with a sample type. The sample type defines a person object, albeit an unusual one, with a name, an array of numbers for age, a Boolean (hasFoot), and a related person object, friend. Replace the sample type with your own JSON structure, and be certain to change the type name to something meaningful. After generating the type, you'll immediately see the newly minted type in type selectors, such as the type field of wm.Variable. Studio is pretty good at recognizing type changes. If for some reason Studio does not recognize a type change, the easiest thing to do is to get Studio to re-read the owning object. If a wm.Variable fails to show a newly added field of a type in its properties, change the type of the variable from the modified type to some other type and then back again.
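As an illustration, the JSON pasted into the generator for a client-only type might look like the following sketch, modeled on the sample type just described; the field values are arbitrary:

```json
{
  "name": "Ada",
  "age": [34, 35, 36],
  "hasFoot": true,
  "friend": {
    "name": "Grace",
    "age": [29],
    "hasFoot": false
  }
}
```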
Studio is also an application

One of the more complex WaveMaker applications is the Studio itself. That's right, Studio is an application built out of WaveMaker widgets, using the WaveMaker runtime and server. Being the large, complex application we use to build applications, it can sometimes be difficult to understand where the runtime ends and Studio begins. With that said, Studio remains a treasure trove of examples and ideas to explore. Let's open a finder, explorer, shell, or however you prefer to view the file system of a WaveMaker Studio installation, and look in the studio folder. If you've installed WaveMaker to C:\Program Files\WaveMaker\6.5.3.Release, the default on Windows, we're looking at C:\Program Files\WaveMaker\6.5.3.Release\studio. This is the webapproot of the Studio project.

Among the files, we've already discussed index.html when loading the client. The type definition for the project's types is types.js; the types.js definition is how the client learns of the server's Java types.

Moving on to the directories alphabetically, we start with the app folder, which can be considered a large utility folder these days. The branding folder, http://dev.wavemaker.com/wiki/bin/wmdoc_6.5/Branding, is a sample of the branding feature, for when you want to easily re-brand applications for different customers. The build folder contains the optimized build files we discussed when loading our application in gzip mode; this build folder is for the Studio itself. The images folder is, as we would hope, where images are kept. The content of the doc in jsdoc is pretty old; use jsref at the online wiki, http://dev.wavemaker.com/wiki/bin/wmjsref_6.5/WebHome, for a client API reference instead. The language folder contains the National Language Support (NLS) files used to localize Studio into other languages. In 6.5.X, there is a Japanese (ja) and a Spanish (es) directory in addition to the English (en) default, thanks to the efforts of the WaveMaker community and a corporate partner. For more on internationalizing applications with WaveMaker, navigate to http://dev.wavemaker.com/wiki/bin/wmdoc_6.5/Localization#HLocalizingtheClientLanguage.

The lib folder is very interesting, so let's wrap up this top level before we dig into that one. The META-INF folder contains artifacts from the WaveMaker Maven build process that probably should be removed for 6.5.2. The pages folder contains the page definitions for Studio's pages. These pages can be opened in Studio. They can also be a treasure trove of tips and tricks if you see something when using Studio that you don't know how to do in your application. Be careful, however, as some pages are old and use outdated classes or techniques. Other constructs are only used by Studio and aren't tooled, which means some pages use components that can only be created by code. The other major difference from a project's pages folder is that Studio page folders do not contain the same number of files; they do not have the optimized pageName.a.js file, for example. The services folder contains the Service Method Definition (SMD) files for Studio's services. These are summaries of a project's exposed services, one file per service, used at runtime by the client. Each callable function, its input parameters, and its return types are defined. Finally, there is WEB-INF, which we have already discussed when we examined web.xml; in Studio's case, replace project with studio in the file names. Also under WEB-INF, we have classes and lib. The classes folder contains Java class files and additional XML files; these files are on the classpath. WEB-INF/lib contains JAR files. Studio requires significantly more JAR files than a project does; the ones a project needs are automatically added to projects created by Studio.

Now let's get back to the lib folder. Astute readers of our walk through index.html likely noticed the references to /wavemaker/lib in src tags for things such as runtimeLoader. You might also have noticed that this folder is not present in the project and wondered how these tags could avoid failing. As a quick look at the URL of Studio running in a browser will demonstrate, /wavemaker is the Studio's context. This means the JavaScript runtime is only copied in as part of generating the deployment package; the lib folder is loaded directly from Studio's context when you test-run an application from Studio using the Run or Test button. runtimeLoader.js we encountered following index.html, as it is the start of the loading of client modules.
Manifest.js is an entry point into the loading process. The boot folder contains pre-initialization, such as the spinning loader image. Next we have another build folder. This one is used by applications and contains all possible build files. Not every JavaScript module is packaged up into an optimized build file; some modules are so specific or rarely used that they are best loaded individually. Otherwise, if there's a build package available to applications, they use it. Dojo lives in the dojo folder; I hope you don't find it surprising to find a dijit, dojo, and dojox folder in there. The folder github provides the library path github, used for JS Beautifier, http://jsbeautifier.org/. The images in the project images folder include a copy of Silk Icons, http://www.famfamfam.com/lab/icons/silk/, a great Creative Commons licensed PNG icon set. This brings us to wm. We definitely saved the most interesting folder for our last stop on this tour, for in lib/wm we have manifest.js, the top level of module loading when using debug mode in the runtime loader. lib/wm/base is the top level of the WaveMaker module space used at runtime. This means that in lib/wm/base we have the WaveMaker components and widgets folders. These two folders contain the sets of classes most commonly used by WaveMaker developers writing custom JavaScript in a project. This also means we will be back in these folders again.

Summary

In this article, we reviewed the WaveMaker architecture. We started with some context for what we mean by "client" and "server" in this book. We then proceeded to dig into the client and the server. We reviewed how both build upon leading frameworks, the Dojo Toolkit and the Spring framework in particular. We examined the running of an application from the network point of view and how the client and server communicate throughout. We dissected a JSON request to the runtime service and encountered project types. We also learned about both project and client type definitions. We ended by revisiting the file system; this time, however, we walked through a Studio installation, because Studio is also a WaveMaker application. In the next article, we'll get comfortable with the Studio as a visual tool. We'll look at everything from the properties panels to the built-in source code editors.

Building Your First Zend Framework Application

Packt
26 Jul 2013
15 min read
Prerequisites

Before you get started with setting up your first ZF2 project, make sure that you have the following software installed and configured in your development environment:

PHP Command Line Interface
Git: Git is needed to check out source code from various github.com repositories
Composer: Composer is the dependency management tool used for managing PHP dependencies

The following commands will be useful for installing the necessary tools to set up a ZF2 project.

To install the PHP Command Line Interface:

$ sudo apt-get install php5-cli

To install Git:

$ sudo apt-get install git

To install Composer:

$ curl -s https://getcomposer.org/installer | php

ZendSkeletonApplication

ZendSkeletonApplication provides a sample skeleton application that can be used by developers as a starting point to get started with Zend Framework 2.0. The skeleton application makes use of ZF2 MVC, including a new module system. ZendSkeletonApplication can be downloaded from GitHub (https://github.com/zendframework/ZendSkeletonApplication).

Time for action – creating a Zend Framework project

To set up a new Zend Framework project, we will need to download the latest version of ZendSkeletonApplication and set up a virtual host to point to the newly created Zend Framework project. The steps are given as follows:

Navigate to the folder location where you want to set up the new Zend Framework project:

$ cd /var/www/

Clone the ZendSkeletonApplication app from GitHub:

$ git clone git://github.com/zendframework/ZendSkeletonApplication.git CommunicationApp

In some Linux configurations, the necessary permissions may not be available to the current user for writing to /var/www. In such cases, you can use any folder that is writable and make the necessary changes to the virtual host configuration.

Install dependencies using Composer:

$ cd CommunicationApp/
$ php composer.phar self-update
$ php composer.phar install

Composer downloads and installs the necessary dependencies, including Zend Framework 2.0.

Before adding a virtual host entry, we need to set up a hostname entry in our hosts file so that the system points to the local machine whenever the new hostname is used. In Linux this can be done by adding an entry to the /etc/hosts file:

$ sudo vim /etc/hosts

In Windows, this file can be accessed at %SystemRoot%\system32\drivers\etc\hosts. Add the following line to the hosts file:

127.0.0.1 comm-app.local

Our next step is to add a virtual host entry on our web server; this can be done by creating a new virtual host configuration file:

$ sudo vim /usr/local/zend/etc/sites.d/vhost_comm-app-80.conf

This new virtual host filename could be different for you depending upon the web server that you use; please check your web server documentation for setting up new virtual hosts. For example, if you have Apache2 running on Linux, you will need to create the new virtual host file in /etc/apache2/sites-available and enable the site using the command a2ensite comm-app.local.
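For instance, on a Debian or Ubuntu system running Apache 2, the sequence might look roughly like the following; the exact file name and reload command are assumptions that vary by distribution and Apache version:

```
$ sudo vim /etc/apache2/sites-available/comm-app.local
$ sudo a2ensite comm-app.local
$ sudo service apache2 reload
```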
Add the following configuration to the virtual host file:

```
<VirtualHost *:80>
    ServerName comm-app.local
    DocumentRoot /var/www/CommunicationApp/public
    SetEnv APPLICATION_ENV "development"
    <Directory /var/www/CommunicationApp/public>
        DirectoryIndex index.php
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```

If you are using a different path for checking out the ZendSkeletonApplication project, make sure that you include that path for both the DocumentRoot and Directory directives. After configuring the virtual host file, the web server needs to be restarted:

$ sudo service zend-server restart

Once the installation is completed, you should be able to open http://comm-app.local in your web browser and see the skeleton application's welcome page.

Test rewrite rules

In some cases, mod_rewrite may not be enabled in your web server by default. To check whether the URL redirects are working properly, try to navigate to an invalid URL such as http://comm-app.local/12345. If you get an Apache 404 page, the .htaccess rewrite rules are not working and will need to be fixed; if you instead get the skeleton application's own error page, you can be sure the URLs work as expected.

What just happened?

We have successfully created a new ZF2 project by checking out ZendSkeletonApplication from GitHub and have used Composer to download the necessary dependencies, including Zend Framework 2.0. We have also created a virtual host configuration that points to the project's public folder and tested the project in a web browser.

Alternate installation options

We have seen just one of the methods of installing ZendSkeletonApplication; there are other ways of doing this. You can use Composer to directly download the skeleton application and create the project using the following command:

$ php composer.phar create-project --repository-url="http://packages.zendframework.com" zendframework/skeleton-application path/to/install

You can also use a recursive Git clone to create the same project:

$ git clone git://github.com/zendframework/ZendSkeletonApplication.git --recursive

Refer to http://framework.zend.com/downloads/skeleton-app.

Zend Framework 2.0 – modules

In Zend Framework, a module can be defined as a unit of software that is portable and reusable and can be interconnected with other modules to construct a larger, complex application. Modules are not new in Zend Framework, but with ZF2 there is a complete overhaul in the way modules are used. With ZF2, modules can be shared across various systems, and they can be repackaged and distributed with relative ease. One of the other major changes coming in ZF2 is that even the main application is now converted into a module: the Application module. Some of the key advantages of Zend Framework 2.0 modules are listed as follows:

Self-contained, portable, reusable
Dependency management
Lightweight and fast
Support for Phar packaging and Pyrus distribution

Zend Framework 2.0 – project folder structure

The folder layout of a ZF2 project is shown as follows (folder – description):

config – Used for managing application configuration.
data – Used as a temporary storage location for storing application data, including cache files, session files, logs, and indexes.
module – Used to manage all application code.
module/Application – This is the default application module that is provided with ZendSkeletonApplication.
public – The application's document root; it holds index.php and all publicly served assets such as CSS, JavaScript, and images.
vendor – Used to manage common libraries that are used by the application. Zend Framework is also installed in this folder.
vendor/zendframework – Zend Framework 2.0 is installed here.

Time for action – creating a module

Our next activity will be about creating a new Users module in Zend Framework 2.0. The Users module will be used for managing users, including user registration, authentication, and so on. We will be making use of the ZendSkeletonModule provided by Zend, as follows:

Navigate to the application's module folder:

$ cd /var/www/CommunicationApp/
$ cd module/

Clone ZendSkeletonModule into the desired module name, in this case Users:

$ git clone git://github.com/zendframework/ZendSkeletonModule.git Users

After the checkout is complete, edit Module.php; this file is located in the Users folder under module (CommunicationApp/module/Users/Module.php). Change the namespace to Users by replacing namespace ZendSkeletonModule; with namespace Users;.

The following folders can be removed because we will not be using them in our project:

Users/src/ZendSkeletonModule
Users/view/zend-skeleton-module

What just happened?

We have installed a skeleton module for Zend Framework; this is just an empty module, and we will need to extend it by creating custom controllers and views. In our next activity, we will focus on creating new controllers and views for this module.

Creating a module using ZFTool

ZFTool is a utility for managing Zend Framework applications/projects, and it can also be used for creating new modules. In order to do that, you will need to install ZFTool and use the create module command to create the module:

$ php composer.phar require zendframework/zftool:dev-master
$ cd vendor/zendframework/zftool/
$ php zf.php create module Users2 /var/www/CommunicationApp

Read more about ZFTool at the following link: http://framework.zend.com/manual/2.0/en/modules/zendtool.introduction.html

MVC layer

The fundamental goal of any MVC framework is to enable easier segregation of the three layers of MVC, namely model, view, and controller. Before we get to the details of creating modules, let's quickly try to understand how these three layers work in an MVC framework:

Model: The model is a representation of data; the model also holds the business logic for various application transactions.
View: The view contains the display logic that is used to display the various user interface elements in the web browser.
Controller: The controller controls the application logic in any MVC application; all actions and events are handled at the controller layer. The controller layer serves as a communication interface between the model and the view by controlling the model state and also by representing the changes to the view. The controller also provides an entry point for accessing the application.

In the new ZF2 MVC structure, all the models, views, and controllers are grouped by modules. Each module will have its own set of models, views, and controllers, and will share some components with other modules.

Zend Framework module – folder structure

The folder structure of a Zend Framework 2.0 module has three vital components: the configurations, the module logic, and the views.
The following table describes how the contents of a module are organized (folder – description):

config – Used for managing module configuration.
src – Contains all module source code, including all controllers and models.
view – Used to store all the views used in the module.

Time for action – creating controllers and views

Now that we have created the module, our next step would be having our own controllers and views defined. In this section, we will create two simple views and will write a controller to switch between them:

Navigate to the module location:

$ cd /var/www/CommunicationApp/module/Users

Create the folder for controllers:

$ mkdir -p src/Users/Controller/

Create a new IndexController file in <ModuleName>/src/<ModuleName>/Controller/:

$ cd src/Users/Controller/
$ vim IndexController.php

Add the following code to the IndexController file:

```php
<?php
namespace Users\Controller;

use Zend\Mvc\Controller\AbstractActionController;
use Zend\View\Model\ViewModel;

class IndexController extends AbstractActionController
{
    public function indexAction()
    {
        $view = new ViewModel();
        return $view;
    }

    public function registerAction()
    {
        $view = new ViewModel();
        $view->setTemplate('users/index/new-user');
        return $view;
    }

    public function loginAction()
    {
        $view = new ViewModel();
        $view->setTemplate('users/index/login');
        return $view;
    }
}
```

The preceding code will do the following actions: if the user visits the home page, the user is shown the default view; if the user arrives with the action register, the user is shown the new-user template; and if the user arrives with the action set to login, then the login template is rendered. Now that we have created the controller, we will have to create the necessary views to render for each of the controller actions.

Create the folder for views:

$ cd /var/www/CommunicationApp/module/Users
$ mkdir -p view/users/index/

Navigate to the views folder, <Module>/view/<module-name>/index:

$ cd view/users/index/

Create the following view files: index, login, and new-user.

For creating the view/users/index/index.phtml file, use the following code:

```php
<h1>Welcome to Users Module</h1>
<a href="/users/index/login">Login</a> |
<a href="/users/index/register">New User Registration</a>
```

For creating the view/users/index/login.phtml file, use the following code:

```php
<h2> Login </h2>
<p> This page will hold the content for the login form </p>
<a href="/users"><< Back to Home</a>
```

For creating the view/users/index/new-user.phtml file, use the following code:

```php
<h2> New User Registration </h2>
<p> This page will hold the content for the registration form </p>
<a href="/users"><< Back to Home</a>
```

What just happened?

We have now created a new controller and views for our new Zend Framework module; the module is still not in a shape to be tested. To make the module fully functional we will need to make changes to the module's configuration, and also enable the module in the application's configuration.

Zend Framework module – configuration

Zend Framework 2.0 module configuration is spread across a series of files, which can be found in the skeleton module. Some of the configuration files are described as follows:

Module.php: The Zend Framework 2 module manager looks for the Module.php file in the module's root folder. The module manager uses the Module.php file to configure the module and invokes the getAutoloaderConfig() and getConfig() methods.
autoload_classmap.php: The getAutoloaderConfig() method in the skeleton module loads autoload_classmap.php to include any custom overrides other than the classes loaded using the standard autoloader format. Entries can be added or removed from the autoload_classmap.php file to manage these custom overrides.
config/module.config.php: The getConfig() method loads config/module.config.php; this file is used for configuring various module options, including routes, controllers, layouts, and various other configurations.
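For orientation, here is a sketch of what the Users module's Module.php typically looks like after the namespace change; it follows the pattern used by the skeleton module, so the exact contents of your checkout may differ slightly:

```php
<?php
namespace Users;

class Module
{
    public function getAutoloaderConfig()
    {
        return array(
            'Zend\Loader\ClassMapAutoloader' => array(
                __DIR__ . '/autoload_classmap.php',
            ),
            'Zend\Loader\StandardAutoloader' => array(
                'namespaces' => array(
                    // Map the Users namespace to this module's src/Users folder
                    __NAMESPACE__ => __DIR__ . '/src/' . __NAMESPACE__,
                ),
            ),
        );
    }

    public function getConfig()
    {
        return include __DIR__ . '/config/module.config.php';
    }
}
```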
autoload_classmap.php: The getAutoloaderConfig() method in the skeleton module loads autoload_classmap.php to include any custom overrides other than the classes loaded using the standard autoloader format. Entries can be added to or removed from the autoload_classmap.php file to manage these custom overrides.

config/module.config.php: The getConfig() method loads config/module.config.php; this file is used for configuring various module options including routes, controllers, layouts, and various other configurations.

Time for action – modifying module configuration

In this section we will make configuration changes to the Users module to enable it to work with the newly created controller and views, using the following steps:

Autoloader configuration – The default autoloader configuration provided by the ZendSkeletonModule needs to be disabled; this can be done by editing autoload_classmap.php and replacing its content with the following:

<?php
return array();

Module configuration – The module configuration file can be found in config/module.config.php; this file needs to be updated to reflect the new controllers and views that have been created, as follows:

Controllers – The default controller mapping points to the ZendSkeletonModule; this needs to be replaced with the mapping shown in the following snippet:

'controllers' => array(
    'invokables' => array(
        'Users\Controller\Index' => 'Users\Controller\IndexController',
    ),
),

Views – The views for the module have to be mapped to the appropriate view location. Make sure that the view uses lowercase names separated by a hyphen (for example, ZendSkeleton will be referred to as zend-skeleton):

'view_manager' => array(
    'template_path_stack' => array(
        'users' => __DIR__ . '/../view',
    ),
),

Routes – The last module configuration is to define a route for accessing this module from the browser; in this case we are defining the route as /users, which will point to the index action in the Index controller of the Users module:

'router' => array(
    'routes' => array(
        'users' => array(
            'type' => 'Literal',
            'options' => array(
                'route' => '/users',
                'defaults' => array(
                    '__NAMESPACE__' => 'Users\Controller',
                    'controller' => 'Index',
                    'action' => 'index',
                ),
            ),

After making all the configuration changes as detailed in the previous sections, the final configuration file, config/module.config.php, should look like the following:

<?php
return array(
    'controllers' => array(
        'invokables' => array(
            'Users\Controller\Index' => 'Users\Controller\IndexController',
        ),
    ),
    'router' => array(
        'routes' => array(
            'users' => array(
                'type' => 'Literal',
                'options' => array(
                    // Change this to something specific to your module
                    'route' => '/users',
                    'defaults' => array(
                        // Change this value to reflect the namespace in which
                        // the controllers for your module are found
                        '__NAMESPACE__' => 'Users\Controller',
                        'controller' => 'Index',
                        'action' => 'index',
                    ),
                ),
                'may_terminate' => true,
                'child_routes' => array(
                    // This route is a sane default when developing a module;
                    // as you solidify the routes for your module, however,
                    // you may want to remove it and replace it with more
                    // specific routes.
                    'default' => array(
                        'type' => 'Segment',
                        'options' => array(
                            'route' => '/[:controller[/:action]]',
                            'constraints' => array(
                                'controller' => '[a-zA-Z][a-zA-Z0-9_-]*',
                                'action' => '[a-zA-Z][a-zA-Z0-9_-]*',
                            ),
                            'defaults' => array(
                            ),
                        ),
                    ),
                ),
            ),
        ),
    ),
    'view_manager' => array(
        'template_path_stack' => array(
            'users' => __DIR__ . '/../view',
        ),
    ),
);

Application configuration – Enable the module in the application's configuration—this can be done by modifying the application's config/application.config.php file and adding Users to the list of enabled modules:

'modules' => array(
    'Application',
    'Users',
),

To test the module in a web browser, open http://comm-app.local/users/ in your web browser; you should be able to navigate within the module. The module home page is shown as follows:

The registration page is shown as follows:

What just happened?

We have modified the configuration of ZendSkeletonModule to work with the new controller and views created for the Users module. Now we have a fully functional module up and running using the new ZF module system.

Have a go hero

Now that we have the knowledge to create and configure our own modules, your next task is to set up a new CurrentTime module. The requirement for this module is to render the current time and date in the following format:

Time: 14:00:00 GMT
Date: 12-Oct-2012

Summary

We have now learned about setting up a new Zend Framework project using Zend's skeleton application and module. In the next chapters, we will be focusing on further development of this module and extending it into a fully fledged application.

Resources for Article:

Further resources on this subject:
Magento's Architecture: Part 2 [Article]
Authentication with Zend_Auth in Zend Framework 1.8 [Article]
Authorization with Zend_Acl in Zend Framework 1.8 [Article]
Creating your first FreeMarker Template

Packt
26 Jul 2013
10 min read
(For more resources related to this topic, see here.) Step 1 – setting up your development directory If you haven't done so, create a directory to work in. I'm going to keep this as simple as possible, so we won't need a complicated directory structure. Everything can be done in one directory.Put the freemarker.jar in the directory. All future talk about files and running from the command-line will refer to your working directory. If you want to, you can set up a more advanced project-like set of directories. Step 2 – writing your first template This is a quick start, so let's just dive in and write the template. Open a file for editing called hello.ftl. The ftl extension is customary for FreeMarker Template Language files, but you are free to name your template files anything you want. Put this line in your file: Hello, ${name}! FreeMarker will replace the ${name} expression with the value of an element called name in the model. FreeMarker calls this an interpolation. I prefer to refer to this as "evaluating an expression", but you will encounter the term interpolation in the documentation. Everything else you have put in this initial template is static text. If name contained the value World, then this template would evaluate to: Hello, World! Step 3 – writing the Java code Templates are not scripts that can be run, so we need to write some Java code to invoke the FreeMarker engine and combine the template with a populated model. Here is that code: import java.io.*;import java.util.*;import freemarker.template.*;public class HelloFreemarker { public static void main(String[] args) throws IOException, TemplateException { Configuration cfg = new Configuration(); cfg.setObjectWrapper(new DefaultObjectWrapper()); cfg.setDirectoryForTemplateLoading(new File(".")); Map<String, Object> model = new HashMap<String, Object>(); model.put("name", "World"); Template template = cfg.getTemplate("hello.ftl"); template.process(model, new OutputStreamWriter(System.out)); }} The highlighted line says that FreeMarker should look for FTL files in the "working directory" where the program is run as a simple Java application. If you set your project up differently, or run in an IDE, you may need to change this to an absolute path. The first thing we do is create a FreeMarker freemarker.template.Configuration object. This acts as a factory for freemarker.template.Template objects. FreeMarker has its own internal object types that it uses to extract values from the model.In order to use the objects that you supply, it must wrap these in its own native types. The job of doing this is done by an object wrapper. You must provide an object wrapper. It will always be FreeMarker's own freemarker.template.DefaultObjectWrapper unless you havespecial object wrapping requirements. Finally, we set the root directory for loading templates. For the purposes of our sample code, everything is in the same directory so we just set it to ".". Setting the template directory can throw an java.lang.IOException exception in this code. We simply allow that to be thrown out of the method. Next, we create our model, which is a simple map of java.lang.String keys to java.lang.Object values. The values can be simple object types such as String or java.lang.Number, or they can be complex object types, including arrays and collections. Our needs are simple here, so we're going to map "name" to the string "World". The next step is to get a Template object. We ask the Configuration instance to load the template into a Template object. 
This can also throw an IOException. The magic finally happens when we ask the Template instance to process the model and create an output. We already have the model, but where does the output go? For this, we need an implementation of java.io.Writer. For convenience, we are going to wrap the java.io.PrintWriter in java.lang.System.out with a java.io.OutputStreamWriter and give that to the template. After compiling this program, we can run it from the command line: java -cp .;freemarker.jar HelloFreemarker For Linux or OSX, you would use a ":" instead of a ";" in the command: java -cp .:freemarker.jar HelloFreemarker The result should be that the program prints out: Hello, World! Step 4 – moving beyond strings If you plan to create simple templates populated with preformatted text, then you now know all you need to know about FreeMarker. Chances are that you will, so let's take a look at how FreeMarker handles formatting other types and complex objects. Let's try binding the "name" object in our model to some other types of objects. We can replace: model.put("name", "World"); with: model.put("name", 123456789); The output format of the program will depend on the default locale, so if you are in the United States, you will see this: Hello, 123,456,789! If your default locale was set to Germany, you would see this: Hello, 123.456.789! FreeMarker does not call toString() method on instances of Number types it employs java.text.DecimalFormat. Unless you want to pass all of your values to FreeMarker as preformatted strings, you are going to need to understand how to control the way FreeMarker converts values to text. If preformatting all of the items in your model sounds like a good idea, it isn't. Moving "view" logic into your "controller" code is a sure-fre way to make updating the appearance of your site into a painful experience. Step 5 – formatting different types In the previous section, we saw how FreeMarker will choose a default method of formatting numbers. One of the features of this method is that it employs grouping separators: a comma or a period every three digits. It may also use a comma rather than a period to denote the decimal portion of the number. This is great for humans who may expect these formatting details, but if your number is destined to be parsed by a computer, it needs to be free of grouping separators and it must use a period as a decimal point. In this case, you need a way to control how FreeMarker decides to format a number. In order to control exactly how model objects are converted to text FreeMarker provides operators called built-ins. Let's create a new template called types.ftl and put in some expressions that use built-ins to control formatting: String: ${string?html}Number: ${number?c}Boolean: ${boolean?string("+++++", "-----")}Date: ${.now?time}Complex: ${object} The value .now come is a special variable that is automatically provided by FreeMarker. It contains the date and time when the Template began processing. There are other special variables, but this is the only one you're likely to use. This template is a little more complicated than the last template. The " ?" at the end of a variable name denotes the use of a built-in. Before we explore these particular built-ins, let's see them in action. 
Create a java program, FreemarkerTypes, which populates a model with values for our new template: import java.io.*;import java.math.BigDecimal;import java.util.*;import freemarker.template.*;public class FreemarkerTypes { public static void main(String[] args) throws IOException, TemplateException { Configuration cfg = new Configuration(); cfg.setObjectWrapper(new DefaultObjectWrapper()); cfg.setDirectoryForTemplateLoading(new File(".")); Map<String, Object> model = new HashMap<String, Object>(); model.put("string", "easy & fast "); model.put("number", new BigDecimal("1234.5678")); model.put("boolean", true); model.put("object", Locale.US); Template template = cfg.getTemplate("types.ftl"); template.process(model, new OutputStreamWriter(System.out)); }} Run the FreemarkerType program the same way you ran HelloFreemarker. You will see this output: String: easy &amp; fastNumber: 1234.5678Boolean: +++++Date: 9:12:33 AMComplex: en_US Let's walk through the template and see how the built-ins affected the output. Our purpose is to get a solid foundation in the basics. We'll look at more details about how to use FreeMarker features in later articles. First we output a String modified with the html built-in. This encoded the string for HTML, turning the & into the &amp; HTML entity. You will want this applied to a lot of your expressions on HTML pages in order to ensure proper display of your text and to prevent cross-site scripting ( XSS ) attacks. The second line outputs a number with the c built-in. This tells FreeMarker that the number should be written for parsing by computers. As we saw in the previous section, FreeMarker will by default format numbers with grouping separators. It will also localize the decimal point, using a comma instead of a period. This is great when you are displaying numbers to humans, but not computers. If you want to put an ID number in a URL or a price in an XML document, you will want to use this built-in to format it. Next, we format a Boolean. It may surprise you to learn that unless you use the string built-in, FreeMarker will not format a Boolean value at all. In fact, it throws an exception. Conceptually, "true" and "false" have no universal text representation. If you use string with no arguments, the interpolation will evaluate to either "true" or "false", but this is a default you can change. Here, we have told the built-in to use a series of + characters for "true" and a series of – characters for "false". Another type which FreeMarker will not process without a built-in is java.util.Date. The main issue here is that FreeMarker doesn't know whether you want to display a date, a time, or both. By specifying the time built-in we are letting FreeMarker know that we want to display a time. The output shown previously was generated shortly past nine o'clock in the morning. Finally, we see a complex object converted to text with no built-ins. Complex objects are turned into text by calling their toString() method, so you can use string built-ins on them. Step 6 – where do we go from here? We've reached the end of the Quick start section. You've created two simple templates and worked with some of the basic features of FreeMarker. You might be wondering what are the other built-ins, or what options they offer. In the upcoming sections we'll look at these options and also ways to change the default behavior. Another issue we've glossed over is errors. 
Once you have applied some of these built-ins, you must make sure that you supply the correct types for the named model elements. We also haven't looked at what happens when a referenced model element is missing. The FreeMarker manual provides excellent reference for all of this. Rather than trying to find your way around on your own, we'll take a guided tour through the important features in the Top Features section of the article. Quick start versus slow start A key difference between the Quick start and Top Features sections is that we'll be starting with the sample output. In this article, we created templates and evaluated them to see what we would get. In a real-world project, you will get better results if you worked backwards from the desired result. In many cases, you won't have a choice. The sample output will be generated by web designers and you will be expected to produce the same HTML with dynamic content. In other cases, you will need to work from mock-ups and decide the HTML for yourself. In these cases, it is still worth creating a static sample document. These static samples will show you where you need to apply some of the techniques. Summary In this article, we discussed how to create a freemarker template. Resources for Article: Further resources on this subject: Getting Started with the Alfresco Records Management Module [Article] Installing Alfresco Software Development Kit (SDK) [Article] Apache Felix Gogo [Article]
Scaffolding with the command-line tool

Packt
25 Jul 2013
4 min read
(For more resources related to this topic, see here.)

CakePHP comes packaged with the Cake command-line tool, which provides a number of code generation tools for creating models, controllers, views, data fixtures, and more, all on the fly. Please note that this is great for prototyping, but is non-ideal for a production environment.

On your command line, from the cake-starter folder, type the following:

cd app
Console/cake bake

You will see something similar to the following:

> Console/cake bake
Welcome to CakePHP v2.2.3 Console
---------------------------------------------------------------
App : app
Path: /path/to/app/
---------------------------------------------------------------
Interactive Bake Shell
---------------------------------------------------------------
[D]atabase Configuration
[M]odel
[V]iew
[C]ontroller
[P]roject
[F]ixture
[T]est case
[Q]uit
What would you like to Bake? (D/M/V/C/P/F/T/Q)
>

As you can see, there's a lot to be done with this tool. Note that there are other commands besides bake, such as schema, which will be our main focus in this article.

Creating the schema definition

Inside the app/Config/Schema folder, create a file called glossary.php. Insert the following code into this file:

<?php
/**
 * This schema provides the definitions for the core tables in the glossary app.
 *
 * @var $glossary_terms - The main terms/definition table for the app
 * @var $categories - The categories table
 * @var $terms_categories - The lookup table, no model will be created.
 *
 * @author mhenderson
 */
class GlossarySchema extends CakeSchema {
    public $glossaryterms = array(
        'id' => array('type' => 'integer', 'null' => false, 'key' => 'primary'),
        'title' => array('type' => 'string', 'null' => false, 'length' => 100),
        'definition' => array('type' => 'string', 'null' => false, 'length' => 512)
    );
    public $categories = array(
        'id' => array('type' => 'integer', 'null' => false, 'key' => 'primary'),
        'name' => array('type' => 'string', 'null' => false, 'length' => 100),
        'definition' => array('type' => 'string', 'null' => false, 'length' => 512)
    );
    public $glossaryterms_categories = array(
        'id' => array('type' => 'integer', 'null' => false, 'key' => 'primary'),
        'glossaryterm_id' => array('type' => 'integer', 'null' => false),
        'category_id' => array('type' => 'string', 'null' => false)
    );
}

This class definition represents three tables: glossaryterms, categories, and a lookup table to facilitate the relationship between the two tables. Each variable in the class represents a table, and the array keys inside of the variable represent the fields in the table. As you can see, the first two tables match up with our earlier architecture description.

Creating the database schema

On the command line, assuming you haven't moved to any other folders, type the following command:

Console/cake schema create glossary

You should then see the following responses. When prompted, type y once to drop the tables, and again to create them.

Welcome to CakePHP v2.2.3 Console
---------------------------------------------------------------
App : app
Path: /path/to/app
---------------------------------------------------------------
Cake Schema Shell
---------------------------------------------------------------
The following table(s) will be dropped.
glossaryterms
categories
glossaryterms_categories
Are you sure you want to drop the table(s)? (y/n)
[n] > y
Dropping table(s).
glossaryterms updated.
categories updated.
glossaryterms_categories updated.
The following table(s) will be created.
glossaryterms
categories
glossaryterms_categories
Are you sure you want to create the table(s)? (y/n)
[y] > y
Creating table(s).
glossaryterms updated.
categories updated.
glossaryterms_categories updated.
End create.

If you look at your database now, you will notice that the three tables have been created. We can also make modifications to the glossary.php file and run the cake schema command again to update it.

If you want to try something a little more daring, you can use the migrations plugin found at https://github.com/CakeDC/migrations. This plugin allows you to save "snapshots" of your schema to be recalled later, and also allows you to write custom scripts to migrate "up" to a certain snapshot version, or migrate "down" in the event of an emergency or a mistake.

Summary

In this article we saw how to use the schema tool to define and create the database tables for the application.

Resources for Article:

Further resources on this subject:
Create a Quick Application in CakePHP: Part 1 [Article]
Working with Simple Associations using CakePHP [Article]
Creating and Consuming Web Services in CakePHP 1.3 [Article]
Choosing Lync 2013 Clients

Packt
25 Jul 2013
5 min read
(For more resources related to this topic, see here.) What clients are available? At the moment, we are writing a list that includes the following clients: Full client, as a part of Office 2013 Plus The Lync 2013 app for Windows 8 Lync 2013 for mobile devices The Lync Basic 2013 version A plugin is needed to enable Lync features on a virtual desktop. We need the full Lync 2013 client installation to allow Lync access to the user. Although they are not clients in the traditional sense of the word, our list must also include the following ones: The Microsoft Lync VDI 2013 plugin Lync Online (Office 365) Lync Web App Lync Phone Edition Legacy clients that are still supported (Lync 2010, Lync 2010 Attendant, and Lync 2010 Mobile) Full client (Office 2013) This is the most complete client available at the moment. It includes full support for voice, video, IM (similarly to the previous versions), and integration for the new features (for example, high-definition video, the gallery feature to see multiple video feeds at the same time, and chat room integration). In the following screenshot, we can see a tabbed conversation in Lync 2013: Its integration with Office implies that the group policies for Lync are now part of the Office group policy's administrative templates. We have to download the Office 2013 templates from the Microsoft site and install the package in order to use them (some of the settings are shown in the following screenshot): Lync is available with the Professional Plus version of Office 2013 (and with some Office 365 subscriptions). Lync 2013 app for Windows 8 The Lync 2013 app for Windows 8 (also called Lync Windows Store app) has been designed and optimized for devices with a touchscreen (with Windows 8 and Windows RT as operating systems). The app (as we can see in the following screenshot) is focused on images and pictures, so we have a tile for each contact we want in our favorites. The Lync Windows Store app supports contact management, conversations, and calls, but some features such as Persistent Chat and the advanced management of Enterprise Voice, are still an exclusive of the full client. Also, talking about conferencing, we will not be able to act as the presenter or manage other participants. The app is integrated with Windows 8, so we are able to use Search to look for Lync contacts (as shown in the following screenshot): Lync 2013 for mobile devices The Lync 2013 client for mobile devices is the solution Microsoft offers for the most common tablet and smartphone systems (excluding those tablets using Windows 8 and Windows RT with their dedicated app). It is available for Windows phones, iPad/iPhone, and for Android. The older version of this client was basically an IM application, and that is something that somehow limited the interest in the mobile versions of Lync. The 2013 version that we are talking about includes support for VOIP and video (using Wi-Fi networks and cellular data networks), meetings, and for voice mail. From an infrastructural point of view, enabling the new mobile client means to apply the Lync 2013 Cumulative Update 1 (CU1) on our Front End and Edge servers and publish a DNS record (lyncdiscover) on our public name servers. If we have had previous experience with Lync 2010 mobility, the difference is really noticeable. The lyncdiscover record must be pointed to the reverse proxy. 
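As a hedged illustration (the host names and SIP domain here are made up), the external record published for each SIP domain could look something like one of the following zone entries, depending on whether you point it at the reverse proxy by name or by its public IP address:

; hypothetical external DNS zone for the SIP domain example.com
lyncdiscover.example.com.   IN  CNAME  reverseproxy.example.com.
; or, alternatively, an A record pointing straight at the reverse proxy's public IP
lyncdiscover.example.com.   IN  A      203.0.113.10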
Reverse proxy deployment requires for a product to be enabled to support Lync mobility, and a certificate with the lyncdiscover's public domain name needs to be included. Lync Basic 2013 version Lync Basic 2013 is a downloadable client that provides basic functionalities. It does not provide support for advanced call features, multiparty videos or galleries, and skill-based searches. Lync Basic 2013 is dedicated to companies with Lync 2013 on-premises, and it is for Office 365 customers that do not have the full client included with their subscription. A client will look really similar to the full one, but the display name on top is Lync Basic as we can see in the following screenshot: Microsoft Lync VDI 2013 plugin As we said before, the VDI plugin is not a client; it is software we need to install to enable Lync on virtual desktops based on the most used technologies, such as Microsoft RDS, VMware View, and XenDesktop. The main challenge of a VDI scenario is granting the same features and quality we expect from a deployment on a physical machine. The plugin uses "Media Redirection", so that audio and video originate and terminate on the plugin running on the thin client. The user is enabled to connect conferencing/telephony hardware (for example microphones, cams, and so on) to the local terminal and use the Lync 2013 client installed on the virtual desktop as it was running locally. The plugin is the only Lync software installed at the end-user workplace. The details of the deployment (Deploying the Lync VDI Plug-in ) are available at http://technet.microsoft.com/en-us/library/jj204683.aspx. Resources for Article : Further resources on this subject: Innovation of Communication and Information Technologies [Article] DPM Non-aware Windows Workload Protection [Article] Installing Microsoft Dynamics NAV [Article]
Merchandising for Success

Packt
25 Jul 2013
17 min read
(For more resources related to this topic, see here.) Shop categories Creating product categories, like most things in PrestaShop, is easy and we will cover that soon. First we need to plan the ideal category structure, and this demands a little thought. Planning your category structure You should think really hard about the following questions: What is your business key – general scope or specific? Remember, if the usability is complex for you, it will be difficult to get future customers. So what will make the navigation simple and intuitive for your customers? What structure will support any plan you might have for expanding the range in the future? What do your competitors use? What could you do to make your structure better for your customers than anybody else's? When you have worked it out, we will create the category structure and then we will create the content (images and descriptions) for your category pages. First you need to consider what categories you want for your product range. Here are some examples: If your business is geared to the general scope, then it could be something like: Books Electronics Home and garden Fashion, jewelry, and beauty However, if your business is a closed market, for example electronics, then it could be something like: Cameras and photography Mobile and home phones Sound and vision Video games and consoles You get the idea. My examples don't have categories, subcategories, or anything deeper just for the sake of it. There are no prizes for compartmentalizing. If you think a fairly fat structure is what your customer wants, then that is what you should do. If you are thinking, "Hang on, I don't have any categories let alone any subcategories," don't panic. If your research and common sense says you should only have a few categories without any subcategories, then stick to it. Simplicity is the most important thing. Pleasing your customer and making your shop intuitive for your customer will make you more money than obscure compartmentalizing of your products. Creating your categories Have your plan close at hand. Ideally, have it written down or, if it is very simple, have it clearly in your head. Enough of the theory, it is now time for action. Time for action – how to create product categories Make sure that you are logged into your PrestaShop back office. We will do this in two steps. First we will create your structure as per your plan, then in the next Time for action section, we will implement the category descriptions. Let's get on with the structure of your categories: Click on Catalog and you will see the categories. Click on one. Now click on the green + symbol to add a new subcategory. PrestaShop defines even your top-level categories as subcategories because the home category is considered to be the the top-level category. Just type in the title of your first main category. Don't worry about the other options. The descriptions are covered in a minute and the rest is to do with the search engines. You have created your first category. Now that you are back to the home category, you can click on the green button again to create your next main category. To do so, save as before and remember to check the Home radio button, when you are ready, to create your next main category. Repeat until all top-level categories are created. Have a quick look at your shop front to make sure you like what you see. Here is a screenshot from the PrestaShop demo store: Now for the subcategories. We will create one level at a time as earlier. 
So we will create all the subcategories before creating any categories within subcategories. In your home category, you will have a list of your main categories. Click on the first one in the list that requires a subcategory. Now click on the create subcategory + icon. Type the name of your subcategory, leaving the other options, and click on Save. Go back to the main category if you want to create another subcategory. Play around with clicking in and out of categories and subcategories until you get used to how PrestaShop works. It isn't complicated, but it is easy to get lost and start creating stuff in the wrong place. If this happened to you, just click on the bin icon to delete your mistake. Then pay close attention to the category or subcategory you are in and carry on. You can edit the category order from the main catalog page by selecting the box of the category you want to move and then clicking an up or down arrow. Finish creating your full category structure. Play with the category and subcategory links on your shop front to see how they work and then move on. What just happened? Superb! Your category structure is done and you should be fairly familiar with navigating around your categories in your control panel. Now we can add the category and subcategory descriptions. I left it empty until now because as you might have noticed, the category creation palaver can be a bit fiddly and it makes sense to keep it as straightforward as possible. Here are some tips for writing good category descriptions followed by a quick Time for action section for entering descriptions into the category itself. Creating content for your categories and subcategories I see so many shops online with really dull category descriptions. Category descriptions should obviously describe but they should also sell! Here are a few tips for writing some enticing descriptions: Keep them short—two paragraphs at the most. People do not visit your website to read. The detail should be in the products themselves. Similar to a USP, category descriptions should be a combination of fact and emotive description that focuses on the benefit to the customer. Try and be as specific as you can about each category and subcategory so that each description is accurate and relevant in its own right. For example, don't let the category steal all the glory from a subcategory. It is very important for SEO. Time for action – adding category descriptions Be ready with the text for all your categories or you can, of course, type them as you go: Go to Catalog and then on the first categories' Edit button. Enter your category description and click on Save. Click on the subcategories of your first category. Then enter and save a description for each (if any). Navigate to the second main category and enter a description. Repeat the same for each of the subcategories in turn. Reiterate the preceding steps for each category. What just happened? You now have a fully functioning category structure. Now we can go on to look at adding some of your products. Adding products Click on the Catalog tab and then click on product. It is pretty similar to category. In the Time for action section, I will cover what to enter in each box as a separate item. However, I will skip over a few items like meta tags because they are best dealt with on a site-wide basis separately. The other important option is the product description. This deserves special treatment because it needs to be effective at selling your product. 
With the categories, I specifically showed you how to create the structure before filling in the descriptions because I know others who have got into a muddle in the past. It is less likely, but still possible, to get into a bit of a muddle with the products as well. This is especially true if you have lots of them. Perhaps you should be the judge of whether to fill in your catalog before adding descriptions or add descriptions as you go. So here is a handy guide to create great product descriptions. It will help you to decide whether you should fill product descriptions at the same time as the rest of the details, or whether you should just enter the product title and revisit them later to fill in the rest of the details. Product descriptions that sell Don't fall into the trap of simply describing your products. It might be true that a potential customer does need to know the dry facts like sizes and other uninspiring information, but don't put this information in the brief description or description boxes. PrestaShop provides a place for bare facts—the Features tab (there will be more on this soon). The brief description and description boxes that will be described in more detail soon are there to sell to your customers —to increase their interest to a level that makes them "want" the product. It actually suggests they pop it in their cart and buy it. The way you do this is with a very simple and age-old formula that actually works. And,of course, having whetted your appetite, it would be rude not to tell you about it. So here it goes. Actually selling the product Don't just tell your customers about your product, sell them the product. Explain to them why they should buy it! Use the FAB technique—feature, advantage, benefit: Tell the customer about a feature: This teddy bear is made from a new fiber and wool mix This laptop has the brand new i7 processor made by Intel This guide was written by somebody who has survived cancer And the advantage that feature gives them: So it is really, really soft and fluffy! i7 is the very first processor series with a DDR3 integrated memory controller! So all the information and advice is real and practical Then emphasize the real emotive benefit this gives them: Which means your little boy or girl is going to feel safe, loved, and secure with this wonderful bear Meaning that this laptop gives your applications, up to a 30 percent performance boost over every other processor series ever made Giving you or your loved one the very best chance of beating with cancer and having more precious time they have with the people they love Don't just stop at one feature. Highlight the most important features. By most important features, of course I mean the features that lead to the best most emotive and personal benefits. Not too many though. If your product has loads of benefits, then try and pick just the best ones. Three is perfect. Three really is a magic number. All the best things come in threes and scientific research actually proves that thoughts or ideas presented in threes influence human emotion the most. If you must have more than three features, summarize them in a quick bulleted list. Three is good: Soft, strong, and very long Peace, love, and understanding Relieves pain, and clears your nose without drowsiness Ask for the sale When you have used the FAB technique, ask the customer to part with their money! 
Say something like, "Select the most suitable option for you and click on Add to cart" or "Remember that abc is the only xyz with benefit 1, benefit 2, and benefit 3.Order yours now!" Create some images with GIMP If you have a favorite photo editor then great. If you haven't, then I suggest you use GIMP. It's cool, easy, and free: www.gimp.org. Time for action – how to add a product to PrestaShop Let's add some products: Click on catalog and then click on product. Click on the Add a new product link. You will see the following screenshots. Okay, I admit it. It does look a little bit daunting. But actually it is not that difficult. Much of it is optional, and even more we will revisit after further discussion. So don't despair. There is a table of explanations for you after the screenshots. Field Explanation Name The short name/description of your product. There is a brief description and a full description box later, but perhaps a bit more than a short name should go here. For example, 50 cm golden teddy bear-extra fluffy version. Status Choose Enabled or Disabled. If your product is for sale as soon as you're open, click Enabled. If your product is discontinued or needs to be removed from sale for any reason, click Disabled. Reference An optional unique reference for your product. For example, 50cmFT - xfluff. EAN13 The European Article Number or barcode. If your product has one (and almost everything does), use it because some people use this for searching or identifying a product. Jan The Japanese Article Number or barcode. If your product has one (and almost everything does), use it because some people use this for searching or identifying a product. UPC The USA and Canadian Article Number or barcode. If your product has one (and almost everything does), use it because some people use this for searching or identifying a product. Visibility If you want to show the item on the catalog, only on the search or everywhere. Type You can chose if there is a physical product, pack, or a downloadable product. Options To make the product available/unavailable to order. To show or hide the price. To enable/disable the online message. Condition If the item is brand new, second hand, or refurbished. Short description Here you need to add a brief description about the item. This text will be shown on the catalog. Description When a customer clicks on the item, he will read this text. Tags Leave blank for now. Fill in your product page as described previously. Click on the Images tab at the top of the product page. Browse to the image you created earlier and upload it. Note that PrestaShop will compress the image for you. It is worth having a look at the final image and maybe varying the amount (if any) that you apply when creating your product images. Click on Save and then go and admire your product in your store front. Repeat until all your products are done, but don't forget to check how things look from the customer's point of view. Visit the category and product pages to check whether things are like the way you had expected them to be. If you have a huge range that is going to take you a long time, then consider just entering your key products. Proceed with this to get the money coming in and add the rest of your range in a bit over the course of time. What just happened? Now you have something to actually sell, let's go and showcase some of your products. Here is how to make some of your products stand out from the crowd. 
Highlighting products Next is a list of the different ways to promote elements of your range. There is also an explanation of each option and how to do it, as well. New products So you have just found some great new products. How do you let your visitors know about it? You could put an announcement on your front page. But what if a potential customer doesn't visit your front page or perhaps misses the announcement? Welcome to the new products module. Time for action – how to highlight your newest products The following are the quick steps to enable and configure the highlighting of any new products you add. Once this is set up, it will happen automatically, now and in the future. Click on the Modules tab and scroll down to the New products block module. Click on Install. Scroll back down to the module you just installed and click on Configure. Choose a number of products to be showcased and click on Save. Don't forget to have a look at your shop front to see how it works. Click around a few different pages and see how the highlighted product alternates. What just happened? Now you are done with new products and they will never go unnoticed. Specials Special refers to the price. This is the traditional special offer that customers know and love. Time for action – creating a special offer The following steps help us create special offers and make sure they will never go unnoticed: Click on the Catalog tab and navigate to the category or subcategory that contains the product you want to make available as a special offer. Click on the Products to go to its details page. Click on Prices and go to the Specific prices section. Click on Add a new specific price. You can enter an actual monetary amount in the first box or a percentage in the second box. Monetary amounts work well for individual discounts and percentages work well as part of a wider sale. But this is not a hard-and-fast rule. So choose what you think your customers might prefer. Click on Save. Now go and have a look at the category that the product is in and click on the product as well. You'll notice the smart enticing manner that PrestaShop uses to highlight the offer. You can have as many or as few special offers as you like. But what if you wanted to really push a product offer or a wider sale? Yes, you guessed it, there's a module. Click on the Modules tab and scroll down to Specials block and click on Install. Getting the hang of this? Thought so. Go and have a look at the effect on your store. What just happened? Your first sale is underway. Recently viewed What's this then? When customers browse products, they forget what they have seen or how to find it again. By prominently displaying a module with their most recent viewings, they can comfortably click back and forth comparing until they have made a buying decision. Now you don't need me to tell you how to set this up. Go to the module, switch it on, and you're done. Best sellers This is just what it says. Not necessarily an offer or anything else is special about it. But if it sells, well there must be something worth talking about it. Install the best sellers module in the usual place to highlight these items. Accessories I love accessories. It's all about add-on sales. Ever been to a shop to buy a single item and come out with several? Electrical retailers are brilliant at this. Go in for a PC and come out with a printer, scanner, camera, ink, paper, and the list goes on. Is it because their customers are stupid? Of course they are not! 
It is because they offer compelling or essential accessories that are relevant to the sale. By creating accessories, you will get a new tab at the bottom of each relevant product page along with PrestaShop making suggestions at key points of the sale. All we have to do is tell PrestaShop what is an accessory to our various products and PrestaShop will do the rest. Time for action – creating an accessory Accessories are products. So any product can be an accessory of any other product. All you have to do is decide what is relevant to what. Just think about appropriate accessories for your products and read on. The quick guide for creating accessories are as follows: Click on the Catalog tab, then click on product. Find the product you think should have some accessories. Click on it to edit it by navigating to Associations on the page and find the Accessories section, as shown in the following screenshot: Find the product that you wish to be an accessory by typing the first letters of the product name and selecting it. Save your amended product. You can add as many accessories to each product as you like. Go and have a look at your product on your shop front and notice the Accessories tab. What just happened? You just learned how to accessorize. It's silly not to accessorize, not because it costs you nothing, but because a few clicks could significantly increase your turnover. Now we can go on to explore more product ideas.
Events

Packt
25 Jul 2013
2 min read
(For more resources related to this topic, see here.) What is a payload? The payload of an event, the event object, carries any necessary state from the producer to the consumer and is nothing but an instance of a Java class. An event object may not contain a type variable, such as <T>.   We can assign qualifiers to an event and thus distinguish an event from other events of an event object type. These qualifiers act like selectors, narrowing the set of events that will be observed for an event object type. There is no distinction between a qualifier of a bean type and that of an event, as they are both defined with @Qualifier. This commonality provides a distinct advantage when using qualifiers to distinguish between bean types, as those same qualifiers can be used to distinguish between events where those bean types are the event objects. An event qualifier is shown here: @Qualifier @Target( { FIELD, PARAMETER } ) @Retention( RUNTIME ) public @interface Removed {} How do I listen for an event? An event is consumed by an observer method , and we inform Weld that our method is used to observe an event by annotating a parameter of the method, the event parameter , with @Observes . The type of event parameter is the event type we want to observe, and we may specify qualifiers on the event parameter to narrow what events we want to observe. We may have an observer method for all events produced about a Book event type, as follows: public void onBookEvent(@Observes Book book) { ... } Or we may choose to only observe when a Book is removed, as follows: public void onBookRemoval(@Observes @Removed Book book) { ... } Any additional parameters on an observer method are treated as injection points. An observer method will receive an event to consume if: The observer method is present on a bean that is enabled within our application The event object is assignable to the event parameter type of the observer method
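To round the picture off, the following minimal sketch (class names such as BookShelf are hypothetical, and Book stands for the event type used above) shows how a producer might fire the qualified event that the onBookRemoval observer consumes; Weld injects an Event object at the injection point, and calling fire() delivers the payload to every matching observer method:

import javax.enterprise.event.Event;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

public class BookShelf {

    // the @Removed qualifier narrows delivery to observers of @Removed Book events
    @Inject
    @Removed
    private Event<Book> bookRemovedEvent;

    public void remove(Book book) {
        // ... remove the book from storage, then notify interested observers
        bookRemovedEvent.fire(book);
    }
}

class BookRemovalListener {

    // receives only Book events fired with the @Removed qualifier
    public void onBookRemoval(@Observes @Removed Book book) {
        System.out.println("Book removed: " + book);
    }
}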
Implementing a Log-in screen using Ext JS

Packt
18 Jul 2013
31 min read
In this article Loiane Groner, author of Mastering Ext JS, talks about developing a login page for an application using Ext JS. It is very common to have a login page for an application, which we can use to control access to the system by identifying and authenticating the user through the credentials presented by him/her. Once the user is logged in, we can track the actions performed by the user. We can also restrain access of some features and screens of the system that we do not want a particular user or even a specific group of users to have access to. In this article, we will cover: Creating the login page Handling the login page on the server Adding the Caps Lock warning message in the Password field Submitting the form by pressing the Enter key Encrypting the password before sending to the server (For more resources related to this topic, see here.) The Login screen The Login window will be the first view we are going to implement in this project. We are going to build it step by step and it will have the following capabilities: User will enter the username and password to log in Client-side validation (username and password required to log in) Submit the Login form by pressing Enter Encrypt the password before sending to the server Password Caps Lock warning (similar to Windows OS) Multilingual capability Except for the multilingual capability, we will implement all the other features throughout this topic. So at the end of the implementation, we will have a Login window that looks like the following: So let's get started! Creating the Login screen Under the app/view directory, we will create a new file named Login.js.In this file, we will implement all the code that the user is going to see on the screen. Inside the Login.js file, we will implement the following code: Ext.define('Packt.view.Login', { // #1 extend: 'Ext.window.Window', // #2 alias: 'widget.login', // #3 autoShow: true, // #4 height: 170, // #5 width: 360, // #6 layout: { type: 'fit' // #7 }, iconCls: 'key', // #8 title: "Login", // #9 closeAction: 'hide', // #10 closable: false // #11 }); On the first line (#1) we have the definition of the class. To define a class we use Ext.define, followed by parentheses (()), and inside the parentheses we first declare the name of the class, followed by a comma (") and curly brackets ({}), and at the end a semicolon. All the configurations and properties (#2 to #11) go inside curly brackets. We also need to pay attention to the name of the class. This is the formula suggested by Sencha in Ext JS MVC projects: App Namespace + package name + name of the JS file. we defined the namespace as Packt (configuration name inside the app.js file). We are creating a View for this project, so we will create the JS file under the view package/directory. And then, the name of the file we created is Login.js; therefore, we will lose the .js part and use only Login as the name of the View. Putting all together, we have Packt.view.Login and this will be the name of our class. Then, we are saying that the Login class will extend from the Window class (#2), because we want it to be displayed inside a window, and not on any other component. We are also assigning this class an alias (#3). The alias for a class that extends from a component always starts with widget., followed by the alias we want to assign. The naming convention for an alias is lowercase . It is also important to remember that the alias must be unique in an application. 
In this case we want to assign login as alias to this class so later we can instantiate this same class using its alias (that is the same as xtype). For example, we can instantiate the Login class using four different options: Using the complete name of the class, which is the most used one: Ext.create('Packt.view.Login'); Using the alias in the Ext.create method: Ext.create('widget.login'); Using the Ext.widget, which is a shorthand way of using Ext.ClassManager.instantiateByAlias: Ext.widget('login'); Using the xtype as an item of another component: items: [ { xtype: 'login' } ] In this book we will use the first, third, and fourth options most of the time. Then we have autoShow configured to true (#4). What happens with the window is that instantiating the component is not enough for displaying it. When we instantiate the window we will have its reference, but it will not be displayed on the screen. If we want it to be displayed we need to call the method show() manually. Another option is to have the autoShow configuration set to true. This way the window will be automatically displayed when we instantiate it. We also have height (#5) and width (#6) of the window. We set the layout as fit (#7) because we want to add a form inside this window that will contain the username and password fields. And using the fit layout the form will occupy all the body space of the window. Remember that when using the fit layout we can only have one item as a child component. We are setting an iconCls (#8) property to the window; this way we will have an icon of a key in the header of the window. We can also give a title for the window (#9), and in this case we chose Login. Following is the declaration of the key style used by the iconCls property: .key { background-image:url('../icons/key.png') !important; } All the styles we will create to use as iconCls have a format like the preceding one. And at last we have the closeAction (#10) and closable (#11) configurations. The closeAction configuration will tell if we want to destroy the window when we close it. In this case, we do not want to destroy it; we only want to hide it. The closable configuration tells if we want to display the X icon on the top-right corner of the window. As this is a Login window, we do not want to give this option for the user. If you would like to, you can also add the resizable and draggable options as false. This will prevent the user to drag the Login window around and also to resize it. So far, this will be the output we have. A single window with an icon at the top-left corner with a title Login : The next step is to add the form with the username and password fields. We are going to add the following code to the Login class: items: [ { xtype: 'form', // #12 frame: false, // #13 bodyPadding: 15, // #14 defaults: { // #15 xtype: 'textfield', // #16 anchor: '100%', // #17 labelWidth: 60 // #18 }, items: [ { name: 'user', fieldLabel: "User" }, { inputType: 'password', // #19 name: 'password', fieldLabel: "Password" } ] } ] As we are using the fit layout, we can only declare one child item in this class. So we are going to add a form (#12) and to make the form to look prettier, we are going to remove the frame property (#13) and also add padding to the form body (#14). The form's frame property is by default set to false. But by default, there is a blue border that appears if we to do not explicitly add this property set to false. As we are going to add two fields to the form, we probably want to avoid repeating some code. 
That is why we are going to declare some field configurations inside the defaults configuration of the form (#15); this way the configuration we declare inside defaults will be applied to all items of the form, and we will need to declare only the configurations we want to customize. As we are going to declare two fields, both of them will be of type textfield. The default layout of the form is the anchor layout, so we do not need to make this declaration explicit. However, we want both fields can occupy all the horizontal available space of the body of the form. That is why we are declaring anchor as 100% (#17). By default, the width attribute of the label of the TextField class is 100 pixels. It is too much space for a label User and Password, so we are going to decrease this value to 60 pixels (#18). And finally, we have the user text field and the password text field. The configuration name is what we are going to use to identify each field when we submit the form to the server. But there is only one detail missing: when the user types the password into the field the system cannot display its value, we need to mask it somehow. That is why inputType is 'password' (#19) for the password field, as we want to display bullets instead of the original value, and the user will not be able to see the password value. Now we have improved our Login window a little more. This is the output so far: Client-side validations The field component in Ext JS provides some client-side validation capability. This can save time and also bandwidth (the system will only make a server request when it is sure the information has passed the basic validation). It also helps to point out to the user where they have gone wrong in filling out the form. Of course, it is also good to validate the information again on the server side for security reasons, but for now we will focus on the validations we can apply to the form of our Login window. Let's brainstorm some validations we can apply to the username and password fields: The username and password must be mandatory—how are going to authenticate the user without a username and password? The user can only enter alphanumeric characters (A-Z, a-z, and 0-9) in both the fields. The user can only type between 3 and 25 chars in the username field. The user can only type between 3 and 15 chars in the password field. So let's add into the code the ones that are common to both fields: allowBlank: false, // #20 vtype: 'alphanum', // #21 minLength: 3, // #22 msgTarget: 'under' // #23 We are going to add the preceding configurations inside the defaults configuration of the form, as they all apply to both the fields we have. First, both need to be mandatory (#20), we can only allow to enter alphanumeric characters (#21) and the minimum number of characters the user needs to input is three (#22). Then, a last common configuration is that we want to display any validation error message under the field (#23). And the only validation customized for each field is that we can enter a maximum of 25 characters in the User field: name: 'user', fieldLabel: "User", maxLength : 25 And a maximum of 15 characters in the Password field: inputType: 'password', name: 'password', fieldLabel: "Password", maxLength : 15 After we apply the client validations, we will have the following output in case the user went wrong in filling out the Login window: If you do not like it, we can change the place where the error message appears. We just need to change the msgTarget value. 
The available options are: title, under, side, and none. We can also show the error message as a tooltip (qtip) or display it in a specific target (inner HTML of a specific component). Creating custom VTypes Many systems have a special format for passwords. Let's say we need the password to have at least one digit (0-9), one lowercase letter, one uppercase letter, one special character (@, #, $, %, and so on) and a length between 6 and 20 characters. We can create a regular expression to validate the password that is entered into the app. And to do this, we can create a custom VType to do the validation for us. Creating a custom VType is simple. For our case, we can create a custom VType called customPass:

Ext.apply(Ext.form.field.VTypes, {
    customPass: function(val, field) {
        return /^((?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[@#$%]).{6,20})/.test(val);
    },
    customPassText: 'Not a valid password. Password must contain at least one digit, one lowercase letter, one uppercase letter, one special symbol (@, #, $, %) and be between 6 and 20 characters long.'
});

customPass is the name of our custom VType, and we need to declare a function that will validate our regular expression. customPassText is the message that will be displayed to the user in case the incorrect password format is entered. The preceding code can be added anywhere in the code, inside the init function of a controller, inside the launch function of the app.js, or even in a separate JavaScript file (recommended) where you can put all your custom VTypes. To use it, we simply need to add vtype: 'customPass' to our Password field. To learn more about regular expressions, please visit http://www.regular-expressions.info/. Adding the toolbar with buttons So far we have created the Login window, which contains a form with two fields and it is already being validated as well. The only thing missing is to add the two buttons: Cancel and Submit. We are going to add the buttons as items of a toolbar and the toolbar will be added to the form as a docked item. Docked items can be docked to the top, right, left, or bottom of a panel (both form and window components are subclasses of panel). In this case we will dock the toolbar to the bottom of the form. Add the following code right after the items configuration of the form:

dockedItems: [
    {
        xtype: 'toolbar',
        dock: 'bottom',
        items: [
            {
                xtype: 'tbfill' // #24
            },
            {
                xtype: 'button', // #25
                itemId: 'cancel',
                iconCls: 'cancel',
                text: 'Cancel'
            },
            {
                xtype: 'button', // #26
                itemId: 'submit',
                formBind: true, // #27
                iconCls: 'key-go',
                text: "Submit"
            }
        ]
    }
]

If we take a look back at the screenshot of the Login screen we first presented at the beginning of this article, we will notice that there is a component for the translation/multilingual capability. And after this component there is a space and then we have the Cancel and Submit buttons. As we do not have the multilingual component yet, we can only implement the two buttons, but they need to be at the right end of the form and we need to leave that space. That is why we first need to add a toolbar fill component (#24), which is going to instruct the toolbar's layout to begin using the right-justified button container. Then we will add the Cancel button (#25) and then the Submit button (#26). We are going to add icons to both buttons (iconCls) and later, when we implement the controller class, we will need a way to identify the buttons. This is why we assigned itemId to both of them.
We already have the client validations, but even when the form contains validation errors, the user can still click on the Submit button, and we want to avoid this behavior. That is why we are binding the Submit button to the form (#27); this way the button will only be enabled if the form has no errors from the client validation. In the following screenshot, we can see the current output of the Login form (after we added the toolbar) and also verify the behavior of the Submit button: Running the code To execute the code we have created so far, we need to make a few changes in the app.js file. First, we need to declare the views we are using (only one in this case). Also, as we are going to instantiate using the Login class' xtype, we need to declare this class in the requires declaration:

requires: [
    'Packt.view.Login'
],
views: [
    'Login'
],

The last change is inside the launch function. Now we only need to replace the console.log message with the Login instance (#1):

splashscreen.next().fadeOut({
    duration: 1000,
    remove: true,
    listeners: {
        afteranimate: function(el, startTime, eOpts) {
            Ext.widget('login'); // #1
        }
    }
});

Now the app.js is OK and we can execute what we have implemented so far! Using itemId versus id: Ext.getCmp is bad! Before we create the controller, we will need to have some knowledge about Ext.ComponentQuery selectors. In this topic we will discuss a subject that will help us better understand why we took some decisions while creating the Login window and why we are going to take some other decisions in the controller topic. Whenever we can, we will always try to use the itemId configuration instead of id to uniquely identify a component. And here comes the question: why? When using id, we need to make sure that the id is unique, and that none of the other components of the application has the same id. Now imagine the situation where you are working with other developers of the same team and it is a big application. How can you make sure that id is going to be unique? Pretty difficult, don't you think? And this can be a hard task to achieve. Components created with an id may be accessed globally using Ext.getCmp, which is a short-hand reference for Ext.ComponentManager.get. Just to mention one example, when using Ext.getCmp to retrieve a component by its id, it is going to return the last component declared with the given id. And if the id is not unique, it can return a component that you are not expecting, and this can lead to an error in the application. Do not panic! There is an elegant solution, which is using itemId instead of id. The itemId can be used as an alternative way to get a reference to a component. The itemId is an index to the container's internal MixedCollection, and that is why the itemId is scoped locally to the container. This is the biggest advantage of the itemId. For example, we can have a class named MyWindow1, extending from window, and inside this class we can have a button with item ID submit. Then we can have another class named MyWindow2, also extending from window, and also with a button with item ID submit. Having two item IDs with the same value is not an issue. We only need to be careful when we use Ext.ComponentQuery to retrieve the component we want. For example, say we have a Login window whose alias is login and another screen, the Registration window, whose alias is registration. Both windows have a Save button whose itemId is save. If we simply use Ext.ComponentQuery.query('button#save'), the result will be an array with two results.
However, if we narrow down the selector even more, let's say we want the Login window's Save button, and not the Registration window's Save button, we need to use Ext.ComponentQuery.query('login button#save'), and the result will be a single item, which is exactly what we expect. You will notice that we will not use Ext.getCmp in the code of our project, because it is not a good practice, especially in Ext JS 4, and also because we can use itemId and Ext.ComponentQuery instead. We will understand Ext.ComponentQuery better in the next topic. Creating the login controller We have created the view for the Login screen so far. As we are following the MVC architecture, we are not implementing the user interaction in the View class. If we click on the buttons of the Login window, nothing will happen because we have not yet implemented this logic. We are going to implement this logic now in the controller class. Under the app/controller directory, we will create a new file named Login.js. In this file we will implement all the code related to the event management of the Login screen. Inside the Login.js file we will implement the following code, which is only a base of the controller class we are going to implement:

Ext.define('Packt.controller.Login', { // #1
    extend: 'Ext.app.Controller', // #2

    views: [
        'Login' // #3
    ],

    init: function(application) { // #4
        this.control({ // #5

        });
    }
});

As usual, on the first line of the class we have its name (#1). Following the same formula we used for the view/Login.js we will have Packt (app namespace) + controller (name of the package) + Login (which is the name of the file), resulting in Packt.controller.Login. Note that the controller JS file (controller/Login.js) has the same name as view/Login.js, but that is OK because they are in a different package. It is good to use a similar name for the views, models, stores and controllers because it is going to be easier to maintain the project later. For example, let's say that after the project is in production, we need to add a new button on the Login screen. With only this information (and a little bit of MVC concept knowledge) we know we will need to add the button code in the view/Login.js file and listen to any events that might be fired by this button in controller/Login.js. Easier maintainability is also a great pro of using the MVC architecture. The controller classes need to extend from Ext.app.Controller (#2), so we will always use this parent class for our controllers. Then we have the views declaration (#3), which is where we are going to declare all the views that this controller will care about. In this case, we only have the Login view so far. We will add more views later in this article. Next, we have the init method declaration (#4). The init method is called before the application boots, before the launch function of Ext.application (app.js). The controller will also load the views, models, and stores declared inside its class. Then we have the control method configured (#5). This is where we are going to listen to all the events we want the controller to react to. And as we are coding the events fired by the Login window and its child components, this will be our scope in this controller.
We can remove this code, since the controller will be responsible for loading the view/Login.js file for us:

requires: [
    'Packt.view.Login'
],
views: [
    'Login'
],

And add the controllers declaration:

controllers: [
    'Login'
],

As our project is only starting, declaring the views in the controller classes will help us keep the code more organized, as we do not need to declare all the application's views in the app.js file. Listening to the button click event Our next step is to start listening to the Login window events. First, we are going to listen to the Submit and Cancel buttons. We already know that we are going to add the listeners inside the this.control declaration. The format that we need to use is the following:

'Ext.ComponentQuery selector': {
    eventWeWantToListenTo: functionOrMethodWeWantToExecute
}

First, we need to pass the selector that is going to be used by the Ext.ComponentQuery class to find the component. Then we need to list the event that we want to listen to. And then, we need to declare the function that is going to be executed when the event we are listening to is fired, or declare the name of the controller method that is going to be executed when the event is fired. In our case, we are going to declare the method only for code organization purposes. Now let's focus on finding the correct selector for the Submit and Cancel buttons. According to the Ext.ComponentQuery API documentation, we can retrieve components by using their xtype (if you are already familiar with jQuery, you will notice that Ext.ComponentQuery selectors are very similar to jQuery selectors' behavior). Well, we are trying to retrieve two buttons, and their xtype is button, so let's try the selector button. But before we start coding, let's make sure that this is the correct selector, so we avoid having to change the code over and over while trying to figure out the right one. There is one very useful tip we can try: open the browser console (command editor), type the following command, and click on Run:

Ext.ComponentQuery.query('button');

As we can see in the screenshot, it returned an array of the buttons that were found by the selector we used, and the array contains six buttons; that is too many, and it is not what we want. We want to narrow down to the Submit and Cancel buttons. Let's try to draw a path of the Login window using the component xtypes we used: We have a Login window (xtype: login or window), inside the window we have a form (xtype: form), inside the form we have a toolbar (xtype: toolbar), and inside the toolbar we have two buttons (xtype: button). Therefore, we have login-form-toolbar-button. However, if we use login-form-button we will have the same result, because we do not have any other buttons inside the form. So we can try the following command:

Ext.ComponentQuery.query('login form button');

So let's try this last selector on the command editor: Now the result is an array of two buttons and these are the buttons that we are looking for! There is still one detail missing: if we use the login form button selector, it will listen to the click event (which is the event we want to listen to) of both buttons. When we click on the Cancel button one thing should happen (reset the form) and when we click on the Submit button, another thing should happen (submit the form to the server to validate the login). So we still need to narrow down the selector even more, until we have one selector that returns only the Cancel button and another selector that returns only the Submit button.
Going back to the view/Login code, notice that we declared a configuration named itemId on both buttons. We can use these itemId configurations to identify the buttons in a unique way. According to the Ext.ComponentQuery API docs, we can use # as a prefix for itemId. So let's try the following command on the command editor to get the Submit button reference:

Ext.ComponentQuery.query('login form button#submit');

The output will be only one button as we expect: Now let's try the following command to retrieve the Cancel button reference:

Ext.ComponentQuery.query('login form button#cancel');

The output will be only one button as we expect: So now we have the selectors that we were looking for! The console command editor is a great tool, and using it can save us a lot of time when trying to find the exact selector that we want, instead of coding, testing, finding out it is not the selector we want, coding again, testing again, and so on. Could we use only button#submit or button#cancel as selectors? Yes, we could use a shorter selector, and it would work perfectly for now. However, as the application grows and we declare many more classes and buttons, the event would be fired for all buttons that have the itemId named submit or cancel, and this could lead to an error in the application. We always need to remember that itemId is scoped locally to the container. By using login form button as the selector, we make sure that the event will come from the button of the Login window. So let's implement the code inside the controller class:

init: function(application) {
    this.control({
        "login form button#submit": { // #1
            click: this.onButtonClickSubmit // #2
        },
        "login form button#cancel": { // #3
            click: this.onButtonClickCancel // #4
        }
    });
},

onButtonClickSubmit: function(button, e, options) {
    console.log('login submit'); // #5
},

onButtonClickCancel: function(button, e, options) {
    console.log('login cancel'); // #6
}

In the preceding code, we first have the listener for the Submit button (#1), and on the following line we say that we want to listen to the click event; when the click event of the Submit button is fired, the onButtonClickSubmit method should be executed (#2). Then we have the same for the Cancel button: we have the listener for the Cancel button (#3), and on the following line we say that we want to listen to the click event; when the click event of the Cancel button is fired, the onButtonClickCancel method should be executed (#4). Next, we have the declaration of the methods onButtonClickSubmit and onButtonClickCancel. For now, we are only going to output a message on the console to make sure that our code is working. So we are going to output login submit (#5) in case the user clicks on the Submit button, and login cancel (#6) in case the user clicks on the Cancel button. But how do you know which parameters the event method can receive? You can find the answer to this question in the documentation. If we take a look at the click event in the documentation, this is what we will find: This is exactly what we declared. For all the other event listeners, we will go to the docs, see which parameters the event accepts, and then list them as parameters in our code. This is also a very good practice. We should always list out all the arguments from the docs, even if we are only interested in the first one. This way we always know that we have the full collection of the parameters, and this can come in very handy when we are doing maintenance of the application. Let's go ahead and try it.
Click on the Cancel button and then on the Submit button. This should be the output: Cancel button listener implementation Let's remove the console.log messages and add the code we actually want the methods to execute. First, let's work on the onButtonClickCancel method. When we execute this method, we want it to reset the form. So this is the logic sequence we want to program: Get the Login form reference. Call the method getForm, which is going to return the form basic class. Call the reset method to reset the form. The form basic class provides input field management, validation, submission, and form loading services. The Ext.form.Panel class (xtype: form) works as the container, and it is automatically hooked up with an instance of Ext.form.Basic. That is why we need to get the form basic reference to call the reset method. If we take a look at the parameters we have available on the onButtonClickCancel method, we have button, e, and options, and none of them provides us with the form reference. So what can we do about it? We can use the up method from the Button class (inherited from the AbstractComponent class). With this method, we can use a selector to try to retrieve the form. The up method navigates up the component hierarchy, searching for an ancestor container that matches the passed selector. As the button is inside a toolbar that is inside the form we are looking for, if we use button.up('form'), it will retrieve exactly what we want. Ext JS will look at the first ancestor in the hierarchy of the button and will find a toolbar; not what we are looking for. So it goes up again and finds a form, which is what we are looking for. So this is the code that we are going to implement inside the onButtonClickCancel method:

button.up('form').getForm().reset();

Some people like to implement the toolbar inside the window instead of the form. No problem at all, it is only a matter of how you like to implement it. In this case, if the toolbar that contains the Submit button is inside the Window class we can use:

button.up('window').down('form').getForm().reset()

And we will have the same result! Submit button listener implementation Now we need to implement the onButtonClickSubmit method. Inside this method, we want to program the logic to send the username and password values to the server so that the user can be authenticated. We can implement this method using one of two approaches: the first is to use the submit method that is provided by the form basic class, and the second is to use an Ajax call to submit the values to the server. Either way we will achieve what we want to do. However, there is one detail that we need to know prior to making this decision: if we use the submit method of the form basic class, we will not be able to encrypt the password before we send it to the server, and if we take a look at the parameters sent to the server, the password will be plain text, and this is not good. Using an Ajax request will produce the same result; however, with it we can encrypt the password value before sending it to the server. So the second option seems better, and that is the one we will implement.
So to summarize, following are the steps we need to perform in this method: get the Login form reference; get the Login window reference (so that we can close it once the user has been authenticated); get the username and password values from the form; encrypt the password; send the login information to the server; handle the server response; if the user is authenticated, display the application; if not, display an error message. First, let's get the references that we need:

var formPanel = button.up('form'),
    login = button.up('login'),
    user = formPanel.down('textfield[name=user]').getValue(),
    pass = formPanel.down('textfield[name=password]').getValue();

To get the form reference, we can use the button.up('form') code that we already used in the onButtonClickCancel method; to get the Login window reference we can do the same thing, only changing the selector to login or window. Then to get the values from the User and Password fields we can use the down method, but this time the scope will start from the form reference. For the selector we will use the text field xtype, and to make sure we are retrieving the text field we want, we could create an itemId attribute, but there is no need for it. We can use the name attribute since the user and password fields have different names and they are unique within the Login window. To use attributes within a selector we must wrap them in brackets. The next step is to submit the values to the server:

if (formPanel.getForm().isValid()) {
    Ext.Ajax.request({
        url: 'php/login.php',
        params: {
            user: user,
            password: pass
        }
    });
}

If we try to run this code, the application will send the request to the server, but we will get an error as the response because we do not have the login.php page implemented yet. That's OK because we are interested in other details right now. With Firebug or Chrome Developer Tools enabled, open the Net tab and filter by the XHR requests. Make sure to enter a username and password (any valid value so that we can click on the Submit button). This will be the output: We still do not have the password encrypted. The original value is still being displayed and this is not good. We need to encrypt the password. Under the app directory, we will create a new folder named util where we are going to create all the utility classes. We will also create a new file named MD5.js; therefore, we will have a new class named Packt.util.MD5. This class contains a static method called encode, which encodes the given value using the MD5 algorithm. To understand more about the MD5 algorithm go to http://en.wikipedia.org/wiki/MD5. As Packt.util.MD5 is big, we will not list its code here, but you can download the source code of this book from http://www.packtpub.com/mastering-ext-javascript/book or get the latest version at https://github.com/loiane/masteringextjs. If you would like to make it even more secure, you can also use SSL and ask for a random salt string from the server, salt the password and hash it. You can learn more about it at one of the following URLs: http://en.wikipedia.org/wiki/Transport_Layer_Security and http://en.wikipedia.org/wiki/Salt_(cryptography). A static method does not require an instance of the class in order to be called. In Ext JS, we can declare static attributes and methods inside the statics configuration. As the encode method of the Packt.util.MD5 class is static, we can call it like Packt.util.MD5.encode(value);.
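Just to illustrate the idea (this is not the actual Packt.util.MD5 implementation, which you can get from the book's source code), a minimal sketch of a utility class exposing a static encode method could look like the following; the hashing logic itself is omitted and only the structure is shown:

Ext.define('Packt.util.MD5', {
    statics: {
        // encode receives a string and returns its MD5 hash;
        // the real hashing algorithm is omitted in this sketch
        encode: function(value) {
            var hash = '';
            // ... MD5 computation over value goes here ...
            return hash;
        }
    }
});

Because encode lives inside the statics block, it can be called directly on the class, for example Packt.util.MD5.encode('secret'), without creating an instance first.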
So before Ext.Ajax.request, we will add the following code:

pass = Packt.util.MD5.encode(pass);

We must not forget to add the Packt.util.MD5 class to the controller's requires declaration (the requires declaration goes right after the extend declaration):

requires: [
    'Packt.util.MD5'
],

Now, if we try to run the code again and check the XHR requests on the Net tab, we will have the following output: The password is encrypted and it is much safer now.
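Putting together the pieces we have implemented in this section, a minimal sketch of what the complete onButtonClickSubmit method might look like is shown next; the success and failure callbacks are placeholders only, since handling the server response is not covered here:

onButtonClickSubmit: function(button, e, options) {
    var formPanel = button.up('form'),
        login = button.up('login'),
        user = formPanel.down('textfield[name=user]').getValue(),
        pass = formPanel.down('textfield[name=password]').getValue();

    if (formPanel.getForm().isValid()) {
        pass = Packt.util.MD5.encode(pass); // encrypt before sending

        Ext.Ajax.request({
            url: 'php/login.php',
            params: {
                user: user,
                password: pass
            },
            success: function(response) {
                // placeholder: on successful authentication we could,
                // for example, hide the Login window (login.close())
            },
            failure: function(response) {
                // placeholder: display an error message to the user
            }
        });
    }
}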


Implementing Document Management

Packt
17 Jul 2013
19 min read
(For more resources related to this topic, see here.) Managing spaces A space in Alfresco is nothing but a folder, which contains content as well as sub-spaces. Space users are the users invited to a space to perform specific actions, such as editing content, adding content, discussing a particular document, and so on. The exact capability a given user has within a space is a function of their role or rights. Consider the capability of creating a sub-space. By default, to create a sub-space, one of the following must apply: The user is the administrator of the system The user has been granted the Contributor role. The user has been granted the Coordinator role. The user has been granted the Collaborator role. Similarly, to edit space properties, a user will need to be the administrator or be granted a role that gives them rights to edit the space. These roles include Editor, Collaborator, and Coordinator.  Space is a smart folder Space is a folder with additional features such as security, business rules, workflow, notifications, local search, and special views. These additional features which make a space a smart folder are explained as follows: Space security: You can define security at the space level. You can specify a user or a group of users, who may perform certain actions on content in a space. For example, on the Marketing Communications space in intranet, you can specify that only users of the marketing group can add the content and others can only see the content. Space business rules: Business rules, such as transforming content from Microsoft Word to Adobe PDF and sending notifications when content gets into a space can be defined at space level. Space workflow: You can define and manage content workflow on a space. Typically, you will create a space for the content to be reviewed, and a space for approved content. You will create various spaces for dealing with the different stages the work flows through, and Alfresco will manage the movement of the content between those spaces. Space events: Alfresco triggers events when content gets into a space, or when content goes out of a space, or when content is modified within a space. You can capture such events at space level and trigger certain actions, such as sending e-mail notifications to certain users. Space aspects: Aspects are additional properties and behavior, which could be added to the content, based on the space in which it resides. For example, you can define a business rule to add customer details to all the customer contract documents in your intranet's Sales space. Space search: Alfresco search can be limited to a space. For example, if you create a space called Marketing, you can limit the search for documents within the Marketing space, instead of searching the entire site. Space syndication: Space content can be syndicated by applying RSS feed scripts on a space. You can apply RSS feeds on your News space, so that other applications and websites can subscribe for news updates. Space content: Content in a space can be versioned, locked, checked-in and checked-out, and managed. You can specify certain documents in a space to be versioned and others not. Space network folder: Space can be mapped to a network drive on your local machine, enabling you to work with the content locally. For example, using CIFS interface, space can be mapped to the Windows network folder. Space dashboard view: Content in a space can be aggregated and presented using special dashboard views. 
For example, the Company Policies space can list all the latest policy documents which are updated for the past one month or so. You can create different views for Sales, Marketing and Finance departmental spaces. Importance of space hierarchy Like regular folders, a space can have child spaces (called sub-spaces) and sub-spaces can further have sub-spaces of their own. There is no limitation on the number of hierarchical levels. However, the space hierarchy is very important for all the reasons specified above in the previous section. Any business rule and security defined at a space is applicable to all the content and sub-spaces underlying that space by default. Use the created system users, groups, and spaces for various departments as per the example. Your space hierarchy should look similar to the following screenshot: A space in Alfresco enables you to define various business rules, a dashboard view, properties, workflow, and security for the content belonging to each department. You can decentralize the management of your content by giving access to departments at individual space levels. The example of the intranet space should contain sub-spaces, as shown in the preceding screenshot. If you have not already created spaces, you must do it now by logging in as administrator. Also, it is very important to set security (by inviting groups of users to these spaces). Editing a space Using a web client, you can edit the spaces you have added previously. Note that you need to have edit permissions on the spaces to edit them. Editing space properties Every space listed will have clickable actions, as shown in the following screenshot: These clickable actions will be dynamically generated for each space based on the current user's permissions on that space. If you have copy permission on a space, you will notice the copy icon as a clickable action for that space. On clicking the View Details action icon, the detailed view of a space will be displayed, as shown in the next screenshot: The detailed view page of a space allows you to select a dashboard view for viewing and editing existing space properties, to categorize the space, to set business rules, and to run various actions on the space, as shown in the preceding screenshot. To edit space properties, click on the Edit Space Properties icon, shown in the preceding screenshot. You can change the name of the space and other properties as needed. Deleting space and its contents From the list of space actions, you can click on the Delete action to delete the space. You need to be very careful while deleting a space as all the business rules, sub-spaces, and the entire content within the space will also be deleted. Moving or copying space by using the clipboard From the list of space actions, you can click on the Cut action to move a space to the clipboard. Then you can navigate to any space hierarchy, assuming that you have the required permissions to do so, and paste this particular space, as required. Similarly, you can use the Copy action to copy the space to some other space hierarchy. This is useful if you have an existing space structure (such as a marketing project or engineering project), and you would like to replicate it along with the data it contains. The copied or moved space will be identical in all aspects to the original (source) space. When you copy a space, the space properties, categorization, business rules, space users, entire content within the space, and all sub-spaces along with their content will also be copied. 
Creating a shortcut to a space for quick access If you need to frequently access a space, you can create a shortcut (similar to the Favorite option in Internet browsers) to that space, in order to reach the space in just one click. From the list of space actions, you can click on the Create Shortcut action to create a shortcut to the existing space. Shortcuts are listed in the left-hand side shelf. Consider a scenario where after creating the shortcut, the source space is deleted. The shortcuts are not automatically removed as there is a possibility for the user to retrieve the deleted space. What will happen when you click on that shortcut link in the Shelf? If the source space is not found (deleted by user), then the shortcut will be removed with an appropriate error message. Choosing a default view for your space There are four different out-of-the-box options available (as shown in the screenshot overleaf). These options support the display of the space's information: Details View: This option provides listings of sub-spaces and content, in horizontal rows. Icon View: This option provides a title, description, timestamp, and action menus for each sub-space and content item present in the current space. Browse View: Similar to the preceding option, this option provides title, description, and list of sub-spaces for each space. Dashboard View: This option is disabled and appears in gray. This is because you have not enabled the dashboard view for this space. In order to enable dashboard view for a space, you need to select a dashboard view (Refer to the icon shown in the preceding screenshot). Sample space structure for a marketing project Let us say you are launching a new marketing project called Product XYZ Launch. Go to the Company Home | Intranet | Marketing Communications space and create a new space called Product XYZ Launch and create various sub-spaces as needed. You can create your own space structure within the marketing project space to manage content. For example, you can have a space called 02_Drafts to keep all the draft marketing documents and so on. Managing content Content could be of any type, as mentioned at the start of this article. By using the Alfresco web client application, you can add and modify content and its properties. You can categorize content, lock content for safe editing, and can maintain several versions of the content. You can delete content, and you can also recover the deleted content. This section uses the space you have already created as a part of your Intranet sample application. As a part of sample application, you will manage content in the Intranet | Marketing Communications space. Because you have secured this space earlier, only the administrator (admin) and users belonging to the Marketing group (Peter Marketing and Harish Marketing) can add content in this space. You can log in as Peter Marketing to manage content in this space. Creating content A web client provides two different interfaces for adding content. One can be used to create inline editable content, such as HTML, text, and XML, and the other can be used to add binary content, such Microsoft office files and scanned images. You need to have either administrator, contributor, collaborator, or coordinator roles on a space to create content within that space.  Creating text documents HTML, text, and XML To create an HTML file in a space, follow these steps: Ensure that you are in the Intranet | Marketing Communications | Product XYZ Launch | 02_Drafts space. 
On the header, click on Create | Create Content. The first pane of the Create Content wizard appears. You can track your progress through the wizard from the list of steps at the left of the pane. Provide the name of the HTML file, select HTML as the Content Type, and click on the Next button. The Enter Content pane of the wizard appears, as shown in the next screenshot. Note that Enter Content is now highlighted in the list of steps at the left of the pane: You can see that there is a comprehensive set of tools to help you format your HTML document. Enter some text, using some of the formatting features. If you know HTML, you can also use the HTML editor by clicking on the HTML icon. The HTML source editor is displayed. Once you update the HTML content, click on the Update button to return to the Enter Content pane in the wizard, with the contents updated. After the content is entered and edited in the Enter Content pane, click on Finish. You will see the Modify Content Properties screen, which can be used to update the metadata associated with the content. Give a filename with .html as the extension. Also, you will notice that the Inline Editing checkbox is selected by default. Once you are done with editing the properties, click on the OK button to return to the 02_Drafts space, with your newly created file inserted. You can launch the newly created HTML file by clicking on it. Your browser launches most of the common files, such as HTML, text, and PDF. If the browser cannot recognize the file, you will be prompted with the Windows dialog box containing the list of applications, from which you must choose an application. This is the normal behavior if you try to launch a file on any Internet page. Uploading binary files – Word, PDF, Flash, Image, and Media Using a web client, you can upload content from your hard drive. Choose a file from your hard disk that is not an HTML or text file. I chose Alfresco_CIGNEX.docx from my hard disk for the sample application. Ensure that you are in the Intranet | Marketing Communications | Product XYZ Launch | 02_Drafts space. To upload a binary file in a space, follow these steps: In the space header, click on the Add Content link. The Add Content dialog appears. To specify the file that you want to upload, click Browse. In the File Upload dialog box, browse to the file that you want to upload. Click Open. Alfresco inserts the full path name of the selected file in the Location textbox. Click on the Upload button to upload the file from your hard disk to the Alfresco repository. A message informs you that your upload was successful, as shown in the following screenshot. Click OK to confirm. The Modify Content Properties dialog appears. Verify the pre-populated properties and add information in the textboxes. Click OK to save and return to the 02_Drafts space. The file that you uploaded appears in the Content Items pane. Alfresco extracts the file size from the properties of the disk file, and includes the value in the size row. Editing content You can edit the content in Alfresco in three different ways: by using the Edit Online, Edit Offline, and Update actions. Note that you need to have edit permissions on the content to edit it. Online editing of HTML, text, and XML HTML files and plain text files can be created and edited online. If you have edit access to a file, you will notice a small pencil (Edit Online) icon, as shown in the following screenshot: Clicking on the pencil icon will open the file in its editor.
Each file type is edited in its own WYSIWYG editor. Once you select to edit online, a working copy of the file will be created for editing, whereas the original file will be locked, as shown in the next screenshot. The working copy can be edited further as needed by clicking on the Edit Online button. Once you are done with editing, you can commit all the changes to the original document by clicking on the Done Editing icon. If for some reason you decide to cancel editing of a document and discard any changes, you can do that by clicking on the Cancel Editing button given below. If you cancel editing of a document, the associated working copy will be deleted and all changes to it since it was checked out will be lost. The working copy can be edited by any user who has edit access to the document or the folder containing the document. For example, if user1 created the working copy and user2 has edit access to the document, then both user1 and user2 can edit the working copy. Consider a scenario where user1 and user2 are editing the working copy simultaneously. If user1 commits the changes first, then the edits done by user2 will be lost. Hence, it is important to follow best practices in editing the working copy. Some of these best practices are listed here for your reference: securing the edit access to the working copy to avoid multiple users simultaneously editing the file; saving the working copy after each edit to avoid losing the work done; allowing only the owner of the document to edit the working copy (if others need to edit, they can claim the ownership); and triggering a workflow on the working copy to confirm the changes before committing. Offline editing of files If you wish to download the files to your local machine, edit them locally, and then upload the updated version to Alfresco, then you might consider using the Edit Offline option (pencil icon). Once you click on the Edit Offline button, the original file will be locked automatically and a working copy of the file will be created for download. Then you will get an option to save the working copy of the document locally on your laptop or personal computer. If you don't want to automatically download the files for offline editing, you can turn off this feature. In order to achieve this, click on the User Profile icon in the top menu, and uncheck the option for Offline Editing, as shown here: The working copy can be updated by clicking on the Upload New Version button. Once you have finished editing the file, you can commit all the changes to the original document by clicking on the Done Editing icon. Or you can cancel all the changes by clicking on the Cancel Editing button. Uploading updated content If you have edit access to a binary file, you will notice the Update action icon in the drop-down list for the More actions link. Upon clicking on the Update icon, the Update pane opens. Click on the Browse button to upload the updated version of the document from your hard disk. It is always a good practice to check out the document and update the working copy rather than directly updating the document. Checking the file out avoids conflicting updates by locking the document, as explained in the previous section. Content actions Content will have clickable actions, as shown in the upcoming screenshot. These clickable actions (icons) will be dynamically generated for a content item based on the current user's permissions for that content.
For example, if you have copy permission for the content, you will notice the Copy icon as a clickable action for that content. Deleting content Click on the Delete action, from the list of content actions, to delete the content. Please note that when content is deleted, all the previous versions of that content will also be deleted. Moving or copying content using the clipboard From the list of content actions, as shown in the preceding screenshot, you can click on the Cut action to move content to the clipboard. Then, you can navigate to any space hierarchy and paste this particular content as required. Similarly, you can use the Copy action to copy the content to another space. Creating a shortcut to the content for quick access If you have to access a particular content very frequently, you can create a shortcut (similar to the way you can with Internet and Windows browser's Favorite option) to that content, in order to reach the content in one click. From the list of content actions, as shown in the preceding screenshot, you can click on the Create Shortcut action to create a shortcut to the existing content. Shortcuts are listed in the left-hand side Shelf. Managing content properties Every content item in Alfresco will have properties associated with it. Refer to the preceding screenshot to see the list of properties, such as Title, Description, Author, Size, and Creation Date. These properties are associated with the actual content file, named Alfresco_CIGNEX.docx. The content properties are stored in a relational database and are searchable using Advanced Search options. What is Content Metadata? Content properties are also known as Content Metadata. Metadata is structured data, which describes the characteristics of the content. It shares many similar characteristics with the cataloguing that takes place in libraries. The term "Meta" derives from the Greek word denoting a nature of a higher order or more fundamental kind. A metadata record consists of a number of predefined elements representing specific attributes of content, and each element can have one or more values. Metadata is a systematic method for describing resources, and thereby improving access to them. If access to the content will be required, then it should be described using metadata, so as to maximize the ability to locate it. Metadata provides the essential link between the information creator and the information user. While the primary aim of metadata is to improve resource discovery, metadata sets are also being developed for other reasons, including: Administrative control Security Management information Content rating Rights management Metadata extractors Typically, in most of the content management systems, once you upload the content file, you need to add the metadata (properties), such as title, description, and keywords to the content manually. Most of the content, such as Microsoft Office documents, media files, and PDF documents contain properties within the file itself. Hence, it is double the effort, having to enter those values again in the content management system along with the document. Alfresco provides built-in metadata extractors for popular document types to extract the standard metadata values from a document and populate the values automatically. This is very useful if you are uploading the documents through FTP, CIFS, or WebDAV interface, where you do not have to enter the properties manually, as Alfresco will transfer the document properties automatically. 
Editing metadata To edit metadata, you need to click the Edit Metadata icon () in content details view. Refer the Edit Metadata icon shown in the screenshot, which shows a detailed view of the Alfresco_CIGNEX.docx file. You can update the metadata values, such as Name and Description for your content items. However, certain metadata values, such as Creator, Created Date, Modifier, and Modified Date are read-only and you cannot change them. Certain properties, such as Modifier and Modified Date will be updated by Alfresco automatically, whenever the content is updated. Adding additional properties Additional properties can be added to the content in two ways. One way is to extend the data model and define more properties in a content type.  The other way is to dynamically attach the properties and behavior through Aspects. By using aspects, you can add additional properties, such as Effectivity, Dublin Core Metadata, and Thumbnailable, to the content. 

Getting Started with Adobe Premiere Pro CS6 Hotshot

Packt
11 Jul 2013
14 min read
(For more resources related to this topic, see here.) Getting the story right! This is basic housekeeping and ignoring it is like making your own editing life much more frustrating. So take a deep breath, think of calm blue oceans, and begin by getting this project organized. First you need to set the Timeline correctly and then you will create a short storyboard of the interview; again you will do this by focusing on the beginning, middle, and end of the story. Always start this way as a good story needs these elements to make sense. For frame-accurate editing it's advisable to use the keyboard as much as possible, although some actions will need to be performed with the mouse. Towards the end of this task you will cover some new ground as you add and expand Timeline tracks in preparation for the tasks ahead. Prepare for Lift Off Once you have completed all the preparations detailed in the Mission Checklist section, you are ready to go. Launch Premiere Pro CS6 in the usual way and then proceed to the first task. Engage Thrusters First you will open the project template, save it as a new file, and then create a three-clip sequence; the rough assembly of your story. Once done, perform the following steps: When the Recent Projects splash screen appears, select Hotshots Template – Montage. Wait for the project to finish loading and save this as Hotshots – Interview Project. Close any sequences open on the Timeline. Select Editing Optimized Workspace. Select the Project panel and open the Video bin without creating a separate window. If you would like Premiere Pro to always open a bin without creating a separate window, select Edit | Preferences | General from the menu. When the General Options window displays, locate the Bins option area and change the Double-Click option to Open in Place. Import all eight video files into the Video folder inside the Project 3 folder. Create a new sequence. Pick any settings at random, you will correct this in the next step. Rename the sequence as Project 3. Match the Timeline settings with any clip from the Video bin, and then delete the clip from the Timeline. Set the Project panel as the active panel and switch to List View if it is not already displayed. Create the basic elements of a short story for this scene using only three of the available clips in the Video bin. To do this, hold down the Ctrl or command key and click on the clips named ahead. Make sure you click on them in the same order as they are presented here: Intro_Shot.avi Two_Shot.avi Exit_Shot.avi Ensure the Timeline indicator is at the start of the Timeline and then click on the Automate to Sequence icon. When the Automate To Sequence window appears, change Ordering to Selection Order and leave Placement as the default (Sequentially). Uncheck the Apply Default Audio Transition , Apply Default Video Transition, and Ignore Audio checkboxes. Click on OK or press Enter on the keyboard to complete this action. Right-click on the Video 1 track and select Add Tracks from the context menu. When the Add Tracks window appears, set the number of video tracks to be added as 2 and the number of audio tracks to be added as 0. Click on OK or press Enter to confirm these changes. Dial open the Audio 1 track (hint – small triangle next to Audio 1), then expand the Audio 1 track by placing the cursor at the bottom of the Audio 1 area, and clicking on it, and dragging it downwards. Stop before the Master audio track disappears below the bottom of the Timeline panel. 
The Master audio track is used to control the output of all the audio tracks present on the Timeline; this is especially useful when you come to prepare your timeline for exporting to DVD. The Master audio track also allows you to view the left and right audio channels of your project. More details on the use of the Master audio track can be found in the Premiere Pro reference guide, which can be downloaded from http://helpx.adobe.com/pdf/premiere_pro_reference.pdf. Make sure the Timeline panel is active and zoom in to show all the clips present (hint – press backslash). You should end this section with a Timeline that looks something like the following screenshot. Save your project (Press Ctrl + S or command + S) before moving on to the next task. Objective Complete - Mini Debriefing How did you do? Review the shortcuts listed next. Did you remember them all? In this task you should have automatically matched up the Timeline to the clips with one drag-and-drop, plus a delete. You should have then sent three clips from the Project panel to the Timeline using the Automate to Sequence function. Finally you should have added two new video tracks and expanded the Audio 1 track. Keyboard shortcuts covered in this task are as follows: (backslash): Zoom the Timeline to show all populated clips Ctrl or command + double-click: Open bin without creating a separate Project panel (also see the tip after step 3 in the Engage Thrusters section) Ctrl or command + N: Create a new sequence Ctrl or command + (backslash): Create new bin in the Project panel Ctrl or command + I: Open the Import window Shift + 1: Set the Project panel as active Shift + 3: Set Timeline as active Classified Intel In this project, the Automate to Timeline function is being used to create a rough assembly of three clips. These are placed on the Timeline in the order that you clicked on them in the project bin. This is known as the selection order and allows the Automate to Timeline function to ignore the clips-relative location in the project bin. This is not a practical work flow if you have too many clips in your Project panel (how would you remember the selection order of twenty clips?). However, for a small number of clips, this is a practical workflow to quickly and easily send a rough draft of your story to the Timeline in just a few clicks. If you remember nothing else from this book, always remember how to correctly use Automate To Timeline! Extracting audio fat Raw material from every interview ever filmed will have lulls and pauses, and some stuttering. People aren't perfect and time spent trying to get lines and timing just right can lead to an unfortunate waste of filming time. As this performance is not live, you, the all-seeing editor, have the power to cut those distracting lulls and pauses, keeping the pace on beat and audience's attention on track. In this task you will move through the Timeline, cutting out some of the audio fat using Premiere Pro's Extract function, and to get this frame accurate, you will use as many keyboard shortcuts as possible. Engage Thrusters You will now use the Extract function to remove "dead" audio areas from the Timeline. Perform the following steps: Set the Timeline panel as active then play the timeline back by pressing the L key once. Make a mental note of the silences that occur in the first clip (Intro_Shot.avi). Return the Timeline indicator to the start of the Timeline using the Home key. Zoom in on the Timeline by pressing the + (plus) key on the main keyboard area. 
Do this until your Timeline looks something like the screenshot just after the following tip: To zoom in and out of the Timeline use the + (plus) and - (minus) keys in the main keyboard area, not the ones in the number pad area. Pressing the plus or minus key in the number area allows you to enter an exact number of frames into whichever tool is currently active. You should be able to clearly see the first area of silence starting at around 06;09 on the Timeline. Use the J, K, and L keyboard shortcuts to place the Timeline indicator at this point. Press the I key to set an In point here, then move the Timeline indicator to the end of the silence (around 08;17), and press the O key to set an Out point. Press the # (hash) key on your keyboard to remove the marked section of silence using Premiere Pro's Extract function. Important information on Sync Locking tracks The above step will only work if you have the Sync Lock icons toggled on for both the Video 1 and Audio 1 tracks. The Sync Lock icon controls which Timeline tracks will be altered when using a function such as Extract. For example, if the Sync Lock icon was toggled off for the Audio 1 track, then only the video would be extracted, which is counterproductive to what you are trying to achieve in this task! By default each new project should open with the Sync Lock icon toggled on for all video and audio tracks that already exist on the Timeline, and those added at a later point in the project. More information on Sync Lock can be found in the Premiere Pro reference guide (tinyurl.com/cz5fvh9). Repeat steps 5 and 6 to remove silences from the following Timeline areas (you should judge these points for yourself rather than slavishly following the suggestions given next): i. Set In point at 07;11 and Out point at 08;10. ii. Press # (hash). iii. Set In point at 11;05 and Out point at 12;13. iv. Press # (hash). Play back the Timeline to make sure you haven't extracted away too much audio and clipped the end of a sentence. Use the Trim tool to restore the full sentence. You may have spotted other silences on the Timeline; for the moment leave them alone. You will deal with these using other methods later in this project. Save the project before moving on to the next section. Objective Complete - Mini Debriefing At the end of this section you should have successfully removed three areas of silence from the Intro_Shot.avi clip. You did this using the Extract function, an elegant way of removing unwanted areas from your clips. You may also have refreshed your working knowledge of the Trim tool. If this still feels a little alien to you, don't worry, you will have a chance to practice trimming skills later in this project. Classified Intel Extract is another cunningly simple function that does exactly what it says; it extracts a section of footage from the Timeline, and then closes the gap created by this action. In one step it replicates the razor cut and ripple delete. Creating a J-cut (away) One of the most common video techniques used in interviews and documentaries (not to mention a number of films) is called a J-cut. This describes cutting away some of the video, while leaving the audio beneath intact. The deleted video area is then replaced with alternative footage. This creates a voice-over effect that allows for a seamless transfer between the alternative viewpoints and the original speaker.
In this task you will create a J-cut by replacing the video at the start of Intro_Shot.avi, leaving the voice of the newsperson and replacing his image with cutaway shots of what he is describing. You will make full use of four-point edits. Engage Thrusters Create J-cuts and cutaway shots using work flows you should now be familiar with. Perform the following steps to do so: Send the Cutaways_1.avi clip from the Project panel to the Source Monitor. In the Source Monitor, create an In point at 00;00 and an Out point just before the shot changes (around 04;24). Switch to the Timeline and send the Timeline indicator to the start of the Timeline (00;00). Create an In point here. Use a keyboard shortcut of your choice to identify the point just before the newsperson mentions the "Local village shop". (hint – roughly at 06;09). Create an Out point here. You want to create a J-cut, which means protecting the audio track that is already on the Timeline. To do this click once on the Audio 1 track header so it turns dark gray. Switch back to the Source Monitor and send the marked Cutaways_1.avi clip to the Timeline using the Overwrite function (hint – press the '.' (period) key). When the Fit Clip window appears, select Change Clip Speed (Fit to Fill), and click on OK or press Enter on the keyboard. The village scene cutaway shot should now appear on Video 1, but Audio 1 should retain the newsperson's dialog. His inserted village scene clip will have also slowed slightly to match what's being said by the newsperson. Repeat steps 2 to 7 to place the Cutaways_1.avi clip that shows the shot of the village shop, to match the village church and the village pub on the Timeline with the newsperson's dialog. The following are some suggestions on times, but try to do this step first of all without looking too closely at them: For the village shop cutaway, set the Source Monitor In point at 05;00 and Out point at 09;24. Set the Timeline In point at 06;10 and Out point at 07;13. Switch back to Source Monitor. Send the clip in the Overwrite mode and set Change Clip Speed to Fit to Fill. For the village church cutaway, set the Source Monitor In point at 10;00 and Out point at 14;24. Set the Timeline In point at 07;14 and Out point at 09;03. Switch back to Source Monitor. Send the clip in the Overwrite mode and set Change Clip Speed to Fit to Fill. For the pub cutaway, send Reconstruction_1.avi to the Source Monitor. Set the Source Monitor In point at 04;11 and Out point at 04;17. Set the Timeline In point at 09;04 and Out point at 12;00. Switch back to Source Monitor. Send the clip in the Overwrite mode and set Change Clip Speed to Fit to Fill. The last cutaway shot here is part of the reconstruction reel and has been used because your camera person was unable (or forgot) to film a cutaway shot of the pub. This does sometimes happen and then it's down to you, the editor in charge, to get the piece on air with as few errors as possible. To do this you may find yourself scavenging footage from any of the other clips. In this case you have used just seven frames of Reconstruction_1.avi, but using the Premiere Pro feature, Fit to Fill , you are able to match the clip to the duration of the dialogue, saving your camera person from a production meeting dressing down! Review your edit decisions and use the Trim tool or the Undo command to alter edit points that you feel need adjustments. 
As always, being an editor is about experimentation, so don't be afraid to try something out of the box; you never know where it might lead. Once you are happy with your edit decisions, render any clips on the Timeline that display a red line above them. You should end up with a Timeline that looks something like the following screenshot; save your project before moving on to the next section. Objective Complete - Mini Debriefing In this task you have learned how to piece together cutaway shots to match the voice-over, creating an effective J-cut, as seen in the way the dialog seamlessly blends between the pub cutaway shot and the news reporter finishing his last sentence. You also learned how to scavenge source material from other reels in order to find the necessary shot to match the dialog. Classified Intel The last set of time suggestions given in this task allows the pub cutaway shot to run over the top of the newsperson saying "And now, much to the surprise…". Whether or not this cutaway should run over the dialog is an editorial decision for you to make. It is simply a matter of taste, but you are the editor and the final decision is yours! In this article, we learned how to extract audio fat and create a J-cut.
Understanding Express Routes

Packt
10 Jul 2013
10 min read
(For more resources related to this topic, see here.) What are Routes? Routes are URL schema, which describe the interfaces for making requests to your web app. Combining an HTTP request method (a.k.a. HTTP verb) and a path pattern, you define URLs in your app. Each route has an associated route handler, which does the job of performing any action in the app and sending the HTTP response. Routes are defined using an HTTP verb and a path pattern. Any request to the server that matches a route definition is routed to the associated route handler. Route handlers are middleware functions, which can send the HTTP response or pass on the request to the next middleware in line. They may be defined in the app file or loaded via a Node module. A quick introduction to HTTP verbs The HTTP protocol recommends various methods of making requests to a Web server. These methods are known as HTTP verbs. You may already be familiar with the GET and the POST methods; there are more of them, about which you will learn in a short while. Express, by default, supports the following HTTP request methods that allow us to define flexible and powerful routes in the app: GET POST PUT DELETE HEAD TRACE OPTIONS CONNECT PATCH M-SEARCH NOTIFY SUBSCRIBE UNSUBSCRIBE GET, POST, PUT, DELETE, HEAD, TRACE, OPTIONS, CONNECT, and PATCH are part of the Hyper Text Transfer Protocol (HTTP) specification as drafted by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C). M-SEARCH, NOTIFY, SUBSCRIBE, and UNSUBSCRIBE are specified by the UPnP Forum. There are some obscure HTTP verbs such as LINK, UNLINK, and PURGE, which are currently not supported by Express and the underlying Node HTTP library. Routes in Express are defined using methods named after the HTTP verbs, on an instance of an Express application: app.get(), app.post(), app.put(), and so on. We will learn more about defining routes in a later section. Even though a total of 13 HTTP verbs are supported by Express, you need not use all of them in your app. In fact, for a basic website, only GET and POST are likely to be used. Revisiting the router middleware This article would be incomplete without revisiting the router middleware. The router middleware is very special middleware. While other Express middlewares are inherited from Connect, router is implemented by Express itself. This middleware is solely responsible for empowering Express with Sinatra-like routes. Connect-inherited middlewares are referred to in Express from the express object (express.favicon(), express.bodyParser(), and so on). The router middleware is referred to from the instance of the Express app (app.router)  To ensure predictability and stability, we should explicitly add router to the middleware stack: app.use(app.router); The router middleware is a middleware system of its own. The route definitions form the middlewares in this stack. Meaning, a matching route can respond with an HTTP response and end the request flow, or pass on the request to the next middleware in line. This fact will become clearer as we work with some examples in the upcoming sections. Though we won't be directly working with the router middleware, it is responsible for running the whole routing show in the background. Without the router middleware, there can be no routes and routing in Express. Defining routes for the app we know how routes and route handler callback functions look like. 
Here is an example to refresh your memory: app.get('/', function(req, res) { res.send('welcome'); }); Routes in Express are created using methods named after HTTP verbs. For instance, in the previous example, we created a route to handle GET requests to the root of the website. You have a corresponding method on the app object for all the HTTP verbs listed earlier. Let's create a sample application to see if all the HTTP verbs are actually available as methods in the app object: var http = require('http'); var express = require('express'); var app = express(); // Include the router middleware app.use(app.router); // GET request to the root URL app.get('/', function(req, res) { res.send('/ GET OK'); }); // POST request to the root URL app.post('/', function(req, res) { res.send('/ POST OK'); }); // PUT request to the root URL app.put('/', function(req, res) { res.send('/ PUT OK'); }); // PATCH request to the root URL app.patch('/', function(req, res) { res.send('/ PATCH OK'); }); // DELETE request to the root URL app.delete('/', function(req, res) { res.send('/ DELETE OK'); }); // OPTIONS request to the root URL app.options('/', function(req, res) { res.send('/ OPTIONS OK'); }); // M-SEARCH request to the root URL app['m-search']('/', function(req, res) { res.send('/ M-SEARCH OK'); }); // NOTIFY request to the root URL app.notify('/', function(req, res) { res.send('/ NOTIFY OK'); }); // SUBSCRIBE request to the root URL app.subscribe('/', function(req, res) { res.send('/ SUBSCRIBE OK'); }); // UNSUBSCRIBE request to the root URL app.unsubscribe('/', function(req, res) { res.send('/ UNSUBSCRIBE OK'); }); // Start the server http.createServer(app).listen(3000, function() { console.log('App started'); }); We did not include the HEAD method in this example, because it is best left for the underlying HTTP API to handle it, which it already does. You can always do it if you want to, but it is not recommended to mess with it, because the protocol will be broken unless you implement it as specified. The browser address bar isn't capable of making any type of request, except GET requests. To test these routes we will have to use HTML forms or specialized tools. Let's use Postman, a Google Chrome plugin for making customized requests to the server. We learned that route definition methods are based on HTTP verbs. Actually, that's not completely true, there is a method called app.all() that is not based on an HTTP verb. It is an Express-specific method for listening to requests to a route using any request method: app.all('/', function(req, res, next) { res.set('X-Catch-All', 'true'); next(); }); Place this route at the top of the route definitions in the previous example. Restart the server and load the home page. Using a browser debugger tool, you can examine the HTTP header response added to all the requests made to the home page, as shown in the following screenshot: Something similar can be achieved using a middleware. But the app.all() method makes it a lot easier when the requirement is route specified. Route identifiers So far we have been dealing exclusively with the root URL (/) of the app. Let's find out how to define routes for other parts of the app. Routes are defined only for the request path. GET query parameters are not and cannot be included in route definitions. Route identifiers can be string or regular expression objects. String-based routes are created by passing a string pattern as the first argument of the routing method. They support a limited pattern matching capability. 
The following example demonstrates how to create string-based routes: // Will match /abcd app.get('/abcd', function(req, res) { res.send('abcd'); }); // Will match /acd app.get('/ab?cd', function(req, res) { res.send('ab?cd'); }); // Will match /abbcd app.get('/ab+cd', function(req, res) { res.send('ab+cd'); }); // Will match /abxyzcd app.get('/ab*cd', function(req, res) { res.send('ab*cd'); }); // Will match /abe and /abcde app.get('/ab(cd)?e', function(req, res) { res.send('ab(cd)?e'); }); The characters ?, +, *, and () are subsets of their Regular Expression counterparts. The hyphen (-) and the dot (.) are interpreted literally by string-based route identifiers. There is another set of string-based route identifiers, which is used to specify named placeholders in the request path. Take a look at the following example: app.get('/user/:id', function(req, res) { res.send('user id: ' + req.params.id); }); app.get('/country/:country/state/:state', function(req, res) { res.send(req.params.country + ', ' + req.params.state); }); The value of the named placeholder is available in the req.params object in a property with a similar name. Named placeholders can also be used with special characters for interesting and useful effects, as shown here: app.get('/route/:from-:to', function(req, res) { res.send(req.params.from + ' to ' + req.params.to); }); app.get('/file/:name.:ext', function(req, res) { res.send(req.params.name + '.' + req.params.ext.toLowerCase()); }); The pattern-matching capability of routes can also be used with named placeholders. In the following example, we define a route that makes the format parameter optional: app.get('/feed/:format?', function(req, res) { if (req.params.format) { res.send('format: ' + req.params.format); } else { res.send('default format'); } }); Routes can be defined as regular expressions too. While not being the most straightforward approach, regular expression routes help you create very flexible and powerful route patterns. Regular expression routes are defined by passing a regular expression object as the first parameter to the routing method. Do not quote the regular expression object, or else you will get unexpected results. Using regular expressions to create routes can be best understood by taking a look at some examples. The following route will match pineapple, redapple, redaple, aaple, but not apple, and apples: app.get(/.+app?le$/, function(req, res) { res.send('/.+app?le$/'); }); The following route will match anything with an a in the route name: app.get(/a/, function(req, res) { res.send('/a/'); }); You will mostly be using string-based routes in a general web app. Use regular expression-based routes only when absolutely necessary; while being powerful, they can often be hard to debug and maintain. Order of route precedence Like in any middleware system, the route that is defined first takes precedence over other matching routes. So the ordering of routes is crucial to the behavior of an app. Let's review this fact via some examples. In the following case, http://localhost:3000/abcd will always print "abcd", even though the next route also matches the pattern: app.get('/abcd', function(req, res) { res.send('abcd'); }); app.get('/abc*', function(req, res) { res.send('abc*'); }); Reversing the order will make it print "abc*": app.get('/abc*', function(req, res) { res.send('abc*'); }); app.get('/abcd', function(req, res) { res.send('abcd'); }); The earlier matching route need not always gobble up the request. 
We can make it pass on the request to the next handler, if we want to. In the following example, even though the order remains the same, it will print "abcd" this time, with a little modification in the code. Route handler functions accept a third parameter, commonly named next, which refers to the next middleware in line. We will learn more about it in the next section: app.get('/abc*', function(req, res, next) { // If the request path is /abcd, don't handle it if (req.path == '/abcd') { next(); } else { res.send('abc*'); } }); app.get('/abcd', function(req, res) { res.send('abcd'); }); So bear in mind that the order of route definition is very important in Express. Forgetting this will cause your app to behave unpredictably. We will learn more about this behavior in the examples in the next section.
Making Your Store Look Amazing

Packt
10 Jul 2013
6 min read
(For more resources related to this topic, see here.) Looks are everything on the web. If your store doesn't look enticing and professional to your customers then everything else is a waste. This article looks at how to make your VirtueMart look stunning. There are many different approaches to creating a hot-looking store. The one that is best for you or your client will depend upon your budget and your skill set. The sections in this article will cater to all budgets and skill sets. For example, we will cover the very simple task of finding and installing a free Joomla! template or installing a VirtueMart theme. Then we will look at the pros and cons of using two different professional frameworks namely Warp and Gantry. In the middle of all this, we will also look at the stunningly versatile Artisteer design software that won't quite give you the perfect professional job but does a very fine job of letting you choose just about every aspect of your design without any CSS/coding skills. Removing the Joomla! branding at the footer With each version of Joomla! and VirtueMart being better than the last one in terms of looks and performance, it is not unheard of to launch your store with the default looks of Joomla! and VirtueMart. The least you will probably want to do is remove the Powered by Joomla!® link at the footer of your store. This will make your store appear entirely your own and perhaps have a minor benefit to SEO as well by removing the outbound link. Getting ready Log in to your Joomla! control panel. This section was tested using the Beez_20 template but should work on any template where the same message appears. We will also be using the Firefox web browser search function but again, this is almost identical in other browsers. Identify the message to be removed on the frontend of your site as shown in the following screenshot: How to do it... This is going to be nice and easy so let's get started and perform the following steps: Navigate to Extensions | Template Manager from the main Joomla! drop-down menu as shown in the following screenshot: Now click on the Templates link (it is the one next door to the Styles link) as shown in the following screenshot: Scroll down until you see Beez_20 details and Files click on it as shown in the following screenshot: Now scroll down and click on Edit main page template . Next press Ctrl + F on your keyboard to bring up the Firefox search bar and enter <div id="footer"> as your search term. Firefox will present you with the following code: Delete everything between <p> and </p> both inclusive. Click on Save & Close . How it works... Check your Joomla! home page. We now have a nice clean and empty footer. We can add Joomla! and VirtueMart modules or just leave it empty. Installing a VirtueMart template In this section we will look at how to install a theme to make your store look great with a couple of clicks. There are a few things to consider first. Is your website just a store? That is, are all your pages going to be VirtueMart pages? If the answer is yes then this is definitely the section for you. Alternatively you might just have a few shop pages in amongst an extensive Joomla! based content site. If this is the case then you might be better off installing a Joomla! template and then setting VirtueMart to use that. If this describes your situation then the next section, Installing a Joomla! template is more appropriate for you. And there is a third option as well. You have content pages and a large number of VirtueMart pages. 
In this situation some experimentation and planning is required. You will either need to choose a Joomla! template that you are happy with for everything or a Joomla! template and a VirtueMart theme which look good together. Or you could use two templates. This last scenario is covered in the Creating and installing a template with Artisteer design software section. Getting ready Find a template which is either free or paid and download the files from the template provider's site (they will be in the form of a single compressed archive) on your computer. How to do it... Installing a VirtueMart template has never been as easy as it is in VirtueMart 2. Perform the following steps for the same: Navigate to Extensions | Extension Manager from the top Joomla! menu. Click on the Browse... button in the Upload Package File area, find and select your template file as shown in the following screenshot: Click on the Upload & Install button and you are done! How it works... The VirtueMart template is now installed. Take a look at your shiny new store. Installing a Joomla! Template As there is clearly something of a supply problem when it comes to VirtueMart-specific free templates, this section will look at installing a regular Joomla! template and using it in your VirtueMart store. Installing a Joomla! template is a very easy thing to do. But if you have never done it before read on. Getting ready Check the resources appendix for a choice of places to get free and paid templates. Download your chosen template on your desktop. It should be in the form of a ZIP file. Log in to your Joomla! admin area and read on. How to do it... This simple section is in two steps. First we upload the template then we set it as the active template. Select Extensions | Extension Manager from the top Joomla! menu. Click on the Browse... button in the Upload Package File area, find and select your template file as shown in the following screenshot: Click on the Upload & Install button. Now select Extensions | Template Manager . Click on the checkbox of the template you just installed and then click on Make Default . How it works... So what we did was to install the template through the usual Joomla! installation mechanism and once the template was installed we simply told Joomla! to use it. That's it. You can now go and assign all your modules to your new template.
Improving the Snake Game

Packt
08 Jul 2013
41 min read
The game Two new features were added to this second version of the game. First, we now keep track of the highest score achieved by a player, saving it through local storage. Even if the player closes the browser application, or turns off the computer, that value will still be safely stored in the player's hard drive, and will be loaded when the game starts again. Second, we use session storage to save the game state every time the player eats a fruit in the game, and whenever the player kills the snake. This is used as an extra touch of awesomeness, where after the player loses, we display a snapshot of all the individual level ups the player achieved in that game, as well as a snapshot of when the player hit a wall or run the snake into itself, as shown in the following screenshot: At the end of each game, an image is shown of each moment when the player acquired a level up, as well as a snapshot of when the player eventually died. This images are created through the canvas API (calling the toDataURL function), and the data that composes each image is saved throughout the game, and stored using the web storage API. With a feature such as this in place, we make the game much more fun, and potentially much more social. Imagine how powerful it would be if the player could post, not only his or her high score to their favorite social network website, but also pictures of their game at key moments. Of course, only the foundation of this feature is implemented in this article (in other words, we only take the snapshots of these critical moments in the game). Adding the actual functionality to send that data to a real social network application is left as an exercise for the reader. A general description and demonstration of each of the APIs used in the game are given in the following sections. For an explanation of how each piece of functionality was incorporated into the final game, look at the code section. For the complete source code for this game, check out the book's page from Packt Publishing's website. Web messaging Web messaging allows us to communicate with other HTML document instances, even if they're not in the same domain. For example, suppose our snake game, hosted at http://snake.fun-html5-games.com, is embedded into a social website through iframe (let's say this social website is hosted at http://www.awesome-html5-games.net). When the player achieves a new high score, we want to post that data from the snake game directly into the host page (the page with iframe from which the game is loaded). With the web messaging API, this can be done natively, without the need for any server-side scripting whatsoever. Before web messaging, documents were not allowed to communicate with documents in other domains mostly because of security. Of course, web applications can still be vulnerable to malicious external applications if we just blindly take messages from any application. However, the web messaging API provides some solid security measures to protect the page receiving the message. For example, we can specify the domains that the message is going to, so that other domains cannot intercept the message. On the receiving end, we can also check the origin from whence the message came, thus ignoring messages from any untrusted domains. Finally, the DOM is never directly exposed through this API, providing yet another layer of security. 
How to use it Similar to web workers, the way in which two or more HTML contexts can communicate through the web messaging API is by registering an event handler for the on-message event, and sending messages out by using the postMessage function: code1 The first step to using the web messaging API is to get a reference to some document with which we wish to communicate. This can be done by getting the contentWindow property of an iframe reference, or by opening a new window and holding on to that reference. The document that holds this reference is called the parent document, since this is where the communication is initiated. Although a child window can communicate with its parent, this can only happen when and for as long as this relationship holds true. In other words, a window cannot communicate with just any window; it needs a reference to it, either through a parent-child relationship, or through a child-parent relationship. Once the child window has been referenced, the parent can fire messages to its children through the postMessage function. Of course, if the child window hasn't defined a callback function to capture and process the incoming messages, there is little purpose in sending those messages in the first place. Still, the parent has no way of knowing if a child window has defined a callback to process incoming messages, so the best we can do is assume (and hope) that the child window is ready to receive our messages. The parameters used in the postMessage function are fairly similar to the version used in web workers. That is, any JavaScript value can be sent (numbers, strings, Boolean values, object literals, and arrays, including typed arrays). If a function is sent as the first parameter of postMessage (either directly, or as part of an object), the browser will raise a DATA_CLONE_ERR: DOM Exception 25 error. The second parameter is a string, and represents the domain that we allow our message to be received by. This can be an absolute domain, a forward slash (representing the same origin domain as the document sending the message), or a wildcard character (*), representing any domain. If the message is received by a domain that doesn't match the second parameter in postMessage, the entire message fails. When receiving the message, the child window first registers a callback on the message event. This function is passed a MessageEvent object, which contains the following attributes: event.data: It returns the data of the message event.origin: It returns the origin of the message, for server-sent events and cross-document messaging event.lastEventId: It returns the last event ID string, for server-sent events event.source: It returns the WindowProxy of the source window, for cross-document messaging event.ports: It returns the MessagePort array sent with the message, for cross-document messaging and channel messaging Source: http://www.w3.org/TR/webmessaging/#messageevent As an example of the sort of things we could use this feature for in the real world, and in terms of game development, imagine being able to play our snake game, but where the snake moves through a couple of windows. How creative is that?! Of course, in terms of being practical, this may not be the best way to play a game, but I find it hard to argue with the fact that this would indeed be a very unique and engaging presentation of an otherwise common game. With the help of the web messaging API, we can set up a snake, where the snake is not constrained to a single window. 
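The listing referenced above (code1) is not reproduced in this excerpt, so the following is a minimal sketch of the round trip just described, using the two domains from the earlier example; the iframe id and the message payload are assumptions made purely for illustration:

// Parent document, served from http://www.awesome-html5-games.net, which embeds the game
var gameWindow = document.getElementById('snake-frame').contentWindow;

// Second argument: only deliver the message if the iframe really came from this origin
gameWindow.postMessage({ type: 'new-high-score', score: 1200 },
    'http://snake.fun-html5-games.com');

// Child document (the game itself), listening for messages from the host page
window.addEventListener('message', function (event) {
    // Ignore anything that did not come from the one domain we trust
    if (event.origin !== 'http://www.awesome-html5-games.net') {
        return;
    }
    console.log('Host page says:', event.data);
}, false);

The same pattern, with a shouldDraw flag added to the payload, is what drives the multi-window version of the game described next.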
Imagine the possibilities when we combine this clever API with another very powerful HTML5 feature, which just happens to lend itself incredibly well to games – web sockets. By combining web messaging with web sockets, we could play a game of snake, not only across multiple windows, but also with multiple players at the same time. Perhaps each player would control the snake when it got inside a given window, and all players could see all windows at the same time, even though they are each using a separate computer. The possibilities are endless, really. Surprisingly, the code used to set up a multi-window port of snake is incredibly simple. The basic setup is the same, we have a snake that only moves in one direction at a time. We also have one or more windows where the snake can go. If we store each window in an array, we can calculate which screen the snake needs to be rendered in, given its current position. Finding out which screen the snake is supposed to be in, given its world position, is the trickiest part. For example, imagine that each window is 200 pixels wide. Now, suppose there are three windows opened. Each window's canvas is only 200 pixels wide as well, so when the snake is at position 350, it would be printed too far to the right in all of the canvases. So what we need to do is first determine the total world width (canvas width multiplied by the total number of canvases), calculate which window the snake is at (position/canvas width), then convert the position from world space down to canvas space, given the canvas the snake is in. First, lets define our structures in the parent document. The code for this is as follows: code2 When this script loads, we'll need a way to create new windows, where the snake will be able to move about. This can easily be done with a button that spawns a new window when clicked, then adding that window to our array of frames, so that we can iterate through that array, and tell every window where the snake is. The code for this is as follows: code3 Now, the real magic happens in the following method. All that we'll do is update the snake's position, then tell each window where the snake is. This will be done by converting the snake's position from world coordinates to canvas coordinates (since every canvas has the same width, this is easy to do for every canvas), then telling every window where the snake should be rendered within a canvas. Since that position is valid for every window, we also tell each window individually whether or not they should render the information we're sending them. Only the window that we calculate the snake is in, is told to go ahead and render. code4 That's really all there is to it. The code that makes up all the other windows is the same for all of them. In fact, we only open a bunch of windows pointing to the exact same script. As far as each window is concerned, they are the only window opened. All they do is take a bunch of data through the messaging API, then render that data if the shouldDraw flag is set. Otherwise, they just clear their canvas, and sit tight waiting for further instructions from their parent window. code5 Web storage Before HTML5 came along, the only way web developers had to store data on the client was through cookies. While limited in scope, cookies did what they were meant to, although they had several limitations. For one thing, whenever a cookie was saved to the client, every HTTP request after that included the data for that cookie. 
This meant that the data was always explicitly exposed, and each of those HTTP requests were heavily laden with extra data that didn't belong there. This is especially inefficient when considering web applications that may need to store relatively large amounts of data. With the new web storage API, these issues have been addressed and satisfied. There are now three different options for client storage, all of which solve a different problem. Keep in mind, however, that any and all data stored in the client is still exposed to the client in plain text, and is therefore not meant for a secure storage solution. These three storage solutions are session storage, local storage, and the IndexedDB NoSQL data store. Session storage allows us to store key-value data pairs that persist until the browser is closed (in other words, until the session finishes). Local storage is similar to session storage in every way, except that the duration that the data persists is longer. Even when a session is closed, data stored in a local storage still persists. That data in local storage is only cleared when the user specifically tells the browser to do so, or when the application itself deletes data from the storage. Finally, IndexedDB is a robust data store that allows us to store custom objects (not including objects that contains functions), then query the database for those objects. Of course, with much robustness comes great complexity. Although having a dedicated NoSQL database built in right into the browser may sound exciting, but don't be fooled. While using IndexedDB can be a fascinating addition to the world of HTML, it is also by no means a trivial task for beginners. Compared to local storage and session storage, IndexedDB has somewhat of a steep learning curve, since it involves mastering some complex database concepts. As mentioned earlier, the only real difference between local storage and session storage is the fact that session storage clears itself whenever the browser closes down. Besides that, everything about the two is exactly the same. Thus, learning how to use both will be a simple experience, since learning one also means learning the other. However, knowing when to use one over the other might take a bit more thinking on your part. For best results, try to focus on the unique characteristics and needs of your own application before deciding which one to use. More importantly, realize that it is perfectly legal to use both storage systems in the same application. The key is to focus on a unique feature, and decide what storage API best suits those specific needs. Both the local storage and session storage objects are instances of the class Storage. The interface defined by the storage class, through which we can interact with these storage objects, is defined as follows (source: Web Storage W3C Candidate Recommendation, December 08, 2011, http://www.w3.org/TR/webstorage/): getItem(key): It returns the current value associated with the given key. If the given key does not exist in the list associated with the object then this method must return null. setItem(key, value): It first checks if a key/value pair with the given key already exists in the list associated with the object. If it does not, then a new key/value pair must be added to the list, with the given key and with its value set to value. If the given key does exist in the list, then it must have its value updated to value. If it couldn't set the new value, the method must throw a QuotaExceededError exception. 
(Setting could fail if, for example, the user has disabled storage for the site, or if the quota has been exceeded.) removeItem(key): It causes the key/value pair with the given key to be removed from the list associated with the object, if it exists. If no item with that key exists, the method must do nothing. clear(): It automatically causes the list associated with the object to be emptied of all key/value pairs, if there are any. If there are none, then the method must do nothing. key(n): It returns the name of the nth key in the list. The order of keys is user-agent defined, but must be consistent within an object so long as the number of keys doesn't change. (Thus, adding or removing a key may change the order of the keys, but merely changing the value of an existing key must not.) If n is greater than or equal to the number of key/value pairs in the object, then this method must return null. The supported property names on a Storage object are the keys of each key/value pair currently present in the list associated with the object. length: It returns the number of key/value pairs currently present in the list associated with the object. Local storage The local storage mechanism is accessed through a property of the global object, which on browsers is the window object. Thus, we can access the storage property explicitly through window.localStorage, or implicitly as simply localStorage. code28 Since only DOMString values are allowed to be stored in localStorage, any values other than strings are converted into a string before being stored in localStorage. That is, we can't store arrays, objects, functions, and so on in localStorage. Only plain JavaScript strings are allowed. code6 Now, while this might seem like a limitation to the storage API, this is in fact done by design. If your goal is to store complex data types for later use, localStorage wasn't necessarily designed to solve this problem. In those situations, we have a much more powerful and convenient storage solution, which we'll look at soon (that is, IndexedDB). However, there is a way to store complex data (including arrays, typed arrays, objects, and so on) in localStorage. The key lies in the wonderful JSON data format. Modern browsers have the very handy JSON object available in the global scope, where we can access two important functions, namely JSON.stringify and JSON.parse. With these two methods, we can serialize complex data, store that in localStorage, then unserialize the data retrieved from the storage, and continue using it in the application. code7 While this is a nice little trick, you will notice what can be a major limitation: JSON.stringify does not serialize functions. Also, if you pay close attention to the way that JSON.stringify works, you will realize that when we serialize an instance of a custom type (a Person object, say), the result will be a simple object literal with no constructor or prototype information. Still, given that localStorage was never intended to fill the role of object persistence (but rather, simple key-value string pairs), this should be seen as nothing more than a limited, yet very neat trick. Session storage Since the sessionStorage interface is identical to that of localStorage, there is no reason to repeat all of the information just described. For a more in-depth discussion about sessionStorage, look at the two previous sections, and replace the word "local" with "session". Everything mentioned above that applies to local storage is also true for session storage. 
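Since the two interfaces are identical, one small sketch can stand in for the missing listings (code6 and code7) and illustrate both the string-only behaviour and the JSON trick described above; the Person constructor is an invented example, not code from the game:

// Only strings survive localStorage untouched
localStorage.setItem('nickname', 'SnakeCharmer');
console.log(localStorage.getItem('nickname'));       // "SnakeCharmer"

// Anything more complex should make the round trip through JSON
function Person(name) {
    this.name = name;
    this.greet = function () { return 'Hi, ' + this.name; };
}

var player = new Person('Rodrigo');
localStorage.setItem('player', JSON.stringify(player));

var restored = JSON.parse(localStorage.getItem('player'));
console.log(restored.name);                // "Rodrigo"
console.log(typeof restored.greet);        // "undefined" - functions are not serialized
console.log(restored instanceof Person);   // false - just a plain object literal now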
Again, the only difference between the two is that any data saved on sessionStorage is erased when the session with the client ends (that is, whenever the browser is shut down). Some examples of how to use sessionStorage will be shown below. In the example, we will attempt to store a value in the sessionStorage if that value doesn't already exist. Remember, when we set a key-value pair to the storage, if that key already exists in the storage, then whatever value was associated with that key will be overwritten. If the key doesn't exist, it gets created automatically. code8 Note that we can also query the sessionStorage object for a specific key using the in operator, which returns a Boolean value shown as follows: code9 Finally, although we can check the total amount of keys in the storage through sessionStorage.length, that by itself may not be very useful if we don't know what all the different keys are. Thankfully, the sessionStorage.key function allows us to get a specific key, through which we can then get a hold of the value stored with that key. code10 Thus, we can query sessionStorage for a key at a given position, and receive the string key representing that key. Then, with the key we can get a hold of the value stored with that key. Note, however, that the order in which items are stored within the sessionStorage object is totally arbitrary. While some browsers may keep the list of stored items sorted alphabetically by key value, this is clearly specified in the HTML5 spec as a decision to be left up to browser makers. As exciting as the web storage API might seem so far, there are cases when our needs might be such that serializing and unserializing data, as we use local or session storage, might not be quite sufficient. For example, imagine we have a few hundred (or perhaps, several thousand) similar records stored in local storage (say we're storing enemy description cards that are part of an RPG game). Think about how you would do the following using local storage: Retrieve, in alphabetical order, the first five records stored Delete all records stored that contain a particular characteristic (such as an enemy that doesn't survive in water, for example) Retrieve up to three records stored that contain a particular characteristic (for example, the enemy has a Hit Point score of 42,000 or more) The point is this: any querying that we may want to make against the data stored in local storage or session storage, must be handled by our own code. In other words, we'd be spending a lot of time and effort writing code just to help us get to some data. Let alone the fact that any complex data stored in local or session storage is converted to literal objects, and any and all functions that were once part of those objects are now gone, unless we write even more code to handle some sort of custom unserializing. In case you have not guessed it by now, IndexedDB solves these and other problems very beautifully. At its heart, IndexedDB is a NoSQL database engine that allows us to store whole objects and index them for fast insertions, deletions, and retrievals. The database system also provides us with a powerful querying engine, so that we can perform very advanced computations on the data that we have persisted. The following figure shows some of the similarities between IndexedDB and a traditional relational database. In relational databases, data is stored as a group of rows within a specific table structure. 
In IndexedDB, on the other hand, data is grouped in broadly-defined buckets known as data stores. The architecture of IndexedDB is somewhat similar to the popular relational database systems used in most web development projects today. One core difference is that, whereas relational databases store data in a database, which is a collection of related tables, an IndexedDB system groups data in databases, which is a collection of data stores. While conceptually similar, in practice these two architectures are actually quite different. Note If you come from a relational database background, and the concept of databases, tables, columns, and rows makes sense to you, then you're well on your way to becoming an IndexedDB expert. As you'll see, there are some significant distinctions between both systems and methodologies. While you might be tempted to simply replace the words data store with tables, know that the difference between the two concepts extends beyond a name difference. One key feature of data stores is that they don't have any specific schema associated with them. In relational databases, a table is defined by its very particular structure. Each column is specified ahead of time, when the table is first created. Then, every record saved in such a table follows the exact same format. In NoSQL databases (which IndexedDB is a type of), a data store can hold any object, with whatever format they may have. Essentially, this concept would be the same as having a relational database table that has a different schema for each record in it. IDBFactory To get started with IndexedDB, we first need to create a database. This is done through an implementation of IDBFactory, which in the browser, is the window.indexedDB object. Deleting a database is also done through the indexedDB object, as we'll see soon. In order to open a database (or create one if it doesn't exist yet), we simply call the indexedDB.open method, passing in a database name, along with a version number. If no version number is supplied, the default version number of one will be used as shown in the following code snippet: code11 As you'll soon notice, every method for asynchronous requests in IndexedDB (such as indexedDB.open, for example), will return a request object of type IDBRequest, or an implementation of it. Once we have that request object, we can set up callback functions on its properties, which get executed as the various events related to them are fired, as shown in the following code snippet: code12 IDBOpenDBRequest As mentioned in the previous section, once we make an asynchronous request to the IndexedDB API, the immediately returned object will be of type IDBRequest. In the particular case of an open request, the object that is returned to us is of type IDBOpenDBRequest. Two events that we might want to listen to on this object were shown in the preceding code snippet (onerror and onsuccess). There is also a very important event, wherein we can create an object store, which is the foundation of this storage system. This event is the onupgradeneeded (that is, on upgrade needed) event. This will be fired when the database is first created and, as you might expect, whenever the version number used to open the database is higher than the last value used when the database was opened, as shown in the following code: code13 The call to createObjectStore made on the database object takes two parameters. The first is a string representing the name of the object store. 
This store can be thought of as a table in the world of relational databases. Of course, instead of inserting records into columns from a table, we insert whole objects into the data store. The second parameter is an object defining properties of the data store. One important attribute that this object must define is the keyPath object, which is what makes each object we store unique. The value assigned to this property can be anything we choose. Now, any objects that we persist in this data store must have an attribute with the same name as the one assigned to keyPath. In this example, our objects will need to have an attribute of myKey. If a new object is persisted, it will be indexed by the value of this property. Any additional objects stored that have the same value for myKey will replace any old objects with that same key. Thus, we must provide a unique value for this object every time we want a unique object persisted. Alternatively, we can let the browser provide a unique value for this key for us. Again, comparing this concept to a relational database, we can think of the keyPath object as being the same thing as a unique ID for a particular element. Just as most relational database systems will support some sort of auto increment, so does IndexedDB. To specify that we want auto-incremented values, we simply add the flag to the object store properties object when the data store is first created (or upgraded) as shown in the following code snippet: code14 Now we can persist an object without having to provide a unique value for the property myKey. As a matter of fact, we don't even need to provide this attribute at all as part of any objects we store here. IndexedDB will handle that for us. Take a look at the following diagram: Using Google Chrome's developer tools, we can see all of the databases and data stores we have created for our domain. Note that the primary object key, which has whatever name we give it during the creation of our data store, has IndexedDB-generated values, which, as we have specified, are incremented over the last value. With this simple, yet verbose boilerplate code in place, we can now start using our databases and data stores. From this point on, the actions we take on the database will be done on the individual data store objects, which are accessed through the database objects that created them. IDBTransaction The last general thing we need to remember when dealing with IndexDB, is that every interaction we have with the data store is done inside transactions. If something goes wrong during a transaction, the entire transaction is rolled back, and nothing takes effect. Similarly, if the transaction is successful, IndexedDB will automatically commit the transaction for us, which is a pretty handy bonus. To use transaction, we need to get a reference to our database, then request a transaction for a particular data store. Once we have a reference to a data store, we can perform the various functions related to the data store, such as putting data into it, reading data from it, updating data, and finally, deleting data from a data store. code15 To store an item in our data store we need to follow a couple of steps. Note that if anything goes wrong during this transaction, we simply catch whatever error is thrown by the browser, and execution continues uninterrupted because of the try/catch block. The first step to persisting objects in IndexedDB is to start a transaction. 
This is done by requesting a transaction object from the database we have opened earlier. A transaction is always related to a particular data store. Also, when requesting a transaction, we can specify what type of transaction we'd like to start. The possible types of transactions in IndexedDB are as follows: readwrite This transaction mode allows for objects to be stored into the data store, retrieved from it, updated, and deleted. In other words, readwrite mode allows for full CRUD functionality. readonly This transaction mode is similar to readwrite, but clearly restricts the interactions with the data store to only reading. Anything that would modify the data store is not allowed, so any attempt to create a new record (in other words, persisting a new object into the data store), update an existing object (in other words, trying to save an object that was already in the data store), or delete an object from the data store will result in the transaction failing, and an exception being raised. versionchange This transaction mode allows us to create or modify an object store or indexes used in the data store. Within a transaction of this mode, we can perform any action or operation, including modifying the structure of the database. Getting elements Simply storing data into a black box is not at all useful if we're not able to retrieve that data at a later point in time. With IndexedDB, this can be done in several different ways. More commonly, the data store where we persist the data is set up with one or more indexes, which keep the objects organized by a particular field. Again, for those accustomed to relational databases, this would be similar to indexing/applying a key to a particular table column. If we want to get to an object, we can query it by its unique ID, or we can search the data store for objects that fit particular characteristics, which we can do through indexed values of that object. To create an index on a data store, we must specify our intentions during the creation of the data store (inside the onupgradeneeded callback when the store is first created, or inside a transaction mode versionchange). The code for this is as follows: code16 In the preceding example, we create an index for the task attribute of our objects. The name of this index can be anything we want, and commonly is the same name as the object property to which it applies. In our case, we simply named it taskIndex. The possible settings we can configure are as follows: unique – if true, an object being stored with a duplicate value for the same attribute is rejected multiEntry – if true, and the indexed attribute is an array, each element will be indexed Note that zero or more indexes can be created for a data store. Just like any other database system, indexing your database/data store can really boost the performance of the storage container. However, just adding indexes for the fun it provides is not a good idea, as the size of your data store will grow accordingly. A good data store design is one where the specific context of the data store with respect to the application is taken into account, and each indexed field is carefully considered. The phrase to keep in mind when designing your data stores is the following: measure it twice, cut it once. 
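The listings referred to in this section (code11 through code16) are not reproduced in this excerpt, so here is a condensed sketch of the whole setup path described above: opening a database, creating an object store and an index inside onupgradeneeded, and then storing an object inside a readwrite transaction. The database name, store name, and sample object are assumptions for illustration only:

var db;
var request = indexedDB.open('snakeGameDB', 1);   // name and version are illustrative

request.onupgradeneeded = function (event) {
    var upgradedDb = event.target.result;
    // Create the data store with an auto-incremented primary key
    var store = upgradedDb.createObjectStore('tasks',
        { keyPath: 'myKey', autoIncrement: true });
    // Index the 'task' attribute so we can query by it later
    store.createIndex('taskIndex', 'task', { unique: false });
};

request.onsuccess = function (event) {
    db = event.target.result;

    // Every interaction with the store happens inside a transaction
    var transaction = db.transaction(['tasks'], 'readwrite');
    var store = transaction.objectStore('tasks');
    store.put({ task: 'Eat fruit', complete: false });
};

request.onerror = function (event) {
    console.log('Could not open the database', event);
};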
Although any object can be saved in a data store (as opposed to a relational database, where the data stored must carefully follow the table structure, as defined by the table's schema), in order to optimize the performance of your application, try to build your data stores with the data that it will store in mind. It is true that any data can be smacked into any data store, but a wise developer considers the data being stored very carefully before committing it to a database. Once the data store is set up, and we have at least one meaningful index, we can start to pull data out of the data store. The easiest way to retrieve objects from a data store is to use an index, and query for a specific object, as shown in the following code: code17 The preceding function attempts to retrieve a single saved object from our data store. The search is made for an object with its task property that matches the task name supplied to the function. If one is found, it will be retrieved from the data store, and passed to the store object's request through the event object passed in to the callback function. If an error occurs in the process (for example, if the index supplied doesn't exist), the onerror event is triggered. Finally, if no objects in the data store match the search criteria, the resulting property passed in through the request parameter object will be null. Now, to search for multiple items, we can take a similar approach, but instead we request an IndexedDBCursor object. A cursor is basically a pointer to a particular result from a result set of zero or more objects. We can use the cursor to iterate through every object in the result set, until the current cursor points at no object (null), indicating that there are no more objects in the result set. code18 You will note a few things with the above code snippet. First, any object that goes into our IndexedDB data store is stripped of its DNA, and only a simple hash is stored in its stead. Thus, if the prototype information of each object we retrieve from the data store is important to the application, we will need to manually reconstruct each object from the data that we get back from the data store. Second, observe that we can filter the subset of the data store that we would like to take out of it. This is done with an IndexedDB Key Range object, which specifies the offset from which to start fetching data. In our case, we specified a lower bound of zero, meaning that the lowest primary key value we want is zero. In other words, this particular query requests all of the records in the data store. Finally, remember that the result from the request is not a single result or an array of results. Instead, all of the results are returned one at a time in the form of a cursor. We can check for the presence of a cursor altogether, then use the cursor if one is indeed present. Then, the way we request the next cursor is by calling the continue() function on the cursor itself. Another way to think of cursors is by imagining a spreadsheet application. Pretend that the 10 objects returned from our request each represent a row in this spreadsheet. So IndexedDB will fetch all 10 of those objects to memory, and send a pointer to the first result through the event.target.result property in the onsuccess callback. By calling cursor.continue(), we simply tell IndexedDB to now give us a reference to the next object in the result set (or, in other words, we ask for the next row in the spreadsheet). 
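As a rough sketch of the two retrieval paths just described (the missing code17 and code18 listings), the following continues the assumed 'tasks' store and db handle from the earlier sketch: a single lookup through the taskIndex index, followed by a cursor walk over every record:

var transaction = db.transaction(['tasks'], 'readonly');
var store = transaction.objectStore('tasks');

// 1. Fetch a single object through the index created earlier
var single = store.index('taskIndex').get('Eat fruit');
single.onsuccess = function (event) {
    console.log('Found:', event.target.result);   // undefined if nothing matched
};

// 2. Walk every record with a cursor, starting from the lowest key
var cursorRequest = store.openCursor(IDBKeyRange.lowerBound(0));
cursorRequest.onsuccess = function (event) {
    var cursor = event.target.result;
    if (cursor) {
        console.log(cursor.value);   // a plain hash of the stored object
        cursor.continue();           // ask IndexedDB for the next row
    }
};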
Deleting elements

To remove specific elements from a data store, the same principles involved in retrieving data apply. In fact, the entire process looks nearly identical to retrieval, only we call the delete function on the object store object. Needless to say, the transaction used for this must be readwrite, since readonly restricts the store so that no changes can be made to it (including deletion).

The first way to delete an object is to pass the object's primary key to the delete function, as shown in the first sketch below. The difficulty with this approach is that we need to know the ID of the object. In some cases, this would involve a prior transaction in which we retrieve the object based on data that is easier to get at. For example, if we want to delete all tasks whose complete attribute is set to true, we'd need to query the data store for those objects first, collect the IDs associated with each result, and use those values in the transaction where the objects are deleted.

A second way to remove data from the data store is to simply call clear() on the object store object. Again, the transaction must be readwrite. Doing this obliterates every last object in the data store, even if they're of different types, as the second sketch shows.

Finally, we can delete multiple records using a cursor, as in the third sketch. This is similar to the way we retrieve objects: as we iterate through the result set with the cursor, we simply delete the object at whatever position the cursor is currently on, and upon deletion the reference held by the cursor object is set to null. This is pretty much the same routine as fetching data; the only detail is that we absolutely need to supply an object's key. The key is the value stored in the object's keyPath attribute, which can be user-provided or auto-generated. Fortunately for us, the cursor gives at least two references to this key: the cursor.primaryKey property, and the object's own property that holds that value (in our case, we chose the keyPath attribute to be named myKey).
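Minimal sketches of the three deletion approaches follow, again assuming the tasks store from earlier and an open db handle; the third sketch combines a cursor with the complete flag mentioned above.

// 1. Delete a single object whose primary key is already known.
function deleteTask(db, key) {
  var store = db.transaction(['tasks'], 'readwrite').objectStore('tasks');
  store.delete(key).onsuccess = function () {
    console.log('Deleted record with key', key);
  };
}

// 2. Wipe out every object in the data store.
function deleteAllTasks(db) {
  var store = db.transaction(['tasks'], 'readwrite').objectStore('tasks');
  store.clear().onsuccess = function () {
    console.log('Data store emptied');
  };
}

// 3. Walk the store with a cursor and delete as we go,
//    for example every task already marked as complete.
function deleteCompletedTasks(db) {
  var store = db.transaction(['tasks'], 'readwrite').objectStore('tasks');
  store.openCursor().onsuccess = function (event) {
    var cursor = event.target.result;
    if (cursor) {
      if (cursor.value.complete) {
        store.delete(cursor.primaryKey); // cursor.delete() also works here
      }
      cursor.continue();
    }
  };
}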
The two upgrades we added to this second version of the game are simple, yet they add a lot of value. We added a persistent high score engine, so players can keep track of their latest record and have a sticky history of past successes. We also added a pretty nifty feature that takes a snapshot of the game board each time the player scores, as well as when the player ultimately dies. Once the player dies, we display all of the snapshots collected throughout the game, allowing the player to save those images and possibly share them with friends.

Saving the high score

The first thing you probably noticed about the previous version of this game was that we had a placeholder for a high score, but that number never changed. Now that we know how to persist data, we can very easily take advantage of this and persist a player's high score across games. In a more realistic scenario, we'd probably send the high score data to a backend server, so that every time the game is served we could keep track of the overall high score, and every user playing the game would see this global score. In our situation, however, the high score is local to a single browser, since none of the persistence APIs (local and session storage, as well as IndexedDB) share data across browsers or natively with a remote server.

Since we want the high score to still exist in a player's browser even a month from now, after the computer (and the browser with it) has been powered off multiple times, storing the high score in sessionStorage would be silly. We could store this single number either in IndexedDB or in localStorage. Since we don't care about any other information associated with the score (such as the date when it was achieved), all we're really storing is one number. For this reason, localStorage is the better choice, because the whole thing can be done in as few as five lines of code; using IndexedDB would work, but would be like using a cannon to kill a mosquito. A sketch of such a function is shown at the end of this section.

The function is pretty straightforward. The two values we pass it are the score to set as the new high score (this value is both saved to localStorage and displayed to the user), and the HTML element where the value will be shown. First, we retrieve the existing value saved under the key high-score and convert it to a number. We could have used parseInt(), but multiplying a string by a number does the same thing with slightly faster execution. Next, we check whether that value evaluated to something real. If there was no high-score value saved in local storage, the retrieved value would be undefined, and undefined multiplied by one is not a number. If there is a value saved under high-score but it cannot be converted into a number (a string of letters, say), we also know it is not valid. In either case we set the incoming score as the new high score, which covers both an invalid persisted value and the very first time the game loads. Once we have a valid score retrieved from local storage, we check whether the new value is higher than the old, persisted value. If it is, we persist the new value and display it on the screen; if not, we persist nothing and display the saved value, since that is the real high score at the time.
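A minimal sketch of such a function is shown below. The function name and the scoreElement parameter are assumptions that follow the description above rather than the game's actual source.

function setHighScore(newScore, scoreElement) {
  // localStorage hands values back as strings (or undefined when the key
  // is missing); multiplying by 1 converts the value to a number.
  var savedScore = localStorage['high-score'] * 1;

  if (isNaN(savedScore) || newScore > savedScore) {
    // No valid saved score yet, or the new score beats it: persist and show it.
    localStorage['high-score'] = newScore;
    scoreElement.textContent = newScore;
  } else {
    // The persisted value is still the real high score, so just display it.
    scoreElement.textContent = savedScore;
  }
}

Note that localStorage stores everything as strings, which is why the conversion step at the top matters before any numeric comparison is made.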
Taking screenshots of the game

This feature is not as trivial as saving the user's high score, but it is nonetheless very straightforward to implement. Since we don't care about snapshots captured more than one game ago, we'll use sessionStorage to save data from the game, in real time as the player progresses.

Behind the scenes, all we do to take these snapshots is save the game state into sessionStorage; at the end of the game, we retrieve all of the pieces we'd been saving, reconstruct the game at those points in time onto an invisible canvas, and then use the canvas.toDataURL() function to extract that data as an image. The sketches at the end of this section show the general shape of this code.

Each time the player eats a fruit, we call the save function, passing it references to the snake (our hero in this game) and the fruit (the goal of the game). What we do is really quite simple: we keep an array representing the state of the snake and of the fruit at each event we capture. Each element in this array is a string holding the serialized buffer that records where the fruit was and where each body part of the snake was located. First, we check whether this object already exists in sessionStorage; the first time we start the game it will not yet exist, so we create an object referencing those two things, namely the snake and the fruit buffers. Next, we stringify the buffers tracking the locations of the elements we care about, and each time we add a new event we simply append to those buffers. Of course, if the user closes the browser, that data is erased by the browser itself, since that's how sessionStorage works. However, we probably don't want to hold on to data from a previous game either, so we also need a way to clear out our own data after each game.

Clearing is easy enough: all we need to know is the name of the key under which each buffer is held. For our purposes, we simply call the snapshots of the snake eating "eat", and the buffer with the snapshots of the snake dying "die". Before each game starts, we call clearEvent() with those two key values, and the cache is cleared anew each time. Then, as each event takes place, we call the save function with the appropriate data.

Finally, whenever we wish to display all of these snapshots, we just create a separate canvas with the same dimensions as the one used in the game (so that the buffers we saved don't go out of bounds), and draw the buffers onto that canvas. The reason we need a separate canvas element is that we don't want to draw on the same canvas the player can see; this way, producing the snapshots is seamless and natural. Once each state is drawn, we can extract each image, resize it, and display it back to the user.

Observe that we only draw the points representing the snake and the fruit onto that canvas. All of the other points in the canvas are left untouched, meaning that we generate a transparent image. If we want the image to have an actual background color (even if it is just white), we can either call fillRect() over the entire canvas surface before drawing the snake and the fruit, or traverse each pixel in the pixel data array from the rendering context and set its alpha channel to 100 percent opaque. Even if we set a color on each pixel by hand but leave off the alpha channel, we'd have colorful pixels that are still 100 percent transparent.
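The sketches below show the general shape of this snapshot machinery. The function names (saveEvent, clearEvent, snapshotToImage) and the exact shape of the snake and fruit buffers (arrays of {x, y} points) are assumptions made for illustration; the real game code may differ.

// Append the current game state to a buffer kept in sessionStorage.
// 'key' is either 'eat' or 'die'; snakeBuffer and fruitBuffer are the
// point buffers described above.
function saveEvent(key, snakeBuffer, fruitBuffer) {
  var events = JSON.parse(sessionStorage.getItem(key)) || [];
  events.push(JSON.stringify({ snake: snakeBuffer, fruit: fruitBuffer }));
  sessionStorage.setItem(key, JSON.stringify(events));
}

// Throw away the snapshots from the previous game.
function clearEvent(key) {
  sessionStorage.removeItem(key);
}

// Redraw one saved state onto an invisible canvas and export it as an image.
function snapshotToImage(savedState, width, height) {
  var state = JSON.parse(savedState);
  var canvas = document.createElement('canvas'); // never added to the page
  canvas.width = width;
  canvas.height = height;
  var ctx = canvas.getContext('2d');

  ctx.fillStyle = '#0a0';
  state.snake.forEach(function (p) { ctx.fillRect(p.x, p.y, 8, 8); });
  ctx.fillStyle = '#a00';
  ctx.fillRect(state.fruit.x, state.fruit.y, 8, 8);

  return canvas.toDataURL(); // usable as the src of an img element
}

In use, the game would call clearEvent('eat') and clearEvent('die') before each new game, saveEvent('eat', ...) whenever the snake eats a fruit, and saveEvent('die', ...) when the snake dies.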
Summary

In this article we took a few extra steps into the fascinating world of 2D rendering with the long-awaited canvas API, taking advantage of the canvas' ability to export images to make our game more engaging and potentially more social. We also made the game more engaging and social by adding a persistence layer on top of it, whereby we were able to save a player's high score.

Two other powerful new HTML5 features, web messaging and IndexedDB, were explored in this article, although neither was used in this version of the game. The web messaging API provides a mechanism for two or more windows to communicate directly through message passing. The exciting bit is that these windows (or HTML contexts) do not need to be in the same domain; although this may sound like a security issue, several systems are in place to ensure that cross-document and cross-domain messaging is secure and efficient. The web storage interface brings three distinct solutions for long-term data persistence on the client: session storage, local storage, and IndexedDB. While IndexedDB is a full-blown, built-in, fully transactional and asynchronous NoSQL object store, local and session storage provide very simple key-value storage for simpler needs. All three of these systems bring great gains over traditional cookie-based storage, including the fact that the total amount of data that can be persisted in the browser is much greater, and that none of the data saved in the user's browser travels back and forth between server and client with every HTTP request.

Resources for Article:

Further resources on this subject:

Interface Designing for Games in iOS [Article]
Unity 3D Game Development: Don't Be a Clock Blocker [Article]
Making Money with Your Game [Article]


Installing Drupal

Packt
04 Jul 2013
14 min read
(For more resources related to this topic, see here.)

Assumptions

To get Drupal up and running, you will need all of the following:

A domain
A web host
Access to the web host's filesystem

or you need a local testing environment, which takes care of the first three things.

For building sites, either a web host or a local testing environment will meet your needs. A site built on a web-accessible domain can be shared via the Internet, whereas a site built on a local test machine will need to be moved to a web host before it can be used for your course.

In these instructions, we assume the use of phpMyAdmin, an open source, browser-based tool, for administering your database. A broad range of similar tools exists, and these general instructions can be used with most of them. Information on phpMyAdmin is available at http://www.phpmyadmin.net; information on other browser-based database administration tools can be found at http://en.wikipedia.org/wiki/PhpMyAdmin#Similar_products.

The domain

The domain is the address on the Web from where people can access your site. If you are building this site as part of your work, you will probably be using the domain associated with your school or organization. If you are hosting this on your own server, you can buy a domain for under US $10.00 a year. Enter "purchase domain name" in Google and you will have a plethora of options.

The web host

Your web host provides you with the server space on which to run your site. Within many schools, your website will be hosted by your school. In other environments, you might need to arrange for your own web host through a hosting company. In selecting a web host, you need to be sure that they run software that meets or exceeds the recommended software versions.

Web server

Drupal is developed and tested extensively in an Apache environment. Drupal also runs on other web servers, including Microsoft IIS and Nginx.

PHP version

Drupal 7 will run on PHP 5.2.5 or higher; however, PHP 5.3 is recommended. The Drupal 8 release will require PHP 5.3.10.

MySQL version

Drupal 7 will run on MySQL 5.0.15 or higher, and requires the PHP Data Objects (PDO) extension for PHP. Drupal 7 has also been tested with MariaDB as a drop-in replacement, and Version 5.1.44 or greater is recommended. PDO is a consistent way for programmers to write code that interacts with the database. You can find out more about PDO and how to install it at http://drupal.org/requirements/pdo. Drupal can technically use any database that PDO supports, but MySQL is by far the most tested and best supported; third-party modules are required to use Drupal with other database systems. You can find these modules listed at http://drupal.org/project/modules/?f[0]=im_vid_3%3A13158&f[1]=drupal_core%3A103&f[2]=bs_project_sandbox%3A0.

FTP and shell access to your web host

Your web host should also offer FTP access to your web server. You will need FTP (or SFTP) access in order to upload the Drupal codebase to your web space. Shell (SSH) access is not essential for basic site maintenance, but it can simplify maintaining your site, so contracting with a web host that provides SSH access is recommended.

A local testing environment

Alternatively, you can set up a local testing environment for your site. This allows you to set up Drupal and other applications on your own computer. A local testing environment can be a great tool for learning a piece of software.
Fortunately, open source tools can automate the process of setting up your testing environment. PC users can use XAMPP (http://www.apachefriends.org) to set up a local testing environment; Mac users can use MAMP (http://www.mamp.info). If you are working in a local testing environment set up via XAMPP or MAMP, you have all the pieces you need to start working with Drupal: your domain, your web host, the ability to move files into your web directory, and phpMyAdmin.

Setting up a local environment using MAMP (Mac only)

While Apple's operating system includes most of the programs required to run Drupal, setting up a testing environment can be tricky for inexperienced users. Installing MAMP allows you to create a preconfigured local environment quickly and easily using the following steps:

1. Download the latest version of MAMP from http://www.mamp.info/en/index.html. Note that the paid version of the program will download as well. Feel free to pay for the software if you wish, but the free version will be sufficient for our needs.
2. Navigate to where you downloaded the .zip file, and double-click to unzip it. Once it is unzipped, double-click on the .pkg file that was contained in the .zip file.
3. Follow the directions in the wizard until you reach the Installation Type screen. If you want to use only the free version of the program, click on the Customize button.
4. In the Custom Install on "Macintosh HD" window, uncheck the MAMP PRO option and click on the Install button to install the application.
5. Navigate to /Applications/MAMP and open the MAMP application. The Apache and MySQL servers will start, and the start page will open in your default web browser. If the start page opens, MAMP is installed correctly.

Setting up a local environment using XAMPP (Windows only)

1. Download the latest version of XAMPP from http://www.apachefriends.org/en/xampp-windows.html#641. Download the .zip version.
2. Navigate to where you downloaded the file, right-click, and select Extract All.... Enter C:\ as the destination and click on Extract.
3. Navigate to C:\xampp and double-click the xampp-control application to start the XAMPP Control Panel Application.
4. Click on the Start buttons next to Apache and MySQL.
5. Open a web browser and enter http://localhost or http://127.0.0.1 in the address bar; you should see the XAMPP start page.
6. Navigate to http://localhost/security/index.php, and enter a password for MySQL's root user. Make sure to remember this password or write it down, because we will need it later.

Configuring your local environment for Drupal

Now that we have the programs required to run Drupal (Apache, MySQL, and PHP), we need to modify some of their settings to match Drupal's system requirements.

PHP configuration

As mentioned before, Drupal 7 requires PHP Version 5.2.5 or higher; as of the writing of this book, MAMP includes Version 5.4.4 (or you can switch to Version 5.2.17) and XAMPP includes Version 5.4.7. PHP configuration settings are found in the program's php.ini file. For MAMP, the php.ini file is located in /Applications/MAMP/bin/php/[php version number]/conf, where the PHP version number is either 5.4.4 or 5.2.17. For XAMPP, the php.ini file is located in C:\xampp\php.
Open the file in a text editor (not a word processor), find the Resource Limits section of the file, and edit the values to match the following:

max_execution_time = 60;
max_input_time = 120;
memory_limit = 128M;
error_reporting = E_ALL & ~E_NOTICE

The last line is optional, and is used if you want to display error messages in the browser instead of only in the logs.

MySQL configuration

As mentioned before, Drupal 7 requires MySQL Version 5.0.15 or higher. MAMP includes Version 5.5.25 and XAMPP includes Version 5.5.27. MySQL's configuration settings are contained in a my.cnf or my.ini file. MAMP does not use a my.cnf file by default, so we need to copy the my-medium.cnf file from the /Applications/MAMP/Library/support-files directory to the /Applications/MAMP/conf folder. After copying the file, rename it to my.cnf. For XAMPP, the my.ini file is located in the C:\xampp\mysql\bin directory. Open the my.cnf or my.ini file in a text editor, find the following settings, and edit them to match these values:

# * Fine Tuning
#
key_buffer = 16M
key_buffer_size = 32M
max_allowed_packet = 16M
thread_stack = 512K
thread_cache_size = 8
max_connections = 300
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 15M
query_cache_size = 46M
join_buffer_size = 5M
# Sort buffer size for ORDER BY and GROUP BY queries, data
# gets spun out to disc if it does not fit
sort_buffer_size = 10M
innodb_flush_method = O_DIRECT
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 4M
innodb_additional_mem_pool_size = 20M
# num cpu's/cores *2 is a good base line for innodb_thread_concurrency
innodb_thread_concurrency = 4

After you have made the edits, you have to stop and restart the servers for the changes to take effect. Once you have restarted the servers, we are ready to install Drupal!

The most effective way versus the easy way

There are many different ways to install Drupal. People familiar with working via the command line can install Drupal very quickly, without an FTP client or any web-based tools to create and administer databases. The instructions in this book are geared towards people who would rather not use the command line; they attempt to get you through the technical pieces as painlessly as possible, to speed up the process of building a site that supports teaching and learning.

Installing Drupal - the quick version

The following steps will get you up and running with your Drupal site. This quick-start version gives an overview of the steps required for most setups; a more detailed version follows immediately after this section. Once you are familiar with the setup process, installing a Drupal site takes between five and ten minutes.

1. Download the core Drupal codebase from http://drupal.org/project/drupal.
2. Extract the codebase on your local machine.
3. Using phpMyAdmin, create a database on your server. Write down the name of the database.
4. Using phpMyAdmin, create a user on the database using the following SQL statement:

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER
ON databasename.*
TO 'username'@'localhost' IDENTIFIED BY 'password';

You will have created the databasename in step 3; write down the username and password values, as you will need them to complete the install.
5. Upload the Drupal codebase to your web folder.
6. Navigate to the URL of your site.
7. Follow the instructions of the install wizard. You will need your databasename (created in step 3), as well as the username and password for your database user (created in step 4).
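As an aside for readers who would rather skip the phpMyAdmin interface, the database and its user can also be created in one pass from any SQL prompt (the mysql command-line client, for example, or phpMyAdmin's SQL tab). This is only a sketch: the names drupal7 and drupaluser and the password are placeholders, and the UTF-8 character set shown here is simply a sensible default for Drupal 7 rather than something the install requires you to type.

CREATE DATABASE drupal7 CHARACTER SET utf8 COLLATE utf8_general_ci;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER
  ON drupal7.* TO 'drupaluser'@'localhost' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;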
Installing Drupal - the detailed version

This version goes over each step in more detail and includes screenshots.

1. Download the core Drupal codebase from http://drupal.org/project/drupal.
2. Extract the codebase on your local machine. The Drupal codebase (and all modules and themes) is compressed into a tarball: a file that is first tarred and then gzipped, ending in .tar.gz. On Macs and Linux machines, tar.gz files can be extracted with tools that come preinstalled with the operating system. On PCs, you can use 7-zip, an open source compression utility available at http://www.7-zip.org.
3. In your web browser, navigate to your system's URL for phpMyAdmin. If you are using a different tool for creating and managing your database, use that tool to create your database and database user.
4. As shown in the following screenshot, create the database on your server. Click on the Create button to create your database. Store your database name in a safe place; you will need it to complete your installation.
5. To create your database user, click on the SQL tab as shown in the following screenshot, and enter the following SQL statement in the text area:

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER
ON databasename.*
TO 'username'@'localhost' IDENTIFIED BY 'password';

6. For databasename, use the name of the database you created in step 4. Replace the username and password with a username and password of your choice. Once you have entered the correct values, click on the Go button to create the user with rights on your database. Store the username and the password of your database user in a safe place; you will need them to complete the installation.
7. Create and/or locate the directory from where you want Drupal to run. In this example, we are running Drupal from within a folder named drupal7; this means that our site will be available at http://ourdomain.org/drupal7. Running Drupal in a subfolder can make things a little trickier, so if at all possible, copy the Drupal files directly into your web root.
8. Using your FTP client, upload the Drupal codebase to your web folder.
9. Navigate to the URL of your site. The automatic install wizard will appear on your screen.
10. Click the Save and continue button with the Standard option selected.
11. Click the Save and continue button with the English (built-in) option selected.
12. To complete the Set up database screen, you will need the database name (created in step 4) and the database username and password (created in step 6). Select MySQL, MariaDB, or equivalent as the Database type, and enter these values in their respective text boxes as seen in the following screenshot. Most installs will not need any of the settings under ADVANCED OPTIONS; however, if your database is located on a server other than localhost, you will need to adjust those settings as shown in the next screenshot. In most basic hosting setups, your database is accessible at localhost. To verify the name or location of your database host, you can use phpMyAdmin (as shown in the screenshot under step 4) or contact an administrator for your web server. For the vast majority of installs, none of the advanced options will need to be adjusted.
13. Click on the Save and continue button. You will see a progress meter as Drupal installs itself on your web server.
14. On the Configure site screen, you can enter some general information about your site, and create the first user account.
The first user account has full rights over every aspect of your site. When you have finished with the settings on this page, click on the Save and continue button. When the install is finished, you will see a splash screen confirming the installation. Additional details on installing Drupal are available in the handbook at http://drupal.org/documentation/install.

Enabling core modules

For a full description of the modules included in Drupal core, see http://drupal.org/node/1283408. To see the modules included in Drupal core, navigate to Modules or admin/modules. As shown in the following screenshot, the Standard installation profile enables the most commonly used core modules. (For clarity, we have divided the screenshot of the single screen into two parts.)

Assigning rights to the authenticated user role

Within your Drupal site, you can use roles to assign specific permissions to groups of users. Anonymous users are all visitors to the site who are not site members; all site members (that is, all people with a username and password) belong to the authenticated user role. To assign rights to specific roles, navigate to People | Permissions | Roles or admin/people/permissions/roles, and click on the edit permissions link for authenticated users. We assign the following rights:

The Comment module: Authenticated users can see comments and post comments. These rights send comments into a moderation queue for approval, as we haven't checked the Skip comment approval box.
The Node module: Authenticated users can see published content.
The Search module: Authenticated users can search the site.
The User module: Authenticated users can change their own username.

Once these options have been selected, click on the Save permissions button at the bottom of the page.

Summary

In this article, we installed the core Drupal codebase, enabled some core modules, and assigned rights to the authenticated user role. We are now ready to start building a feature-rich site that will help support teaching and learning. In the next article, we will take a look around your new site and begin to get familiar with how to make it do what you want.

Resources for Article:

Further resources on this subject:

Creating Content in Drupal 7 [Article]
Drupal and Ubercart 2.x: Install a Ready-made Drupal Theme [Article]
Introduction to Drupal Web Services [Article]