
How-To Tutorials - Web Development


So, what is Zepto.js?

Packt
08 Oct 2013
7 min read
One of the most influential JavaScript libraries in the last decade of web development is jQuery, a comprehensive set of functions that makes Document Object Model (DOM) selection and manipulation consistent across a range of browsers, freeing web developers from having to handle these quirks themselves, as well as providing a friendlier interface to the DOM itself.

Zepto.js describes itself as an aerogel framework: a JavaScript library that attempts to offer most of the features of the jQuery API while taking up only a fraction of the size (9k versus 93k in the default compressed current versions, Zepto.js v1.01 and jQuery v1.10 respectively). In addition, Zepto.js has a modular assembly, so you can make it even smaller if you don't need the functionality of the extra modules. Even the new, streamlined jQuery 2.0 weighs in at a comparatively heavyweight 84k.

But why does this matter? At first glance, the difference between the two libraries seems slight, especially in today's world where large files are normally described in terms of gigabytes and terabytes. There are two good reasons to prefer a smaller file size.

Firstly, even the newest mobile devices on the market today have slower connections than you'll find on most desktop machines. Also, because of the constrained memory on smartphones, mobile browsers tend to have limited caching compared to their bigger desktop cousins, so a smaller helper library means a better chance of keeping your actual JavaScript code in the cache, preventing your app from slowing down on the device.

Secondly, a smaller library improves response time. Although 90k versus 8k doesn't sound like a huge difference, it means fewer network packets. Since application code that relies on the library can't execute until the library's code has loaded, the smaller library can shave precious milliseconds off that ever-so-important time to first page load, making your web page or application seem more responsive to users.

Having said all that, there are a few downsides to using Zepto.js that you should be aware of before deciding to plump for it instead of jQuery. Most importantly, Zepto.js currently makes no attempt to support Internet Explorer. Its origins as a library to replace jQuery on mobile phones meant that it mainly targeted WebKit browsers, primarily on iOS. As the library has matured, it has expanded to cover Firefox, but general IE support is unlikely to happen (at the time of writing, there is a patch waiting to go into the main trunk that would enable support for IE10 and up, but anything lower than version 10 is probably never going to be supported). If you do decide to use Zepto.js on the browsers it supports and want to maintain some compatibility with Internet Explorer, this guide will show you how to include jQuery as a fallback in case a user is running an older, unsupported browser.

The other pitfall you need to be aware of is that Zepto.js only claims to be a jQuery-like library, not a 100 percent compatible one. In the majority of web application development this won't be an issue, but when it comes to integrating plugins and operating at the margins of the libraries, there are differences you will need to know about to prevent possible errors and confusion, and we'll be showing you some of them later in this guide.
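As a preview of that fallback technique, here is a minimal loader sketch along the lines of the snippet Zepto's own documentation has suggested: it feature-detects __proto__ (which Zepto relies on internally and which old IE lacks) and writes in whichever library will work. The file names are assumptions; adjust them to your own paths.

<script>
  // Hedged sketch: load Zepto where it's supported, otherwise fall back to jQuery.
  // Old versions of IE lack '__proto__', which Zepto depends on.
  document.write('<script src=' +
    ('__proto__' in {} ? 'zepto' : 'jquery') +
    '.min.js><\/script>');
</script>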
In terms of performance, Zepto.js is a little slower than jQuery, though this varies by browser (take a look at http://jsperf.com/zepto-vs-jquery-2013/ to see the latest benchmark results). In general, it can be up to twice as slow for repeated operations such as finding elements by class name or ID. However, on mobile devices, this still works out to around 50,000 operations per second. If you really require high performance from your mobile site, you should examine whether you can use raw JavaScript instead: the native getElementsByClassName() function is almost one hundred times faster than both Zepto.js and jQuery in the preceding benchmark.

Writing plugins

Eventually, you'll want to make your own plugins. As you can imagine, they're fairly similar in construction to jQuery plugins (so they can be compatible). But what can you do with them? Consider them a macro system for Zepto.js: you can do anything that you'd do in normal Zepto.js operations, but they get added to the library's namespace so you can reuse them in other applications. Here is a plugin that takes a Zepto.js collection and sets all the text in it to the Helvetica font family, at an optionally user-supplied font size (in pixels, for this example); the options object is defaulted so the method can also be called with no arguments:

(function($) {
  $.extend($.fn, {
    helveticaize: function(options) {
      options = options || {};
      $.each(this, function() {
        var css = { "font-family": "Helvetica" };
        // Only set a font size if the caller supplied one
        if (options.size) {
          css["font-size"] = options.size + "px";
        }
        $(this).css(css);
      });
      return this;
    }
  });
})(Zepto || jQuery);

Then, to make all links on a page Helvetica, you can call $("a").helveticaize().

The most important part of this code is the use of the $.extend method. This adds the helveticaize property/function to the $.fn object, which contains all of the functions that Zepto.js provides. Note that you could potentially use this to redefine methods such as find(), animate(), or any other function you've seen so far. As you can imagine, this is not recommended; if you need different functionality, call $.extend and create a new function with a name like custom_find instead. You could also pass multiple new functions to $.fn in a single call to $.extend, but the convention for jQuery and Zepto.js is to provide as few functions as possible (ideally one) and offer different functionality through passed parameters (that is, through options). The reason is that your plugin may have to live alongside many other plugins, all of which share the same namespace in $.fn. By setting only one property, you reduce the chance of overriding a method that another plugin has defined.

In the actual definition of the method being added, the code iterates through the objects in the collection, setting the font family and the size (if present) for each of them. Then, at the end of the method, it returns this. Why? If you remember, part of the power of Zepto.js is that methods are chainable, allowing you to build up complex selectors and operations in one line. Thanks to helveticaize() returning this (which will be a collection), this newly-defined method is just as chainable as all the default methods provided. This isn't a requirement of plugin methods but, where possible, you should make your plugin methods return a collection of some sort to avoid breaking a chain (and if for some reason you can't, make sure to spell that out in your plugin's documentation).

Finally, the (Zepto || jQuery) part at the end immediately invokes this definition on either the Zepto object or the jQuery object, whichever is present. In this way, you can create plugins that work with either framework depending on which is loaded, with the caveat, of course, that your method must work in both frameworks.
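To see that chainability in action, here's a short hedged usage sketch (the selector and class name are purely illustrative):

// helveticaize() returns the collection, so the chain keeps going
$("a.nav-link")
  .helveticaize({ size: 16 })
  .addClass("restyled");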
Summary

In this article, we learned what Zepto.js actually is, what you can do with it, and why it's so great. We also learned how to extend Zepto.js with plugins.


Creating a New iOS Social Project

Packt
08 Oct 2013
8 min read
In this article, by Giuseppe Macri, author of Integrating Facebook iOS SDK with Your Application, we start our coding journey: we are going to build our social application from the ground up. In this article we will learn about:

Creating a Facebook App ID: This is a key used with the APIs to communicate with the Facebook Platform.
Downloading the Facebook SDK: The iOS SDK can be downloaded through two different channels; we will look into both of them.
Creating a new Xcode project: A brief introduction on how to create a new Xcode project and a description of the IDE environment.
Importing the Facebook iOS SDK into our Xcode project: We will go through the import of the Facebook SDK into our Xcode project step by step.
Getting familiar with Storyboard to build a better interface: A brief introduction to the Apple tool we'll use to build our application interface.

Creating a Facebook App ID

In order to communicate with the Facebook Platform using its SDK, we need an identifier for our application. This identifier, also known as the Facebook App ID, will give us access to the Platform; at the same time, we will be able to collect a lot of information about the application's usage, impressions, and ads.

To obtain a Facebook App ID, we need a Facebook account. If you don't have one, you can create an account via the sign-up form at https://www.facebook.com. Fill out all the fields and you will be able to access the Facebook Developer Portal.

Once we are logged into Facebook, we need to visit the Developer Portal at https://developers.facebook.com/. I have already mentioned the important role the Developer Portal plays in developing our social application. Its main section, the top part, is dedicated to the current SDKs.

On the top blue bar, click on the Apps link, and it will redirect us to the Facebook App Dashboard. To the left, we have a list of apps; in the center of the page, we can see the details of the currently selected app, including the application's settings and analytics (Insights).

In order to create a new Facebook App ID, click on Create New App in the top-right part of the App Dashboard. When providing the App Name, be sure the name does not already exist or violate any copyright laws; otherwise, Facebook will remove your app. App Namespace is something we would need if we wanted to define custom objects and/or actions in the Open Graph structure; the App Namespace topic is not part of this book. Web hosting is really useful when creating a social web application: Facebook, in partnership with other providers, can create web hosting for us if needed. This option is not going to be discussed in this book either, so do not check it for your application.

Once all the information is provided, we can move on. Fill out the form and move forward to the next step. At the top of the resulting page, we can see both the App ID and the App Secret. These are the most important pieces of information about our new social application. The App ID is a piece of information that we can share, unlike the App Secret. At the center of our new Facebook Application page, we have the basic information fields.
Do not worry about Namespace, App Domains, and Hosting URL; these fields are for web applications. Sandbox Mode only allows developers to use the current application; developers are specified through the Developer Roles link on the left sidebar.

Moving down, select the type of app. For our goal, select Native iOS App (you can select multiple types and create multiplatform social applications). Once you have checked Native iOS App, you will be prompted with a new form. The only field we need to provide for now is the Bundle ID. The Bundle ID is related to the Xcode project settings; be sure that the Facebook Bundle ID matches our Xcode social app's bundle identifier. The format for the bundle identifier is always something like com.MyCompany.MyApp. The iPhone/iPad App Store IDs are the App Store identifiers of your application, if you have already published your app in the App Store. If you don't provide them, you will receive a warning message after saving your changes; don't worry, our new App ID is now ready to be used. Save your changes and get ready to start our development journey.

Downloading the Facebook iOS SDK

The iOS Facebook SDK can be downloaded through two different channels:

Facebook Developer Portal: For downloading the installation package.
GitHub: For downloading the SDK source code.

Using the Facebook Developer Portal, we can download the iOS SDK as an installation package. Visit https://developers.facebook.com/ios/ and click on Download the SDK. The package, once installed, will create a new FacebookSDK folder within our Documents folder containing four elements:

FacebookSDK.framework: The framework that we will import into our Xcode social project.
LICENSE: Information about licensing and usage of the framework.
README: All the necessary information about the framework installation.
Samples: A useful set of sample projects that use the iOS framework's features.

With the installation package, we only get the compiled files to use, with no original source code. It is possible to download the source code using the GitHub channel instead. To clone the git repo, you will need a Git client, either Terminal-based or a GUI. The iOS SDK framework git repo is located at https://github.com/facebook/facebook-ios-sdk.git. I prefer the Terminal client, using the following command:

git clone https://github.com/facebook/facebook-ios-sdk.git

Two new elements are present in this repo compared to the installation package: src and scripts. src contains the framework source code that needs to be compiled, and the scripts folder has all the scripts needed to compile that source. Using the GitHub version allows us to keep the framework in our social application up to date, but for the scope of this book, we will be using the installation package.

Creating a new Xcode project

We created a Facebook App ID and downloaded the iOS Facebook SDK; it's time to start our social application in Xcode. If Show this window when Xcode launches is enabled, Xcode will prompt the welcome dialog; choose the Create a new Xcode project option. If the welcome dialog is disabled, navigate to File | New | Project….
Choosing the type of project to work with is the next step. The bar to the left defines whether the project targets a desktop or a mobile device. Navigate to iOS | Application and choose the Single View Application project type.

Provide the following information for your new project:

Product Name: This is the name of our application.
Organization Name: I strongly recommend filling out this part even if you don't belong to an organization, because this field will be part of our bundle identifier.
Company Identifier: Still optional, but we should definitely fill it out to respect the best-practice format for the bundle identifier.
Class Prefix: This prefix will be prepended to every class we create in our project.
Devices: We can select the target device of our application; in this case it is iPhone, but we could also have chosen iPad or Universal.
Use Storyboards: We are going to use storyboards to create the user interface for our application.
Use Automatic Reference Counting: This enables Automatic Reference Counting (ARC), which has the compiler insert the retain/release calls so that we don't manage object memory by hand.
Include Unit Tests: If selected, Xcode will also create a separate project target for unit-testing our app; this is not part of this book.

Save the new project. I strongly recommend checking the Create a local git repository for this project option in order to keep track of changes. Once the project is under version control, we can also decide to use GitHub as the remote host to store our source code.
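Looking ahead to the import step, here is a hedged sketch of how the pieces we just created typically get wired together in the Xcode project. These are Facebook iOS SDK 3.x-era conventions, so treat the key names as assumptions to verify against the SDK's README:

// In the target's Info.plist (hedged, SDK 3.x convention):
//   FacebookAppID -> the App ID string from the App Dashboard
//   URL scheme    -> fb<AppID>, so Facebook can switch back to our app
// In source files that use the SDK, after dragging in FacebookSDK.framework:
#import <FacebookSDK/FacebookSDK.h>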


Introducing a feature of IntroJs

Packt
07 Oct 2013
5 min read
API

IntroJs includes functions that let the user control and change the execution of the introduction. For example, it is possible to react to an unexpected event that happens during execution, or to change the introduction routine according to user interactions. All the available APIs in IntroJs are explained below; these functions will be extended and developed further in the future. IntroJs includes these API functions:

start
goToStep
exit
setOption
setOptions
oncomplete
onexit
onchange
onbeforechange

introJs.start()

As mentioned before, introJs.start() is the main function of IntroJs. It starts the introduction for the specified elements and returns an instance of the introJS class. The introduction starts from the first step in the specified elements. This function takes no arguments.

introJs.goToStep(stepNo)

Jump to a specific step of the introduction by using this function. Introductions always start from the first step; however, it is possible to change that by using this function. The goToStep function takes an integer argument, the number of the step to start from:

introJs().goToStep(2).start(); //starts introduction from step 2

As the example indicates, the default starting step is first changed from 1 to 2 using the goToStep function, and then start() is called. Hence, the introduction starts from the second step. This function also returns the introJS class's instance.

introJs.exit()

The introJs.exit() function lets the user exit and close the running introduction. By default, the introduction ends when the user clicks on the Done button or reaches the last step of the introduction.

introJs().exit()

As shown, the exit() function doesn't take any arguments and returns an instance of introJS.

introJs.setOption(option, value)

As mentioned before, IntroJs has some default options that can be changed with the setOption method. This function takes two arguments: the first specifies the option name and the second sets its value.

introJs().setOption("nextLabel", "Go Next");

In the preceding example, nextLabel is set to Go Next. Other options can be changed the same way.

introJs.setOptions(options)

A single option can be changed with the setOption method; to change more than one option at once, use setOptions instead. The setOptions method accepts different options and values in the JSON format.

introJs().setOptions({ skipLabel: "Exit", tooltipPosition: "right" });

In the preceding example, two options are set at the same time by passing JSON to the setOptions method.

introJs.oncomplete(providedCallback)

The oncomplete event is raised when the introduction ends. If a function is passed to the oncomplete method, it will be called by the library after the introduction ends.

introJs().oncomplete(function() { alert("end of introduction"); });

In this example, after the introduction ends, the anonymous function passed to the oncomplete method is called, alerting the message end of introduction.

introJs.onexit(providedCallback)

As mentioned before, the user can exit the running introduction using the Esc key or by clicking on the dark overlay area of the introduction. The onexit event notifies us when the user exits the introduction. This function accepts one argument and returns the instance of the running introJS.

introJs().onexit(function() { alert("exit of introduction"); });

In the preceding example, we passed an anonymous function containing an alert() statement to the onexit method. If the user exits the introduction, the anonymous function is called and an alert with the message exit of introduction appears.

introJs.onchange(providedCallback)

The onchange event is raised at each step of the introduction. This method is useful for knowing when each step of the introduction has completed.

introJs().onchange(function(targetElement) { alert("new step"); });

You can define an argument for the anonymous function (targetElement in the preceding example); when the function is called, that argument gives you access to the current target element being highlighted in the introduction. In the preceding example, an alert with the message new step appears as each step ends.

introJs.onbeforechange(providedCallback)

Sometimes you may need to do something before each step of the introduction. Suppose you need to make an Ajax call before the user goes to a step; you can do this with the onbeforechange event.

introJs().onbeforechange(function(targetElement) { alert("before new step"); });

Again, we can define an argument for the anonymous function (targetElement in the preceding example); when this function is called, the argument carries information about the element about to be highlighted, so you can tell which step of the introduction will be highlighted, what type of target element it is, and more. In the preceding example, an alert with the message before new step appears before each step of the introduction is highlighted.
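Pulling these pieces together, here is a minimal sketch that configures an introduction, wires up the callbacks covered above, and starts it. Only the options and events described in this article are used; the console messages are just illustrative:

// Configure labels and tooltip placement, react to lifecycle events, then start.
var intro = introJs();
intro.setOptions({
  nextLabel: "Go Next",
  skipLabel: "Exit",
  tooltipPosition: "right"
});
intro.onbeforechange(function(targetElement) {
  console.log("about to highlight:", targetElement); // e.g. fetch data via Ajax here
});
intro.onchange(function(targetElement) {
  console.log("step completed for:", targetElement);
});
intro.oncomplete(function() {
  console.log("introduction finished");
});
intro.onexit(function() {
  console.log("user exited early");
});
intro.start();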
Summary

In this article we learned about the API functions, their syntax, and how they are used.


Gamified Websites: The Framework

Packt
07 Oct 2013
15 min read
Business objectives

Before we can go too far down the road on any journey, we first have to be clear about where we are trying to go. This is where business objectives come into the picture. Although games are about fun and gamification is about generating positive emotion, we cannot lose sight of the business objectives: gamification is a serious business.

Organizations spend millions of dollars every year on information technology. Consistent and steady investment in information technology is expected to bring a return in the form of improved business process flow; it's meant to help the organization run smoother and easier. Gamification is all about improving business processes. Organizations try to improve the process itself wherever possible, whereas technology only facilitates the process. Therefore, gamification efforts will be scrutinized under a similar microscope, and against similar success metrics, as other information technology efforts. The fact that customers, employees, or stakeholders are having more fun with the organization's offering is not enough; it has to meet a business objective.

The place to start when defining business objectives is the business process that the organization is looking to improve. In our case, the process we are planning to improve is e-learning; specifically, we are looking at the process of K-12 aged persons learning "thinking".

Image source: http://www.moddb.com/groups/critical-thinkers-of-moddb/images/critical-thinking-skills-explained

In a full-blown e-learning situation, we would be looking to gamify as much of this process as possible. For our purpose, we will focus on the areas of negotiation and cooperation. According to the Negotiate and Cooperate phase of the Critical Thinking Process, learners consider different perspectives and engage in discussions with others. This gives us a clear picture of what some of our objectives might be. Among others, they might be:

Increasing engagement in discussion with others
Increasing the level of consideration of different perspectives

Note that these objectives are measurable: we will be able to test whether the increases and improvements we are looking for are actually happening over time. With a set of measurable objectives, we can turn our attention to the next step in our Gamification Design Framework: target behaviors.

Target behaviors

Now that we are clear about what we are trying to accomplish with our system, we will focus on the actions we are hoping to incentivize: our target behaviors. One of the big questions around gamification efforts is whether it can really cause behavioral change. Will employees, customers, and stakeholders simply go back to doing things the way they are used to once the game is over? Will they figure out a way to "cheat" the system? The only way to meet long-term organizational objectives in a systematic way is for the application to cause not just change for the moment, but lasting change over time. Many gamification applications fail at long-term behavior change, and here's why. Psychologists have studied the behavior change life cycle at length, and that research reveals that people go through five distinct phases when changing a behavior, with each phase presenting a different set of challenges.
The five phases of the behavioral life cycle are as follows:

Awareness: Before a person will take any action to change a behavior, they must first be aware of their current behavior and how it might need to change.
Buy-in: After a person becomes aware that they need to change, they must agree that they actually need to change and make the necessary commitment to do so.
Learn: What does a person actually need to do to change? It cannot be assumed that they know how to change; they must learn the new behavior.
Adopt: Now that they have learned the necessary skills, they have to actually implement them and take the new action.
Maintain: Finally, after adopting a new behavior, it can only become a lasting change with constant practice.

Image source: http://www.accenture.com/us-en/blogs/technology-labs-blog/archive/2012/03/28/gamification-and-the-behavior-change-lifecycle.aspx

How can we use this understanding to establish our target behaviors? Keep in mind that our objectives are to increase interaction through discussion and to increase consideration for other perspectives. Given our understanding of behavior change around these objectives, we need our users to:

Become aware of their discussion frequency with other users
Become aware that other perspectives exist
Commit to more discussions with other users
Commit to considering other users' perspectives
Learn how to have more discussions with other users
Learn about other users' perspectives
Have more discussions with other users
Actually consider other users' perspectives
Continue to have more discussions with other users on a consistent basis
Continue to consider other users' perspectives over time

This outlines the list of activities our system needs to encourage in order to meet our objectives. Some of our target behaviors are clear; in other cases, it will require some creativity on our part to get users to take these actions. So what are some possible actions we can have our users take to move them along the behavior change life cycle?

Check their discussion thread count
Review the Differing Points of View section
Set a target discussion amount for a particular time period
Set a target number of Differing Points of View to review
Watch a video (or some instructional material) on how to use the discussion area
Watch a video (or some instructional material) on the value of viewing other perspectives
Participate in the discussion groups
Read through other users' discussion posts
Participate in the discussion groups over time
Read through other users' perspectives over time

Some of these target behaviors are relatively straightforward to implement; others will require more thought. More importantly, we have now identified the target behaviors we want our users to take, and this will guide the rest of our development efforts.

Players

Although the last few sections have been about the serious side of things, such as objectives and target behaviors, we still have gamification as the focal point. Hence, from this point on we will refer to our users as players. We must keep in mind that although we have defined the actions we want our players to take, the strategies to motivate them to take those actions vary from player to player. Gamification is definitely not a one-size-fits-all process. We will have to look at each of our target behaviors from the perspective of our players; unless we take their motivations into consideration, our mechanics will be pretty much trial and error.
We need an approach that's a little more structured. According to Bartle's player motivations theory, players of any game system fall into one of the following four categories:

Killers: These are people motivated to participate in a gaming scenario with the primary purpose of winning the game by "acting on" other players. This might include killing them, beating them, and directly competing with them in the game.
Achievers: These, on the other hand, are motivated by taking clear actions against the system itself to win. They are less motivated by beating an opponent than by achieving things to win.
Socializers: These have very different motivations for participating in a game. They are motivated more by interacting and engaging with other players.
Explorers: Like socializers, explorers enjoy interaction and engagement, but less with other players than with the system itself.

The following resource outlines each player motivation type and which game mechanics might best keep them engaged:

Image source: http://frankcaron.com/Flogger/?p=1732

As we define our activity loops, we need to make sure that we include each of the four types of players and their motivations.

Activity loops

Gamified systems, like other systems, are simply a series of actions: the player acts on the system, and the system responds. We refer to how the user interacts with the system as activity loops, and we will talk about two types of activity loops, engagement loops and progression loops, to describe our player interactions.

Engagement loops describe how a player engages the system. They outline what a player does and how the system responds. Activity will differ between players depending on their motivations, so we must also take into consideration why the player is taking the action he or she is taking.

A progression loop describes how the player engages the system as a whole. Whereas engagement loops discuss what the player does at a detailed level, progression loops outline the movement of the player through the system. For example, when a person drives a car, he or she is interacting with the car almost constantly; this interaction is a set of engagement loops. All the while, the car is going somewhere, and where the car is going describes its progression loops.

Activity loops tend to follow the motivation, action, feedback pattern: the players are sufficiently motivated to take an action, and when they take the action, they get feedback from the system; the feedback hopefully motivates them enough to take another action, which produces more feedback. In a perfect world, this cycle would continue indefinitely and the players would never stop playing our gamified system. Our goal is to get as close to this continuous activity loop as we possibly can.

Progression loops

We have spent the last few pages looking at the detailed interactions a player will have with the system in our engagement loops. Now it's time to turn our attention to the other type of activity loop, the progression loop. Progression loops look at the system at a macro level. They describe the player's journey through the system. We usually think about levels, badges, and/or modes when we are thinking about progression loops. We answer questions such as: where have you been, where are you now, and where are you going? This can all be summed up as codifying the player's mastery level.
In our application, we will look at the journey from the vantage points of a novice, an expert, and a master. Upon joining the game, players begin at the novice level. At the novice level we will focus on:

Welcome
On-boarding and getting the user acclimated to using the system
Achievable goals

In the welcome stage, we simply introduce the user to the game and encourage them to try it out. During on-boarding, we need to make the process as easy as possible and give positive feedback as soon as possible. Once the user is on board, we outline the easiest way to get involved and begin the journey.

At the expert level, the player is engaging regularly in the game; however, other players would not yet consider this player a leader in the game. Our goal at this level is to present more difficult challenges. When the player reaches a challenge that appears too difficult, we can include surprise alternatives along the way to keep them motivated until they can break through the expert barrier to the master level.

Masters are recognized by the game and by other players. They should be prominently displayed within the game and will tend to want to help others at the novice and expert levels. These options should become available at the later stages of the game.

Fun

After we have done the work of identifying our objectives, defining target behaviors, scoping our players, and laying out the activities of our system, we can finally think about the area of the system where many novice game designers start: the fun. Other gamification practitioners will avoid, or at least disguise, the fun aspect of the gamification design process. It is important that we neither over- nor under-emphasize the fun in the process. For example, chefs prepare an entire meal with spices, but they don't throw all the spices in together; they use the spices in balanced amounts to bring flavor to their dishes. Think of fun as an array of spices that we can apply to our activity loops.

Marc LeBlanc has categorized fun into eight distinct categories. We will attempt to sprinkle just enough of each, where appropriate, to accomplish the desired amount of fun. Keep in mind that what one player experiences as fun will not be the same for another; one size definitely does not fit all in this case.

Sensation: A pleasurable experience
Narrative: An unfolding story
Challenge: An obstacle course
Fantasy: Make believe
Fellowship: A social framework
Discovery: Exploring uncharted territory
Expression: The player is given a platform
Submission: Mindless activity

So how can we sparingly introduce the above dimensions of fun into our system? The following pairings map each action to take to a dimension of fun:

Check their discussion thread count: Challenge
Review the Differing Points of View section: Discovery
Set a target discussion amount for a particular time period: Challenge
Set a target number of Differing Points of View to review: Challenge
Watch a video (or some instructional material) on how to use the discussion area: Challenge
Watch a video (or some instructional material) on the value of viewing other perspectives: Challenge
Participate in the discussion groups: Fellowship, Expression
Read through other users' discussion posts: Discovery
Participate in the discussion groups over time: Fellowship, Expression
Read through other users' perspectives over time: Discovery

Tools

We are finally at the stage where we can begin implementation. At this point, we can look at the various game elements (tools) with which to implement our gamified system.
If we have followed the framework up to this point, the mechanics and elements should become apparent. We are not simply adding leaderboards or a point system for the sake of it; we can tie every tool we use back to our previous work. This will result in a Gamification Design Matrix for our application. But before we go there, let's stop and take a look at some tools we have at our disposal. There is a myriad of tools, mechanics, and strategies available, and new ones are being designed every day. Here are a few of the most common mechanics we will encounter when designing our gamified system:

Achievements: These are specific objectives that a player meets.
Avatars: These are visual representations of a player's role, persona, or character in the game.
Badges: These are visual elements used to recognize a particular accomplishment. They give players a sense of pride that they can show off to others.
Boss fight: This is an exceptionally difficult challenge in a game scenario, usually at the end of a level, used to demonstrate enough skill to move up to the next level.
Leaderboards: These show the rankings of players publicly. They recognize an accomplishment like a badge does, but they are visible for all to see. We see this almost every day, in every way, from sports team rankings to sales reps' monthly results.
Points: These are rather straightforward. Players accumulate points as they take various actions in the system.
Quests/Missions: These are specialized challenges in a game scenario characterized by a narrative and an objective.
Rewards: A reward is anything used to extrinsically motivate the user to take a particular action.
Teams: A team is a group of players playing as a single unit.
Virtual assets: These are elements in the game that have some value and can be acquired or used to acquire other assets, whether tangible or virtual.

Now it's time to take off our gamification design hat and put on our developer hat. Let's start by developing some initial mockups of what our final site might look like, using the design we have outlined previously. Many people develop mockups using graphics tools such as Photoshop or GIMP. At this stage, we will be less detailed in our mockups and simply use pencil sketches or a mockup tool such as Balsamiq. The key screens to mock up are:

Login screen: The basic login screen of our application. Players are accustomed to the basic login and password scenario we provide here.
Account creation screen: First-time players will have to create an account; this is our signup page.
Main player screen: This captures the main elements of our system when a player is fully engaged with it.
Main player post-response screen.

We have outlined the key functionality of our gamified system via mock-ups. Mock-ups are a means of visually communicating to our team what we are building and why we are building it. Visual mock-ups also give us an opportunity to uncover issues in our design early in the process.

Summary

Most gamified applications will fail due to a poorly designed system. Hence, we have introduced a Gamification Design Framework to guide our development process.
We know that our chances of developing a successful system increase tremendously if we:

Define clear business objectives
Establish target behaviors
Understand our players
Work through the activity loops
Remember the fun
Optimize the tools


Planning Your Store

Packt
07 Oct 2013
11 min read
Defining the catalogue

The type of products you are selling will determine the structure of your store. Different types of products have different requirements in terms of the information presented to the customer and the data you will need to collect in order to fulfill an order.

Base product definition

Every product needs to have the following fields, which are added by default:

Title
Stock Keeping Unit (SKU)
Price (in the default store currency)
Status (a flag indicating whether the product is live on the store)

This is the minimum you need to define a product in Drupal Commerce; everything else is customized for your store. You can define multiple Product Types (product entity bundles), which can contain different fields depending on your requirements.

Physical products

If you are dealing with physical products, such as books, CDs, or widgets, you may want to consider these additional fields:

Product images
Description
Size
Weight
Artist/Designer/Author
Color

You may want to set up multiple Product Types for your store. For example, if you are selling CDs, you may want a field for Artist, which would not be relevant for a T-shirt (where Designer may be a more appropriate field). Whenever you imagine having distinct pieces of data available, adding them as individual fields is well worth doing at the planning stage so that you can use them for detailed searching and filtering later.

Digital downloads

If you are selling a digital product such as music or e-books, you will need an additional field to contain the actual downloadable file. You may also want to consider including:

Cover image
Description
Author/Artist
Publication date
Permitted number of downloads

Tickets

Selling tickets is a slightly more complex scenario, since there is usually a related event associated with the product. You may want to consider including:

Related event (which would include date, venue, and so on)
Ticket type/level/seat type

Content access and subscriptions

Selling content access and subscriptions through Drupal Commerce usually means associating the product with a Drupal role: the customer buys membership of the role, which in turn allows them to see content that would otherwise be restricted. You may want to consider including:

Associated role(s)
Duration of membership
Initial cost (for example, first month free)
Renewal cost (for example, £10/month)

Customizing products

The next consideration is whether products can be customized at the point of purchase. Some common examples are:

Specifying size
Specifying color
Adding a personal message (for example, embossing)
Selecting a specific seat (in the event example)
Selecting a subscription duration
Specifying the language version of an e-book
Gift wrapping or gift messaging

It is important to understand what additional user input you will need from the customer to fulfill the order, over and above the SKU and quantity. When looking at these options, also consider whether the price changes depending on the options the customer selects. For example:

Larger sizes cost more than smaller sizes
A premium for the "red" color choice
Extra cost for adding an embossed message
Different pricing for different seating levels
A monthly subscription is cheaper if you commit to a longer duration

Classifying products

Now that you have defined your Product Types, the next step is to consider the classification of products using Drupal's built-in taxonomy system.
A basic store will usually have a catalog taxonomy vocabulary that lets you allocate a product to one or more catalog sections, such as books, CDs, or clothing. The taxonomy can be hierarchical; however, individual vocabularies for the classification of your products are often more workable, especially when providing the customer with a faceted search or filtering facility later. Common taxonomy vocabularies include:

Author/Artist/Designer
Color
Size
Genre
Manufacturer/Brand

It is considered best practice to define a taxonomy vocabulary rather than use a simple free text field, because this provides consistency during data entry. For example, a free text field for size may end up being populated with S, Small, and Sm, all meaning the same thing; a dropdown taxonomy selector ensures that the value entered is the same for every product. Do not be tempted to use List type fields to provide dropdown menus of choices. List fields are necessarily the preserve of the developer, and using them excludes the less technical site owner or administrator from managing them.

Pricing

Drupal Commerce has a powerful pricing engine that calculates the actual selling price for the customer based on one or more predefined rules. This gives enormous flexibility in planning your pricing strategy.

Currency

Drupal Commerce allows you to specify a default currency for the store, but it also allows you to enter multiple price fields or to calculate a different price based on other criteria, such as the preferred currency of the customer. If you are going to offer multiple currencies, you need to consider how the currency exchange will work: do you want to enter a set price for each product in every currency you offer, or a base price in the default currency from which the other currencies are calculated using a conversion rate? If you use a conversion rate, how often is it updated?

Variable pricing

Prices do not have to be fixed. Consider scenarios where the prices in your store will vary over time, or will depend on other factors such as volume-based discounts. Will some preferred customers get a special price deal on one or more products?

Customers

You cannot complete an order without a customer, and it is important to consider all of their needs during the planning process. By default, a customer profile in Drupal Commerce contains an address field that follows the Name and Address Standard (xNAL) format, collecting international addresses in a standard way. However, you may want to extend this profile type to collect more information about the customer, for example:

Telephone number
Delivery instructions
E-mail opt-in permission

Do any of the following apply?

Is the store open to the public, or open by invitation only?
Do customers have to register before they can purchase?
Do customers have to enter an e-mail address in order to purchase?
Is there a geographical limit to where products can be sold or shipped?
Can a customer access their account online?
Can a customer cancel an order once it is placed? What are the time limits on this?
Can a customer track the progress of their order?

Taxes

Many stores are subject to sales tax or Value Added Tax (VAT) on products sold. However, these taxes often vary depending on the type of product sold and the final destination of the physical goods. During your planning you should consider the following:

What are the sales tax/VAT rules for the store?
Are there different tax rules depending on the shipping destination?
Are there different tax rules depending on the type of product?

If different types of products in your store will incur different rates of tax, then it is a very good idea to set up different Product Types so that it is easy to distinguish between them. For example, in the UK, physical books are zero-rated for VAT, whereas the same book in digital format has 20% VAT added.

Payments

Drupal Commerce can connect to many different payment gateways in order to create a transaction for an order. While many of the popular payment gateways, such as PayPal and Sage Pay, have fully functional payment gateway modules on Drupal.org, it's worth checking that the one you want is available, because creating a new one is no small undertaking. The following should also be considered:

Is there a minimum spend limit?
Will there be multiple payment options?
Are there surcharges for certain payment types?
Will there be account customers who do not have to enter a payment card?
How will a customer be refunded if they cancel or return their order?

Shipping

Not every product requires shipping support, but for physical products, shipping can be a complex area. Even a simple product store can have complex shipping costs based on factors such as weight, destination, total spend, and special offers. Ensure the following points are considered during your planning:

Is shipping required?
How is the cost calculated? By value, weight, or destination?
Are there geographical restrictions?
Is express delivery an option?
Can the customer track their order?

Stock

With physical products, and some virtual products such as event tickets, stock control may be a requirement. Stock control is a complex area and beyond the scope of this book, but the following questions will help uncover the requirements:

Are stock levels managed in another system, for example, MRP?
If the business has other sales channels, is there dedicated stock for the online store?
When should stock levels be updated: at the point of adding to the cart, or at the point of completing the order?
How long should stock be reserved?
What happens when a product is out of stock?
Can a customer order an out-of-stock product (back order)?
What happens if a product goes out of stock during the customer checkout process?
If stock is controlled by an external system, how often should stock levels be updated in the e-store?

Legal compliance

It is important to understand the legal requirements of the country where you operate your store. It is beyond the scope of this book to detail the legal requirements of every country, but some examples of e-commerce regulation that you should research and understand are:

PCI-DSS compliance (worldwide)
The Privacy and Electronic Communications (EC Directive) Regulations, also known as the EU cookie law (European Union)
Distance Selling Regulations (UK)

Customer communication

Once the customer has placed their order, how much communication will there be? Customers will expect to receive a notification that their order has been placed, but how much information should that e-mail contain? Should the e-mail be plain text or graphical? Does the customer receive an additional e-mail when the order is shipped? If the product has a long lead time, should the customer receive interim updates? What communication should take place if a customer cancels their order?

Back office

In order for the store to run efficiently, it is important to consider the requirements of the back office system.
The back office will often be managed by a different group of people from those specifying the e-store. Identify the different types of users involved in the order fulfillment process. These roles may include:

Sales order processing
Warehouse and order handling
Customer service for order enquiries
Product managers

These roles may each have different information available to them when trying to locate the order or product they need, so it's important for the interface to cater to different scenarios:

Does the website need to integrate with a third-party system for the management of orders?
How are order status codes updated on the website so that customers can track progress: in a batch, manually, or automatically?

User experience

How will the customer find the product they are looking for? Well-structured navigation? Search by SKU? Free text search? Faceted search?

The source of product data

When you are creating a store with more than a trivial number of products, you will probably want to work out a method of mass-importing the product data. Find out where the product data will be coming from, and in what format it will be delivered. You may want to define your Product Types taking into account the format of the data coming in, especially if the incoming data format is fixed. You may also want to define different methods of importing taxonomy terms from the supplied data.

Summary

Once you have gone through all of these checklists with the business stakeholders, you should have enough information to start your Drupal Commerce build. Drupal Commerce is very flexible, but it is crucial that you understand the outcome you are trying to achieve before you start installing modules and setting up Product Types.


Different strategies to make responsive websites

Packt
04 Oct 2013
9 min read
The Goldilocks approach

In 2011, in response to the dilemma of building several iterations of the same website by targeting every single device, the web design agency Design by Front came out with an official set of guidelines that many designers were already adhering to. In essence, the Goldilocks approach states that rather than rearranging our layouts for every single device, we shouldn't be afraid of margins on the left and right of our designs. There's a blurb about sizing around the width of our body text (which they state should be around 66 characters per line, or 33 ems wide), but the important part is that they completely destroyed the train of thought that every single device needed to be explicitly targeted, effectively saving designers countless hours of time. This approach became so prevalent that most CSS frameworks, including Twitter Bootstrap 2, adopted it without realizing that it had a name.

So how does this work exactly? You can see a demo at http://goldilocksapproach.com/demo; but for all you bathroom readers out there, you basically wrap your entire site in an element (or just target the body selector if it doesn't break anything else), set the width of that element to something smaller than the width of the screen, and apply margin: auto. In the demo, the wrapping element is the body tag: on larger desktop monitors there are huge margins on each side of it. As you contract the viewport to a generic tablet-portrait size, the width of the body decreases dramatically, again leaving margins on each side, and the demo does a little rearranging by dropping the sidebar below the headline. As you contract the viewport further, to phone size, the body occupies the full width of the page, with just some small margins on each side to keep text from butting up against the viewport edges.

Okay, so what are the advantages and disadvantages? One advantage is that it's incredibly easy to do: you literally create a wrapping element, and every time the width of the viewport touches the edges of that element, you make the element smaller and tweak a few things. The huge advantage is that you aren't targeting every single device, so you only have to write a small amount of code to make your site responsive. The downside is that you waste a lot of screen real estate with all those margins.

For the sake of practice, create a new folder called Goldilocks. Inside that folder, create a goldilocks.html and a goldilocks.css file. Put the following code in your goldilocks.html file:

<!DOCTYPE html>
<html>
<head>
  <title>The Goldilocks Approach</title>
  <link rel="stylesheet" href="goldilocks.css">
</head>
<body>
  <div id="wrap">
    <header>
      <h1>The Goldilocks Approach</h1>
    </header>
    <section>
      <aside>Sidebar</aside>
      <article>
        <header>
          <h2>Hello World</h2>
          <p> Lorem ipsum... </p>
        </header>
      </article>
    </section>
  </div>
</body>
</html>

We're creating an incredibly simple page with a header, sidebar, and content area to demonstrate how the Goldilocks approach works.
In your goldilocks.css file, put the following code:

* {
  margin: 0;
  padding: 0;
  background: rgba(0,0,0,.05);
  font: 13px/21px Arial, sans-serif;
}

h1, h2 { line-height: 1.2; }
h1 { font-size: 30px; }
h2 { font-size: 20px; }

#wrap {
  width: 900px;
  margin: auto;
}

section { overflow: hidden; }

aside {
  float: left;
  margin-right: 20px;
  width: 280px;
}

article {
  float: left;
  width: 600px;
}

@media (max-width: 900px) {
  #wrap { width: 500px; }
  aside { width: 180px; }
  article { width: 300px; }
}

@media (max-width: 500px) {
  #wrap { width: 96%; margin: 0 2%; }
  aside, article { width: 100%; margin-top: 10px; }
}

Did you notice how the width of the #wrap element becomes the max-width of the next media query? After you save and refresh your page, you'll be able to expand and contract to your heart's content and enjoy your responsive website built with the Goldilocks approach. Look at you! You just made a site that will serve any device with only a few media queries, and the fewer media queries you can get away with, the better. [Screenshots: the Goldilocks page at desktop, tablet, and mobile widths.]

The Goldilocks approach is great for websites that are graphics-heavy, as you can convert just three mockups to layouts and have completely custom, graphics-rich websites that work on almost any device. It's a good fit if you're the type who enjoys spending a lot of time in Photoshop and doesn't mind the extra work of recreating a lot of code for a more textured website with great attention to detail.

The Fluid approach

Loss of real estate, and a substantial amount of extra work for slightly prettier (and heavier) websites, is a trade-off most of us don't want to make. We still want beautiful sites, and luckily, with pure CSS we can replicate a huge number of elements in flexible code (a common, real-world example of replacing images with CSS is using CSS to create buttons). Where Goldilocks looks at your viewport as a container for smaller, usually pixel-based containers, the Fluid approach looks at your viewport as one 100 percent large container. If the elements inside the viewport add up to around 100 percent, you've effectively used the real estate you were given.

Duplicate your goldilocks.html file, rename it to fluid.html, and replace the mentions of "Goldilocks" with "Fluid":

<!DOCTYPE html>
<html>
<head>
  <title>The Fluid Approach</title>
  <link rel="stylesheet" href="fluid.css">
</head>
<body>
  <div id="wrap">
    <header>
      <h1>The Fluid Approach</h1>
    </header>
    <section>
      <aside>Sidebar</aside>
      <article>
        <header>
          <h2>Hello World</h2>
        </header>
        <p> Lorem ipsum... </p>
      </article>
    </section>
  </div>
</body>
</html>

We're just duplicating our very simple header, sidebar, and article layout. Create a fluid.css file and put the following code in it:

* {
  margin: 0;
  padding: 0;
  background: rgba(0,0,0,.05);
  font: 13px/21px Arial, sans-serif;
}

aside {
  float: left;
  width: 24%;
  margin-right: 1%;
}

article {
  float: left;
  width: 74%;
  margin-left: 1%;
}

Wow! That's a lot less code already. Save and refresh your browser, then expand and contract your viewport. Did you notice how we're using all the available space? Did you notice how we didn't even have to use media queries and it's already responsive? Percentages are pretty cool.
Your first fluid, responsive web design

We have a few problems though:

- On large monitors, when that layout is full of text, every paragraph will fit on one line. That's horrible for readability.
- Text and other elements butt up against the edges of the design.
- The sidebar and article, although responsive, don't look great on smaller devices. They're too small.

Luckily, these are all pretty easy fixes. First, let's make sure the layout of our content doesn't stretch to 100 percent of the width of the viewport when we're looking at it at larger resolutions. To do this, we use a CSS property called max-width. Append the following code to your fluid.css file:

#wrap {
    max-width: 980px;
    margin: auto;
}

What do you think max-width does? Save and refresh, expand and contract. You'll notice that the wrapping div is now centered in the screen at 980 px width, but what happens when you go below 980 px? It simply converts to 100 percent width. This isn't the only way you'll use max-width, but we'll learn a bit more in the Gotchas and best practices section.

Our second problem was that the elements were butting up against the edges of the screen. This is an easy enough fix. You can either wrap everything in another element with specified margins on the left and right, or simply add some padding to our #wrap element, shown as follows:

#wrap {
    max-width: 980px;
    margin: auto;
    padding: 0 20px;
}

Now our text and other elements no longer touch the edges of the viewport.

Finally, we need to rearrange the layout for smaller devices, so our sidebar and article aren't so tiny. To do this, we'll have to use a media query and simply unassign the properties we defined in our original CSS:

@media (max-width: 600px) {
    aside, article {
        float: none;
        width: 100%;
        margin: 10px 0;
    }
}

We're removing the float because it's unnecessary, giving these elements a width of 100 percent, and removing the left and right margins while adding some margins on the top and bottom so that we can differentiate the elements. This act of moving elements on top of each other is known as stacking.

Simple enough, right? We were able to make a really nice, real-world, responsive, fluid layout in just 28 lines of CSS. On smaller devices, we stack content areas to help with readability/usability.

It's up to you how you want to design your websites. If you're a huge fan of lush graphics and don't mind doing extra work or wasting real estate, then use Goldilocks. I used Goldilocks for years until I noticed a beautiful site with only one breakpoint (width-based media query), then I switched to Fluid and haven't looked back. It's entirely up to you. I'd suggest you make a few websites using Goldilocks, get a bit annoyed at the extra effort, then try out Fluid and see if it fits. In the next section we'll talk about a somewhat new debate about whether we should be designing for larger or smaller devices first.

Summary

In this article, we have taken a look at how to build a responsive website using the Goldilocks approach and the Fluid approach.

Resources for Article:

Further resources on this subject:
- Creating a Web Page for Displaying Data from SQL Server 2008 [Article]
- The architecture of JavaScriptMVC [Article]
- Setting up a single-width column system (Simple) [Article]

Creating our first bot, WebBot

Packt
03 Oct 2013
9 min read
(For more resources related to this topic, see here.)

With the knowledge you have gained, we are now ready to develop our first bot, which will be a simple bot that gathers data (documents) based on a list of URLs and the datasets (fields and field values) that we require. First, let's start by creating our bot package directory. So, create a directory called WebBot so that the files in our project_directory/lib directory look like the following:

'-- project_directory
    |-- lib
    |   |-- HTTP (our existing HTTP package)
    |   |   '-- (HTTP package files here)
    |   '-- WebBot
    |       |-- bootstrap.php
    |       |-- Document.php
    |       '-- WebBot.php
    |-- (our other files)
    '-- 03_webbot.php

As you can see, we have a very clean and simple directory and file structure that any programmer should be able to easily follow and understand.

The WebBot class

Next, open the WebBot.php file and add the code from the project_directory/lib/WebBot/WebBot.php file.

In our WebBot class, we first use the __construct() method to pass in the array of URLs (or documents) we want to fetch and the array of document fields that is used to define the datasets and regular expression patterns. The regular expression patterns are used to populate the dataset values (or document field values). If you are unfamiliar with regular expressions, now would be a good time to study them. Then, in the __construct() method, we verify whether there are URLs to fetch or not. If there are none, we set an error message stating this problem.

Next, we use the __formatUrl() method to properly format the URLs we fetch data from. This method will also set the correct protocol: either HTTP or HTTPS (Hypertext Transfer Protocol Secure). If the protocol is already set for the URL, for example http://www.[dom].com, we ignore setting the protocol. Also, if the class configuration setting conf_force_https is set to true, we force the HTTPS protocol, again unless the protocol is already set for the URL.

We then use the execute() method to fetch data for each URL, set and add the Document objects to the array of documents, and track document statistics. This method also implements fetch delay logic that will delay each fetch by x number of seconds, if set in the class configuration setting conf_delay_between_fetches. We also include logic that only allows distinct URL fetches, meaning that if we have already fetched data for a URL, we won't fetch it again; this eliminates duplicate URL data fetches.

The Document object is used as a container for the URL data, and we can use the Document object to work with the URL data, the data fields, and their corresponding data field values. In the execute() method, you can see that we have performed a HTTPRequest::get() request using the URL and our default timeout value—which is set with the class configuration setting conf_default_timeout. We then pass the HTTPResponse object that is returned by the HTTPRequest::get() method to the Document object. Then, the Document object uses the data from the HTTPResponse object to build the document data.

Finally, we include the getDocuments() method, which simply returns all the Document objects in an array that we can use for our own purposes as we desire.
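Before we build the Document class, it is worth sketching how the finished bot is meant to be driven. The following snippet is only an illustration of the calling pattern just described; the URLs and the field pattern are made-up examples, and it assumes the bootstrap file we will create later in this article:

<?php
// hypothetical driver script, in the spirit of 03_webbot.php
require_once './lib/WebBot/bootstrap.php';

use WebBot\WebBot;

// illustrative inputs: URLs to fetch, and field => regex pattern pairs
$urls   = ['www.example.com', 'www.example.org'];
$fields = ['title' => '<title>(.*)</title>'];

$webbot = new WebBot($urls, $fields);
$webbot->execute();                   // fetch each URL, honoring the fetch delay

$documents = $webbot->getDocuments(); // array of Document objects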
The WebBot Document class

Next, we need to create a class called Document that can be used to store document data and field names with their values. To do this we will carry out the following steps:

1. We first pass the data retrieved by our WebBot class to the Document class.
2. Then, we define our document's fields and values using regular expression patterns.

Next, add the code from the project_directory/lib/WebBot/Document.php file.

Our Document class accepts the HTTPResponse object that is set in the WebBot class's execute() method, the document fields, and the document ID. In the Document __construct() method, we set our class properties: the HTTPResponse object, the fields (and regular expression patterns), the document ID, and the URL that we used to fetch the HTTP response. We then check if the HTTP response was successful (status code 200), and if it wasn't, we set the error with the status code and message. Lastly, we call the __setFields() method.

The __setFields() method parses out and sets the field values from the HTTP response body. For example, if in our fields we have a title field defined as $fields = ['title' => '<title>(.*)</title>'];, the __setFields() method will add the title field and pull all values inside <title></title> tags out of the HTML response body. So, if there were two title tags in the URL data, the __setFields() method would add the field and its values to the document as follows:

['title'] => [
    0 => 'title x',
    1 => 'title y'
]

If we have the WebBot class configuration variable—conf_include_document_field_raw_values—set to true, the method will also add the raw values (it will include the tags or other strings as defined in the field's regular expression patterns) as a separate element, for example:

['title'] => [
    0 => 'title x',
    1 => 'title y',
    'raw' => [
        0 => '<title>title x</title>',
        1 => '<title>title y</title>'
    ]
]

The preceding structure is very useful when we want to extract specific data (or field values) from URL data.

To conclude the Document class, we have two more methods, as follows:

- getFields(): This method simply returns the fields and field values
- getHttpResponse(): This method can be used to get the HTTPResponse object that was originally set by the WebBot execute() method

This will allow us to perform logical requests to internal objects if we wish.
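Putting the two classes together, reading the extracted values back out might look like the following sketch. It only uses the getDocuments() and getFields() methods described above, and the field name is the illustrative title field from earlier:

<?php
// hypothetical: iterate over fetched documents and print their titles
foreach ($webbot->getDocuments() as $document) {
    $fields = $document->getFields();

    if (isset($fields['title'])) {
        foreach ($fields['title'] as $key => $value) {
            if ($key !== 'raw') {      // skip raw values if they were enabled
                echo $value, PHP_EOL;
            }
        }
    }
}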
The WebBot bootstrap file

Now we will create a bootstrap.php file (at project_directory/lib/WebBot/) to load the HTTP package and our WebBot package classes, and set our WebBot class configuration settings:

<?php

namespace WebBot;

/**
 * Bootstrap file
 *
 * @package WebBot
 */

// load our HTTP package
require_once './lib/HTTP/bootstrap.php';

// load our WebBot package classes
require_once './lib/WebBot/Document.php';
require_once './lib/WebBot/WebBot.php';

// set unlimited execution time
set_time_limit(0);

// set default timeout to 30 seconds
\WebBot\WebBot::$conf_default_timeout = 30;

// set delay between fetches to 1 second
\WebBot\WebBot::$conf_delay_between_fetches = 1;

// do not use HTTPS protocol (we'll use HTTP protocol)
\WebBot\WebBot::$conf_force_https = false;

// do not include document field raw values
\WebBot\WebBot::$conf_include_document_field_raw_values = false;

We use our HTTP package to handle HTTP requests and responses. You have seen in our WebBot class how we use HTTP requests to fetch the data, and then use the HTTPResponse object to store the fetched data, in the previous two sections. That is why we need to include the bootstrap file to load the HTTP package properly. Then, we load our WebBot package files. Because our WebBot class uses the Document class, we load that class file first.

Next, we use the built-in PHP function set_time_limit() to tell the PHP interpreter that we want to allow unlimited execution time for our script. You don't necessarily have to use unlimited execution time. However, for testing reasons, we will use unlimited execution time for this example.

Finally, we set the WebBot class configuration settings. These settings are used internally by the WebBot object to make our bot work as we desire. We should always make the configuration settings as simple as possible to help other developers understand them. This means we should also include detailed comments in our code to ensure easy usage of the package configuration settings.

We have set up four configuration settings in our WebBot class. These are static and public variables, meaning that we can set them from anywhere after we have included the WebBot class, and once we set them, they will remain the same for all WebBot objects unless we change the configuration variables. If you do not understand the PHP keyword static, now would be a good time to research this subject.

The first configuration variable is conf_default_timeout. This variable is used to globally set the default timeout (in seconds) for all WebBot objects we create. The timeout value tells the HTTPRequest class how long it should continue trying to send a request before stopping and deeming it a bad request, or a timed-out request. By default, this configuration setting is set to 30 (seconds).

The second configuration variable—conf_delay_between_fetches—is used to set a time delay (in seconds) between fetches (or HTTP requests). This can be very useful when gathering a lot of data from a website or web service. For example, say you had to fetch one million documents from a website. You wouldn't want to unleash your bot on that type of mission without fetch delays, because the massive number of requests could inevitably cause problems for that website. By default, this value is set to 0, or no delay.

The third WebBot class configuration variable—conf_force_https—when set to true, can be used to force the HTTPS protocol. As mentioned earlier, this will not override any protocol that is already set in the URL. If the conf_force_https variable is set to false, the HTTP protocol will be used. By default, this value is set to false.

The fourth and final configuration variable—conf_include_document_field_raw_values—when set to true, will force the Document object to include the raw values gathered from the fields' regular expression patterns. We discussed this setting in detail in the WebBot Document class section earlier in this article. By default, this value is set to false.

Summary

In this article you have learned how to get started with building your first bot using HTTP requests and responses.

Resources for Article:

Further resources on this subject:
- Installing and Configuring Jobs! and Managing Sections, Categories, and Articles using Joomla! [Article]
- Search Engine Optimization in Joomla! [Article]
- Adding a Random Background Image to your Joomla! Template [Article]

Using Events, Interceptors, and Logging Services

Packt
03 Oct 2013
19 min read
(For more resources related to this topic, see here.)

Understanding interceptors

Interceptors are defined as part of the EJB 3.1 specification (JSR 318), and are used to intercept Java method invocations and lifecycle events that may occur in Enterprise Java Beans (EJB) or named beans from Contexts and Dependency Injection (CDI). The three main components of interceptors are as follows:

- The target class: This class will be monitored or watched by the interceptor. The target class can hold the interceptor methods for itself.
- The interceptor class: This class groups the interceptor methods.
- The interceptor method: This method will be invoked according to the lifecycle events.

As an example, a logging interceptor will be developed and integrated into the Store application. Following the hands-on approach of this article, we will see how to apply the main concepts through the given examples without going into a lot of detail. Check the Web Resources section to find more documentation about interceptors.

Creating a log interceptor

A log interceptor is a common requirement in most Java EE projects, as it's a simple yet very powerful solution because of its decoupled implementation and easy distribution among other projects if necessary. Here's a diagram that illustrates this solution:

Log and LogInterceptor are the core of the log interceptor functionality; the former can be thought of as the interface of the interceptor, it being the annotation that will decorate the elements of SearchManager that must be logged, and the latter carries the actual implementation of our interceptor. The business rule is to simply call a method of the LogService class, which will be responsible for creating the log entry.

Here's how to implement the log interceptor mechanism:

1. Create a new Java package named com.packt.store.log in the project Store.

2. Create a new enumeration named LogLevel inside this package. This enumeration will be responsible for matching the level assigned to the annotation with the levels of the logging framework:

package com.packt.store.log;

public enum LogLevel {
    // As defined at java.util.logging.Level
    SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST;

    public String toString() {
        return super.toString();
    }
}

We're going to create all objects of this section—LogLevel, Log, LogService, and LogInterceptor—in the same package, com.packt.store.log. This decision makes it easier to extract the logging functionality from the project and build an independent library in the future, if required.

3. Create a new annotation named Log. This annotation will be used to mark every method that must be logged, and it accepts the log level as a parameter, according to the LogLevel enumeration created in the previous step:

package com.packt.store.log;

@Inherited
@InterceptorBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
public @interface Log {
    @Nonbinding
    LogLevel value() default LogLevel.FINEST;
}

As this annotation will be attached to an interceptor, we have to add the @InterceptorBinding decoration here. When creating the interceptor, we will add a reference that points back to the Log annotation, creating the necessary relationship between them.

Also, we can attach an annotation to virtually any Java element.
This is dictated by the @Target decoration, where we can set any combination of the ElementType values such as ANNOTATION_TYPE, CONSTRUCTOR, FIELD, LOCAL_VARIABLE, METHOD, PACKAGE, PARAMETER, and TYPE (mapping classes, interfaces, and enums), each representing a specific element. The annotation being created can be attached to methods and classes or interface definitions.

4. Now we must create a new stateless session bean named LogService that is going to execute the actual logging:

@Stateless
public class LogService {
    // Receives the class name decorated with @Log
    public void log(final String clazz, final LogLevel level, final String message) {
        // Logger from package java.util.logging
        Logger log = Logger.getLogger(clazz);
        log.log(Level.parse(level.toString()), message);
    }
}

5. Create a new class, LogInterceptor, to trap calls from classes or methods decorated with @Log and invoke the LogService class we just created—the main method must be marked with @AroundInvoke—and it is mandatory that it receives an InvocationContext instance and returns an Object element:

@Log
@Interceptor
public class LogInterceptor implements Serializable {
    private static final long serialVersionUID = 1L;

    @Inject
    LogService logger;

    @AroundInvoke
    public Object logMethod(InvocationContext ic) throws Exception {
        final Method method = ic.getMethod();
        // check if annotation is on class or method
        LogLevel logLevel = method.getAnnotation(Log.class) != null
                ? method.getAnnotation(Log.class).value()
                : method.getDeclaringClass().getAnnotation(Log.class).value();
        // invoke LogService
        logger.log(ic.getClass().getCanonicalName(), logLevel, method.toString());
        return ic.proceed();
    }
}

As we defined earlier, the Log annotation can be attached to methods and classes or interfaces by its @Target decoration; we need to discover which one raised the interceptor to retrieve the correct LogLevel value. When trying to get the annotation from the class, as shown in the method.getDeclaringClass().getAnnotation(Log.class) line, the engine will traverse the class's hierarchy searching for the annotation, up to the Object class if necessary. This happens because we marked the Log annotation with @Inherited. Remember that this behavior only applies to class inheritance, not interfaces.

Finally, as we marked the value attribute of the Log annotation as @Nonbinding in step 3, all log levels will be handled by the same LogInterceptor function. If you remove the @Nonbinding line, the interceptor would have to be further qualified to handle a specific log level, for example @Log(LogLevel.INFO), so you would need to code several interceptors, one for each existing log level.

6. Modify the beans.xml file (under /WEB-INF/) to tell the container that our class must be loaded as an interceptor—currently, the file is empty, so add all the following lines (the xmlns declarations were lost in extraction; the standard Java EE 6 namespaces are restored here):

<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                           http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
    <interceptors>
        <class>com.packt.store.log.LogInterceptor</class>
    </interceptors>
</beans>

7. Now decorate a business class or method with @Log in order to test what we've done. For example, apply it to the getTheaters() method in SearchManager from the project Store. Remember that it will be called every time you refresh the query page:

@Log(LogLevel.INFO)
public List<Theater> getTheaters() {
    ...
}

8. Make sure you have no errors in the project and deploy it to the current server by right-clicking on the server name and then clicking on the Publish entry.
Access the theater's page, http://localhost:7001/theater/theaters.jsf, refresh it a couple of times, and check the server output. If you have started your server from Eclipse, it should be under the Console tab: Nov 12, 2012 4:53:13 PM com.packt.store.log.LogService log INFO: public java.util.List com.packt.store.search.SearchManager.getTheaters() Let's take a quick overview of what we've accomplished so far; we created an interceptor and an annotation that will perform all common logging operations for any method or class marked with such an annotation. All log entries generated from the annotation will follow WebLogic's logging services configuration. Interceptors and Aspect Oriented Programming There are some equivalent concepts on these topics, but at the same time, they provide some critical functionalities, and these can make a completely different overall solution. In a sense, interceptors work like an event mechanism, but in reality, it's based on a paradigm called Aspect Oriented Programming (AOP). Although AOP is a huge and complex topic and has several books that cover it in great detail, the examples shown in this article make a quick introduction to an important AOP concept: method interception. Consider AOP as a paradigm that makes it easier to apply crosscutting concerns (such as logging or auditing) as services to one or multiple objects. Of course, it's almost impossible to define the multiple contexts that AOP can help in just one phrase, but for the context of this article and for most real-world scenarios, this is good enough. Using asynchronous methods A basic programming concept called synchronous execution defines the way our code is processed by the computer, that is, line-by-line, one at a time, in a sequential fashion. So, when the main execution flow of a class calls a method, it must wait until its completion so that the next line can be processed. Of course, there are structures capable of processing different portions of a program in parallel, but from an external viewpoint, the execution happens in a sequential way, and that's how we think about it when writing code. When you know that a specific portion of your code is going to take a little while to complete, and there are other things that could be done instead of just sitting and waiting for it, there are a few strategies that you could resort to in order to optimize the code. For example, starting a thread to run things in parallel, or posting a message to a JMS queue and breaking the flow into independent units are two possible solutions. If your code is running on an application server, you should know by now that thread spawning is a bad practice—only the server itself must create threads, so this solution doesn't apply to this specific scenario. Another way to deal with such a requirement when using Java EE 6 is to create one or more asynchronous methods inside a stateless session bean by annotating either the whole class or specific methods with javax.ejb.Asynchronous. If the class is decorated with @Asynchronous, all its methods inherit the behavior. When a method marked as asynchronous is called, the server usually spawns a thread to execute the called method—there are cases where the same thread can be used, for instance, if the calling method happens to end right after emitting the command to run the asynchronous method. Either way, the general idea is that things are explicitly going to be processed in parallel, which is a departure from the synchronous execution paradigm. 
To see how it works, let's change the LogService method to be an asynchronous one; all we need to do is decorate the class or the method with @Asynchronous:

@Stateless
@Asynchronous
public class LogService {
    …

As the call to its log method is the last step executed by the interceptor, and its processing is really quick, there is no real benefit in doing so. To make things more interesting, let's force a longer execution cycle by inserting a sleep call into the method of LogService:

public void log(final String clazz, final LogLevel level, final String message) {
    Logger log = Logger.getLogger(clazz);
    log.log(Level.parse(level.toString()), message);
    try {
        Thread.sleep(5000);
        log.log(Level.parse(level.toString()), "reached end of method");
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

Using Thread.sleep() when running inside an application server is another classic example of a bad practice, so keep away from this when creating real-world solutions.

Save all files, publish the Store project, and load the query page a couple of times. You will notice that the page is rendered without delay, as usual, and that the reached end of method message is displayed after a few seconds in the Console view. This is a pretty subtle scenario, so you can make it starker by commenting out the @Asynchronous line and deploying the project again—this time, when you refresh the browser, you will have to wait for 5 seconds before the page gets rendered.

Our example didn't need a return value from the asynchronous method, making it pretty simple to implement. If you need to get a value back from such methods, you must declare the return value using the java.util.concurrent.Future interface:

@Asynchronous
public Future<String> doSomething() {
    …
}

The returned value must be changed to reflect the following:

return new AsyncResult<String>("ok");

The javax.ejb.AsyncResult class is an implementation of the Future interface that can be used to return asynchronous results.

There are other features and considerations around asynchronous methods, such as ways to cancel a request being executed and to check whether the asynchronous processing has finished so that the resulting value can be accessed. For more details, check the Creating Asynchronous methods in EJB 3.1 reference at the end of this article.
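To give an idea of what the calling side can do with that Future, here is a small hypothetical snippet. It assumes a bean exposing the doSomething() method shown above, and uses only the standard java.util.concurrent.Future API:

// hypothetical caller of an @Asynchronous method returning Future<String>
Future<String> pending = someBean.doSomething();

if (!pending.isDone()) {
    // do other useful work while the container runs doSomething(),
    // or give up on it entirely: pending.cancel(true);
}

String value = pending.get(); // blocks until the asynchronous call completes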
Understanding WebLogic's logging service

Before we advance to the event system introduced in Java EE 6, let's take a look at the logging services provided by Oracle WebLogic Server. By default, WebLogic Server creates two log files for each managed server:

- access.log: This is a standard HTTP access log, where requests to web resources of a specific server instance are registered with details such as the HTTP return code, the resource path, the response time, among others
- <ServerName>.log: This contains the log messages generated by the WebLogic services and deployed applications of that specific server instance

These files are generated in a default directory structure that follows the pattern $DOMAIN_NAME/servers/<SERVER_NAME>/logs/.

If you are running a WebLogic domain that spans more than one machine, you will find another log file named <DomainName>.log on the machine where the administration server is running. This file aggregates messages from all managed servers of that specific domain, creating a single point of observation for the whole domain. As a best practice, only messages with a higher level should be transferred to the domain log, avoiding overhead when accessing this file. Keep in mind that the messages written to the domain log are also found in the log file of the specific managed server that generated them, so there's no need to redirect everything to the domain log.

Anatomy of a log message

Here's a typical entry of a log file:

####<Jul 15, 2013 8:32:54 PM BRT> <Alert> <WebLogicServer> <sandbox-lap> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <> <1373931174624> <BEA-000396> <Server shutdown has been requested by weblogic.>

The description of each field is given in the following list:

- ####: Fixed; every log message starts with this sequence
- <Jul 15, 2013 8:32:54 PM BRT>: Locale-formatted timestamp
- <Alert>: Message severity
- <WebLogicServer>: WebLogic subsystem—other examples are WorkManager, Security, EJB, and Management
- <sandbox-lap>: Physical machine name
- <AdminServer>: WebLogic Server name
- <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'>: Thread ID
- <weblogic>: User ID
- <>: Transaction ID, or empty if not in a transaction context
- <>: Diagnostic context ID, or empty if not applicable; it is used by the Diagnostics Framework to correlate messages of a specific request
- <1373931174624>: Raw time in milliseconds
- <BEA-000396>: Message ID
- <Server shutdown has been requested by weblogic.>: Description of the event

The Diagnostics Framework presents functionalities to monitor, collect, and analyze data from several components of WebLogic Server.

Redirecting standard output to a log file

The logging solution we've just created is currently using the Java SE logging engine—we can see our messages on the console's screen, but they aren't being written to any log file managed by WebLogic Server. It is this way because of the default configuration of Java SE, as we can see from the following snippet, taken from the logging.properties file used to run the server:

# "handlers" specifies a comma separated list of log Handler
# classes. These handlers will be installed during VM startup.
# Note that these classes must be on the system classpath.
# By default we only configure a ConsoleHandler, which will only
# show messages at the INFO and above levels.
handlers= java.util.logging.ConsoleHandler

You can find this file at $JAVA_HOME/jre/lib/logging.properties.

So, as stated here, the default output destination used by Java SE is the console. There are a few ways to change this aspect:

- If you're using this Java SE installation solely to run WebLogic Server instances, you may go ahead and change this file, adding a specific WebLogic handler to the handlers line as follows:

handlers= java.util.logging.ConsoleHandler,weblogic.logging.ServerLoggingHandler

- If tampering with Java SE files is not an option (they may be shared among other software, for instance), you can duplicate the default logging.properties file into another folder, $DOMAIN_HOME being a suitable candidate, add the new handler, and instruct WebLogic to use this file at startup by adding this argument to the command line:

-Djava.util.logging.config.file=$DOMAIN_HOME/logging.properties

- You can use the administration console to set the redirection of the standard output (and error) to the log files. To do so, perform the following steps:

1. In the left-hand side panel, expand Environment and select Servers.
2. In the Servers table, click on the name of the server instance you want to configure.
3. Select Logging and then General.
4. Find the Advanced section, expand it, and tick the Redirect stdout logging enabled checkbox.
5. Click on Save to apply your changes. If necessary, the console will show a message stating that the server must be restarted to acquire the new configuration.

If you get no warnings asking to restart the server, then the configuration is already in use. This means that both the WebLogic subsystems and any application deployed to that server are automatically using the new values, which is a very powerful feature for troubleshooting applications without intrusive actions such as modifying the application itself—just change the log level to start capturing more detailed messages!

Notice that there are a lot of other logging parameters that can be configured, and three of them are worth mentioning here:

- The Rotation group (found in the inner General tab): The rotation feature instructs WebLogic to create new log files based on the rules set in this group of parameters. It can be set to check for a size limit or to create new files from time to time. By doing so, the server creates smaller files that we can easily handle. We can also limit the number of files retained on the machine to reduce disk usage.

If the partition where the log files are being written reaches 100 percent utilization, WebLogic Server will start behaving erratically. Always remember to check the disk usage; if possible, set up a monitoring solution such as Nagios to keep track of this and alert you when a critical level is reached.

- Minimum severity to log (also in the inner General tab): This entry sets the lowest severity that should be logged by all destinations. This means that even if you set the domain level to debug, the messages will actually be written to the domain log only if this parameter is set to the same or a lower level. It works as a gatekeeper to avoid an overload of messages being sent to the loggers.

- HTTP access log enabled (found in the inner HTTP tab): When WebLogic Server is configured in a clustered environment, usually a load-balancing solution is set up to distribute requests between the WebLogic managed servers; the most common options are Oracle HTTP Server (OHS) or Apache Web Server. Both are standard web servers, and as such, they already register the requests sent to WebLogic in their own access logs. If this is the case, disable the WebLogic HTTP access log generation, saving processing power and I/O requests for more important tasks.

Integrating Log4J with WebLogic's logging services

If you already have an application that uses Log4J and want it to write messages to WebLogic's log files, you must add a new weblogic.logging.log4j.ServerLoggingAppender appender to your log4j.properties configuration file. This class works like a bridge between Log4J and WebLogic's logging framework, allowing the messages captured by the appender to be written to the server log files.

As WebLogic doesn't package a Log4J implementation, you must add its JAR to the domain by copying it to $DOMAIN_HOME/tickets/lib, along with another file, wllog4j.jar, which contains the WebLogic appender. This file can be found inside $MW_HOME/wlserver/server/lib. Restart the server, and it's done!

If you're using a *nix system, you can create a symbolic link instead of copying the files—this is great for keeping things consistent when a patch changing these specific files must be applied to the server.
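For reference, the appender registration in log4j.properties might look something like the following. This is a minimal, hypothetical sketch: the root logger level and the appender name are our own choices, and depending on your Log4J version you may also need to configure a layout:

# minimal sketch: route Log4J output to WebLogic's server log
log4j.rootLogger=INFO, wlserver
log4j.appender.wlserver=weblogic.logging.log4j.ServerLoggingAppender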
Remember that having a file inside $MW_HOME/wlserver/server/lib doesn't mean that the file is being loaded by the server when it starts up; it is just a central place to hold the libraries. To be loaded by a server, a library must be added to the classpath parameter of that server, or you can add it to the domain-wide lib folder, which guarantees that it will be available to all nodes of the domain on a specific machine.

Accessing and reading log files

If you have direct access to the server files, you can open and search them using a command-line tool such as tail or less, or even use a graphical viewer such as Notepad. But when you don't have direct access to them, you may use WebLogic's administration console to read their content by following the steps given here:

1. In the left-hand side pane of the administration console, expand Diagnostics and select Log Files.
2. In the Log Files table, select the option button next to the name of the log you want to check and click on View.

The types displayed on this screen, which are mentioned at the start of the section, are Domain Log, Server Log, and HTTP Access. The others are resource-specific or linked to the diagnostics framework. Check the Web resources section at the end of this article for further reference.

The page displays the latest contents of the log file; the default setting shows up to 500 messages in reverse chronological order. The messages at the top of the window are the most recent messages that the server has generated. Keep in mind that the log viewer does not display messages that have been converted into archived log files.

Quick start – using Foundation 4 components for your first website

Packt
03 Oct 2013
8 min read
(For more resources related to this topic, see here.)

Step 1 – using the Grid

The base building block that Foundation 4 provides is the Grid. This component allows us to easily put the rest of the elements on the page. The Grid also avoids the temptation of using tables to put elements in their places. Tables should only be used to show tabular data. Don't use them with any other meaning. Web design using tables is considered a really terrible practice.

Defining a grid, intuitively, consists of defining rows and columns. There are basically two ways to do this, depending on which kind of layout you want to create. They are as follows:

- If you want a simple layout that evenly splits the contents of the page, you should use Block Grid. To use Block Grid, we must have the default CSS package or be sure to have selected Block Grid from a custom package.
- If you want a more complex layout, with different sized elements and not necessarily evenly distributed, you should use the normal Grid. This normal Grid contains up to 12 grid columns to put elements into.

After picking the general layout of your page, you should decide if you want your grid structure to be the same for small devices, such as smartphones or tablets, as it will be on large devices, such as desktop computers or laptops. So, our first task is to define a grid structure for our page as follows:

1. Select how we want to distribute our elements. We choose Block Grid, the simpler one. Consequently, we define a <ul> element with several <li> elements inside it.
2. Select whether we want a different structure for large and small screens. We choose Yes. This is important to determine which Foundation 4 CSS class our elements will belong to.

As a result, we have the following code:

<ul class="large-block-grid-4">
    <li><img src="demo1.jpg"></li>
    <li><img src="demo2.jpg"></li>
    <li><img src="demo3.jpg"></li>
    <li><img src="demo4.jpg"></li>
</ul>

The key concept here is the class large-block-grid-4. There are two important classes related to Block Grid:

- small-block-grid: If the element belongs to this class, the resulting Block Grid will keep its spacing and configuration for any screen size
- large-block-grid: With this, the behavior will change between large and small screen sizes. So, large forces the responsive behavior.

You can also use both classes together. In that case, large overrides the behavior of small on larger screens. The number 4 at the end of the class name is just the number of grid columns.

The complete code of our page so far is as follows:

<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" ><![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />
    <title>My First Foundation Web</title>
    <link rel="stylesheet" href="css/foundation.css" />
    <script src="js/vendor/custom.modernizr.js"></script>
    <style>
        img {
            width: 300px;
            border: 1px solid #ddd;
        }
    </style>
</head>
<body>
    <!-- Grid -->
    <ul class="large-block-grid-4">
        <li><img src="demo1.jpg"></li>
        <li><img src="demo2.jpg"></li>
        <li><img src="demo3.jpg"></li>
        <li><img src="demo4.jpg"></li>
    </ul>
    <!-- end grid -->
    <script>
        document.write('<script src=' + ('__proto__' in {} ? 'js/vendor/zepto' : 'js/vendor/jquery') + '.js><\/script>')
    </script>
    <script src="js/foundation.min.js"></script>
    <script>
        $(document).foundation();
    </script>
</body>
</html>

We have created a simple HTML file with a basic grid that contains 4 images in a list, using the Block Grid facility, with four grid columns per row. The following screenshot shows how our page looks:

Not very fancy, uh? Don't worry, we will add some nice features in the following steps. We have way more options to choose from for the grid arrangements of our pages. Visit http://foundation.zurb.com/docs/components/grid.html for more information.
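As a quick aside, if you also want to control how the list arranges itself on phones, you can combine both classes on the same element, as mentioned earlier. Here is a small variation of our markup as an illustration:

<ul class="small-block-grid-2 large-block-grid-4">
    <li><img src="demo1.jpg"></li>
    <li><img src="demo2.jpg"></li>
    <li><img src="demo3.jpg"></li>
    <li><img src="demo4.jpg"></li>
</ul>

With this markup, the images form two columns on small screens and four columns on large ones.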
Step 2 – the navigation bar

Adding a basic top navigation bar to our website with Foundation 4 is really easy. We just follow these steps:

1. Create an HTML nav element by inserting the following code:

<nav class="top-bar"></nav>

2. Add a title for the navigation bar (optional) by inserting the following code:

<nav class="top-bar">
    <ul class="title-area">
        <li class="name">
            <h1><a href="#">My First Foundation Web</a></h1>
        </li>
        <li class="toggle-topbar">
            <a href="#"><span>Menu</span></a>
        </li>
    </ul>
</nav>

3. Add some navigation elements inside the nav element:

<nav class="top-bar">
    <!-- Title area -->
    <ul class="title-area">
        <li class="name">
            <h1><a href="#">My First Foundation Web</a></h1>
        </li>
        <li class="toggle-topbar">
            <a href="#"><span>Menu</span></a>
        </li>
    </ul>
    <!-- Here starts nav Section -->
    <section class="top-bar-section">
        <!-- Left Nav Section -->
        <ul class="left">
            <li class="divider"></li>
            <li class="has-dropdown"><a href="#">Options</a>
                <!-- First submenu -->
                <ul class="dropdown">
                    <li class="has-dropdown"><a href="#">Option 1a</a>
                        <!-- Second submenu -->
                        <ul class="dropdown">
                            <li><label>2nd Options list</label></li>
                            <li><a href="#">Option 2a</a></li>
                            <li><a href="#">Option 2b</a></li>
                            <li class="has-dropdown">
                                <a href="#">Option 2c</a>
                                <!-- Third submenu -->
                                <ul class="dropdown">
                                    <li><label>3rd Options list</label></li>
                                    <li><a href="#">Option 3a</a></li>
                                    <li><a href="#">Option 3b</a></li>
                                    <li><a href="#">Option 3c</a></li>
                                </ul>
                            </li>
                        </ul>
                    </li>
                    <!-- Visual separation between elements -->
                    <li class="divider"></li>
                    <li><a href="#">Option 2b</a></li>
                    <li><a href="#">Option 2c</a></li>
                </ul>
            </li>
            <li class="divider"></li>
        </ul>
    </section>
</nav>

The interesting parts in the preceding code are as follows:

- <li class="divider">: It creates a visual separation between the elements of a list
- <li class="has-dropdown">: It shows a drop-down element when it is clicked
- <ul class="dropdown">: It indicates that the list is a drop-down menu

Apart from that, there is the left class, used to specify that we want the buttons on the left side of the bar. We would use the right class to put them on the right side of the screen, or both classes if we want several buttons in different places. Our navigation bar looks like the following screenshot:

Of course, this navigation bar shows responsive behavior, thanks to the toggle-topbar class. It forces the buttons to collapse on narrower screens. And it looks like the following screenshot:

Now we know how to add a top navigation bar to our page. We just add the navigation bar's code before the grid, and the result is shown in the following screenshot:

Summary

In this article we learnt how to use two of the many UI elements provided to us by Foundation 4: the grid and the navigation bar. The screenshots provided help in giving a better idea of how the elements look when incorporated in our website.
Resources for Article:

Further resources on this subject:
- HTML5: Generic Containers [Article]
- Building HTML5 Pages from Scratch [Article]
- Video conversion into the required HTML5 Video playback [Article]

Routes and model binding (Intermediate)

Packt
01 Oct 2013
6 min read
(For more resources related to this topic, see here.)

Getting ready

This section builds on the previous section and assumes you have the TodoNancy and TodoNancyTests projects all set up.

How to do it...

The following steps will help you handle the other HTTP verbs and work with dynamic routes:

1. Open the TodoNancy Visual Studio solution.

2. Add a new class to the TodoNancyTests project, call it TodosModuleTests, and fill this test code for a GET and a POST route into it:

public class TodosModuleTests {
    private Browser sut;
    private Todo aTodo;
    private Todo anEditedTodo;

    public TodosModuleTests() {
        TodosModule.store.Clear();
        sut = new Browser(new DefaultNancyBootstrapper());
        aTodo = new Todo {
            title = "task 1", order = 0, completed = false
        };
        anEditedTodo = new Todo() {
            id = 42, title = "edited name", order = 0, completed = false
        };
    }

    [Fact]
    public void Should_return_empty_list_on_get_when_no_todos_have_been_posted() {
        var actual = sut.Get("/todos/");
        Assert.Equal(HttpStatusCode.OK, actual.StatusCode);
        Assert.Empty(actual.Body.DeserializeJson<Todo[]>());
    }

    [Fact]
    public void Should_return_201_create_when_a_todo_is_posted() {
        var actual = sut.Post("/todos/", with => with.JsonBody(aTodo));
        Assert.Equal(HttpStatusCode.Created, actual.StatusCode);
    }

    [Fact]
    public void Should_not_accept_posting_to_with_duplicate_id() {
        var actual = sut.Post("/todos/", with => with.JsonBody(anEditedTodo))
            .Then
            .Post("/todos/", with => with.JsonBody(anEditedTodo));
        Assert.Equal(HttpStatusCode.NotAcceptable, actual.StatusCode);
    }

    [Fact]
    public void Should_be_able_to_get_posted_todo() {
        var actual = sut.Post("/todos/", with => with.JsonBody(aTodo))
            .Then
            .Get("/todos/");
        var actualBody = actual.Body.DeserializeJson<Todo[]>();
        Assert.Equal(1, actualBody.Length);
        AssertAreSame(aTodo, actualBody[0]);
    }

    private void AssertAreSame(Todo expected, Todo actual) {
        Assert.Equal(expected.title, actual.title);
        Assert.Equal(expected.order, actual.order);
        Assert.Equal(expected.completed, actual.completed);
    }
}

The main new thing to notice in these tests is the use of actual.Body.DeserializeJson<Todo[]>(), which takes the Body property of the BrowserResponse type, assumes it contains JSON-formatted text, and then deserializes that string into an array of Todo objects.

3. At the moment, these tests will not compile. To fix this, add this Todo class to the TodoNancy project as follows:

public class Todo {
    public long id { get; set; }
    public string title { get; set; }
    public int order { get; set; }
    public bool completed { get; set; }
}

4. Then, go to the TodoNancy project, add a new C# file, call it TodosModule, and add the following code to the body of the new class:

public static Dictionary<long, Todo> store = new Dictionary<long, Todo>();

5. Run the tests and watch them fail. Then add the following code to TodosModule:

public TodosModule() : base("todos") {
    Get["/"] = _ => Response.AsJson(store.Values);

    Post["/"] = _ => {
        var newTodo = this.Bind<Todo>();
        if (newTodo.id == 0)
            newTodo.id = store.Count + 1;
        if (store.ContainsKey(newTodo.id))
            return HttpStatusCode.NotAcceptable;
        store.Add(newTodo.id, newTodo);
        return Response.AsJson(newTodo)
                       .WithStatusCode(HttpStatusCode.Created);
    };
}

The previous code adds two new handlers to our application: one handler for the GET /todos/ request and another for the POST /todos/ request. The GET handler returns a list of todo items as a JSON array. The POST handler allows for creating new todos. Re-run the tests and watch them succeed.

Now let's take a closer look at the code. Firstly, note how adding a handler for the POST verb is similar to adding handlers for the GET verb. This consistency extends to the other HTTP verbs too. Secondly, note that we pass the "todos" string to the base constructor. This tells Nancy that all routes in this module are relative to /todos. Thirdly, notice the this.Bind<Todo>() call, which is Nancy's data binding in action; it deserializes the body of the POST request into a Todo object.

6. Now go back to the TodosModuleTests class and add these tests for the PUT and DELETE verbs as follows:

[Fact]
public void Should_be_able_to_edit_todo_with_put() {
    var actual = sut.Post("/todos/", with => with.JsonBody(aTodo))
        .Then
        .Put("/todos/1", with => with.JsonBody(anEditedTodo))
        .Then
        .Get("/todos/");
    var actualBody = actual.Body.DeserializeJson<Todo[]>();
    Assert.Equal(1, actualBody.Length);
    AssertAreSame(anEditedTodo, actualBody[0]);
}

[Fact]
public void Should_be_able_to_delete_todo_with_delete() {
    var actual = sut.Post("/todos/", with => with.Body(aTodo.ToJSON()))
        .Then
        .Delete("/todos/1")
        .Then
        .Get("/todos/");
    Assert.Equal(HttpStatusCode.OK, actual.StatusCode);
    Assert.Empty(actual.Body.DeserializeJson<Todo[]>());
}

7. After watching these tests fail, make them pass by adding this code to the constructor of TodosModule:

Put["/{id}"] = p => {
    if (!store.ContainsKey(p.id))
        return HttpStatusCode.NotFound;
    var updatedTodo = this.Bind<Todo>();
    store[p.id] = updatedTodo;
    return Response.AsJson(updatedTodo);
};

Delete["/{id}"] = p => {
    if (!store.ContainsKey(p.id))
        return HttpStatusCode.NotFound;
    store.Remove(p.id);
    return HttpStatusCode.OK;
};

All tests should now pass.

Take a look at the routes of the new handlers for PUT and DELETE. Both are defined as "/{id}". This will match any route that starts with /todos/ followed by something more after the trailing /, such as /todos/42, where the {id} part of the route definition is 42. Notice that both these new handlers use their p argument to get the ID from the route via the p.id expression.

Nancy lets you define very flexible routes. You can use any regular expression to define a route. All named parts of such regular expressions are put into the argument for the handler. The type of this argument is DynamicDictionary, which is a special Nancy type that lets you look up parts via either indexers (for example, p["id"]) like a dictionary, or dot notation (for example, p.id) like other dynamic C# objects.
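As an illustration of that flexibility, a handler can capture a named group from a regular expression and read it back through the same dynamic argument. The following route is our own example, not part of the recipe:

// hypothetical route in TodosModule using a regex with a named capture group;
// inside a module based on "todos" it matches, for example, /todos/2013
Get[@"/(?<year>[\d]{4})"] = p => {
    // p.year holds whatever the named group matched, e.g. "2013"
    return "Todos archive for year " + p.year;
};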
There's more...

In addition to the handlers for GET, POST, PUT, and DELETE, which we added in this recipe, we can go ahead and add handlers for PATCH and OPTIONS by following the exact same pattern. Out of the box, Nancy automatically supports HEAD and OPTIONS for you. To handle a HEAD request, Nancy will run the corresponding GET handler but only return the headers. To handle OPTIONS, Nancy will inspect which routes you have defined and respond accordingly.

Summary

In this article we saw how to handle the other HTTP verbs apart from GET and how to work with dynamic routes. We also saw how to work with JSON data and how to do model binding.

Resources for Article:

Further resources on this subject:
- Displaying MySQL data on an ASP.NET Web Page [Article]
- Layout with Ext.NET [Article]
- ASP.Net Site Performance: Speeding up Database Access [Article]

Getting Started with OMNeT++

Packt
30 Sep 2013
5 min read
(For more resources related to this topic, see here.)

What this book will cover

This book will show you how you can get OMNeT++ up and running on your Windows or Linux operating system. This book will then take you through the components that make up an OMNeT++ network simulation. The components include models written in the NED (Network Description) language, initialization files, C++ source files, arrays, queues, and then configuring and running a simulation. This book will show you how these components make up a simulation using different examples, which can all be found online. At the end of the book, I will be focusing on a method to debug your network simulation using a particular type of data visualization known as a sequence chart, and what the visualization means.

What is OMNeT++?

OMNeT++ stands for Objective Modular Network Testbed in C++. It's a component-based simulation library written in C++, designed to simulate communication networks. OMNeT++ is not a network simulator but a framework that allows you to create your own network simulations.

The need for simulation

Understanding the need for simulation is a big factor in deciding if this book is for you. Have a look at this comparison of a real network versus a simulated network.

A real network:
- The cost of all the hardware, servers, switches, and so on has to be borne.
- It takes a lot of time to set up big specialist networks used for business or academia.
- Making changes to a pre-existing network takes planning, and if a change is made in error, it may cause the network to fail.
- You get the real thing, so what you observe from the real network is actually happening.

A network simulation:
- The cost is that of a single standalone machine with OMNeT++ installed (which is free).
- It takes time to learn how to create simulations, though once you know how it's done, it's much easier to create new ones.
- Making changes to a simulated version of a real pre-existing network doesn't pose any risk. The outcome of the simulation can be analyzed to determine how the real network will be affected.
- If there is a bug in the simulation software, it could cause the simulation to act incorrectly.

As you can see, there are benefits to using both real networks and network simulations when creating and testing your network. The point I want to convey, though, is that network simulations can make network design cheaper and less risky.

Examples of simulation in the industry

After looking into different industries, we can see that there is obviously a massive need for simulation where the aim is to solve real-world problems, from how a ticketing system should work in a hospital to what to do when a natural disaster strikes. Simulation allows us to forecast potential problems without having to first live through those problems.
Different uses of simulation in the industry are as follows:

- Manufacturing:
  - To show how labor management will work, such as worker efficiency, and how rotas and various other factors will affect production
  - To show what happens when a component fails on a production line
- Crowd management:
  - To show the length of queues at theme parks and how that will affect business
  - To show how people will get themselves seated at an event in a stadium
- Airports:
  - To show the effects of flight delays on air-traffic control
  - To show how many bags can be processed at any one time on a baggage handling system, and what happens when it fails
- Weather forecasting:
  - To predict forthcoming weather
  - To predict the effect of climate change on the weather

That's just to outline a few, but hopefully you can see how and where simulation is useful. Simulating your network will allow you to test the network against a myriad of network attacks, and test all the constraints of the network, without damaging it in real life.

What you will learn

After reading this book you will know the following things:

- How to get a free copy of OMNeT++
- How to compile and install OMNeT++ on Windows and Linux
- What makes up an OMNeT++ network simulation
- How to create network topologies with NED
- How to create your own network simulations using the OMNeT++ IDE
- How to use pre-existing libraries in order to make robust and realistic network simulations without reinventing the wheel

Learning how to create and run network simulations is definitely a big goal of the book. Another goal of this book is to teach you how you can learn from the simulations you create. That's why this book will also show you how to set up your simulations and collect data on the events that occur during the runtime of the simulation. Once you have collected data from the simulation, you will learn how to debug your network by using the data visualization tools that come with OMNeT++. Then you will be able to grasp what you learned from debugging the simulated network and apply it to the actual network you would like to create.

Summary

You should now know that this book is intended for people who want to get network simulations up and running with OMNeT++ as soon as possible. You'll know by now, roughly, what OMNeT++ is and why simulation is needed. You'll also know what you can expect to learn from this book.

Resources for Article:

Further resources on this subject:
- Installing VirtualBox on Linux [Article]
- Fedora 8 — More than a Linux Distribution [Article]
- Linux Shell Scripting – various recipes to help you [Article]

Plugins and Extensions

Packt
30 Sep 2013
11 min read
(For more resources related to this topic, see here.)

In this modern world of JavaScript, Ext JS is the best JavaScript framework, with a vast collection of cross-browser utilities, UI widgets, charts, data object stores, and much more. When developing an application, we mostly look to the framework for the functionality and components we need. But we usually face situations wherein the framework lacks the specific functionality or component we need. Fortunately, Ext JS has a powerful class system that makes it easy to extend an existing functionality or component, or build new ones altogether.

What is a plugin?

An Ext JS plugin is a class that is used to provide additional functionalities to an existing component. Plugins must implement a method named init, which is called by the component, with the component itself passed as the parameter, at initialization time, at the beginning of the component's lifecycle. The destroy method is invoked by the owning component of the plugin at the time of the component's destruction. We don't need to instantiate a plugin class ourselves. Plugins are inserted into a component using the plugins configuration option for that component.

Plugins are used not only by the components to which they are attached, but also by all the subclasses derived from those components. We can also use multiple plugins in a single component, but we need to make sure that the plugins used together in a single component do not conflict with each other.

What is an extension?

An Ext JS extension is a derived class, or a subclass, of an existing Ext JS class, which is designed to allow the inclusion of additional features. An Ext JS extension is mostly used to add custom functionality or modify the behavior of an existing Ext JS class. An Ext JS extension can be as basic as a preconfigured Ext JS class, which simply supplies a set of default values to an existing class configuration. This type of extension is really helpful in situations where the required functionality is repeated in several places. Let us assume we have an application where several Ext JS windows have the same help button in the bottom bar. We can create an extension of the Ext JS window, add the help button there, and use this extension window without repeating the code for the button. The advantage is that we can maintain the code for the help button in one place and have any change reflected in all places.

Differences between an extension and a plugin

Ext JS extensions and plugins are used for the same purpose; they add extended functionality to Ext JS classes. But they mainly differ in terms of how they are written and the reasons for which they are used.

Ext JS extensions are subclasses of Ext JS classes. To use an extension, we need to instantiate it by creating an object. We can provide additional properties and functions, and can even override any parent member to change its behavior. Extensions are very tightly coupled to the classes from which they are derived. Ext JS extensions are mainly used when we need to modify the behavior of an existing class or component, or when we need to create a fully new class or component.

Ext JS plugins are also Ext JS classes, but they include the init function. To use a plugin, we don't need to instantiate its class directly; instead, we register the plugin in the plugins configuration option of the component. Once added, the plugin's options and functions become available to the component itself. Plugins are loosely coupled with the components they are plugged into; they are easily detachable and interoperable with multiple components and derived components. Plugins are used when we need to add features to a component. As plugins must be attached to an existing component, creating a fully new component, as is done with extensions, is not useful.
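The structural difference is easy to see side by side. The following skeletons are our own illustration (the class names are hypothetical), not code from the Store application:

// an extension IS the component: a subclass of an existing class
Ext.define('Examples.extension.MyField', {
    extend : 'Ext.form.field.Text'
    // overridden or additional members go here
});

// a plugin is attached TO a component and receives it in init
Ext.define('Examples.plugin.MyPlugin', {
    alias : 'plugin.myplugin',
    init : function(component) {
        // decorate the host component here
    }
});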
Once a plugin is added, its options and functions become available to the component itself. Plugins are loosely coupled to the components they are plugged into, so they are easily detachable and interoperable across multiple components and derived components. Plugins are used when we need to add features to an existing component. As plugins must be attached to an existing component, they are not the right tool for creating a fully new component, as is done with extensions.

Choosing the best option

When we need to enhance or change the functionality of an existing Ext JS component, we have several ways to do so, each of which has both advantages and disadvantages. Let us assume we need to develop an SMS text field with one simple behavior: the text color changes to red whenever the text length exceeds the length allocated for a single message, so that the user can see that they are typing more than one message. This functionality can be implemented in three different ways in Ext JS, which are discussed in the following sections.

By configuring an existing class

We can choose to apply configuration to an existing class. For example, we can create a text field and provide the required SMS functionality within the listeners configuration, or provide event handlers after the text field is instantiated, using the on method (a rough sketch of this approach appears at the end of this section). This is the easiest option when the same functionality is used in only a few places, but as soon as the functionality is repeated in several places or situations, code duplication arises.

By creating a subclass or an extension

By creating an extension, we can easily solve the duplication problem discussed in the previous section. If we create an extension for the SMS text field by extending the Ext JS text field, we can use this extension in as many places as we need, and can even create further extensions derived from it. The code is centralized in the extension, and a change in one place is reflected everywhere it is used. But there is a problem: when the same SMS functionality is needed in other subclasses of the Ext JS text field, such as the Ext JS text area field, we can't reuse the SMS text field extension there. Also, consider a situation where two subclasses of a base class each provide their own feature and we want both features in a single class; this is not possible with extensions, since a class can only extend one parent.

By creating a plugin

By creating a plugin, we gain the maximum reuse of code. A plugin written for one class is usable by the subclasses of that class, and we also have the flexibility to use multiple plugins in a single component. This is why, if we create a plugin for the SMS functionality, we can use the SMS plugin both in the text field and in the text area field, and we can still combine it with other plugins on the same class.
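Before we build the plugin, here is roughly what the configuration-only approach from the first option would look like. This is a sketch with illustrative inline values of our own; note how the warning logic would have to be duplicated on every field that needs it:

{
    xtype : 'textareafield',
    listeners : {
        // Inline handler: it works, but it must be repeated
        // wherever another field needs the same behavior
        change : function(field, newValue) {
            if (newValue.length > 160) {
                field.setFieldStyle('color: #ff0000');
            } else {
                field.setFieldStyle('color: #000000');
            }
        }
    }
}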
Building an Ext JS plugin

Let us start developing an Ext JS plugin. In this section we will develop a simple SMS plugin, targeting the Ext JS textareafield component. The features we wish to provide for the SMS functionality are that it should show the number of characters and the number of messages at the bottom of the containing field, and that the color of the message text should change to notify users whenever they exceed the allowed length for a message.

In the following code, the SMS plugin class has been created within the Examples namespace of an Ext JS application:

Ext.define('Examples.plugin.Sms', {
    alias : 'plugin.sms',
    config : {
        perMessageLength : 160,
        defaultColor : '#000000',
        warningColor : '#ff0000'
    },
    constructor : function(cfg) {
        Ext.apply(this, cfg);
        this.callParent(arguments);
    },
    init : function(textField) {
        this.textField = textField;
        if (!textField.rendered) {
            textField.on('afterrender', this.handleAfterRender, this);
        } else {
            this.handleAfterRender();
        }
    },
    handleAfterRender : function() {
        this.textField.on({
            scope : this,
            change : this.handleChange
        });
        var dom = Ext.get(this.textField.bodyEl.dom);
        Ext.DomHelper.append(dom, {
            tag : 'div',
            cls : 'plugin-sms'
        });
    },
    handleChange : function(field, newValue) {
        if (newValue.length > this.getPerMessageLength()) {
            field.setFieldStyle('color:' + this.getWarningColor());
        } else {
            field.setFieldStyle('color:' + this.getDefaultColor());
        }
        this.updateMessageInfo(newValue.length);
    },
    updateMessageInfo : function(length) {
        var tpl = ['Characters: {length}<br/>',
                'Messages: {messages}'].join('');
        var text = new Ext.XTemplate(tpl);
        var messages = parseInt(length / this.getPerMessageLength());
        if ((length / this.getPerMessageLength()) - messages > 0) {
            ++messages;
        }
        Ext.get(this.getInfoPanel()).update(text.apply({
            length : length,
            messages : messages
        }));
    },
    getInfoPanel : function() {
        return this.textField.el.select('.plugin-sms');
    }
});

In the preceding plugin class, you can see that we have defined the mandatory init function. Within the init function, we check whether the component to which this plugin is attached has been rendered or not; if it hasn't, we attach the handleAfterRender function to its afterrender event so that it runs as soon as rendering is complete. In the handleAfterRender function, we arrange for the handleChange function of this class to be executed whenever the change event of the textareafield component fires, and we also create an HTML <div> element in which we want to show the message information: the character and message counters. The handleChange function is the handler that measures the message length in order to show the colored warning text, and it calls the updateMessageInfo function to update the message information text with the character count and the number of messages.

Now we can easily add the plugin to a component:

{
    xtype : 'textareafield',
    plugins : ['sms']
}

We can also supply configuration options when inserting the plugin within the plugins configuration option, to override the default values:

plugins : [Ext.create('Examples.plugin.Sms', {
    perMessageLength : 20,
    defaultColor : '#0000ff',
    warningColor : '#00ff00'
})]
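Because the plugin is attached through configuration rather than inheritance, the very same plugin can also be dropped onto a plain text field. Here is a small sketch demonstrating that reuse; the form panel, labels, and layout are our own illustration, not code from the article:

Ext.create('Ext.form.Panel', {
    title : 'SMS plugin reuse',
    renderTo : Ext.getBody(),
    items : [{
        // The plugin works on a plain text field...
        xtype : 'textfield',
        fieldLabel : 'Short message',
        plugins : ['sms']
    }, {
        // ...and on a text area field alike
        xtype : 'textareafield',
        fieldLabel : 'Long message',
        plugins : ['sms']
    }]
});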
Building an Ext JS extension

Let us start developing an Ext JS extension. In this section we will develop an SMS extension that satisfies exactly the same requirements as the SMS plugin developed earlier. We already know that an Ext JS extension is a derived class of an existing Ext JS class; here we are going to extend Ext JS's textarea field, which supports multiline text entry and provides event handling, rendering, and other functionality.

In the following code, we have created the extension class under the sms view within the Examples namespace of an Ext JS application:

Ext.define('Examples.view.sms.Extension', {
    extend : 'Ext.form.field.TextArea',
    alias : 'widget.sms',
    config : {
        perMessageLength : 160,
        defaultColor : '#000000',
        warningColor : '#ff0000'
    },
    constructor : function(cfg) {
        Ext.apply(this, cfg);
        this.callParent(arguments);
    },
    afterRender : function() {
        // Let the parent class finish its own afterRender work first
        this.callParent(arguments);
        this.on({
            scope : this,
            change : this.handleChange
        });
        var dom = Ext.get(this.bodyEl.dom);
        Ext.DomHelper.append(dom, {
            tag : 'div',
            cls : 'extension-sms'
        });
    },
    handleChange : function(field, newValue) {
        if (newValue.length > this.getPerMessageLength()) {
            field.setFieldStyle('color:' + this.getWarningColor());
        } else {
            field.setFieldStyle('color:' + this.getDefaultColor());
        }
        this.updateMessageInfo(newValue.length);
    },
    updateMessageInfo : function(length) {
        var tpl = ['Characters: {length}<br/>',
                'Messages: {messages}'].join('');
        var text = new Ext.XTemplate(tpl);
        var messages = parseInt(length / this.getPerMessageLength());
        if ((length / this.getPerMessageLength()) - messages > 0) {
            ++messages;
        }
        Ext.get(this.getInfoPanel()).update(text.apply({
            length : length,
            messages : messages
        }));
    },
    getInfoPanel : function() {
        return this.el.select('.extension-sms');
    }
});

As seen in the preceding code, the extend keyword is used as a class property to extend the Ext.form.field.TextArea class and so create the extension class. Within the afterRender handler, we arrange for the handleChange function of this class to be executed whenever the change event of the textarea field fires, and we also create, within this same afterRender handler, an HTML <div> element in which we want to show the character and message counters. From this point on, the logic that shows the warning and maintains the character and message counters is the same as in the SMS plugin.

Now we can easily create an instance of this extension:

Ext.create('Examples.view.sms.Extension');

We can also supply configuration options when creating the instance of this class, to override the default values:

Ext.create('Examples.view.sms.Extension', {
    perMessageLength : 20,
    defaultColor : '#0000ff',
    warningColor : '#00ff00'
});

The following is the screenshot where we've used the SMS plugin and the SMS extension:

In the preceding screenshot, we have created an Ext JS window and incorporated both the SMS extension and the SMS plugin. As discussed earlier regarding the benefits of writing a plugin, we can use the SMS plugin not only with the text area field but also with the text field.

Summary

We have learned from this article what a plugin and an extension are, the differences between the two, the facilities they offer, how to use them, and how to decide between an extension and a plugin for a needed piece of functionality. In this article we have also developed a simple SMS plugin and an SMS extension.

Resources for Article:

Further resources on this subject:
- So, what is Ext JS? [Article]
- Ext JS 4: Working with the Grid Component [Article]
- Custom Data Readers in Ext JS [Article]


Connecting to MongoHQ API with RestKit

Packt
30 Sep 2013
7 min read
(For more resources related to this topic, see here.)

Let's take a base URL:

NSURL *baseURL = [NSURL URLWithString:@"http://example.com/v1/"];

Now:

[NSURL URLWithString:@"foo" relativeToURL:baseURL];
// -> http://example.com/v1/foo
[NSURL URLWithString:@"foo?bar=baz" relativeToURL:baseURL];
// -> http://example.com/v1/foo?bar=baz
[NSURL URLWithString:@"/foo" relativeToURL:baseURL];
// -> http://example.com/foo
[NSURL URLWithString:@"foo/" relativeToURL:baseURL];
// -> http://example.com/v1/foo/
[NSURL URLWithString:@"/foo/" relativeToURL:baseURL];
// -> http://example.com/foo/
[NSURL URLWithString:@"http://example2.com/" relativeToURL:baseURL];
// -> http://example2.com/

Now that we know what an object manager is, let's apply it in a real-life example. Before proceeding, it is highly recommended that you check the actual documentation on the REST API of MongoHQ. The current one is at the following link: http://support.mongohq.com/mongohq-api/introduction.html

As there are no strict rules on REST APIs, every API is different and does a number of things in its own way. The MongoHQ API is not an exception; in addition, it is currently in a "beta" stage. Some of the non-standard things you will find in it are as follows:

- The API key should be provided as a parameter with every request. There is an undocumented way of providing it in the headers, which is a more common approach.
- Sometimes an error is returned with the status code 200 (OK), which is not according to REST standards; the normal way would be to return something in the 4xx range, which is designated for client errors.
- Sometimes, while the body of an error message is a JSON string, the HTTP response Content-Type header is set to text/plain.

To use the API, you will need a valid API key. You can easily get one for free by following the simple steps recommended by the MongoHQ team:

1. Sign up for an account at http://MongoHQ.com.
2. Once logged in, click on the My Account drop-down menu at the top-right corner and select Account Settings.
3. Look for the section labeled API Token, and take your token from there.

We will put the API key into the MongoHQ-API-Token HTTP header. The following screenshot shows where one can find the API token key:

API Token on Account Info page

So let's set up our configuration. You can put the code into the AppDelegate class, although I recommend using a separate MongoHqApi class for a cleaner separation of app and API logic.

First, let's set up our object manager with the following code:

- (void)setupObjectManager
{
    NSString *baseUrl = @"https://api.mongohq.com";
    AFHTTPClient *httpClient = [[AFHTTPClient alloc] initWithBaseURL:[NSURL URLWithString:baseUrl]];

    NSString *apiKey = @"MY_API_KEY";
    [httpClient setDefaultHeader:@"MongoHQ-API-Token" value:apiKey];

    RKObjectManager *manager = [[RKObjectManager alloc] initWithHTTPClient:httpClient];
    [RKMIMETypeSerialization registerClass:[RKNSJSONSerialization class] forMIMEType:@"text/plain"];
    [manager.HTTPClient registerHTTPOperationClass:[AFJSONRequestOperation class]];
    [manager setAcceptHeaderWithMIMEType:RKMIMETypeJSON];
    manager.requestSerializationMIMEType = RKMIMETypeJSON;
    [RKObjectManager setSharedManager:manager];
}

Let's look at the code line by line. First, we set the base URL.
Remember not to put a slash (/) at the end; otherwise, you might have problems with response mapping:

NSString *baseUrl = @"https://api.mongohq.com";

Initialize the HTTP client with baseUrl:

AFHTTPClient *httpClient = [[AFHTTPClient alloc] initWithBaseURL:[NSURL URLWithString:baseUrl]];

Set a few properties on our HTTP client, such as the API key in the header:

NSString *apiKey = @"MY_API_KEY";
[httpClient setDefaultHeader:@"MongoHQ-API-Token" value:apiKey];

For a real-world app, one could show an Enter API Key view controller to the user, and use NSUserDefaults or the keychain to store and retrieve the key.

Then initialize the RKObjectManager with our HTTP client:

RKObjectManager *manager = [[RKObjectManager alloc] initWithHTTPClient:httpClient];

The MongoHQ API sometimes returns errors as text/plain, so we explicitly register text/plain as a JSON content type in order to parse such errors properly:

[RKMIMETypeSerialization registerClass:[RKNSJSONSerialization class] forMIMEType:@"text/plain"];

Register AFJSONRequestOperation to parse JSON in requests:

[manager.HTTPClient registerHTTPOperationClass:[AFJSONRequestOperation class]];

State that we accept the JSON content type:

[manager setAcceptHeaderWithMIMEType:RKMIMETypeJSON];

Configure outgoing objects to be serialized into JSON:

manager.requestSerializationMIMEType = RKMIMETypeJSON;

Finally, set the shared instance of the object manager, so that we can easily reuse it later:

[RKObjectManager setSharedManager:manager];

Sending requests with the object manager

Next, we want to query our databases. Let's first see what the output of a database request looks like in JSON. To check this, go to http://api.mongohq.com/databases?_apikey=YOUR_API_KEY in your web browser, substituting your own key for YOUR_API_KEY. If a JSON-formatter extension (https://github.com/rfletcher/safari-json-formatter) is installed in your Safari browser, you will probably see the output shown in the following screenshot.
JSON response from API

As we can see, the JSON representation of one database is:

[
  {
    "hostname": "sandbox.mongohq.com",
    "name": "Test",
    "plan": "Sandbox",
    "port": 10097,
    "shared": true
  }
]

Therefore, our possible MDatabase class could look like:

@interface MDatabase : NSObject
@property (nonatomic, strong) NSString *name;
@property (nonatomic, strong) NSString *plan;
@property (nonatomic, strong) NSString *hostname;
@property (nonatomic, strong) NSNumber *port;
@end

We can also override the description method in the @implementation section, which will help us when debugging the application and printing the object:

// in @implementation MDatabase
- (NSString *)description
{
    return [NSString stringWithFormat:@"%@ on %@ @ %@:%@",
            self.name, self.plan, self.hostname, self.port];
}

Now let's set up a mapping for it:

- (void)setupDatabaseMappings
{
    RKObjectManager *manager = [RKObjectManager sharedManager];
    Class itemClass = [MDatabase class];
    NSString *itemsPath = @"/databases";
    RKObjectMapping *mapping = [RKObjectMapping mappingForClass:itemClass];
    [mapping addAttributeMappingsFromArray:@[@"name", @"plan", @"hostname", @"port"]];
    NSString *keyPath = nil;
    NSIndexSet *statusCodes = RKStatusCodeIndexSetForClass(RKStatusCodeClassSuccessful);
    RKResponseDescriptor *responseDescriptor =
        [RKResponseDescriptor responseDescriptorWithMapping:mapping
                                                     method:RKRequestMethodGET
                                                pathPattern:itemsPath
                                                    keyPath:keyPath
                                                statusCodes:statusCodes];
    [manager addResponseDescriptor:responseDescriptor];
}

Let's look at the mapping setup line by line.

First, we define the class that we will map to:

Class itemClass = [MDatabase class];

And the endpoint we plan to request for getting a list of objects:

NSString *itemsPath = @"/databases";

Then we create the RKObjectMapping mapping for our object class:

RKObjectMapping *mapping = [RKObjectMapping mappingForClass:itemClass];

If the names of the JSON fields and the class properties are the same, we can use the addAttributeMappingsFromArray method and provide the array of properties:

[mapping addAttributeMappingsFromArray:@[@"name", @"plan", @"hostname", @"port"]];

The root JSON key path in our case is nil, which means that there won't be one:

NSString *keyPath = nil;

The mapping will be triggered if the response status code is anything in the 2xx range:

NSIndexSet *statusCodes = RKStatusCodeIndexSetForClass(RKStatusCodeClassSuccessful);

Putting it all together in a response descriptor (for the GET request method):

RKResponseDescriptor *responseDescriptor =
    [RKResponseDescriptor responseDescriptorWithMapping:mapping
                                                 method:RKRequestMethodGET
                                            pathPattern:itemsPath
                                                keyPath:keyPath
                                            statusCodes:statusCodes];

Add the response descriptor to our shared manager:

RKObjectManager *manager = [RKObjectManager sharedManager];
[manager addResponseDescriptor:responseDescriptor];

Sometimes, depending on the architectural decision, it's nicer to put the mapping definition in the model object itself and later call it as [MDatabase mapping], but for the sake of simplicity, we will keep the mapping inline with the RestKit configuration.
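If you do prefer that style, a minimal sketch of such a class method might look like the following; this refactoring is our own illustration and is not part of the article's code:

// In MDatabase.m: expose the mapping as a class method so the
// model owns its own mapping definition (illustrative sketch)
+ (RKObjectMapping *)mapping
{
    RKObjectMapping *mapping = [RKObjectMapping mappingForClass:[MDatabase class]];
    [mapping addAttributeMappingsFromArray:@[@"name", @"plan", @"hostname", @"port"]];
    return mapping;
}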
The actual code that loads the database list will look like:

RKObjectManager *manager = [RKObjectManager sharedManager];
[manager getObjectsAtPath:@"/databases"
               parameters:nil
                  success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) {
                      NSLog(@"Loaded databases: %@", [mappingResult array]);
                  }
                  failure:^(RKObjectRequestOperation *operation, NSError *error) {
                      NSLog(@"Error: %@", [error localizedDescription]);
                  }];

As you may have noticed, the method is quite simple to use, and it takes block-based callbacks, which greatly improves code readability compared to using delegates, especially if there is more than one network request in a class. A possible implementation of a table view that loads and shows the list of databases would look like the following screenshot:

View of loaded Database items

Summary

In this article, we learned how to set up the RestKit library to work with our web service, and we talked about sending requests, getting responses, and performing object manipulations. We also talked about simplifying requests by introducing routing. In addition, we discussed how integration with the UI can be done, and created forms.

Resources for Article:

Further resources on this subject:
- Linking OpenCV to an iOS project [Article]
- Getting Started on UDK with iOS [Article]
- Unity iOS Essentials: Flyby Background [Article]

Managing content (Must know)

Packt
27 Sep 2013
8 min read
(For more resources related to this topic, see here.)

Getting ready

Content in Edublogs can take many different forms: posts, pages, uploaded media, and embedded media. The first step is to develop an understanding of what each of these types of content is, and how they fit into the Edublogs framework.

- Pages: Pages are generally static content, such as an About or a Frequently Asked Questions page.
- Posts: Posts are the content that is continually updated on a blog. When you write an article, it is referred to as a post.
- Media [uploaded]: Edublogs has a media manager that allows you to upload pictures, videos, audio files, and other files that readers can interact with or download.
- Media [embedded]: Embedded media differs from internal media in that it is not stored on your Edublogs account. If you record a video and upload it, the video resides on your website and is considered internal to that website. If you want to add a YouTube video, a Prezi presentation, a slideshow, or any content that actually resides on another website, that is considered embedding.

How to do it...

Posts and pages are very similar. When you click on the Pages link in the left navigation column, you will see a list of all of the pages that you have written; if you are just beginning, this will be an empty list or just the Sample Page that Edublogs provides.

Click on any column header (Title, Author, Comments, and Date) to sort the pages by that criterion. A page can be any of several types: Published (anyone can see), Draft, Private, Password Protected, or in the Trash. You can filter by those types as well. You will only see the types of pages that you are currently using. For example, in the following screenshot, I have 3 Draft pages; if I had none, Drafts would not show as an option.

When you hover over a page, you are presented with several options: Edit, Quick Edit, Trash, and View.

- View: This shows you the actual live page, the same way that a reader would see it.
- Trash: This deletes the page.
- Edit: This brings you back to the main editing screen, where you can change the actual body of the page.
- Quick Edit: This allows you to change some of the main options of the page: Title, Slug (the end of the URL used to access the page), Author, whether the page has a parent, and whether it should be published. The following screenshot demonstrates these options:

How it works...

Everything above about Pages also applies to Posts. Posts, though, have several additional options, and it is more common to use these additional options to customize Posts than Pages. Right away, hovering over Posts reveals two new links: Categories and Tags. These tools are optional, and serve the dual purpose of giving the author an organizational structure and helping readers find posts more effectively.

A Category is usually very general; on one of my educational blogs, I limit my categories to a few: technology integration, assessment, pedagogy, and lessons. If I happen to write a post that does not fit, I do not categorize it.

Tags are becoming ubiquitous in many applications and operating systems. They provide an easy way to browse a store of information thematically. On my educational blog, I have over 160 tags. On one post about Facebook's new advertising system, I added the following tags: Digital Literacy, Facebook, Privacy.
Utilizing tags can help you to see trends in your writing, and it makes it much easier for new readers to find posts that interest them and for regular readers to find old posts that they want to reference again.

Let's take a look at some of the advanced features. When adding or editing a post, the following features are all located in the right-hand column:

- Publish: The Publish box is needed any time you want to take your Post (or Page) out of the draft stage and make it visible to readers. Most new bloggers simply click on Publish/Update when they are done writing a post, which works fine but is limited. People often find that there are certain times of day that result in higher readership. If you click on Edit next to Publish Immediately, you can choose a date and time at which to schedule publication. In addition, the Visibility line allows you to make a post private, password protected, or always at the top of the page (if you have a post you particularly want to highlight, for example).
- Format: Most of the time, changing the format is not necessary, particularly if you run a normal, text-driven blog. However, different formats lend themselves to different types of content. For example, when publishing a picture as a post, as is often done on the microblogging site Tumblr, choosing Image formats the post more effectively.
- Categories: Click on + Add New Category, or check any existing categories to apply them to the post.
- Tags: Type any tags that you want to use, separated by commas (such as writing, blogging, Edublogs).
- Featured Image: Uploading and choosing a featured image adds a thumbnail image, providing a more engaging browsing experience for the viewer.

All of these features are optional, but they are useful for improving the experience, both for yourself and for your readers.

There's more...

For most people, the heart of a blog is the actual writing that they do. Media helps to make the experience more memorable and engaging, and to illustrate a point more effectively than text alone would. Media is anything other than text that a user can interact with; primarily, it is video, audio, or pictures. As teachers know, not everyone learns best through a text-based medium; media is as important a part of engaging readers as it is of engaging students.

There are a few ways to get media into your posts. The first is through the Media Library. On a free account, space is limited to 32 MB, which is relatively little; Pro accounts get 10 GB of space. Click on Media in the navigation menu on the left to bring up the library. It shows a list of your media, similar to the lists used for Posts and Pages. To add media, simply click on Add New and choose an image, audio file, or video from your computer. It will then be available for any post or page to use. The following screenshot shows the Media Library page:

If you are already in a post, you have even more options. Click on the Add Media button above the text editor, as shown in the following screenshot:

Following are some of the options you have to embed media:

- Insert Media: This allows you to directly upload a file or choose one from the Media Library.
- Create Gallery: Creating a gallery allows you to assemble a set of images that users can browse through.
- Set Featured Image: As described above, this sets a thumbnail image representative of the post.
- Insert from URL: This allows you to insert an image by pasting in its direct URL.
  Make sure you give attribution if you use someone else's image.
- Insert Embed Code: Embed code is extremely helpful. Many sites provide embed code (often referred to as share code) to allow people to post their content on other websites. One of the most common examples is adding a YouTube video to a post. The following screenshot is from the Share menu of a YouTube video. Copying the code provided and pasting it into the Insert Embed Code field will put the YouTube video right in the post, as shown in the following screenshot. This is much more effective than just providing a link, because readers can watch the video without ever having to leave the blog. Embedding is an Edublogs Pro feature only.

Utilizing media effectively can dramatically improve the experience for your readers.

Summary

This article on managing content covered how to manage the different types of content: posts, pages, uploaded media, and embedded media. It walked through features such as publish, format, categories, tags, and featured image.

Resources for Article:

Further resources on this subject:
- Customizing WordPress Settings for SEO [Article]
- Getting Started with WordPress 3 [Article]
- Dynamic Menus in WordPress [Article]


Developing Your Mobile Learning Strategy

Packt
27 Sep 2013
27 min read
(For more resources related to this topic, see here.)

What is mobile learning?

There have been many attempts at defining mobile learning. Is it learning done on the move, such as on a laptop while we sit on a train? Or is it learning done on a personal mobile device, such as a smartphone or a tablet?

The capabilities of mobile devices

Anyone can develop mobile learning. You don't need to be a gadget geek or have the latest smartphone or tablet, and you certainly don't need to know anything about the makes and models of devices on the market. The only thing the learning practitioner really needs is an understanding of the capabilities of the mobile devices that your learners have. This will inform the types of mobile learning interventions that are best suited to your audience.

The following table shows an overview of what a mobile learner might be able to do with each of the device types. The Device uses column on the left should already be setting off lots of great learning ideas in your head!

Device uses                 | Feature phone | Smartphone | Tablet | Gaming device | Media player
--------------------------- | ------------- | ---------- | ------ | ------------- | ------------
Send texts                  | Yes           | Yes        |        |               |
Make calls                  | Yes           | Yes        |        |               |
Take photos                 | Yes           | Yes        | Yes    | Yes           | Yes
Listen to music             | Yes           | Yes        | Yes    | Yes           | Yes
Social networking           | Yes           | Yes        | Yes    | Yes           | Yes
Take high res photos        |               | Yes        | Yes    | Yes           | Yes
Web searches                |               | Yes        | Yes    | Yes           | Yes
Web browsing                |               | Yes        | Yes    | Yes           | Yes
Watch online videos         |               | Yes        | Yes    | Yes           | Yes
Video calls                 |               | Yes        | Yes    | Yes           | Yes
Edit photos                 |               | Yes        | Yes    | Yes           | Yes
Shoot videos                |               | Yes        | Yes    |               | Yes
Take audio recordings       |               | Yes        | Yes    |               | Yes
Install apps                |               | Yes        | Yes    |               | Yes
Edit documents              |               | Yes        | Yes    |               | Yes
Use maps                    |               | Yes        | Yes    |               | Yes
Send MMS                    |               | Yes        | Yes    |               |
View catch-up TV            |               |            | Yes    | Yes           |
Better quality web browsing |               |            | Yes    | Yes           |
Shopping online             |               |            | Yes    |               |
Trip planning               |               |            | Yes    |               |

Bear in mind that screen size will also affect the type of learning activity that can be undertaken. For example:

- Feature phone displays are very small, so learning activities for this device type should center on text messaging with a tutor or capturing photos for an assignment.
- Smartphones are significantly larger, so there is a much wider range of learning activities available, especially around the creation of material such as photos and video for assignment or portfolio purposes, plus a certain amount of web searching and browsing.
- Tablets are more akin to the desktop computing environment, although some tasks, such as typing, are harder, and taking photos is a bit clumsier due to the larger size of the device. They are great for short learning tasks, assessments, video watching, and much more.

Warning – it's not about delivering courses

Mobile learning can be many things. What it is not is simply the delivery, on a smaller device, of e-learning courses that are traditionally the domain of the desktop computer. Of course it can be used to deliver educational materials, but what is more important is that it can also be used to foster collaboration, to facilitate communication, to provide access to performance support, and to capture evidence. If you try to deliver an entire course purely on a mobile, the likelihood is that no one will use it.

Your mobile learning strategy

Finding a starting point for your mobile learning design is easier said than done. When designing any type of online interaction, it is often useful to think through a few typical user types and build up a picture of who they are and what they want to use the system for. This helps you to visualize who you are designing for.
In addition, in order to understand how best to utilize mobile devices for learning, you also need to understand how people actually use their mobile devices. For example, learners are highly unlikely to sit at a smartphone and complete a 60-minute e-learning course or type out an essay, but they are very likely to read an article, do some last-minute test preparation, or communicate with other learners.

Who are your learners?

Understanding your users is an important part of designing online experiences. You should take the time to understand the types of learners within your own organization and what their mobile usage looks like, as a first step in delivering mobile learning on Moodle. With this in mind, let's look at a handful of typical mobile learners from around the world who could reasonably be expected to be using an educational or workplace learning platform such as Moodle:

- Maria is an office manager in Madrid, Spain. She doesn't leave home without her smartphone and uses it wherever she is, whether for e-mail, web searching and browsing, reading the news, or social networking. She lives in a country where smartphone penetration has reached almost half of the population, of whom two-thirds access the Internet every day on their mobile. The company she works for has a small learning platform for the delivery of work-based learning activities and performance support resources.
- Fourteen-year-old Jennifer attends school in Rio de Janeiro, Brazil. Like many of her peers, she carries a smartphone with her, and it's a key part of her life. The Brazilian population is one of the most connected in the developing world, with nearly half of the population using the Internet and its mobile phone subscriptions accounting for one-third of all subscriptions across Latin America and the Caribbean. Her elementary school uses a learning platform for the delivery of course resources, formative assessments, and submission of student assignments.
- Nineteen-year-old Mike works as an apprentice at a large car maker in Sunderland, UK. He spends about one-third of his time in formal education, and the remaining days each week on the production line, getting a thorough grounding in every element of the car manufacturing process. He owns a smartphone and uses it heavily, in a country where nearly half of the population accesses the Internet at least monthly on their smartphone. His employer has a learning platform for the delivery of work-based learning, and his college also has its own platform, where he keeps a training diary and uploads evidence of skills acquisition for later submission and marking.
- Josh is a twenty-year-old university student in the United States. In his country, nearly 90 percent of adults now own a mobile phone, and half of all adults use their phone to access the Internet, although in his age group this rises to three quarters. Among his student peers across the U.S., 40 percent are already doing test preparation on their mobiles, whether their institution provides the means or not. His university uses a learning platform for the delivery of course resources, submission of student assignments, and student collaborative activities.

These four particular learners were not chosen at random; there is one important thing that connects them all.
The four countries they come from represent not just important mobile markets but, according to the statistics page on Moodle.org, also the four largest Moodle territories, together making up over a third of all registered Moodle sites in the world. When you combine those Moodle market statistics with the level of mobile Internet usage in each country, you can immediately see why support for mobile learning is so important for Moodle sites.

How do your learners use their devices?

In 2012, Google published the findings of a research survey that investigated how users behave across computer, tablet, smartphone, and TV screens. Their researchers found that users decide which device to use for a given task depending on four elements that together make up the user's context: location, goal, available time, and attitude. Each of these is important to take into account when thinking about the sorts of learning interactions your users could engage in on their mobile devices, and you should aim to offer a range of mobile learning interactions that suit different contexts: for example, tasks ranging in length from 2 to 20 minutes, and tasks suited to different locations, such as home, work, college, or out in the field. The attitude element is an interesting one, and it's important to allow learners to choose tasks that are appropriate to their mood at the time.

Google also found that users either move between screens to perform a single task (sequential screening) or use multiple screens at the same time (simultaneous screening). In the case of simultaneous screening, they are likely to be performing complementary tasks relating to the same activity on each screen. From a learning point of view, you can design for multi-screen tasks. For example, you may find learners using their computer to perform some complex research and then collecting evidence in the field using their smartphone; these would be sequential screening tasks. A media studies student could be watching a rolling news channel on the television while taking photos, video, and notes for an assignment on his tablet or smartphone; these would be simultaneous screening tasks. Understanding the different scenarios in which learners can use multiple screens will open up new opportunities for mobile learning.

A key statement from the Google research is that "Smartphones are the backbone of our daily media interactions". However, despite occupying such a dominant position in our lives, the smartphone also accounts for the lowest time per user interaction, at an average of 17 minutes, as opposed to 30 minutes for tablet, 39 minutes for computer, and 43 minutes for TV. This is an important point to bear in mind when designing mobile learning: as a rule of thumb, you can expect a learner to engage with a tablet-based task for half an hour, and with a smartphone-based task for just a quarter of an hour.

Google helpfully outlines some important multi-screen lessons.
While these lessons are aimed at identifying consumer behaviour, and in particular online shopping habits, we can interpret them for use in mobile learning as follows:

- Understand how people consume digital media and tailor your learning strategies to each channel
- Adjust learning goals to account for the inherent differences between devices
- Let learners save their progress between devices
- Make the learning platform (Moodle) easy to find on each device
- Once inside the learning platform, make it easy for learners to find what they are looking for quickly
- Smartphones are the backbone of your learners' daily media use, so design your learning to be started on a smartphone and continued on a tablet or desktop computer

Having an understanding of how modern-day learners use their different screens and devices will have a real impact on your learning design.

Mobile usage in your organization

In 2011, the world reached a technology watershed when it was estimated that one-third of the world's seven billion people were online. The growth in online users is dominated by the developing world and is fuelled by mobile devices: there are now a staggering six billion mobile phone subscriptions globally. Mobile technology has quite simply become ubiquitous. And as Google showed us, people use mobile devices as the backbone of their daily media consumption, and most people already use them for school, college, or work, regardless of whether they are allowed to. In this section, we will look at how mobiles are used in some of the key sectors in which Moodle is used: in schools, in further and higher education, and in the workplace.

Mobile usage in school

Moodle is widely used throughout primary and secondary education, and mobile usage among school pupils is widespread; the two are natural bedfellows in this sector. In the UK, for example, half of all 12 to 15 year olds own a smartphone, while 70 percent of 8 to 15 year olds have a games console, such as a Nintendo DS or PlayStation, in their bedroom. Mobile device use is quite simply rampant among school children.

Many primary schools now have policies that allow children to bring mobile phones into school, recognizing that such devices have a role to play in helping pupils feel safe and secure, particularly on the journey to and from school. However, it is fairly normal practice in this age group for mobiles to be handed in at the start of the school day and collected at the end of the day. For primary pupils, therefore, the use of mobile devices for education will largely be for homework.

In secondary schools, the picture is very different. There is unlikely to be a hand-in policy during school hours, and a variety of acceptable use policies will be in force. An acceptable use policy may include a provision for using mobiles in lesson time, with a teacher's agreement, for the purpose of supporting learning. This, of course, opens up valuable learning opportunities.

Mobile learning in education has been the subject of a number of initiatives and research studies, which are all excellent sources of information. These include:

- Learning2Go, who were pioneers in mobile learning for schools in the UK, distributing hundreds of Windows Mobile devices to Wolverhampton schools between 2003 and 2007, and introducing smartphones in 2008 under the Computers for Pupils initiative and the national MoLeNET scheme.
- Learning Untethered, which was not a formal research project but an exploration that gave Android tablets to a class of fifth graders. It was noted that the overall "feel" of the classroom shifted as students took a more active role in discovery, exploration, and active learning.
- The Dudley Handhelds initiative, which provided 300 devices to learners in grades five to ten across six primary schools, one secondary special school, and one mainstream secondary school.

These are just a few of the many research studies available, and they are well worth a read to understand how schools have been implementing mobile learning for different age groups.

Mobile usage in further and higher education

College students are heavy users of mobiles, and there is a roughly even split between smartphones and feature phones among the student community. Of the smartphone users, over 80 percent use them for college-related tasks. As we saw from Google's research, smartphones are the backbone of the daily media use of those who have them. So if you don't already provide mobile learning opportunities on your Moodle site, then it is likely that your users are already helping themselves to the vast array of mobile learning sites and apps that have sprung up in recent years to meet the high demand for such services. If you don't provide your students with mobile learning opportunities, you can bet your bottom dollar that someone else is, and it could be of dubious quality or out of date.

Despite the ubiquity of the mobile, many schools and colleges continue to ban them, viewing mobiles as a distraction or a means of bullying. They are fighting a rising tide, however. Students are living their lives through their mobile devices, and these devices have become their primary means of communication. A study in late 2012 of nearly 295,000 students found that despite e-mail, IM, and text messaging being the dominant peer-communication tools for students, less than half of 14 to 18 year olds and only a quarter of 11 to 14 year olds used them to communicate with their teachers. Over half of high school students said they would use their smartphone to communicate with their teacher if it were allowed. Unfortunately, it rarely is, but this will change. Students want to be able to communicate electronically with their teachers; they want online text articles with classmate collaboration tools; they want to go online on their mobile to get information. Go to where your students are and communicate with them in their native environment, which is via their mobile. Be there for them, engage them, and inspire them.

In the years approaching 2010, some higher education institutions started engaging in headline-grabbing "iPad for every student" initiatives. Many institutions adopted a quick-win strategy of making mobile-friendly websites with access to campus information, directories, news, and events. It is estimated that in the USA over 90 percent of higher education institutions have mobile-friendly websites. Some of the headline-grabbing initiatives include the following:

- Seton Hill University was the first to roll out iPads to all full-time students in 2010 and has continued to do so every year since. It is at the forefront of mobile learning in the U.S. university sector and uses Moodle as its virtual learning environment (VLE).
- Abilene Christian University was the first university in the U.S.
  to provide iPhones or iPod Touches to all new full-time students, starting in 2008, and it is regarded as one of the most mobile-friendly campuses in the U.S.
- The University of Western Sydney in Australia will roll out 11,000 iPads to all faculty and newly-enrolled students in 2013, as well as creating its own mobile apps.
- Coventry University in the UK is creating a smart campus in which the geographical location of students triggers access to content and experiences through their mobile devices.
- MoLeNET in the UK was one of the world's largest mobile learning implementations, comprising 115 colleges, 29 schools, 50,000 students, and 4,000 staff from 2007 to 2010. This was a research-led initiative, although unfortunately the original website has now been taken down.

While some of these examples are about providing mobile devices to new students, the Bring Your Own Device (BYOD) trend is strong in further and higher education. We know that mobile devices form the backbone of students' media consumption, and in the U.S. alone, 75 percent of students use their phone to access the Internet. Additionally, 40 percent have signed up to online test preparation sites on their mobiles, strongly suggesting that if an institution doesn't provide mobile learning services, students will go and get them elsewhere anyway. Instead of the glamorous offer of iPads for all, some institutions have chosen to invest heavily in their wireless network infrastructure in support of a BYOD approach. This is a very heavy investment, and it can be far more expensive than a few thousand iPads. Some BYOD implementations include:

- King's College London in the UK, which supports 6,000 staff and 23,500 students
- The University of Tennessee at Knoxville in the U.S., which hosts more than 26,000 students and 5,000 faculty and staff members, with nearly 75,000 smartphones, tablets, and laptops
- The University of South Florida in the U.S., which supports 40,000 users
- São Paulo State University in Brazil, which has 45,000 students and noted that, despite desktop machines being provided in the computer labs, half of all students opted to use their own devices instead

There are many challenges to BYOD that are not within the scope of this article, but there are also many resources on how to implement a BYOD policy that minimizes the risks; use the Internet to seek these out.

Providing campus information websites on mobiles was obviously not the key rationale behind such technology investments. The real interest is in delivering mobile learning, and this remains an area full of experimentation and research. Google Scholar can be used to chart the rise of mobile learning research, and it becomes evident how this really took off in the second half of the decade, when the first major institutions started investing in mobile technology. Google Scholar indexes scholarly literature, including journal and conference papers, theses and dissertations, academic articles, pre-prints, abstracts, and technical reports. A year-by-year search reveals the rise of mobile learning research from just over 100 articles in 2000 to over 6,000 in 2012. The following chart depicts the rise of mobile learning in academic research:

Mobile usage in apprenticeships

A typical apprenticeship will include a significant amount of college-based learning towards a qualification, alongside a major component based in the workplace under the supervision of an employer, while the apprentice learns a particular trade.
Because the student moves between college and workplace, and because the apprentice usually has to keep a reflective log and capture evidence of their skills acquisition, mobile devices can play a really useful role in apprenticeships. Traditionally, the age group for apprenticeships is 16 to 24 year olds. This is an age group that has never known a world without mobiles; their mobile devices are woven into the fabric of their daily lives and media consumption. They use social networks, SMS, and instant messaging rather than e-mail, and they are more likely to use the mobile Internet than any other age group. Statistics from the U.S. reveal that 75 percent of students use their phone to access the Internet.

Reflective logs are an important part of any apprenticeship. There are a number of activities in Moodle that can be used for keeping reflective logs, and these are ideal for mobile learning. Reflective log entries tend to be shorter than traditional assignments and lend themselves well to production on a tablet or even a smartphone. Consumption of reflective logs is perfect for both smartphone and tablet devices, as posts tend to be readable in less than five minutes. Many institutions use Moodle coupled with an ePortfolio tool such as Mahara or OneFile to manage apprenticeship programs. There are additional Packt Publishing articles on ePortfolio tools such as Mahara, should you wish to investigate a third-party, open source ePortfolio solution.

Mobile usage in the workplace

BYOD in the workplace is also becoming increasingly common and appears to be an unstoppable trend. It may be discouraged or banned on security, data protection, or distraction grounds, but it is happening regardless. There is an increasing amount of research available on this topic, and some key findings from various studies reveal the scale of the trend:

- A survey of 600 IT and business leaders revealed that 90 percent of respondents had employees using their own devices at work
- 65 to 75 percent of companies allow some sort of BYOD usage
- 80 to 90 percent of employees use a personal mobile device for business purposes

If you are a workplace learning practitioner, then you need to sit up and take note of these numbers, if you haven't done so already. Even if your organization doesn't officially have a BYOD policy, it is most likely that your employees are already using their own mobile devices for business purposes. It is up to your IT department to manage this safely, and again there are many resources and case studies available online to help with this. But as a learning practitioner, whether it's officially supported or not, it's worth asking yourself whether you should embrace it anyway and provide learning activities to these users and their devices.

Mobile usage in distance learning

Online distance learning is principally used in higher education (HE), and many institutions have taken to it either as a new stream of revenue or as a way of building their brand globally. Enrolments have rocketed over recent years: the number of U.S. students enrolled in an online course has increased from one to six million in a decade. Online enrolments have also been the greatest source of new enrolments in HE in that time, outperforming general student enrolment dramatically. Indeed, 2011 in the U.S. saw a 10 percent growth rate in distance learning enrolment against 2 percent in the overall HE student population.
In the 2010 to 2011 academic year, online enrolments accounted for 31 percent of all U.S. HE enrolments. Against this backdrop of phenomenal growth in HE distance learning courses, we also have the new trend of Massive Open Online Courses (MOOCs), which aim to extend enrolment past traditional student populations to the vast numbers of potential students for whom a formal HE program of study may not be an option.

The convenience and flexibility of distance learning appeal to certain groups of the population. Distance learners are likely to be older students, with the over-30s being the dominant age group. They are also more likely to be in full-time employment, taking the course to help advance their careers, and are highly likely to be married, juggling home and family commitments alongside their jobs and coursework. We know that in the 30 to 40 age group mobile device use is very high, particularly among working professionals, who make up a major proportion of HE distance learners. However, the MOOC audience is of real interest here, as it is much more diverse. Because many MOOC users find traditional HE programs out of their reach, many of them will be in developing countries, where we already know that users are leapfrogging desktop computing and going straight to mobile devices and wireless connectivity. For these types of courses, mobile support is absolutely crucial.

A wide variety of tools exist to support online distance learning, split between synchronous and asynchronous tools, although typically a blend of the two is used. In synchronous learning, all participants are present at the same time. Courses are therefore organized to a timetable and involve tools such as webinars, video conferences, and real-time chat. In asynchronous learning, courses are self-directed and students work to their own schedules; tools include e-mail, discussion forums, audio recordings, video recordings, and printed material.

Across both traditional distance learning and MOOCs, there is a recognized need to improve course quality and design, faculty training, course assessment, and student retention. There are known barriers, including motivation, feedback, teacher contact, and student isolation. These are major challenges to the effectiveness of distance learning, and later in this article we will demonstrate how mobile devices can be used to address some of these areas.

Case studies

The following case studies illustrate two approaches to how an HE institution and a distance learning institution have adopted Moodle to deliver mobile learning. Both institutions were very early movers in making Moodle mobile-friendly, and they can be seen as torch bearers for the rest of us. Fortunately, both institutions have also been influential in the approach that Moodle HQ has taken to mobile compatibility, so in using the new mobile features in recent versions of Moodle, we are all able to take advantage of the substantial amount of work that went into these two sites.

University of Sussex

The University of Sussex is a research-led HE institution on the south coast of England. It uses a customized Moodle 1.9 installation called Study Direct, which plays host to 1,500 editing tutors and 15,000 students across 2,100 courses per year, and receives 13,500 unique hits per day.
The e-learning team at the University of Sussex consists of five staff (one manager, two developers, one user support, and one tutor support) whose remit covers a much wider range of learning technologies beyond the VLE. Even so, the team has achieved a great deal with limited resources. It has been working towards a responsive design for some years and has helped to influence the direction of Moodle with regard to designing for mobile devices and usability, by speaking at UK Moodle and HE conferences and providing passionate input into the Moodle forum debates on interface design. Furthermore, team member Stuart Lamour is one of the three original developers of the Bootstrap theme for Moodle, which is used throughout this article.

The Study Direct site shows what is possible in Moodle, given the time and resources for its development and a focus on user-centered design. The approach has been to avoid going down the native application route for mobile access, as many institutions have done, and to instead focus on a responsive, browser-based user experience. The login page is simple and clean. One of the nice things that the University of Sussex has done is to think through the user interactions on its site and clearly identify calls to action, typically with a green button, as shown by the sign in button on the login page in the following screenshot:

The team has built its own responsive theme for Moodle. While the team has taken a leading role in the development of the Moodle 2 Bootstrap theme, the University of Sussex site is still on Moodle 1.9, so this implementation uses its own custom theme. This theme is fully responsive and looks good when viewed on a tablet or a smartphone, reordering screen elements as necessary for each screen resolution. The course page, shown in the following screenshot, is similarly clear and uncluttered. The editing interface has been customized quite heavily to give tutors a clear and easy way to edit their courses without running the risk of messing up the user interface. The team maintains a useful and informative blog explaining what they have done to improve the user experience, which is well worth a read.

Open University

The Open University (OU) in the UK runs one of the largest Moodle sites in the world. It currently uses Moodle 2 for the OU's main VLE as well as for its OpenLearn and Qualifications online platforms. Its Moodle implementation regularly sees days with well over one million transactions and over 60,000 unique users, and it has seen peak times of 5,000 simultaneous online users.

The OU's focus on a mobile-friendly Moodle goes back to about 2010, so it was an early mover in this area. This means that the OU did not have the benefit of all the mobile-friendly features that now come with Moodle, but instead had to create its own mobile interface largely from scratch. Anthony Forth gave a presentation at the UK Moodle Moot in 2011 on the OU's approach to mobile interface design for Moodle. He noted that at the time the Open University migrated to Moodle 2, in 2011, it had over 13,000 mobile users per month. The OU chose to survey a group of 558 of these users in detail to investigate their needs more closely. It transpired that the most popular uses of Moodle on mobile devices were forums, news, resources, and study planners, while areas such as wikis and blogs were very low down the list of users' priorities. The OU's mobile design therefore focused on these particular areas, as well as looking at usability in general.
The preceding screenshot shows the OU course page, with tabbed access to the popular areas such as Planner, News, Forums, and Resources, and then the main content area providing space for the latest news, unread forum posts, and activities taking place this week. The site uses a nice, clean, easy-to-understand user interface, into which a lot of thought has gone regarding the needs of the student.

Summary

In this article, we have provided you with a vision of how mobile learning could be put to use on your own organization's Moodle platform. We gave you an understanding of some of the foundational concepts of mobile learning, some insights into how important mobile learning is becoming, and a picture of how it is gaining momentum in different sectors. Your learners are already using mobile devices, whether in educational institutions or in the workplace, and they use those devices as the backbone of their daily online interactions. They want to use them for learning too. Hopefully, we have started you off on a mobile learning path that will allow you to make this happen. Mobile devices are where the future of Moodle is going to be played out, so it makes complete sense to be designing for mobile access right now. Fortunately, Moodle already provides the means for this to happen, along with tools that allow you to set it up for mobile delivery.

Resources for Article:

Further resources on this subject:
- Getting Started with Moodle 2.0 for Business [Article]
- Managing Student Work using Moodle: Part 2 [Article]
- Integrating Moodle 2.0 with Mahara and GoogleDocs for Business [Article]