How-To Tutorials - CMS & E-Commerce


nopCommerce – The Public-facing Storefront

Packt
14 Aug 2013
3 min read
General site layout and overview

When customers navigate to your store, they are presented with the homepage, which is where we'll begin our review of the site layout and structure.

Logo: This is your store logo. As on just about every e-commerce site, it serves as a link back to your homepage.

Header links: This toolbar holds some of the most frequently used links, such as Shopping cart, Wishlist, and Account. These links are very customer focused, as this area also shows the customer's logged-in status once they have registered with your site.

Header menu: The menu holds links to other important pages, such as New products, Search, and Contact us. It also contains the link to the built-in blog.

Left-side menu: The left-side menu serves as the primary navigation area. It contains the Categories and Manufacturers links as well as Tags and Polls.

Center: This area holds the main content of the site: category and product information, as well as the main content of the homepage.

Right-side menu: The right-side menu holds links to ancillary pages such as Contact us, About us, and News. It also holds the Newsletter signup widget.

Footer: The footer holds the copyright information and the Powered by nopCommerce license tag.

The naming conventions used for these areas are driven by the Cascading Style Sheet (CSS) definitions. For instance, if you look at the CSS for the Header links area, you will see a definition named header-links.

nopCommerce uses layouts to define the overall site structure. A layout is a type of page used in ASP.NET MVC to define a common site template, which is then inherited by all the other pages on your site. nopCommerce uses several layout pages throughout the site, two of which define the core structure:

Root head: This is the base layout page. It contains the head of the generated HTML and is responsible for loading all the CSS and JavaScript files needed by the site.

Root: This layout is responsible for loading the header and footer, and contains the Master Wrapper, which holds all the other content of the page.

These two layouts are common to all pages within nopCommerce, which means every page in the site displays the logo, header links, header menu, and footer; they form the foundation of the site structure. The pages themselves use one of three further layouts that determine the structure inside the Master Wrapper:

Three-column: This layout includes the right-side, left-side, and center areas. It is used primarily on the homepage.

Two-column: This is the most common layout customers will encounter. It includes the left-side and center areas and is used on all category and product pages as well as the ancillary pages.

One-column: This layout includes the center area only. It is used on the shopping cart and checkout pages.

Changing the layout used by a given page requires changing the code. For instance, if we open the product page in Visual Studio, we can see the layout defined for it: _ColumnsTwo.cshtml, the two-column layout. You can change the layout by updating this property, for instance to _ColumnsThree.cshtml to use the three-column layout.
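The layout hierarchy described above can be summarized in a short sketch (Python is used purely for illustration; the article names only _ColumnsTwo.cshtml and _ColumnsThree.cshtml, so the one-column file name here is our assumption, and the area names are taken from the text):

```python
# Areas each nopCommerce column layout renders inside the Master Wrapper,
# per the article. "_ColumnsOne.cshtml" is an assumed file name following
# the naming pattern of the two layouts the article does name.
LAYOUT_AREAS = {
    "_ColumnsOne.cshtml": {"center"},
    "_ColumnsTwo.cshtml": {"left side", "center"},
    "_ColumnsThree.cshtml": {"left side", "center", "right side"},
}

def areas_on_page(layout: str) -> set:
    """Areas a page shows: the layout's own areas plus the areas every
    page inherits from the Root layouts (logo, header links, header menu,
    footer)."""
    common = {"logo", "header links", "header menu", "footer"}
    return common | LAYOUT_AREAS[layout]
```

The point of the sketch is the inheritance: whichever column layout a page picks, the Root layouts guarantee the common areas are always present.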


Creating Courses in Blackboard Learn

Packt
14 Aug 2013
10 min read
Courses in Blackboard Learn

The basic structure of any learning management system relies on the basic course, or course shell. A course shell holds all the information and communication that goes on within our course and is the central location for all activity between students and instructors.

Think of a course shell as a virtual house or apartment. A house or apartment is made up of different rooms where we put the things we use in everyday life. These rooms, such as the living room, kitchen, or bedrooms, can be compared to content areas within our course shell. Within each content area there are items, such as telephones, dishwashers, computers, or televisions, that we use to interact, communicate, or complete tasks; within the course shell, these would be called course tools. While as administrators we won't take a deep dive into all of these tools, we should know that they are available and that instructors use them within their courses.

Blackboard Learn offers many different ways to create courses, but to simplify our discussion we will classify them in two categories, basic and advanced. This article discusses the course creation options we classify as basic.

Course names and course IDs

When we get ready to create a course in Blackboard Learn, the system requires two items: a course name and a course ID. The first should be self-explanatory: if you are teaching a course on "Underwater Basket Weaving" (a hobby I highly recommend), you would simply place that information in the course name. The course ID is a bit trickier. Think of it like the barcode on your favorite cereal: the barcode is unique and tells the checkout scanner which item you have purchased. The course ID has a similar function in Blackboard Learn.
It must be unique, so if you plan to have multiple courses on "Underwater Basket Weaving", you will need a way to express the differences in each course ID.

Because every course ID must be unique, and because most Blackboard Learn instances contain numerous course shells, managing course IDs can become difficult. Consider creating a course ID naming convention if one isn't already in place. This discussion won't tell you which naming convention is best for your organization, but here are some helpful tips to start with:

- Use a symbol to separate words, acronyms, and numbers from one another. Some admins use an underscore, period, or dash. However, whitespace, percent, ampersand, less-than, greater-than, equals, and plus characters are not accepted in course IDs.
- If you plan to collect reporting data from your instance, make sure to include the term or session and the department in the course ID.
- Collect input from the people and teams in your organization who will enroll and support users. Their feedback on a naming convention will help make it successful.
- Many organizations use a student information system (SIS), which manages the enrollment process.

Default course properties

The first item in the Course Settings area lets us set several of the default access options for our courses. The Default Course Properties page covers when, and to whom, a course is available by default.

Available by Default: This option makes a course available to enrolled students as soon as it is created. Most administrators set this to No, since the instructor may not want to give access to the course immediately.

Allow Guests by Default and Allow Observers by Default: These options set guest and observer access for created courses by default. Most administrators set these to No because the guest access and observer role aren't used by their organizations.

Default Enrollment Options: We can either allow the instructor or system administrator to enroll students, or allow students to self-enroll. If we choose the former, we can give students the ability to e-mail the instructor to request access. If we choose Self Enrollment, we can set the dates when the option is available and even set a default access code for students to use when they self-enroll. Most administrators suggest setting the default to instructors or system administrators, which still allows instructors to enable self-enrollment within their own courses.

Default Duration: The Continuous option lets courses run continuously with no start or end date. Select Dates sets specific start and end dates for all courses. The last option, Days from the Date of Enrollment, makes a course run for a specific number of days after the student enrolls; this is helpful when a student self-enrolls in a self-paced course with a set number of days to complete it.

Pitfalls of setting start and end dates

When using start and end dates to control course duration, we may find that all users enrolled in the course lose access once the course ends.

Course themes and icons

If we are using the Blackboard 2012 theme, we can enable course themes within our Blackboard instance. These themes are created by Blackboard and can be applied to an instructor's course by clicking on the theme icon in the upper-right corner of the content area while in a course. There is a wide variety of themes, but currently administrators cannot create custom course themes.
We can also select which icon set courses use by default in our Blackboard instance. These icon sets are created by Blackboard, and the icons appear beside different content items and tools within a course. Unlike course themes, the chosen icon set is enforced across the entire instance.

Course Tools

The Course Tools area lets us set which tools and content items are available within courses by default. We can also control these settings, along with organization and system tools, by clicking on the Tools link under the Tools and Utilities module. The options used to set course tools are exactly the same as those in the Tools area just mentioned. Every tool has the same four options for its default availability:

Default On: A course automatically has this tool available to users, but an instructor or leader can disable it.

Default Off: Users in a course do not have access to this tool by default, but the instructor or leader can enable it.

Always On: Instructors or leaders cannot turn this tool off in their course or organization.

Always Off: Users do not see this tool in a course or organization, nor can the instructor or leader turn it on.

Once we make changes, we must click on the Submit button.

Quick Setup Guide

The Quick Setup Guide page was introduced in Blackboard 9.1 Service Pack 8. It offers instructors a basic introduction to the course if they have never used Blackboard before. Most of its links point to content from the On Demand area of the Blackboard website.
We as administrators can disable this guide from appearing when an instructor enters the course. If we leave the guide enabled, we can add custom text to it, which can help educate instructors about changes, help, and support available from our organization.

Custom images

We can further customize the default look and feel of our course shells with images on the course entry page and at the top of the course menu; we might use these images, for example, to spotlight that our organization has been honored with an award. Two images can be placed at the bottom of the course entry page, which is the page we see after entering a course, and another image can be placed at the top of the course menu. This area also allows us to link these images to a website.

Default course size limits

We can also set a default size limit for courses and for course export and archive packages within this area. Course size limits let administrators control storage space, which may be limited in some instances. When a course comes within 10 percent of its size limit, the administrator and instructor get an e-mail notification, triggered by the disk usage task that runs once a day. After the notification, the instructor can remove content from the course, or the administrator can increase the quota for that specific course.

Maximum Course Disk Size: Sets the amount of disk space a course shell can use for storage, including all course and student files within the course shell.

Maximum Course Package Size: Sets the maximum amount of content from the Course Files area included in a course copy, export, or archive.

Grade Center settings

This area gives us default control over the Grade History portion of the Grade Center. Grade history is exactly what it says: it keeps a history of the changes within the Grade Center.
Most administrators recommend having grade history enabled by default because of its historical benefits. Your organization may want to discuss whether instructors should be permitted to disable this feature within their courses or clear the history altogether.

Course menu and structures

The course menu provides the main navigation for any course user. Our organization can create a default course menu layout for all newly created course shells, based on input from instructional designers and pedagogical experts, simply by editing the default menu that appears on this page. As administrators, we should pay close attention when creating a default course menu: any additions or removals are applied automatically, without clicking on the Submit or Cancel buttons, to all courses created from that point forward.

Blackboard recently introduced course structures. If enabled, these pre-built course menus are available to instructors within their course's control panel, covering a number of different course instruction scenarios.
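As a footnote to the course ID discussion earlier in this article, the naming rules can be captured in a small helper. This is an illustrative sketch only (the function names, the underscore separator, and the department/number/term ordering are our own choices, not Blackboard Learn behavior); it rejects the characters the article says Blackboard disallows and builds an ID encoding term and department, as the tips suggest:

```python
# Characters Blackboard Learn does not accept in a course ID, per the text:
# whitespace, percent, ampersand, less-than, greater-than, equals, plus.
FORBIDDEN = set(" \t%&<>=+")

def is_valid_course_id(course_id: str) -> bool:
    """Reject empty IDs and IDs containing any disallowed character."""
    return bool(course_id) and not any(ch in FORBIDDEN for ch in course_id)

def make_course_id(dept: str, number: str, term: str, sep: str = "_") -> str:
    """Build an ID like UBW_101_FA2013: department, course number, term."""
    course_id = sep.join([dept, number, term])
    if not is_valid_course_id(course_id):
        raise ValueError("course ID contains a disallowed character")
    return course_id
```

For example, make_course_id("UBW", "101", "FA2013") yields "UBW_101_FA2013", while an ID such as "UBW 101" or "UBW+101" would be rejected.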


Developing with Entity Metadata Wrappers

Packt
07 Aug 2013
8 min read
Introducing entity metadata wrappers

Entity metadata wrappers, or wrappers for brevity, are PHP wrapper classes that simplify code dealing with entities. They abstract structure so that a developer can write generic code when accessing entities and their properties. Wrappers also implement PHP's iterator interfaces, making it easy to loop through all properties of an entity or all values of a multi-value property.

The magic of wrappers is in their use of the following three classes:

- EntityStructureWrapper
- EntityListWrapper
- EntityValueWrapper

The first has a subclass, EntityDrupalWrapper, which is the entity structure object you'll deal with the most. Entity property values are either plain data, an array of values, or an array of entities; the EntityListWrapper class wraps an array of values or entities. As a result, generic code must inspect a value's type before doing anything with it, to prevent exceptions from being thrown.

Creating an entity metadata wrapper object

Let's take a look at two hypothetical entities that expose data from two database tables, ingredient and recipe_ingredient. The ingredient table has two fields: iid and name. The recipe_ingredient table has four fields: riid, iid, qty, and qty_unit.

To load and wrap an ingredient entity with an iid of 1, we would use the following line of code:

    $wrapper = entity_metadata_wrapper('ingredient', 1);

To load and wrap a recipe_ingredient entity with an riid of 1, we would use this line:

    $wrapper = entity_metadata_wrapper('recipe_ingredient', 1);

Now that we have a wrapper, we can access the standard entity properties.
Standard entity properties

The first argument of the entity_metadata_wrapper() function is the entity type, and the second is the entity identifier, which is the value of the entity's identifying property. Note that it is not necessary to supply the bundle, as identifiers are properties of the entity type.

When an entity is exposed to Drupal, the developer selects one database field to be the entity's identifying property and another to be the entity's label property. In our hypothetical example, a developer would declare iid as the identifying property and name as the label property of the ingredient entity. These two abstract properties, combined with the type property, are essential for making our code apply to multiple data structures that have different identifier fields.

Note that type is the literal name of the property storing the entity's type, whereas the identifying property and label property are metadata in the entity declaration. Code uses that metadata to find the correct property name under which each entity stores its identifier and label. To illustrate this, consider the following snippet of the entity_id() function from the entity module:

    $info = entity_get_info($entity_type);
    $key = isset($info['entity keys']['name']) ? $info['entity keys']['name'] : $info['entity keys']['id'];
    return isset($entity->$key) ? $entity->$key : NULL;

First the entity information is retrieved, then the identifying property's name is read from that information, and finally that name is used to retrieve the identifier from the entity. Note that a non-integer identifier is possible, so remember to take that into account in any generic code.

The label property can be either a database field name or a hook.
The developer exposing an entity can declare a hook that generates a label when the label is more complicated, such as what we would need for recipe_ingredient: there, we would combine the qty and qty_unit properties with the name of the referenced ingredient.

Entity introspection

To see the properties an entity has, call the getPropertyInfo() method on the entity wrapper; this can save you time when debugging. You can inspect the result with the devel module's dpm() function or with var_dump():

    dpm($wrapper->getPropertyInfo());
    var_dump($wrapper->getPropertyInfo());

Using an entity metadata wrapper

The standard operations for entities are CRUD: create, retrieve, update, and delete. Let's look at each of these operations in some example code. The code is part of the pde module's Drush file, sites/all/modules/pde/pde.drush.inc. Each CRUD operation is implemented as a Drush command, and the relevant code is given in the following subsections. Before each code example there are two example command lines: the first executes the Drush command for the operation, and the second shows its help.

Create

Creation of entities is implemented in the drush_pde_entity_create() function.

Drush commands

The following examples show the usage of the entity-create (ec) Drush command and how to obtain help documentation for the command:

    $ drush ec ingredient '{"name": "Salt, pickling"}'
    $ drush help ec

Code snippet

    $entity = entity_create($type, $data);
    // Can call $entity->save() here or wrap to play and save
    $wrapper = entity_metadata_wrapper($type, $entity);
    $wrapper->save();

Here we create an entity, wrap it, and then save it. The first line uses entity_create(), to which we pass the entity type and an associative array with property names as keys and their values. The function returns an object that has Entity as its base class. The save() method does all the hard work of storing our entity in the database.
No more calls to db_insert() are needed! Whether you use the save() method on the wrapper or on the Entity object really depends on what you need to do before and after the call; for example, if you need to plug values into fields before you save the entity, it's handy to use a wrapper.

Retrieve

Retrieving (reading) entities is implemented in the drush_pde_print_entity() function.

Drush commands

The following examples show the usage of the entity-read (er) Drush command and how to obtain help documentation for the command:

    $ drush er ingredient 1
    $ drush help er

Code snippet

    $header = ' Entity (' . $wrapper->type();
    $header .= ') - ID# '. $wrapper->getIdentifier().':';
    // equivalents: $wrapper->value()->entityType()
    //              $wrapper->value()->identifier()
    $rows = array();
    foreach ($wrapper as $pkey => $property) {
      // $wrapper->$pkey === $property
      if (!($property instanceof EntityValueWrapper)) {
        $rows[$pkey] = $property->raw() . ' (' . $property->label() . ')';
      }
      else {
        $rows[$pkey] = $property->value();
      }
    }

First, we call the type() method of the wrapper, which returns the wrapped entity's type. The wrapped Entity object itself is returned by the wrapper's value() method; using wrappers gives us the wrapper benefits while still letting us use the entity object directly. Next, the getIdentifier() method retrieves the entity's ID without our knowing the identifying property's name; we'll discuss the identifying property of an entity more in a moment.

Thanks to our wrapper object implementing the IteratorAggregate interface, we can use a foreach statement to iterate through all of the entity's properties. Of course, it is also possible to access a single property by its key; for example, to access the name property of our hypothetical ingredient entity, we would use $wrapper->name.

The last pieces are the raw(), label(), and value() method calls.
The distinction between these is very important:

- raw(): Returns the property's value straight from the database.
- label(): Returns the value of the entity's label property, for example name.
- value(): Returns the property's wrapped data: either a value or another wrapper.

The raw() and value() methods are interchangeable for simple entities, where there is no difference between the storage value and the property value. For complex properties such as dates, however, there is a difference. As a rule of thumb, always use the value() method unless you absolutely need the storage value. The example code uses raw() only so that we can explore it; all remaining examples in this book will stick to the rule of thumb. I promise!

- Storage value: The value of a property in the underlying storage medium, for example the database.
- Property value: The value of a property at the entity level, after conversion from its storage value to something more pleasing, for example a formatted date instead of a Unix timestamp.

Multi-valued properties deserve a quick mention here. Reading them is straightforward, as they are accessible as an array: you can use array notation to get an element, and a foreach to loop through them. The following hypothetical snippet illustrates this:

    $output = 'First property: ';
    $output .= $wrapper->property[0]->value();
    foreach ($wrapper->property as $vwrapper) {
      $output .= $vwrapper->value();
    }

Summary

This article delved into development using entity metadata wrappers for safe CRUD operations and entity introspection.
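The entity_id() lookup pattern quoted in the Standard entity properties section is language-agnostic. As a rough sketch (in Python rather than PHP, with made-up entity metadata purely for illustration), it amounts to looking up the identifying property's name in the entity-type metadata and then reading that property off the entity:

```python
# Hypothetical entity-type metadata, mirroring Drupal's
# $info['entity keys'] structure for our two example tables.
ENTITY_INFO = {
    "ingredient": {"entity keys": {"id": "iid"}},
    "recipe_ingredient": {"entity keys": {"id": "riid"}},
}

def entity_id(entity_type: str, entity: dict):
    """Return an entity's identifier without hard-coding the field name."""
    info = ENTITY_INFO[entity_type]
    # Prefer a declared 'name' key if present, else fall back to 'id',
    # just as the quoted PHP snippet does.
    key = info["entity keys"].get("name", info["entity keys"]["id"])
    return entity.get(key)
```

Here entity_id("ingredient", {"iid": 1, "name": "Salt"}) returns 1; the point is that the same code works unchanged for recipe_ingredient, whose identifier field is riid rather than iid.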


Working sample for controlling the mouse by hand

Packt
06 Aug 2013
10 min read
Getting ready

Create a project in Visual Studio and prepare it for working with OpenNI and NiTE.

How to do it...

1. Copy the ReadLastCharOfLine() and HandleStatus() functions to the top of your source code (just below the #include lines).

2. Then add the following lines of code:

    class MouseController : public nite::HandTracker::NewFrameListener {
      private:
        float startPosX, startPosY;
        int curX, curY;
        nite::HandId handId;
        RECT desktopRect;
      public:
        MouseController() {
          startPosX = startPosY = -1;
          POINT curPos;
          if (GetCursorPos(&curPos)) {
            curX = curPos.x;
            curY = curPos.y;
          } else {
            curX = curY = 0;
          }
          handId = -1;
          const HWND hDesktop = GetDesktopWindow();
          GetWindowRect(hDesktop, &desktopRect);
        }
        void onNewFrame(nite::HandTracker& hTracker) {
          nite::Status status = nite::STATUS_OK;
          nite::HandTrackerFrameRef newFrame;
          status = hTracker.readFrame(&newFrame);
          if (!HandleStatus(status) || !newFrame.isValid())
            return;
          const nite::Array<nite::GestureData>& gestures =
              newFrame.getGestures();
          for (int i = 0; i < gestures.getSize(); ++i) {
            if (gestures[i].isComplete()) {
              if (gestures[i].getType() == nite::GESTURE_CLICK) {
                INPUT Input = {0};
                Input.type = INPUT_MOUSE;
                Input.mi.dwFlags = MOUSEEVENTF_LEFTDOWN | MOUSEEVENTF_LEFTUP;
                SendInput(1, &Input, sizeof(INPUT));
              } else {
                nite::HandId handId;
                status = hTracker.startHandTracking(
                    gestures[i].getCurrentPosition(), &handId);
              }
            }
          }
          const nite::Array<nite::HandData>& hands = newFrame.getHands();
          for (int i = hands.getSize() - 1; i >= 0; --i) {
            if (hands[i].isTracking()) {
              if (hands[i].isNew() || handId != hands[i].getId()) {
                status = hTracker.convertHandCoordinatesToDepth(
                    hands[i].getPosition().x,
                    hands[i].getPosition().y,
                    hands[i].getPosition().z,
                    &startPosX, &startPosY);
                handId = hands[i].getId();
                if (status != nite::STATUS_OK) {
                  startPosX = startPosY = -1;
                }
              } else if (startPosX >= 0 && startPosY >= 0) {
                float posX, posY;
                status = hTracker.convertHandCoordinatesToDepth(
                    hands[i].getPosition().x,
                    hands[i].getPosition().y,
                    hands[i].getPosition().z,
                    &posX, &posY);
                if (status == nite::STATUS_OK) {
                  if (abs(int(posX - startPosX)) > 10)
                    curX += ((posX - startPosX) - 10) / 3;
                  if (abs(int(posY - startPosY)) > 10)
                    curY += ((posY - startPosY) - 10) / 3;
                  curX = min(curX, desktopRect.right);
                  curX = max(curX, desktopRect.left);
                  curY = min(curY, desktopRect.bottom);
                  curY = max(curY, desktopRect.top);
                  SetCursorPos(curX, curY);
                }
              }
              break;
            }
          }
        }
    };

3. Then locate the following line:

    int _tmain(int argc, _TCHAR* argv[]) {

4. Add the following inside this function:

    nite::Status status = nite::STATUS_OK;
    status = nite::NiTE::initialize();
    if (!HandleStatus(status)) return 1;
    printf("Creating hand tracker ...\r\n");
    nite::HandTracker hTracker;
    status = hTracker.create();
    if (!HandleStatus(status)) return 1;
    MouseController* listener = new MouseController();
    hTracker.addNewFrameListener(listener);
    hTracker.startGestureDetection(nite::GESTURE_HAND_RAISE);
    hTracker.startGestureDetection(nite::GESTURE_CLICK);
    printf("Reading data from hand tracker ...\r\n");
    ReadLastCharOfLine();
    nite::NiTE::shutdown();
    openni::OpenNI::shutdown();
    return 0;

How it works...

Both the ReadLastCharOfLine() and HandleStatus() functions are present here too; they are well known to you by now and need no further explanation.

In step 2, we declared a class that we are going to use for capturing the new-data-available event from the nite::HandTracker object. Its definition is a little different here: other than the onNewFrame() method, we defined a number of member variables and a constructor for this class too. We also renamed it MouseController to better reflect its purpose.
    class MouseController : public nite::HandTracker::NewFrameListener {
      private:
        float startPosX, startPosY;
        int curX, curY;
        nite::HandId handId;
        RECT desktopRect;

As you can see, our class is still a child of nite::HandTracker::NewFrameListener, because we are going to use it to listen for nite::HandTracker events. We also defined six member variables. startPosX and startPosY hold the initial position of the active hand, whereas curX and curY hold the current position of the mouse. The handId variable holds the ID of the active hand, and desktopRect holds the size of the desktop, so that we can move the mouse only within this area. These are all private variables, meaning they are not accessible from outside the class.

Then we have the class's constructor, which initializes some of the preceding variables:

      public:
        MouseController() {
          startPosX = startPosY = -1;
          POINT curPos;
          if (GetCursorPos(&curPos)) {
            curX = curPos.x;
            curY = curPos.y;
          } else {
            curX = curY = 0;
          }
          handId = -1;
          const HWND hDesktop = GetDesktopWindow();
          GetWindowRect(hDesktop, &desktopRect);
        }

In the constructor, we set both startPosX and startPosY to -1 and store the current position of the mouse in the curX and curY variables. We then set handId to -1 to mark that there is currently no active hand, and retrieve the value of desktopRect using two Windows API functions, GetDesktopWindow() and GetWindowRect().

The most important work happens in the onNewFrame() method. This method is called whenever new data becomes available in nite::HandTracker, and it is responsible for processing that data. Since its being called means new data is available, the first thing to do in its body is to read that data.
So we use the nite::HandTracker::readFrame() method to read the data from this object:

        void onNewFrame(nite::HandTracker& hTracker) {
          nite::Status status = nite::STATUS_OK;
          nite::HandTrackerFrameRef newFrame;
          status = hTracker.readFrame(&newFrame);

When working with nite::HandTracker, the first thing to do after reading the data is to handle gestures, if you expect any. We expect the Hand Raise gesture to detect new hands and the Click gesture to perform the mouse click:

          const nite::Array<nite::GestureData>& gestures =
              newFrame.getGestures();
          for (int i = 0; i < gestures.getSize(); ++i) {
            if (gestures[i].isComplete()) {
              if (gestures[i].getType() == nite::GESTURE_CLICK) {
                INPUT Input = {0};
                Input.type = INPUT_MOUSE;
                Input.mi.dwFlags = MOUSEEVENTF_LEFTDOWN | MOUSEEVENTF_LEFTUP;
                SendInput(1, &Input, sizeof(INPUT));
              } else {
                nite::HandId handId;
                status = hTracker.startHandTracking(
                    gestures[i].getCurrentPosition(), &handId);
              }
            }
          }

As you can see, we retrieve the list of all gestures using nite::HandTrackerFrameRef::getGestures() and loop through them, searching for the ones in the completed state. For a nite::GESTURE_CLICK gesture, we perform a mouse click, using the SendInput() function from the Windows API. If the recognized gesture wasn't of type nite::GESTURE_CLICK, it must be a nite::GESTURE_HAND_RAISE gesture, so we request tracking of this newly recognized hand using the nite::HandTracker::startHandTracking() method.

The next thing is to take care of the hands being tracked. We first retrieve a list of them using the nite::HandTrackerFrameRef::getHands() method and then loop through it. This could be done with a simple for loop, as we used for the gestures, but since we want to read this list in reverse order, we need a reverse for loop.
The reason we need to read this list in reverse order is that we always want the last recognized hand to control the mouse:

```cpp
    const nite::Array<nite::HandData>& hands = newFrame.getHands();
    for (int i = hands.getSize() - 1; i >= 0; --i) {
```

Then we need to make sure that the current hand is being tracked, because we don't want an invisible hand to control the mouse. The first tracked hand we find is the one we want, so we break out of the loop there; this happens, of course, after the processing part, which we have removed from the following code to make it clearer:

```cpp
        if (hands[i].isTracking()) {
            . . .
            break;
        }
```

Speaking of processing, in place of the three dots in the preceding code we have another condition. This condition is responsible for finding out whether this hand is the same one that controlled the mouse in the last frame. If it is a new hand (either a newly recognized hand or a newly active one), we need to save its current position in the startPosX and startPosY variables:

```cpp
            if (hands[i].isNew() || handId != hands[i].getId()) {
                status = hTracker.convertHandCoordinatesToDepth(
                    hands[i].getPosition().x,
                    hands[i].getPosition().y,
                    hands[i].getPosition().z,
                    &startPosX, &startPosY);
                handId = hands[i].getId();
                if (status != nite::STATUS_OK) {
                    startPosX = startPosY = -1;
                }
```

If it is the same hand, we have another condition: do we already have the startPosX and startPosY values, or not? If we have them, we can calculate the mouse's movement. But first we need to calculate the position of the hand relative to the depth frame:

```cpp
            } else if (startPosX >= 0 && startPosY >= 0) {
                float posX, posY;
                status = hTracker.convertHandCoordinatesToDepth(
                    hands[i].getPosition().x,
                    hands[i].getPosition().y,
                    hands[i].getPosition().z,
                    &posX, &posY);
```

Once the conversion ends, we need to calculate the new position of the mouse depending on how the hand's position has changed. But we want to define a safe area within which the cursor stays static when small changes happen.
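Before looking at the real listing, the dead-zone, damping, and clamping arithmetic it implements can be condensed into a standalone helper. The Desktop struct and updateCursor() function below are hypothetical names for illustration, not NiTE or Windows API calls; the arithmetic mirrors the listing that follows.

```cpp
#include <cstdlib>
#include <algorithm>

// Hypothetical stand-in for the RECT desktop bounds used in the article.
struct Desktop { int left, top, right, bottom; };

// Applies the article's cursor arithmetic: ignore drifts of 10 px or less,
// damp larger movements by a factor of 3, and clamp to the desktop bounds.
void updateCursor(float posX, float posY, float startPosX, float startPosY,
                  int& curX, int& curY, const Desktop& desk) {
    if (std::abs(static_cast<int>(posX - startPosX)) > 10)
        curX += static_cast<int>(((posX - startPosX) - 10) / 3);
    if (std::abs(static_cast<int>(posY - startPosY)) > 10)
        curY += static_cast<int>(((posY - startPosY) - 10) / 3);
    curX = std::min(std::max(curX, desk.left), desk.right);
    curY = std::min(std::max(curY, desk.top), desk.bottom);
}
```

For example, a 40 px hand drift to the right moves the cursor by (40 - 10) / 3 = 10 px, while a 5 px drift is absorbed by the dead zone.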
So we calculate the new position of the mouse only if the hand has moved by more than 10 pixels in our depth frame:

```cpp
                if (status == nite::STATUS_OK) {
                    if (abs(int(posX - startPosX)) > 10)
                        curX += ((posX - startPosX) - 10) / 3;
                    if (abs(int(posY - startPosY)) > 10)
                        curY += ((posY - startPosY) - 10) / 3;
```

As you can see in the preceding code, we also divided the changes by 3, because we didn't want the cursor to move too fast. But before setting the position of the mouse, we first need to make sure that the new coordinates are within the screen view port, using the desktopRect variable:

```cpp
                    curX = min(curX, desktopRect.right);
                    curX = max(curX, desktopRect.left);
                    curY = min(curY, desktopRect.bottom);
                    curY = max(curY, desktopRect.top);
```

After calculating everything, we can set the new position of the mouse using SetCursorPos() from the Windows API:

```cpp
                    SetCursorPos(curX, curY);
```

The third and fourth steps are not markedly different from before. In this step, we have the initialization process; this includes the initialization of NiTE and the creation of the nite::HandTracker variable:

```cpp
    status = nite::NiTE::initialize();
    . . .
    nite::HandTracker hTracker;
    status = hTracker.create();
```

Then we should add our newly defined class as a listener to nite::HandTracker, so that nite::HandTracker can call it later when a new frame becomes available:

```cpp
    MouseController* listener = new MouseController();
    hTracker.addNewFrameListener(listener);
```

We also need an active search for a hand gesture, because we need to locate the position of the hands, and we have to search for another gesture for the mouse click. So we call the nite::HandTracker::startGestureDetection() method twice, once for the Click (also known as Push) gesture and once for the Hand Raise gesture:

```cpp
    hTracker.startGestureDetection(nite::GESTURE_HAND_RAISE);
    hTracker.startGestureDetection(nite::GESTURE_CLICK);
```

At the end, we wait until the user presses the Enter key to end the app. We do nothing more in our main thread except wait.
Everything happens in another thread:

```cpp
    ReadLastCharOfLine();
    nite::NiTE::shutdown();
    openni::OpenNI::shutdown();
    return 0;
```

Summary

In this article, we learned how to write a working example using nite::HandTracker, controlled the position of the mouse cursor using the NiTE hand tracking feature, and simulated a click event.

Resources for Article:

Further resources on this subject:

Getting started with Kinect for Windows SDK Programming [Article]
Kinect in Motion – An Overview [Article]
Active Directory migration [Article]
Packt
06 Aug 2013
10 min read

The Media module

(For more resources related to this topic, see here.)

While there are many ways to build image integration into Drupal, they all stem from different requirements, and each option should be carefully reviewed. Browsing the over 300 modules available in the Media category in Drupal's module search for Drupal 7 (http://drupal.org/project/modules) may leave you confused as to where to begin. We'll take a look at the Media module (http://drupal.org/project/media), which was sponsored by companies such as Acquia, Palantir, and Advomatic and was created to provide a solid infrastructure and common APIs for working with media assets, and images specifically.

To begin, download the 7.x-2.x version of the Media module (currently regarded as unstable, but fairly different from 7.x-1.x, which it will replace soon enough) and unpack it to the sites/all/modules directory as we did before. The Media module also requires the File entity module (http://drupal.org/project/file_entity), which further extends how files are managed within Drupal by providing a fieldable file entity, display modes, and more. Use the 7.x-2.x unstable version for the File entity module too, and download and unpack it as always.

To enable these modules, navigate to the top administrative bar and click on Modules. Scrolling to the bottom of the page, we see the Media category with a collection of modules; toggle on all of them (Media field and Media Internet sources) and click on Save configuration.

Adding a media asset field

If you noticed something missing from the rezepi content type fields earlier, you were right: what kind of recipes website would this be without some visual stimulation? Yes, we mean pictures! To add a new field, navigate to Structure | Content Types | rezepi | manage fields (/admin/structure/types/manage/rezepi/fields). Name the new field Picture, choose Image as the FIELD TYPE and Media file selector for the WIDGET select box, and click on Save.
As always, we are about to configure the new field's settings, but the step before that presents the global settings for this new field. These are okay to leave as they are, so we continue and click on Save field settings. In the general field settings, most defaults are suitable, except that we want to toggle on the Required field setting and make sure the Allowed file extensions for uploaded files setting lists at least some common image types, so set it to PNG, GIF, JPG, JPEG. Click on Save settings to finalize. We've now updated the rezepi content type, so let's start using it.

When adding a rezepi, the form for filling in the fields should be similar to the following:

The Picture field we defined to use an image no longer has a file upload form element, but rather a Select media button. Once it is clicked, we can observe multiple tabbed options:

For now, we are concerned only with the Upload tab, where we submit our picture for this rezepi entry. After browsing your local folder and uploading the file, upon clicking Save we are presented with the new media asset form:

Our picture has been added to the website's media library, and we can see that it's no longer just a file uploaded somewhere; rather, it's a media asset with a thumbnail created, and it even has a way to configure the image HTML input element's attributes. We'll proceed by clicking on Save here, and once more on the add new content form, to finalize this new rezepi submission.

The media library

To further explore the media asset tabs that we've seen before, we will edit the recently created rezepi entry and try to replace the previously uploaded picture with another. In the node's edit form, click on the Picture field's Select media button and browse to the Library tab, which should resemble the following:

The Library tab is actually just a view (you can easily tell by the down-arrow and gear icons to the right of the screen) that lists all the files on your website.
Furthermore, this view is equipped with some filters, such as filename and media type, and even sorting options. Straight away, we can see that the picture for the rezepi we created earlier shows up there, because it has been added to the library as a media asset. We can choose to use it again in further content that we create on the website. Without the Media module and its media asset management, we had to use the file field, which only allowed us to upload files to our content but never to reuse files that we, or other users, had uploaded previously. Aside from possibly being annoying, this also meant that we had to duplicate files if we needed the same media file for more than one content type.

The numbered images probably belong to some of the themes that we experimented with before, and the last two files are the images we uploaded to our memo content type. Because these files were not created when the Media module was installed, they lack some of the metadata entries which the Media module keeps to better organize media assets.

To manage our media library, we can click on Content in the top administrative bar, which shows all content that has been created on your Drupal site. It features filtering and ordering of the columns to easily find content to moderate or investigate, and even provides some bulk action updates on several content types. More importantly, after enabling the Media module we have a new option to choose from in the top right tabs: along with Content and Comments, we now have Files.

The page lists all file uploads, both prior to the Media module as well as afterwards, and clearly states the relevant metadata, such as media type, size, and the user who uploaded each file. We can also choose between List view and Thumbnail view using the top right tab options, which offers a nicer view and management of our content.
The media library management page also features options to add media assets right from this page, using the Add file and Import files links. While we've already seen how adding a single media file works, adding a bunch of files is something new. The Import files option allows you to specify a directory on your web server which contains media files and import them all into your Drupal website. After clicking on Preview, it will list the full paths of the files that were detected and ask you to confirm and thus continue with the import process. Once that has completed successfully, you can return to the files thumbnail view (/admin/content/file/thumbnails) and edit the imported files, possibly setting some title text or removing some entries.

You might be puzzled as to the point of importing media files from the server's web directory; after all, this requires someone to have transferred the files there via FTP, SCP, or some other method, which is definitely somewhat unconventional these days. Your hunch is correct: the import is a nice-to-have feature, but it's definitely not a replacement for bulk upload of files from the web interface, which Drupal should support, and we will learn about adding this capability later on.

When using the media library to manage these files, you will probably ask yourself, before deleting or replacing an image, where is it actually being used? For that reason, Drupal's internal file handling keeps track of which entity makes use of each file, and the Media module exposes this information to us via the web interface. Any information about a media asset is available in its Edit or View tabs, including where it is being used. Let's navigate through the media library to find the image we created previously for the rezepi entry and then click on Edit in the rightmost OPERATIONS column.
In the Edit page, we can click on the USAGE tab at the top right of the page to get this information:

We can tell which entity type is using this file, see the title of the node that it's being used for, with a link to it, and finally the usage count.

Using URL aliases

If you are familiar with Drupal's internal URL aliases, then you know that Drupal employs a convention of /node/<NID>[/ACTION], where NID is replaced by the node ID in the database and ACTION may be one of edit, view, or perhaps delete. To see this for yourself, you can click on one of the content items that we've previously created and, when viewing its full node display, observe the URL in your browser's address bar. When working with media assets, we can employ the same URL alias convention for files too, using the alias /file/<FID>[/ACTION]. For example, to see where the first file you've uploaded is being used, navigate in your browser to /file/1/usage.

Remote media assets

If we wanted to replace the picture for this rezepi by specifying a link to an image that we've encountered on a website, maybe even our friend's personal blog, the only way to have done that without the Media module was to download it and upload it using the file field's upload widget. With the Media module, we can specify the link for an image hosted and provided by a remote resource using the Web tab. I've Googled some images and, after finding my choice for a picture, I simply copy and paste the image link into the URL input text as follows:

After clicking on Submit, the image file will be downloaded to our website's files directory, and the Media module will create the required metadata and present the picture's settings form before replacing our previous picture:

There are plenty of modules, such as Media: Flickr (http://drupal.org/project/media_flickr), which extend the Media module by providing integration with remote resources for images, and even provide support for a Flickr photoset or slideshow.
Just to list a few other modules:

Media: Tableau (http://drupal.org/project/media_tableau) for integrating with the Tableau analytics platform
Media: Slideshare (http://drupal.org/project/media_slideshare) for integrating with presentations on the Slideshare website
Media: Dailymotion (http://drupal.org/project/media_dailymotion) for integrating with the Dailymotion video sharing website

The only thing left for you is to download them from http://drupal.org/modules and start experimenting!

Summary

In this article, we dived into deep water, creating our very own content type for a food recipe website. To provide a better user experience when dealing with images in Drupal sites, we learned about the prominent Media module and its extensive support for media resources, such as providing a media library and key integration with other modules such as Media Gallery.

Resources for Article:

Further resources on this subject:

Installing and Configuring Drupal Commerce [Article]
Drupal 7 Fields/CCK: Using the Image Field Modules [Article]
Drupal 7 Preview [Article]
Packt
31 Jul 2013
20 min read

Working with Blocks

(For more resources related to this topic, see here.)

Creating a custom block type

Creating block types is a great way to add custom functionality to a website. This is the preferred way to add things like calendars, dealer locators, or any other type of content that is visible and repeatable on the frontend of the website.

Getting ready

The code for this recipe is available to download from the book's website for free. We are going to create a fully functioning block type that will display content on our website.

How to do it...

The steps for creating a custom block type are as follows. First, you will need to create a directory in your website's root /blocks directory. The name of the directory should be underscored and will be used to refer to the block throughout the code. In this case, we will create a new directory called /hello_world.

Once you have created the hello_world directory, you will need to create the following files:

controller.php
db.xml
form.php
add.php
edit.php
view.php
view.css

Now, we will add code to each of the files. First, we need to set up the controller file. The controller file is what powers the block. Since this is a very basic block, our controller will only contain information to tell concrete5 some details about our block, such as its name and description. Add the following code to controller.php:

```php
class HelloWorldBlockController extends BlockController {

    protected $btTable = "btHelloWorld";
    protected $btInterfaceWidth = "300";
    protected $btInterfaceHeight = "300";

    public function getBlockTypeName() {
        return t('Hello World');
    }

    public function getBlockTypeDescription() {
        return t('A basic Hello World block type!');
    }
}
```

Notice that the class name is HelloWorldBlockController. concrete5 conventions dictate that you should name your block controllers with the same name as the block directory in camel case (for example, CamelCase) form, followed by BlockController.
The btTable class variable is important, as it tells concrete5 which database table should be used for this block. It is important that this table doesn't already exist in the database, so it's a good idea to give it a name of bt (short for "block type") plus the camel cased version of the block name.

Now that the controller is set up, we need to set up the db.xml file. This file is based on the ADOXMLS format, which is documented at http://phplens.com/lens/adodb/docs-datadict.htm#xmlschema. This XML file tells concrete5 which database tables and fields should be created for this new block type (and which tables and fields should get updated when your block type gets updated). Add the following XML code to your db.xml file:

```xml
<?xml version="1.0"?>
<schema version="0.3">
  <table name="btHelloWorld">
    <field name="bID" type="I">
      <key />
      <unsigned />
    </field>
    <field name="title" type="C" size="255">
      <default value="" />
    </field>
    <field name="content" type="X2">
      <default value="" />
    </field>
  </table>
</schema>
```

concrete5 blocks typically have both an add.php and an edit.php file, both of which often do the same thing: show the form containing the block's settings.
Since we don't want to repeat code, we will put our form HTML in a third file, form.php:

```php
<?php $form = Loader::helper('form'); ?>
<div>
    <label for="title">Title</label>
    <?php echo $form->text('title', $title); ?>
</div>
<div>
    <label for="content">Content</label>
    <?php echo $form->textarea('content', $content); ?>
</div>
```

Once that is all set, add this line of code to both add.php and edit.php to have this HTML code appear when users add and edit the block:

```php
<?php include('form.php') ?>
```

Add the following HTML to your view.php file:

```php
<h1><?php echo $title ?></h1>
<div class="content">
    <?php echo $content ?>
</div>
```

Finally, for a little visual appeal, add the following code to view.css:

```css
.content {
    background: #eee;
    padding: 20px;
    margin: 20px 0;
    border-radius: 10px;
}
```

Now all of the files have been filled with the code to make our Hello World block function, so we need to install the block in concrete5 so that we can add it to our pages. To install the new block, you will need to sign into your concrete5 website and navigate to /dashboard/blocks/types/. If you happen to get a PHP fatal error here, clear your concrete5 cache by visiting /dashboard/system/optimization/clear_cache (it is always a good idea to disable the cache while developing in concrete5).

At the top of the Block Types screen, you should see your Hello World block, ready to install. Click on the Install button. Now the block is installed and ready to add to your site!

How it works...

Let's go through the code that we just wrote, step by step. In controller.php, there are a few protected variables at the top of the class. The $btTable variable tells concrete5 which table in the database holds the data for this block type. The $btInterfaceWidth and $btInterfaceHeight variables determine the initial size of the dialog window that appears when users add your block to a page on their site.
We put the block's description and name in special getter functions for one reason: to potentially support translations down the road. It's best practice to wrap any strings that appear in concrete5 in the global t() function.

The db.xml file tells concrete5 which database tables should be created when this block gets installed. This file uses the ADOXMLS format to generate tables and fields. In this file, we are telling concrete5 to create a table called btHelloWorld. That table should contain three fields: an ID field, the title field, and the content field. The names of these fields matter, because concrete5 requires them to match up with the names of the fields in the HTML form.

In form.php, we are setting up the settings form that users will fill out to save the block's content. We are using the Form Helper to generate the HTML for the various fields. Notice how we are able to use the $title and $content variables without declaring them; concrete5 automatically exposes those variables to the form whenever the block is added or edited. We then include this form in the add.php and edit.php files.

The view.php file is a template file that contains the HTML that end users will see on the website. We are just wrapping the title in an <h1> tag and the content in a <div> with a class of .content. concrete5 will automatically include view.css (and view.js, if it happens to exist) if they are present in your block's directory. Also, if you include an auto.js file, it will automatically be included when the block is in edit mode. We added some basic styling to the .content class, and concrete5 takes care of adding this CSS file to your site's <head> tag.

Using block controller callback functions

The block controller class contains a couple of special functions that get called automatically at different points throughout the page load process. You can hook into these callbacks to power different functionality in your block type.
Getting ready

To get started, you will need a block type created and installed. See the previous recipe for a lesson on creating a custom block type. We will be adding some methods to controller.php.

How to do it...

The steps for using block controller callback functions are as follows. Open your block's controller.php file and add a new function called on_start():

```php
public function on_start() {
}
```

Write a die statement that will get fired when the controller is loaded:

```php
die('hello world');
```

Refresh any page containing the block type. The page should stop rendering before it is complete, showing your debug message. Be sure to remove the die statement afterwards, otherwise your block won't work anymore!

How it works...

concrete5 calls the various callback functions at different points during the page load process. The on_start() function is the first to get called. It is a good place to put things that you want to happen before the block is rendered.

The next function that gets called depends on how you are interacting with the block. If you are just viewing it on a page, the view() function gets called. If you are adding or editing the block, then the add() or edit() function will get called as appropriate. These functions are a good place to send variables to the view, which we will show how to do in the next recipe. The save() and delete() functions also get called automatically at this point, if the block is performing either of those operations.

After that, concrete5 calls the on_before_render() function. This is a good time to add items to the page header and footer, since it runs before concrete5 renders the HTML for the page. We will be doing this later on in the article. Finally, the on_page_view() function is called. This is actually run once the page is being rendered, so it is the last place where you can have code executed in your block controller. This is helpful when adding HTML items to the page.

There's more...
The following functions can be added to your controller class, and they will get called automatically at different points throughout the block's loading process:

on_start
on_before_render
view
add
edit
on_page_view
save
delete

For a complete list of the available callback functions, check out the source for the block controller library, located in /concrete/core/libraries/block_controller.php.

Sending variables from the controller to the view

A common task in MVC programming is setting variables from a controller to a view. In concrete5, blocks follow the same principles. Fortunately, setting variables to the view is quite easy.

Getting ready

This recipe will use the block type that was created in the first recipe of this article. Feel free to adapt this code to work in any block controller, though.

How to do it...

In your block's controller, use the set() function of the controller class to send a variable and a value to the view. Note that the view doesn't necessarily have to be the view.php template of your block; you can send variables to add.php and edit.php as well. In this recipe, we will send a variable to view.php.

Open your block's controller.php file and add a function called view() if it doesn't already exist:

```php
public function view() {
}
```

Set a variable called name to the view:

```php
$this->set('name', 'John Doe');
```

Then open view.php in your block's directory and output the value of the name variable:

```php
<div class="content">
    <?php echo $name ?>
</div>
```

Adding items to the page header and footer from the block controller

An important part of block development is being able to add JavaScript and CSS files to the page in the appropriate places. Consider a block that uses a jQuery plugin to create a slideshow widget. You will need to include the plugin's JavaScript and CSS files in order for it to work.
In this recipe, we will add a CSS <link> tag to the page's <head> element, and a JavaScript <script> tag to the bottom of the page (just before the closing </body> tag).

Getting ready

This recipe will continue working with the block that was created in the first recipe of this article. If you need to download a copy of that block, it is included with the code samples from this book's website. This recipe also makes reference to a CSS file and a JavaScript file. Those files are available for download in the code on this book's website as well.

How to do it...

The steps for adding items to the page header and footer from the block controller are as follows. Create a CSS file in /css called test.css, with a rule to change the background color of the site to black:

```css
body {
    background: #000 !important;
}
```

Create a JavaScript file in /js called test.js, containing an alert message:

```js
alert('Hello!');
```

In controller.php, create a new function called on_page_view():

```php
public function on_page_view() {
}
```

Inside it, load the HTML helper:

```php
$html = Loader::helper('html');
```

Add the CSS file to the page header:

```php
$this->addHeaderItem($html->css('test.css'));
```

Add the JavaScript file to the page footer:

```php
$this->addFooterItem($html->javascript('test.js'));
```

Visit a page on your site that contains this block. You should see your JavaScript alert as well as a black background.

How it works...

As mentioned in the Using block controller callback functions recipe, the ideal place to add items to the header (the page's <head> tag) and footer (just before the closing </body> tag) is before concrete5 renders the HTML for the page. The addHeaderItem and addFooterItem functions are used to place strings of text in those positions of the web document. Rather than typing out <script> and <link> tags in our PHP, we use the built-in HTML helper to generate those strings. The files should be located in the site's root /css and /js directories.
Since it is typically best practice for CSS files to be loaded first and for JavaScript files to be loaded last, we place each of those items in the area of the page that makes the most sense.

Creating custom block templates

All blocks come with a default view template, view.php. concrete5 also supports alternative templates, which users can enable through the concrete5 interface. You can also enable these alternative templates through your own custom PHP code.

Getting ready

You will need a block type created and installed already. In this recipe, we are going to add a template to the block type that we created at the beginning of the article.

How to do it...

The steps for creating custom block templates are as follows. In your block's directory, create a new directory called templates/. In templates/, create a file called no_title.php and add the following HTML code to it:

```php
<div class="content">
    <?php echo $content ?>
</div>
```

Activate the template by visiting a page that contains this block. Enter edit mode on the page and click on the block. Click on "Custom Template", choose "No Title", and save your changes.

There's more...

You can specify alternative templates right from the block controller, so you can automatically render a different template depending on certain settings, conditions, or just about anything else you can think of. Simply use the render() function in a callback that gets called before the view is rendered:

```php
public function view() {
    $this->render('templates/no_title');
}
```

This will use the no_title.php file instead of view.php to render the block. Notice that adding the .php file extension is not required. Just like with the block's regular view.php file, developers can include view.css and view.js files in their template directories to have those files automatically included on the page.
See also

The Using block controller callback functions recipe
The Creating a custom block type recipe

Including JavaScript in block forms

When adding or editing blocks, more advanced functionality in the form of client-side JavaScript is often desired. concrete5 makes it extremely easy to automatically add a JavaScript file to a block's editor form.

Getting ready

We will be working with the block that was created in the first recipe of this article. If you need to catch up, feel free to download the code from this book's website.

How to do it...

The steps for including JavaScript in block forms are as follows. In your block's directory, create a new file called auto.js and add a basic alert to it:

```js
alert('Hello!');
```

Visit a page that contains your block, enter edit mode, and edit the block. You should see your alert message appear, as shown in the following screenshot:

How it works...

concrete5 automatically looks for the auto.js file when it enters add or edit mode on a block. Developers can use this to their advantage to contain special client-side functionality for the block's edit mode.

Including JavaScript in the block view

In addition to being able to include JavaScript in the block's add and edit forms, developers can also automatically include a JavaScript file when the block is viewed on the frontend. In this recipe, we will create a simple JavaScript file that creates an alert whenever the block is viewed.

Getting ready

We will continue working with the block that was created in the first recipe of this article.

How to do it...

The steps for including JavaScript in the block view are as follows. In your block's directory, create a new file called view.js and add an alert to it:

```js
alert('This is the view!');
```

Visit the page containing your block. You should see the new alert appear.

How it works...

Much like the auto.js file discussed in the previous recipe, concrete5 will automatically include the view.js file if it exists.
This allows developers to embed jQuery plugins or other client-side logic into their blocks very easily.

Including CSS in the block view
Developers and designers working on custom concrete5 block types can have a CSS file automatically included. In this recipe, we will automatically include a CSS file that changes our background to black.

Getting ready
We are still working with the block that was created earlier in the article. Please make sure that block exists, or adapt this recipe to suit your own concrete5 environment.

How to do it...
The steps for including CSS in the block view are as follows:

- Open your block's directory.
- Create a new file called view.css, if it doesn't exist.
- Add a rule to change the background color of the site to black:

    body {
        background: #000 !important;
    }

- Visit the page containing your block. The background should now be black!

How it works...
Just like it does with JavaScript, concrete5 will automatically include view.css in the page's header if it exists in your block directory. This is a great way to save some time with styles that only apply to your block.

Loading a block type by its handle
Block types are objects in concrete5, just like most things. This means that they have IDs in the database, as well as human-readable handles. In this recipe, we will load the instance of the block type that we created in the first recipe of this article.

Getting ready
We will need a place to run some arbitrary code, so we will rely on /config/site_post.php once again. This recipe also assumes that a block with a handle of hello_world exists in your concrete5 site. Feel free to adjust that handle as needed.

How to do it...
The steps for loading a block type by its handle are as follows:

- Open /config/site_post.php in your preferred code editor.
- Define the handle of the block to load:

    $handle = 'hello_world';

- Load the block by its handle:

    $block = BlockType::getByHandle($handle);

- Dump the contents of the block to make sure it loaded correctly:

    print_r($block);
    exit;

How it works...
concrete5 will simply query the database for you when a handle is provided. It will then return a BlockType object that contains several methods and properties that can be useful in development.

Adding a block to a page
Users can use the intuitive concrete5 interface to add blocks to the various areas of pages on the website. You can also programmatically add blocks to pages using the concrete5 API.

Getting ready
The code in this article can be run anywhere that you would like to create a block. To keep things simple, we are going to use the /config/site_post.php file to run some arbitrary code. This example assumes that a page with a path of /about exists on your concrete5 site. Feel free to create that page, or adapt this recipe to suit your needs. Also, this recipe assumes that /about has a content area called content. Again, adapt according to your own website's configuration. We will be using the block that was created at the beginning of this article.

How to do it...
The steps for adding a block to a page are as follows:

- Open /config/site_post.php in your code editor.
- Load the page that you would like to add a block to:

    $page = Page::getByPath('/about');

- Load the block by its handle:

    $block = BlockType::getByHandle('hello_world');

- Define the data that will be sent to the block:

    $data = array(
        'title' => 'An Exciting Title',
        'content' => 'This is the content!'
    );

- Add the block to the page's content area:

    $page->addBlock($block, 'content', $data);

How it works...
First you need to get the target page. In this recipe, we get it by its path, but you can use this function on any Page object. Next, we need to load the block type that we are adding. In this case, we are using the one that was created earlier in the article.
The block type handle is the same as the directory name for the block. We are using the $data variable to pass in the block's configuration options. If there are no options, you will need to pass in an empty array, as concrete5 does not allow that parameter to be blank. Finally, you will need to know the name of the content area; in this case, the content area is called "content".

Getting the blocks from an area
concrete5 pages can have several different areas where blocks can be added. Developers can programmatically get an array of all of the block objects in an area. In this recipe, we will load a page and get a list of all of the blocks in its main content area.

Getting ready
We will be using /config/site_post.php to run some arbitrary code here. You can place this code wherever you find appropriate, though. This example assumes the presence of a page with a path of /about, and with a content area called content. Make the necessary adjustments in the code as needed.

How to do it...
The steps for getting the blocks from an area are as follows:

- Open /config/site_post.php in your code editor.
- Load the page by its path:

    $page = Page::getByPath('/about');

- Get the array of blocks in the page's content area:

    $blocks = $page->getBlocks('content');

- Loop through the array, printing each block's handle:

    foreach ($blocks as $block) {
        echo $block->getBlockTypeHandle().'<br />';
    }

- Exit the process:

    exit;

How it works...
concrete5 will return an array of block objects for every block that is contained within a content area. Developers can then loop through this array to manipulate or read the block objects.

Summary
This article discussed how to create custom block types and integrate blocks into your own website using concrete5's blocks.

Resources for Article:
Further resources on this subject:
- Everything in a Package with concrete5 [Article]
- Creating Your Own Theme [Article]
- concrete5: Mastering Auto-Nav for Advanced Navigation [Article]
Accessing and using the RDF data in Stanbol

Packt
30 Jul 2013
6 min read
(For more resources related to this topic, see here.)

Getting ready
To start with, we need a Stanbol instance and Node.js. Additionally, we need the rdfstore-js module, which can be installed by executing the following command line:

    > npm install rdfstore

How to do it...
We create a file rdf-client.js with the following code:

    var rdfstore = require('rdfstore');
    var request = require('request');
    var fs = require('fs');

    rdfstore.create(function(store) {
        function load(files, callback) {
            var filesToLoad = files.length;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                fs.createReadStream(file).pipe(
                    request.post(
                        {
                            url: 'http://localhost:8080/enhancer?uri=file:///' + file,
                            headers: {accept: "text/turtle"}
                        },
                        function(error, response, body) {
                            if (!error && response.statusCode == 200) {
                                store.load(
                                    "text/turtle",
                                    body,
                                    function(success, results) {
                                        console.log('loaded: ' + results +
                                            " triples from file " + file);
                                        if (--filesToLoad === 0) {
                                            callback();
                                        }
                                    }
                                );
                            } else {
                                console.log('Got status code: ' + response.statusCode);
                            }
                        }));
            }
        }

        load(['testdata.txt', 'testdata2.txt'], function() {
            store.execute(
                "PREFIX enhancer:<http://fise.iks-project.eu/ontology/> " +
                "PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#> " +
                "SELECT ?label ?source { " +
                "    ?a enhancer:extracted-from ?source. " +
                "    ?a enhancer:entity-reference ?e. " +
                "    ?e rdfs:label ?label. " +
                "    FILTER (lang(?label) = \"en\") " +
                "}",
                function(success, results) {
                    if (success) {
                        console.log("*******************");
                        for (var i = 0; i < results.length; i++) {
                            console.log(results[i].label.value + " in " +
                                results[i].source.value);
                        }
                    }
                });
        });
    });

Create the data files. Our client loads two files. We use a simple testdata.txt file having the content:

    "The Stanbol enhancer can detect famous cities such as Paris and people such as Bob Marley."

And a second testdata2.txt file with the following content:

    "Bob Marley never had a concert in Vatican City."
We execute the code using the Node.js command line:

    > node rdf-client.js

The output is:

    loaded: 159 triples from file testdata2.txt
    loaded: 140 triples from file testdata2.txt
    *******************
    Vatican City in file:///testdata2.txt
    Bob Marley in file:///testdata2.txt
    Bob Marley in file:///testdata.txt
    Paris, Texas in file:///testdata.txt
    Paris in file:///testdata.txt

This time we see the labels of the entities and the file in which they appear.

How it works...
Unlike the usual clients, this client no longer analyzes the returned JavaScript Object Notation (JSON) but processes the returned data as RDF. An RDF document is a directed graph. The following screenshot shows some RDF rendered as a graph by the W3C validator. We can create such an image by selecting RDF/XML as the output format on localhost:8080/enhancer, running the engines on some text, and copying and pasting the generated XML to www.w3.org/RDF/Validator/, where we can request that triples and graphs be generated from it.

Triples are the other way to look at RDF. An RDF graph (or document) is a set of triples of the form subject-predicate-object, where subject and object are the nodes (vertices) and predicate is the arc (edge). Every triple is a statement describing a property of its subject:

    <urn:enhancement-f488d7ce-a1b7-faa6-0582-0826854eab5e>
        <http://fise.iks-project.eu/ontology/entity-reference>
        <http://dbpedia.org/resource/Bob_Marley> .
    <http://dbpedia.org/resource/Bob_Marley>
        <http://www.w3.org/2000/01/rdf-schema#label>
        "Bob Marley"@en .

These two triples say that an enhancement referenced Bob Marley and that the English label for Bob Marley is "Bob Marley". All the arcs and most of the nodes are labeled by an Internationalized Resource Identifier (IRI), which defines a superset of the good old URLs, including non-Latin characters. RDF can be serialized in many different formats. The two triples above use the N-TRIPLES syntax.
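For intuition, the subject-predicate-object structure can be modeled with plain data structures. The following sketch is purely illustrative (it is not part of the recipe and does not use rdfstore-js): it stores the two example triples as JavaScript arrays and follows the entity-reference arc to its label:

```javascript
// The two example triples as [subject, predicate, object] arrays.
const ENTITY_REF = 'http://fise.iks-project.eu/ontology/entity-reference';
const RDFS_LABEL = 'http://www.w3.org/2000/01/rdf-schema#label';

const triples = [
  ['urn:enhancement-f488d7ce-a1b7-faa6-0582-0826854eab5e',
   ENTITY_REF,
   'http://dbpedia.org/resource/Bob_Marley'],
  ['http://dbpedia.org/resource/Bob_Marley',
   RDFS_LABEL,
   'Bob Marley'],
];

// Follow the entity-reference arc, then look up the label of its object.
function labelOfReferencedEntity(graph) {
  const ref = graph.find(t => t[1] === ENTITY_REF);
  if (!ref) return null;
  const label = graph.find(t => t[0] === ref[2] && t[1] === RDFS_LABEL);
  return label ? label[2] : null;
}

console.log(labelOfReferencedEntity(triples)); // → Bob Marley
```

Walking arcs by hand like this quickly becomes unwieldy, which is exactly why the recipe loads the triples into a store and queries them with SPARQL instead.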
RDF/XML expresses (serializes) RDF graphs as XML documents. Originally, RDF/XML was referred to as the canonical serialization for RDF. Unfortunately, this caused some people to believe RDF was somehow related to XML and would thus inherit its flaws. A serialization format designed specifically for RDF, one that doesn't encode RDF into an existing format, is Turtle. Turtle allows explicit listing of triples as in N-TRIPLES, but also supports various ways of expressing graphs in a more concise and readable fashion. JSON-LD expresses RDF graphs in JSON. As this specification is currently still a work in progress (see json-ld.org/), different implementations are incompatible; thus, for this example, we switched the Accept header to text/turtle.

Another change in the code performing the request is that we added a uri query parameter to the requested URL:

    'http://localhost:8080/enhancer?uri=file:///' + file,

This defines the IRI used as a name for the uploaded content in the result graph. If this parameter is not specified, the enhancer will generate an IRI based on a hash of the content, but then this line in the output would be less helpful:

    Paris in urn:content-item-sha1-3b16820497aae806f289419d541c770bbf87a796

Roughly the first half of our code takes care of sending the files to Stanbol and storing the returned RDF. We define a function load that asynchronously enhances a bunch of files and invokes a callback function when all files have successfully been loaded. The second half of the code is the function that's executed once all files have been processed. At this point, we have all the triples loaded in the store. We could now programmatically access the triples one by one, but it's easier to just query for the data we're interested in. SPARQL is a query language somewhat similar to SQL, but designed to query triple stores rather than relational databases.
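To build intuition for how a SPARQL engine matches a graph pattern against triples, here is a toy matcher. It is an illustrative sketch only, not how rdfstore-js is implemented, and the short predicate names are stand-ins for the full IRIs used in the real query:

```javascript
// Toy SPARQL-style matcher: terms starting with '?' are variables;
// everything else must match the triple exactly. Each pattern narrows
// the set of candidate variable bindings.
function matchPattern(patterns, triples) {
  let solutions = [{}];
  for (const pattern of patterns) {
    const next = [];
    for (const binding of solutions) {
      for (const triple of triples) {
        const b = { ...binding };
        let ok = true;
        pattern.forEach((term, i) => {
          if (!ok) return;
          if (term.startsWith('?')) {
            if (b[term] === undefined) b[term] = triple[i];
            else if (b[term] !== triple[i]) ok = false;
          } else if (term !== triple[i]) {
            ok = false;
          }
        });
        if (ok) next.push(b);
      }
    }
    solutions = next;
  }
  return solutions;
}

// A miniature version of the enhancement graph (shortened predicates).
const triples = [
  ['urn:e1', 'extracted-from', 'file:///testdata.txt'],
  ['urn:e1', 'entity-reference', 'dbpedia:Paris'],
  ['dbpedia:Paris', 'label', 'Paris'],
];

const results = matchPattern([
  ['?a', 'extracted-from', '?source'],
  ['?a', 'entity-reference', '?e'],
  ['?e', 'label', '?label'],
], triples);

console.log(results);
// one solution: ?a=urn:e1, ?source=file:///testdata.txt,
//               ?e=dbpedia:Paris, ?label=Paris
```

Each solution is one row of the result table: a consistent assignment of all the variables in the pattern.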
In our program, we have the following query (slightly simplified here):

    PREFIX enhancer:<http://fise.iks-project.eu/ontology/>
    PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label ?source {
        ?a enhancer:extracted-from ?source.
        ?a enhancer:entity-reference ?e.
        ?e rdfs:label ?label.
    }

The most important part is the section between curly brackets. This is a graph pattern, which is like a graph but with variables in place of some values. On execution, the SPARQL engine will check for parts of the RDF matching this pattern and return a table with a column for each selected variable and a row for every matching value combination. In our case, we iterate through the result and output the label of the entity and the document in which the entity was referenced.

There's more...
The advantage of RDF is that many tools can deal with the data, ranging from command-line tools such as rapper (librdf.org/raptor/rapper.html) for converting data, to server applications that store large amounts of RDF data and let you build applications on top of it.

Summary
In this recipe, the advantage of using RDF (model-based) over the conventional JSON (syntax-based) method was explained. A client, rdf-client.js, was created that loaded two files, testdata.txt and testdata2.txt, and was executed from the Node.js command line. The RDF was rendered by the W3C validator in the form of triples. Later, using SPARQL, the triples were queried to extract the required information.

Resources for Article:
Further resources on this subject:
- Installing and customizing Redmine [Article]
- Web Services in Apache OFBiz [Article]
- Geronimo Architecture: Part 2 [Article]
Events

Packt
25 Jul 2013
2 min read
(For more resources related to this topic, see here.)

What is a payload?
The payload of an event, the event object, carries any necessary state from the producer to the consumer and is nothing but an instance of a Java class. An event object may not contain a type variable, such as <T>.

We can assign qualifiers to an event and thus distinguish it from other events of the same event object type. These qualifiers act like selectors, narrowing the set of events that will be observed for an event object type. There is no distinction between a qualifier of a bean type and that of an event, as they are both defined with @Qualifier. This commonality provides a distinct advantage when using qualifiers to distinguish between bean types, as those same qualifiers can be used to distinguish between events where those bean types are the event objects. An event qualifier is shown here:

    @Qualifier
    @Target( { FIELD, PARAMETER } )
    @Retention( RUNTIME )
    public @interface Removed {}

How do I listen for an event?
An event is consumed by an observer method, and we inform Weld that our method is used to observe an event by annotating a parameter of the method, the event parameter, with @Observes. The type of the event parameter is the event type we want to observe, and we may specify qualifiers on the event parameter to narrow which events we want to observe.

We may have an observer method for all events produced about a Book event type, as follows:

    public void onBookEvent(@Observes Book book) { ... }

Or we may choose to only observe when a Book is removed, as follows:

    public void onBookRemoval(@Observes @Removed Book book) { ... }

Any additional parameters on an observer method are treated as injection points. An observer method will receive an event to consume if:

- The observer method is present on a bean that is enabled within our application
- The event object is assignable to the event parameter type of the observer method
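As an analogy only (this is plain JavaScript, not CDI or Weld), the observe-and-narrow behavior described above can be mimicked with a tiny event bus: a null qualifier plays the role of an unqualified @Observes parameter, and the string 'removed' plays the role of @Removed:

```javascript
// Analogy sketch: observers register for an event type plus an optional
// qualifier; firing delivers the event only to observers whose type and
// qualifier both match.
const observers = [];

function observes(type, qualifier, handler) {
  observers.push({ type, qualifier, handler });
}

function fire(event, qualifier) {
  for (const o of observers) {
    const typeMatches = event instanceof o.type;
    const qualifierMatches = o.qualifier === null || o.qualifier === qualifier;
    if (typeMatches && qualifierMatches) o.handler(event);
  }
}

class Book { constructor(title) { this.title = title; } }

const seen = [];
observes(Book, null, b => seen.push('any:' + b.title));          // all Book events
observes(Book, 'removed', b => seen.push('removed:' + b.title)); // only removals

fire(new Book('CDI'), null);       // reaches the unqualified observer only
fire(new Book('Weld'), 'removed'); // reaches both observers

console.log(seen); // → ['any:CDI', 'any:Weld', 'removed:Weld']
```

The real CDI container does much more (bean discovery, injection, transactional observers), but the narrowing effect of qualifiers is the same idea.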
Merchandising for Success

Packt
25 Jul 2013
17 min read
(For more resources related to this topic, see here.)

Shop categories
Creating product categories, like most things in PrestaShop, is easy, and we will cover that soon. First we need to plan the ideal category structure, and this demands a little thought.

Planning your category structure
You should think really hard about the following questions:

- What is your business key: general scope or specific?
- Remember, if the usability is complex, it will be difficult to win future customers. So what will make the navigation simple and intuitive for your customers?
- What structure will support any plan you might have for expanding the range in the future?
- What do your competitors use? What could you do to make your structure better for your customers than anybody else's?

When you have worked it out, we will create the category structure, and then we will create the content (images and descriptions) for your category pages. First you need to consider what categories you want for your product range. Here are some examples.

If your business is geared to the general scope, then it could be something like:

- Books
- Electronics
- Home and garden
- Fashion, jewelry, and beauty

However, if your business is a closed market, for example electronics, then it could be something like:

- Cameras and photography
- Mobile and home phones
- Sound and vision
- Video games and consoles

You get the idea. My examples don't have categories, subcategories, or anything deeper just for the sake of it. There are no prizes for compartmentalizing. If you think a fairly flat structure is what your customer wants, then that is what you should do. If you are thinking, "Hang on, I don't have any categories, let alone any subcategories," don't panic. If your research and common sense say you should only have a few categories without any subcategories, then stick to it. Simplicity is the most important thing.
Pleasing your customer and making your shop intuitive will make you more money than obscure compartmentalizing of your products.

Creating your categories
Have your plan close at hand. Ideally, have it written down or, if it is very simple, have it clearly in your head. Enough of the theory; it is now time for action.

Time for action – how to create product categories
Make sure that you are logged into your PrestaShop back office. We will do this in two steps. First we will create your structure as per your plan; then, in the next Time for action section, we will implement the category descriptions. Let's get on with the structure of your categories:

- Click on Catalog and you will see the categories. Click on one.
- Now click on the green + symbol to add a new subcategory. PrestaShop defines even your top-level categories as subcategories, because the home category is considered to be the top-level category.
- Just type in the title of your first main category. Don't worry about the other options. The descriptions are covered in a minute, and the rest is to do with the search engines.
- You have created your first category. Now that you are back at the home category, you can click on the green button again to create your next main category. To do so, save as before and remember to check the Home radio button, when you are ready, to create your next main category.
- Repeat until all top-level categories are created.
- Have a quick look at your shop front to make sure you like what you see. Here is a screenshot from the PrestaShop demo store:

Now for the subcategories. We will create one level at a time, as earlier, so we will create all the subcategories before creating any categories within subcategories.

- In your home category, you will have a list of your main categories. Click on the first one in the list that requires a subcategory.
- Now click on the create subcategory + icon.
- Type the name of your subcategory, leaving the other options, and click on Save.
- Go back to the main category if you want to create another subcategory.
- Play around with clicking in and out of categories and subcategories until you get used to how PrestaShop works. It isn't complicated, but it is easy to get lost and start creating things in the wrong place. If this happens to you, just click on the bin icon to delete your mistake. Then pay close attention to the category or subcategory you are in and carry on.
- You can edit the category order from the main catalog page by selecting the box of the category you want to move and then clicking an up or down arrow.
- Finish creating your full category structure.
- Play with the category and subcategory links on your shop front to see how they work, and then move on.

What just happened?
Superb! Your category structure is done and you should be fairly familiar with navigating around your categories in your control panel. Now we can add the category and subcategory descriptions. I left them empty until now because, as you might have noticed, the category creation palaver can be a bit fiddly, and it makes sense to keep it as straightforward as possible. Here are some tips for writing good category descriptions, followed by a quick Time for action section for entering descriptions into the categories themselves.

Creating content for your categories and subcategories
I see so many shops online with really dull category descriptions. Category descriptions should obviously describe, but they should also sell! Here are a few tips for writing some enticing descriptions:

- Keep them short: two paragraphs at the most. People do not visit your website to read. The detail should be in the products themselves.
- Similar to a USP, category descriptions should be a combination of fact and emotive description that focuses on the benefit to the customer.
- Try to be as specific as you can about each category and subcategory, so that each description is accurate and relevant in its own right.
For example, don't let the category steal all the glory from a subcategory. This is very important for SEO.

Time for action – adding category descriptions
Be ready with the text for all your categories, or you can, of course, type them as you go:

- Go to Catalog and then click on the first category's Edit button.
- Enter your category description and click on Save.
- Click on the subcategories of your first category. Then enter and save a description for each (if any).
- Navigate to the second main category and enter a description.
- Repeat the same for each of its subcategories in turn.
- Reiterate the preceding steps for each category.

What just happened?
You now have a fully functioning category structure. Now we can go on to look at adding some of your products.

Adding products
Click on the Catalog tab and then click on Product. It is pretty similar to working with categories. In the Time for action section, I will cover what to enter in each box as a separate item. However, I will skip over a few items, like meta tags, because they are best dealt with on a site-wide basis separately. The other important option is the product description. This deserves special treatment because it needs to be effective at selling your product.

With the categories, I specifically showed you how to create the structure before filling in the descriptions because I know others who have got into a muddle in the past. It is less likely, but still possible, to get into a bit of a muddle with the products as well, especially if you have lots of them. Perhaps you should be the judge of whether to fill in your catalog before adding descriptions or add descriptions as you go. So here is a handy guide to creating great product descriptions. It will help you decide whether to fill in product descriptions at the same time as the rest of the details, or to just enter the product titles and revisit them later to fill in the rest of the details.
Product descriptions that sell
Don't fall into the trap of simply describing your products. It might be true that a potential customer does need to know dry facts like sizes and other uninspiring information, but don't put this information in the brief description or description boxes. PrestaShop provides a place for bare facts: the Features tab (there will be more on this soon). The brief description and description boxes, which will be described in more detail soon, are there to sell to your customers, to increase their interest to a level that makes them "want" the product so much that they pop it in their cart and buy it. The way you do this is with a very simple and age-old formula that actually works. And, of course, having whetted your appetite, it would be rude not to tell you about it. So here it goes.

Actually selling the product
Don't just tell your customers about your product; sell them the product. Explain to them why they should buy it! Use the FAB technique: feature, advantage, benefit.

Tell the customer about a feature:

- This teddy bear is made from a new fiber and wool mix
- This laptop has the brand new i7 processor made by Intel
- This guide was written by somebody who has survived cancer

And the advantage that feature gives them:

- So it is really, really soft and fluffy!
- i7 is the very first processor series with a DDR3 integrated memory controller!
- So all the information and advice is real and practical

Then emphasize the real emotive benefit this gives them:

- Which means your little boy or girl is going to feel safe, loved, and secure with this wonderful bear
- Meaning that this laptop gives your applications up to a 30 percent performance boost over every other processor series ever made
- Giving you or your loved one the very best chance of beating cancer and having more precious time with the people they love

Don't just stop at one feature. Highlight the most important features.
By most important features, I of course mean the features that lead to the best, most emotive and personal benefits. Not too many, though. If your product has loads of benefits, then try to pick just the best ones. Three is perfect. Three really is a magic number. All the best things come in threes, and research suggests that thoughts or ideas presented in threes influence human emotion the most. If you must have more than three features, summarize them in a quick bulleted list. Three is good:

- Soft, strong, and very long
- Peace, love, and understanding
- Relieves pain and clears your nose without drowsiness

Ask for the sale
When you have used the FAB technique, ask the customer to part with their money! Say something like, "Select the most suitable option for you and click on Add to cart" or "Remember that abc is the only xyz with benefit 1, benefit 2, and benefit 3. Order yours now!"

Create some images with GIMP
If you have a favorite photo editor, then great. If you haven't, then I suggest you use GIMP. It's cool, easy, and free: www.gimp.org.

Time for action – how to add a product to PrestaShop
Let's add some products:

- Click on Catalog and then click on Product.
- Click on the Add a new product link. You will see the following screenshots. Okay, I admit it, it does look a little bit daunting. But actually it is not that difficult. Much of it is optional, and even more we will revisit after further discussion. So don't despair. There is a table of explanations for you after the screenshots.

- Name: The short name/description of your product. There are a brief description box and a full description box later, but perhaps a bit more than a short name should go here. For example, 50 cm golden teddy bear - extra fluffy version.
- Status: Choose Enabled or Disabled. If your product is for sale as soon as you're open, click Enabled. If your product is discontinued or needs to be removed from sale for any reason, click Disabled.
- Reference: An optional unique reference for your product. For example, 50cmFT-xfluff.
- EAN13: The European Article Number, or barcode. If your product has one (and almost everything does), use it, because some people use this for searching or identifying a product.
- JAN: The Japanese Article Number, or barcode. As with the EAN13, use it if your product has one.
- UPC: The US and Canadian article number, or barcode. As with the EAN13, use it if your product has one.
- Visibility: Whether to show the item in the catalog, only in search, or everywhere.
- Type: Choose whether this is a physical product, a pack, or a downloadable product.
- Options: Make the product available/unavailable to order, show or hide the price, and enable/disable the online message.
- Condition: Whether the item is brand new, second hand, or refurbished.
- Short description: A brief description of the item. This text will be shown in the catalog.
- Description: When a customer clicks on the item, they will read this text.
- Tags: Leave blank for now.

Then continue:

- Fill in your product page as described previously.
- Click on the Images tab at the top of the product page.
- Browse to the image you created earlier and upload it. Note that PrestaShop will compress the image for you. It is worth having a look at the final image and maybe varying the amount of compression (if any) that you apply when creating your product images.
- Click on Save and then go and admire your product in your store front.
- Repeat until all your products are done, but don't forget to check how things look from the customer's point of view. Visit the category and product pages to check whether things are the way you expected them to be. If you have a huge range that is going to take a long time to enter, then consider just entering your key products.
Proceed with this to get the money coming in, and add the rest of your range over the course of time.

What just happened?
Now that you have something to actually sell, let's go and showcase some of your products. Here is how to make some of them stand out from the crowd.

Highlighting products
Next is a list of the different ways to promote elements of your range, with an explanation of each option and how to set it up.

New products
So you have just found some great new products. How do you let your visitors know about them? You could put an announcement on your front page. But what if a potential customer doesn't visit your front page, or perhaps misses the announcement? Welcome to the new products module.

Time for action – how to highlight your newest products
The following are the quick steps to enable and configure the highlighting of any new products you add. Once this is set up, it will happen automatically, now and in the future.

- Click on the Modules tab and scroll down to the New products block module.
- Click on Install.
- Scroll back down to the module you just installed and click on Configure.
- Choose the number of products to be showcased and click on Save.
- Don't forget to have a look at your shop front to see how it works. Click around a few different pages and see how the highlighted product alternates.

What just happened?
Now you are done with new products, and they will never go unnoticed.

Specials
"Special" refers to the price. This is the traditional special offer that customers know and love.

Time for action – creating a special offer
The following steps help us create special offers and make sure they never go unnoticed:

- Click on the Catalog tab and navigate to the category or subcategory that contains the product you want to make available as a special offer.
- Click on the product to go to its details page.
- Click on Prices and go to the Specific prices section.
- Click on Add a new specific price.
- You can enter an actual monetary amount in the first box or a percentage in the second box. Monetary amounts work well for individual discounts, and percentages work well as part of a wider sale. But this is not a hard-and-fast rule, so choose what you think your customers might prefer.
- Click on Save.
- Now go and have a look at the category that the product is in, and click on the product as well. You'll notice the smart, enticing manner in which PrestaShop highlights the offer.
- You can have as many or as few special offers as you like. But what if you wanted to really push a product offer or a wider sale? Yes, you guessed it, there's a module. Click on the Modules tab, scroll down to Specials block, and click on Install.
- Getting the hang of this? Thought so. Go and have a look at the effect on your store.

What just happened?
Your first sale is underway.

Recently viewed
What's this then? When customers browse products, they forget what they have seen or how to find it again. By prominently displaying a module with their most recent viewings, they can comfortably click back and forth, comparing, until they have made a buying decision. Now you don't need me to tell you how to set this up. Go to the module, switch it on, and you're done.

Best sellers
This is just what it says. There is not necessarily an offer or anything else special about these products, but if they sell, there must be something worth talking about. Install the Best sellers module in the usual place to highlight these items.

Accessories
I love accessories. It's all about add-on sales. Ever been to a shop to buy a single item and come out with several? Electrical retailers are brilliant at this. Go in for a PC and come out with a printer, scanner, camera, ink, paper, and the list goes on. Is it because their customers are stupid? Of course not! It is because they offer compelling or essential accessories that are relevant to the sale.
By creating accessories, you will get a new tab at the bottom of each relevant product page, and PrestaShop will make suggestions at key points of the sale. All we have to do is tell PrestaShop what is an accessory to which of our products, and PrestaShop will do the rest.

Time for action – creating an accessory
Accessories are products, so any product can be an accessory of any other product. All you have to do is decide what is relevant to what. Think about appropriate accessories for your products and read on. The quick guide to creating accessories is as follows:

1. Click on the Catalog tab, then click on Products.
2. Find the product you think should have some accessories and click on it to edit it.
3. Navigate to Associations on the page and find the Accessories section, as shown in the following screenshot:
4. Find the product that you wish to be an accessory by typing the first letters of the product name and selecting it.
5. Save your amended product.

You can add as many accessories to each product as you like. Go and have a look at your product on your shop front and notice the Accessories tab.

What just happened?
You just learned how to accessorize. It's silly not to accessorize: not only does it cost you nothing, but a few clicks could significantly increase your turnover. Now we can go on to explore more product ideas.
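The special-offer pricing described earlier in this section lets you enter either a monetary reduction or a percentage. As an illustration only (this is not PrestaShop code, and the parameter names are made up), the resulting price can be sketched like this:

```python
def special_price(base_price, amount_off=0.0, percent_off=0.0):
    """Apply a specific-price style reduction: either a monetary
    amount or a percentage of the base price (illustrative only;
    the field names are not PrestaShop's)."""
    if amount_off and percent_off:
        raise ValueError("use either a monetary amount or a percentage")
    if percent_off:
        return round(base_price * (1 - percent_off / 100), 2)
    return round(base_price - amount_off, 2)

print(special_price(49.99, amount_off=5.00))   # 44.99
print(special_price(49.99, percent_off=20))    # 39.99
```

As the text suggests, a fixed amount reads well on an individual discount ("5.00 off"), while a percentage reads well across a wider sale ("20% off everything").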
Packt
25 Jul 2013
5 min read

Choosing Lync 2013 Clients

(For more resources related to this topic, see here.)

What clients are available?
At the moment, the list includes the following clients:

- The full client, as part of Office 2013 Professional Plus
- The Lync 2013 app for Windows 8
- Lync 2013 for mobile devices
- The Lync Basic 2013 version

A plugin is needed to enable Lync features on a virtual desktop; the full Lync 2013 client installation is still required to allow the user access to Lync. Although they are not clients in the traditional sense of the word, our list must also include the following:

- The Microsoft Lync VDI 2013 plugin
- Lync Online (Office 365)
- Lync Web App
- Lync Phone Edition
- Legacy clients that are still supported (Lync 2010, Lync 2010 Attendant, and Lync 2010 Mobile)

Full client (Office 2013)
This is the most complete client available at the moment. It includes full support for voice, video, and IM (similarly to the previous versions), and integration for the new features (for example, high-definition video, the gallery feature to see multiple video feeds at the same time, and chat room integration). In the following screenshot, we can see a tabbed conversation in Lync 2013:

Its integration with Office means that the group policies for Lync are now part of the Office group policy administrative templates. We have to download the Office 2013 templates from the Microsoft site and install the package in order to use them (some of the settings are shown in the following screenshot). Lync is available with the Professional Plus version of Office 2013 (and with some Office 365 subscriptions).

Lync 2013 app for Windows 8
The Lync 2013 app for Windows 8 (also called the Lync Windows Store app) has been designed and optimized for devices with a touchscreen (running Windows 8 or Windows RT). The app (as we can see in the following screenshot) is focused on images and pictures, so we have a tile for each contact we want in our favorites.
The Lync Windows Store app supports contact management, conversations, and calls, but some features, such as Persistent Chat and the advanced management of Enterprise Voice, are still exclusive to the full client. Also, in conferencing, we will not be able to act as the presenter or manage other participants. The app is integrated with Windows 8, so we are able to use Search to look for Lync contacts (as shown in the following screenshot):

Lync 2013 for mobile devices
The Lync 2013 client for mobile devices is the solution Microsoft offers for the most common tablet and smartphone systems (excluding tablets running Windows 8 and Windows RT, which have their dedicated app). It is available for Windows Phone, iPad/iPhone, and Android. The older version of this client was basically an IM application, which somewhat limited interest in the mobile versions of Lync. The 2013 version includes support for VoIP and video (over Wi-Fi networks and cellular data networks), meetings, and voice mail. From an infrastructural point of view, enabling the new mobile client means applying Lync 2013 Cumulative Update 1 (CU1) on our Front End and Edge servers and publishing a DNS record (lyncdiscover) on our public name servers. If we have had previous experience with Lync 2010 mobility, the difference is really noticeable. The lyncdiscover record must point to the reverse proxy. The reverse proxy deployment requires a product that supports Lync mobility, and a certificate that includes the lyncdiscover name in the public domain.

Lync Basic 2013 version
Lync Basic 2013 is a downloadable client that provides basic functionality. It does not provide support for advanced call features, multiparty video galleries, or skill-based searches.
Lync Basic 2013 is dedicated to companies with Lync 2013 on-premises, and to Office 365 customers that do not have the full client included with their subscription. The client looks really similar to the full one, but the display name on top is Lync Basic, as we can see in the following screenshot:

Microsoft Lync VDI 2013 plugin
As we said before, the VDI plugin is not a client; it is software we need to install to enable Lync on virtual desktops based on the most widely used technologies, such as Microsoft RDS, VMware View, and XenDesktop. The main challenge of a VDI scenario is granting the same features and quality we expect from a deployment on a physical machine. The plugin uses "media redirection", so that audio and video originate and terminate on the plugin running on the thin client. The user can connect conferencing/telephony hardware (for example, microphones and cams) to the local terminal and use the Lync 2013 client installed on the virtual desktop as if it were running locally. The plugin is the only Lync software installed at the end user's workplace. The details of the deployment (Deploying the Lync VDI Plug-in) are available at http://technet.microsoft.com/en-us/library/jj204683.aspx.

Resources for Article:
Further resources on this subject:
- Innovation of Communication and Information Technologies [Article]
- DPM Non-aware Windows Workload Protection [Article]
- Installing Microsoft Dynamics NAV [Article]
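The lyncdiscover record mentioned in the mobile-clients section is part of a well-known naming convention: mobile clients probe `lyncdiscoverinternal.<sip domain>` inside the network and `lyncdiscover.<sip domain>` from outside. A minimal sketch (contoso.com is a placeholder domain, and the function name is our own):

```python
def lync_discovery_urls(sip_domain: str) -> dict:
    """Build the well-known autodiscover URLs a Lync 2013 mobile
    client tries for a given SIP domain (illustrative helper, not
    part of any Microsoft API)."""
    return {
        "internal": f"https://lyncdiscoverinternal.{sip_domain}",
        "external": f"https://lyncdiscover.{sip_domain}",
    }

urls = lync_discovery_urls("contoso.com")
print(urls["external"])  # https://lyncdiscover.contoso.com
```

The external name is the one that must resolve on public name servers and point at the reverse proxy, and it is the name the certificate has to cover.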
Packt
17 Jul 2013
19 min read

Implementing Document Management

(For more resources related to this topic, see here.)

Managing spaces
A space in Alfresco is nothing but a folder, which contains content as well as sub-spaces. Space users are the users invited to a space to perform specific actions, such as editing content, adding content, discussing a particular document, and so on. The exact capability a given user has within a space is a function of their role or rights. Consider the capability of creating a sub-space. By default, to create a sub-space, one of the following must apply:

- The user is the administrator of the system
- The user has been granted the Contributor role
- The user has been granted the Coordinator role
- The user has been granted the Collaborator role

Similarly, to edit space properties, a user needs to be the administrator or to be granted a role that gives them rights to edit the space. These roles include Editor, Collaborator, and Coordinator.

Space is a smart folder
A space is a folder with additional features such as security, business rules, workflow, notifications, local search, and special views. These additional features, which make a space a smart folder, are explained as follows:

- Space security: You can define security at the space level. You can specify a user or a group of users who may perform certain actions on content in a space. For example, on the Marketing Communications space in the intranet, you can specify that only users of the marketing group can add content and others can only see it.
- Space business rules: Business rules, such as transforming content from Microsoft Word to Adobe PDF and sending notifications when content gets into a space, can be defined at the space level.
- Space workflow: You can define and manage content workflow on a space. Typically, you will create a space for the content to be reviewed, and a space for approved content.
You will create various spaces for dealing with the different stages the work flows through, and Alfresco will manage the movement of the content between those spaces.

- Space events: Alfresco triggers events when content gets into a space, goes out of a space, or is modified within a space. You can capture such events at the space level and trigger certain actions, such as sending e-mail notifications to certain users.
- Space aspects: Aspects are additional properties and behavior, which can be added to the content based on the space in which it resides. For example, you can define a business rule to add customer details to all the customer contract documents in your intranet's Sales space.
- Space search: Alfresco search can be limited to a space. For example, if you create a space called Marketing, you can limit the search for documents to the Marketing space, instead of searching the entire site.
- Space syndication: Space content can be syndicated by applying RSS feed scripts to a space. You can apply RSS feeds to your News space, so that other applications and websites can subscribe to news updates.
- Space content: Content in a space can be versioned, locked, checked in and checked out, and managed. You can specify that certain documents in a space are versioned and others not.
- Space network folder: A space can be mapped to a network drive on your local machine, enabling you to work with the content locally. For example, using the CIFS interface, a space can be mapped to a Windows network folder.
- Space dashboard view: Content in a space can be aggregated and presented using special dashboard views. For example, the Company Policies space can list all the policy documents updated within the past month or so. You can create different views for the Sales, Marketing, and Finance departmental spaces.
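The security and business rules described above cascade down the hierarchy: whatever is defined on a space applies, by default, to its content and sub-spaces. The following toy model (our own illustration, not Alfresco's API) shows that cascade; the space names mirror the sample intranet used in this article:

```python
class Space:
    """Toy model of an Alfresco-style space: rules and permissions set
    on a space apply, by default, to everything beneath it."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.rules = []          # e.g. "transform Word to PDF"
        self.permissions = {}    # group -> role

    def effective_rules(self):
        # Rules inherited from ancestors come first, then local rules.
        inherited = self.parent.effective_rules() if self.parent else []
        return inherited + self.rules

    def effective_role(self, group):
        # Walk up the hierarchy until a role is found for the group.
        if group in self.permissions:
            return self.permissions[group]
        return self.parent.effective_role(group) if self.parent else None

intranet = Space("Intranet")
marketing = Space("Marketing Communications", parent=intranet)
drafts = Space("02_Drafts", parent=marketing)

marketing.permissions["marketing-group"] = "Contributor"
marketing.rules.append("send e-mail notification on new content")

# Settings made on Marketing Communications reach its sub-space.
print(drafts.effective_role("marketing-group"))  # Contributor
print(drafts.effective_rules())
```

In real Alfresco, inheritance of permissions can be switched off per space; this sketch only illustrates the default behavior.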
Importance of space hierarchy
Like regular folders, a space can have child spaces (called sub-spaces), and sub-spaces can in turn have sub-spaces of their own. There is no limit on the number of hierarchical levels. The space hierarchy matters for all the reasons given in the previous section: any business rule or security setting defined on a space applies, by default, to all the content and sub-spaces underlying that space. Use the system users, groups, and spaces created for the various departments as per the example. Your space hierarchy should look similar to the following screenshot:

A space in Alfresco enables you to define various business rules, a dashboard view, properties, workflow, and security for the content belonging to each department. You can decentralize the management of your content by giving departments access at the individual space level. The example intranet space should contain sub-spaces, as shown in the preceding screenshot. If you have not already created these spaces, you should do so now by logging in as administrator. It is also very important to set security (by inviting groups of users to these spaces).

Editing a space
Using the web client, you can edit the spaces you have added previously. Note that you need edit permissions on a space to edit it.

Editing space properties
Every space listed will have clickable actions, as shown in the following screenshot. These clickable actions are generated dynamically for each space based on the current user's permissions on that space. If you have copy permission on a space, you will notice the copy icon as a clickable action for that space.
On clicking the View Details action icon, the detailed view of a space is displayed, as shown in the next screenshot. The detailed view page of a space allows you to select a dashboard view, view and edit existing space properties, categorize the space, set business rules, and run various actions on the space, as shown in the preceding screenshot. To edit space properties, click on the Edit Space Properties icon shown in the preceding screenshot. You can change the name of the space and other properties as needed.

Deleting a space and its contents
From the list of space actions, you can click on the Delete action to delete the space. Be very careful when deleting a space, as all the business rules, sub-spaces, and the entire content within the space will also be deleted.

Moving or copying a space by using the clipboard
From the list of space actions, you can click on the Cut action to move a space to the clipboard. Then you can navigate to any space hierarchy, assuming you have the required permissions, and paste this particular space as required. Similarly, you can use the Copy action to copy the space to some other space hierarchy. This is useful if you have an existing space structure (such as a marketing project or an engineering project) and would like to replicate it along with the data it contains. The copied or moved space will be identical in all respects to the original (source) space. When you copy a space, the space properties, categorization, business rules, space users, the entire content within the space, and all sub-spaces along with their content are also copied.

Creating a shortcut to a space for quick access
If you need to access a space frequently, you can create a shortcut (similar to the Favorites option in Internet browsers) to that space, in order to reach it in just one click.
From the list of space actions, you can click on the Create Shortcut action to create a shortcut to the existing space. Shortcuts are listed in the left-hand side shelf. Consider a scenario where, after creating the shortcut, the source space is deleted. Shortcuts are not automatically removed, as there is a possibility for the user to retrieve the deleted space. What will happen when you click on that shortcut link in the Shelf? If the source space is not found (deleted by a user), the shortcut will be removed with an appropriate error message.

Choosing a default view for your space
There are four different out-of-the-box options available (as shown in the screenshot overleaf). These options control how the space's information is displayed:

- Details View: This option provides listings of sub-spaces and content in horizontal rows.
- Icon View: This option provides a title, description, timestamp, and action menus for each sub-space and content item present in the current space.
- Browse View: Similar to the preceding option, this option provides a title, description, and list of sub-spaces for each space.
- Dashboard View: This option is disabled and appears in gray, because you have not enabled the dashboard view for this space. In order to enable the dashboard view for a space, you need to select a dashboard view (refer to the icon shown in the preceding screenshot).

Sample space structure for a marketing project
Let us say you are launching a new marketing project called Product XYZ Launch. Go to the Company Home | Intranet | Marketing Communications space, create a new space called Product XYZ Launch, and create various sub-spaces as needed. You can create your own space structure within the marketing project space to manage content. For example, you can have a space called 02_Drafts to keep all the draft marketing documents, and so on.

Managing content
Content can be of any type, as mentioned at the start of this article.
By using the Alfresco web client application, you can add and modify content and its properties. You can categorize content, lock content for safe editing, and maintain several versions of the content. You can delete content, and you can also recover deleted content. This section uses the space you have already created as part of your Intranet sample application. As part of the sample application, you will manage content in the Intranet | Marketing Communications space. Because you secured this space earlier, only the administrator (admin) and users belonging to the Marketing group (Peter Marketing and Harish Marketing) can add content to this space. You can log in as Peter Marketing to manage content in this space.

Creating content
The web client provides two different interfaces for adding content. One can be used to create inline editable content, such as HTML, text, and XML; the other can be used to add binary content, such as Microsoft Office files and scanned images. You need to have the administrator, Contributor, Collaborator, or Coordinator role on a space to create content within that space.

Creating text documents – HTML, text, and XML
To create an HTML file in a space, follow these steps:

1. Ensure that you are in the Intranet | Marketing Communications | Product XYZ Launch | 02_Drafts space.
2. On the header, click on Create | Create Content. The first pane of the Create Content wizard appears. You can track your progress through the wizard from the list of steps at the left of the pane.
3. Provide the name of the HTML file, select HTML as the Content Type, and click on the Next button. The Enter Content pane of the wizard appears, as shown in the next screenshot. Note that Enter Content is now highlighted in the list of steps at the left of the pane.
4. You can see that there is a comprehensive set of tools to help you format your HTML document. Enter some text, using some of the formatting features.
If you know HTML, you can also use the HTML editor by clicking on the HTML icon. The HTML source editor is displayed. Once you update the HTML content, click on the Update button to return to the Enter Content pane in the wizard, with the contents updated. After the content is entered and edited in the Enter Content pane, click on Finish. You will see the Modify Content Properties screen, which can be used to update the metadata associated with the content. Give the file a name with .html as the extension. You will also notice that the Inline Editing checkbox is selected by default. Once you are done with editing the properties, click on the OK button to return to the 02_Drafts space, with your newly created file inserted.

You can launch the newly created HTML file by clicking on it. Your browser launches most common file types, such as HTML, text, and PDF. If the browser does not recognize the file, you will be prompted with the Windows dialog box containing a list of applications, from which you must choose one. This is the normal behavior when you try to launch a file on any Internet page.

Uploading binary files – Word, PDF, Flash, image, and media
Using the web client, you can upload content from your hard drive. Choose a file from your hard disk that is not an HTML or text file. I chose Alfresco_CIGNEX.docx from my hard disk for the sample application. Ensure that you are in the Intranet | Marketing Communications | Product XYZ Launch | 02_Drafts space. To upload a binary file to a space, follow these steps:

1. In the space header, click on the Add Content link. The Add Content dialog appears.
2. To specify the file that you want to upload, click on Browse.
3. In the File Upload dialog box, browse to the file that you want to upload and click on Open. Alfresco inserts the full path name of the selected file in the Location textbox.
4. Click on the Upload button to upload the file from your hard disk to the Alfresco repository.
A message informs you that your upload was successful, as shown in the following screenshot. Click on OK to confirm. The Modify Content Properties dialog appears. Verify the pre-populated properties and add information in the textboxes. Click on OK to save and return to the 02_Drafts space. The file that you uploaded appears in the Content Items pane. Alfresco extracts the file size from the properties of the disk file and includes the value in the size row.

Editing content
You can edit content in Alfresco in three different ways: by using the Edit Online, Edit Offline, and Update actions. Note that you need edit permissions on the content to edit it.

Online editing of HTML, text, and XML
HTML files and plain text files can be created and edited online. If you have edit access to a file, you will notice a small pencil (Edit Online) icon, as shown in the following screenshot. Clicking on the pencil icon will open the file in its editor; each file type is edited in its own WYSIWYG editor. Once you choose to edit online, a working copy of the file is created for editing, while the original file is locked, as shown in the next screenshot. The working copy can be edited further as needed by clicking on the Edit Online button. Once you are done with editing, you can commit all the changes to the original document by clicking on the Done Editing icon. If, for some reason, you decide to cancel editing of a document and discard any changes, you can do so by clicking on the Cancel Editing button. If you cancel editing of a document, the associated working copy is deleted and all changes made since it was checked out are lost. The working copy can be edited by any user who has edit access to the document or the folder containing the document. For example, if user1 created the working copy and user2 has edit access to the document, then both user1 and user2 can edit the working copy.
Consider a scenario where user1 and user2 are editing the working copy simultaneously. If user1 commits the changes first, the edits made by user2 will be lost. Hence, it is important to follow best practices when editing a working copy. Some of these best practices are listed here for your reference:

- Secure edit access to the working copy, to avoid multiple users editing the file simultaneously
- Save the working copy after each edit, to avoid losing work
- Follow a process of allowing only the owner of the document to edit the working copy; if others need to edit it, they can claim ownership
- Trigger a workflow on the working copy to confirm the changes before committing

Offline editing of files
If you wish to download a file to your local machine, edit it locally, and then upload the updated version to Alfresco, consider using the Edit Offline option (pencil icon). Once you click on the Edit Offline button, the original file is locked automatically and a working copy of the file is created for download. You then get an option to save the working copy of the document locally on your laptop or personal computer. If you don't want files to be downloaded automatically for offline editing, you can turn this feature off: click on the User Profile icon in the top menu and uncheck the option for Offline Editing, as shown here. The working copy can be updated by clicking on the Upload New Version button. Once you have finished editing the file, you can commit all the changes to the original document by clicking on the Done Editing icon, or you can cancel all the changes by clicking on the Cancel Editing button.

Uploading updated content
If you have edit access to a binary file, you will notice the Update action icon in the drop-down list for the More actions link. Upon clicking on the Update icon, the Update pane opens.
Click on the Browse button to upload the updated version of the document from your hard disk. It is always good practice to check out the document and update the working copy, rather than updating the document directly. Checking the file out avoids conflicting updates by locking the document, as explained in the previous section.

Content actions
Content will have clickable actions, as shown in the upcoming screenshot. These clickable actions (icons) are generated dynamically for each content item based on the current user's permissions for that content. For example, if you have copy permission for the content, you will notice the Copy icon as a clickable action for that content.

Deleting content
Click on the Delete action, from the list of content actions, to delete the content. Please note that when content is deleted, all the previous versions of that content are also deleted.

Moving or copying content using the clipboard
From the list of content actions, as shown in the preceding screenshot, you can click on the Cut action to move content to the clipboard. Then you can navigate to any space hierarchy and paste this particular content as required. Similarly, you can use the Copy action to copy the content to another space.

Creating a shortcut to the content for quick access
If you have to access a particular content item very frequently, you can create a shortcut (similar to the Favorites option in Internet and Windows browsers) to that content, in order to reach it in one click. From the list of content actions, as shown in the preceding screenshot, you can click on the Create Shortcut action to create a shortcut to the existing content. Shortcuts are listed in the left-hand side Shelf.

Managing content properties
Every content item in Alfresco has properties associated with it. Refer to the preceding screenshot to see the list of properties, such as Title, Description, Author, Size, and Creation Date.
These properties are associated with the actual content file, named Alfresco_CIGNEX.docx. The content properties are stored in a relational database and are searchable using the Advanced Search options.

What is content metadata?
Content properties are also known as content metadata. Metadata is structured data that describes the characteristics of the content. It shares many characteristics with the cataloguing that takes place in libraries. The prefix "meta" derives from the Greek word denoting a nature of a higher order or more fundamental kind. A metadata record consists of a number of predefined elements representing specific attributes of the content, and each element can have one or more values. Metadata is a systematic method for describing resources, and thereby improving access to them. If access to the content will be required, it should be described using metadata, so as to maximize the ability to locate it. Metadata provides the essential link between the information creator and the information user. While the primary aim of metadata is to improve resource discovery, metadata sets are also being developed for other reasons, including:

- Administrative control
- Security
- Management information
- Content rating
- Rights management

Metadata extractors
Typically, in most content management systems, once you upload a content file, you need to manually add metadata (properties) such as title, description, and keywords to the content. Most content, such as Microsoft Office documents, media files, and PDF documents, contains properties within the file itself. Hence, it is double the effort to have to enter those values again in the content management system along with the document. Alfresco provides built-in metadata extractors for popular document types, which extract the standard metadata values from a document and populate them automatically.
This is very useful if you are uploading documents through the FTP, CIFS, or WebDAV interfaces, where you do not have to enter the properties manually, as Alfresco will transfer the document properties automatically.

Editing metadata
To edit metadata, you need to click on the Edit Metadata icon in the content details view. Refer to the Edit Metadata icon shown in the screenshot, which shows a detailed view of the Alfresco_CIGNEX.docx file. You can update metadata values, such as Name and Description, for your content items. However, certain metadata values, such as Creator, Created Date, Modifier, and Modified Date, are read-only and you cannot change them. Certain properties, such as Modifier and Modified Date, are updated by Alfresco automatically whenever the content is updated.

Adding additional properties
Additional properties can be added to the content in two ways. One way is to extend the data model and define more properties in a content type. The other way is to attach properties and behavior dynamically through aspects. By using aspects, you can add additional properties, such as Effectivity, Dublin Core Metadata, and Thumbnailable, to the content.
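The idea behind the built-in metadata extractors described above is a registry keyed by document type: when a file arrives, the matching extractor pulls properties out of it and populates them automatically. The following is a hypothetical sketch of that pattern (our own code, not Alfresco's API; a real extractor would parse the embedded document properties rather than the filename):

```python
import os

EXTRACTORS = {}

def extractor(*extensions):
    """Register a function as the metadata extractor for extensions."""
    def register(func):
        for ext in extensions:
            EXTRACTORS[ext.lower()] = func
        return func
    return register

@extractor(".doc", ".docx")
def office_extractor(filename):
    # Placeholder: derive a title from the filename instead of
    # reading the real embedded Office properties.
    return {"title": os.path.splitext(os.path.basename(filename))[0]}

@extractor(".pdf")
def pdf_extractor(filename):
    return {"title": os.path.splitext(os.path.basename(filename))[0]}

def extract_metadata(filename):
    """Auto-populate properties on upload, if an extractor is registered."""
    ext = os.path.splitext(filename)[1].lower()
    handler = EXTRACTORS.get(ext)
    return handler(filename) if handler else {}

print(extract_metadata("Alfresco_CIGNEX.docx"))  # {'title': 'Alfresco_CIGNEX'}
```

This is why uploads over FTP, CIFS, or WebDAV can arrive with their properties already filled in: the extractor runs on the server side regardless of which interface delivered the file.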
Packt
10 Jul 2013
6 min read

Making Your Store Look Amazing

(For more resources related to this topic, see here.)

Looks are everything on the web. If your store doesn't look enticing and professional to your customers, then everything else is a waste. This article looks at how to make your VirtueMart store look stunning. There are many different approaches to creating a great-looking store; the best one for you or your client will depend upon your budget and your skill set. The sections in this article cater to all budgets and skill sets. For example, we will cover the very simple tasks of finding and installing a free Joomla! template and installing a VirtueMart theme. Then we will look at the pros and cons of using two different professional frameworks, namely Warp and Gantry. In the middle of all this, we will also look at the stunningly versatile Artisteer design software, which won't quite give you a perfect professional job but does a very fine job of letting you choose just about every aspect of your design without any CSS/coding skills.

Removing the Joomla! branding at the footer
With each version of Joomla! and VirtueMart being better than the last in terms of looks and performance, it is not unheard of to launch your store with the default looks of Joomla! and VirtueMart. The least you will probably want to do is remove the Powered by Joomla!® link at the footer of your store. This will make your store appear entirely your own, and perhaps bring a minor SEO benefit as well by removing the outbound link.

Getting ready
Log in to your Joomla! control panel. This section was tested using the Beez_20 template, but should work on any template where the same message appears. We will also be using the Firefox web browser search function, but again, this is almost identical in other browsers. Identify the message to be removed on the frontend of your site, as shown in the following screenshot:

How to do it...
This is going to be nice and easy, so let's get started and perform the following steps:

1. Navigate to Extensions | Template Manager from the main Joomla! drop-down menu, as shown in the following screenshot:
2. Now click on the Templates link (the one next to the Styles link), as shown in the following screenshot:
3. Scroll down until you see Beez_20 details and Files and click on it, as shown in the following screenshot:
4. Now scroll down and click on Edit main page template.
5. Next, press Ctrl + F on your keyboard to bring up the Firefox search bar and enter <div id="footer"> as your search term. Firefox will present you with the following code:
6. Delete everything between <p> and </p>, both inclusive.
7. Click on Save & Close.

How it works...
Check your Joomla! home page. We now have a nice clean and empty footer. We can add Joomla! and VirtueMart modules, or just leave it empty.

Installing a VirtueMart template
In this section, we will look at how to install a theme to make your store look great in a couple of clicks. There are a few things to consider first. Is your website just a store? That is, are all your pages going to be VirtueMart pages? If the answer is yes, then this is definitely the section for you. Alternatively, you might just have a few shop pages amongst an extensive Joomla!-based content site. If this is the case, then you might be better off installing a Joomla! template and setting VirtueMart to use it; the next section, Installing a Joomla! template, is more appropriate for you. And there is a third option as well: you have content pages and a large number of VirtueMart pages. In this situation, some experimentation and planning is required. You will either need to choose a Joomla! template that you are happy with for everything, or a Joomla! template and a VirtueMart theme that look good together. Or you could use two templates.
This last scenario is covered in the Creating and installing a template with Artisteer design software section. Getting ready Find a template which is either free or paid and download the files from the template provider's site (they will be in the form of a single compressed archive) on your computer. How to do it... Installing a VirtueMart template has never been as easy as it is in VirtueMart 2. Perform the following steps for the same: Navigate to Extensions | Extension Manager from the top Joomla! menu. Click on the Browse... button in the Upload Package File area, find and select your template file as shown in the following screenshot: Click on the Upload & Install button and you are done! How it works... The VirtueMart template is now installed. Take a look at your shiny new store. Installing a Joomla! Template As there is clearly something of a supply problem when it comes to VirtueMart-specific free templates, this section will look at installing a regular Joomla! template and using it in your VirtueMart store. Installing a Joomla! template is a very easy thing to do. But if you have never done it before read on. Getting ready Check the resources appendix for a choice of places to get free and paid templates. Download your chosen template on your desktop. It should be in the form of a ZIP file. Log in to your Joomla! admin area and read on. How to do it... This simple section is in two steps. First we upload the template then we set it as the active template. Select Extensions | Extension Manager from the top Joomla! menu. Click on the Browse... button in the Upload Package File area, find and select your template file as shown in the following screenshot: Click on the Upload & Install button. Now select Extensions | Template Manager . Click on the checkbox of the template you just installed and then click on Make Default . How it works... So what we did was to install the template through the usual Joomla! 
installation mechanism and once the template was installed we simply told Joomla! to use it. That's it. You can now go and assign all your modules to your new template.
Understanding Express Routes

Packt
10 Jul 2013
10 min read
(For more resources related to this topic, see here.)

What are Routes?

Routes are URL schemas that describe the interfaces for making requests to your web app. By combining an HTTP request method (a.k.a. HTTP verb) and a path pattern, you define URLs in your app. Each route has an associated route handler, which does the job of performing any action in the app and sending the HTTP response. Any request to the server that matches a route definition is routed to the associated route handler. Route handlers are middleware functions, which can send the HTTP response or pass on the request to the next middleware in line. They may be defined in the app file or loaded via a Node module.

A quick introduction to HTTP verbs

The HTTP protocol recommends various methods of making requests to a web server. These methods are known as HTTP verbs. You may already be familiar with the GET and POST methods; there are more of them, about which you will learn in a short while. Express, by default, supports the following HTTP request methods, which allow us to define flexible and powerful routes in the app: GET, POST, PUT, DELETE, HEAD, TRACE, OPTIONS, CONNECT, PATCH, M-SEARCH, NOTIFY, SUBSCRIBE, and UNSUBSCRIBE.

GET, POST, PUT, DELETE, HEAD, TRACE, OPTIONS, CONNECT, and PATCH are part of the Hypertext Transfer Protocol (HTTP) specification as drafted by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C). M-SEARCH, NOTIFY, SUBSCRIBE, and UNSUBSCRIBE are specified by the UPnP Forum. There are some obscure HTTP verbs, such as LINK, UNLINK, and PURGE, which are currently not supported by Express and the underlying Node HTTP library.

Routes in Express are defined using methods named after the HTTP verbs, on an instance of an Express application: app.get(), app.post(), app.put(), and so on. We will learn more about defining routes in a later section.
Even though a total of 13 HTTP verbs are supported by Express, you need not use all of them in your app. In fact, for a basic website, only GET and POST are likely to be used.

Revisiting the router middleware

This article would be incomplete without revisiting the router middleware. The router middleware is very special middleware. While other Express middlewares are inherited from Connect, router is implemented by Express itself. This middleware is solely responsible for empowering Express with Sinatra-like routes. Connect-inherited middlewares are referred to in Express from the express object (express.favicon(), express.bodyParser(), and so on); the router middleware is referred to from the instance of the Express app (app.router). To ensure predictability and stability, we should explicitly add router to the middleware stack:

    app.use(app.router);

The router middleware is a middleware system of its own. The route definitions form the middlewares in this stack. This means a matching route can respond with an HTTP response and end the request flow, or pass on the request to the next middleware in line. This fact will become clearer as we work with some examples in the upcoming sections. Though we won't be directly working with the router middleware, it is responsible for running the whole routing show in the background. Without the router middleware, there can be no routes and routing in Express.

Defining routes for the app

We know what routes and route handler callback functions look like. Here is an example to refresh your memory:

    app.get('/', function(req, res) {
      res.send('welcome');
    });

Routes in Express are created using methods named after HTTP verbs. For instance, in the previous example, we created a route to handle GET requests to the root of the website. You have a corresponding method on the app object for all the HTTP verbs listed earlier.
Let's create a sample application to see if all the HTTP verbs are actually available as methods on the app object:

    var http = require('http');
    var express = require('express');
    var app = express();

    // Include the router middleware
    app.use(app.router);

    // GET request to the root URL
    app.get('/', function(req, res) {
      res.send('/ GET OK');
    });

    // POST request to the root URL
    app.post('/', function(req, res) {
      res.send('/ POST OK');
    });

    // PUT request to the root URL
    app.put('/', function(req, res) {
      res.send('/ PUT OK');
    });

    // PATCH request to the root URL
    app.patch('/', function(req, res) {
      res.send('/ PATCH OK');
    });

    // DELETE request to the root URL
    app.delete('/', function(req, res) {
      res.send('/ DELETE OK');
    });

    // OPTIONS request to the root URL
    app.options('/', function(req, res) {
      res.send('/ OPTIONS OK');
    });

    // M-SEARCH request to the root URL
    app['m-search']('/', function(req, res) {
      res.send('/ M-SEARCH OK');
    });

    // NOTIFY request to the root URL
    app.notify('/', function(req, res) {
      res.send('/ NOTIFY OK');
    });

    // SUBSCRIBE request to the root URL
    app.subscribe('/', function(req, res) {
      res.send('/ SUBSCRIBE OK');
    });

    // UNSUBSCRIBE request to the root URL
    app.unsubscribe('/', function(req, res) {
      res.send('/ UNSUBSCRIBE OK');
    });

    // Start the server
    http.createServer(app).listen(3000, function() {
      console.log('App started');
    });

We did not include the HEAD method in this example, because it is best left to the underlying HTTP API, which already handles it. You can implement it if you want to, but it is not recommended to mess with it, because the protocol will be broken unless you implement it as specified. The browser address bar isn't capable of making any type of request except GET requests. To test these routes we will have to use HTML forms or specialized tools. Let's use Postman, a Google Chrome plugin for making customized requests to the server.

We learned that route definition methods are based on HTTP verbs.
Actually, that's not completely true: there is a method called app.all() that is not based on an HTTP verb. It is an Express-specific method for listening to requests to a route using any request method:

    app.all('/', function(req, res, next) {
      res.set('X-Catch-All', 'true');
      next();
    });

Place this route at the top of the route definitions in the previous example. Restart the server and load the home page. Using a browser debugger tool, you can examine the X-Catch-All HTTP response header added to all the requests made to the home page. Something similar can be achieved using a middleware, but the app.all() method makes it a lot easier when the requirement is route specific.

Route identifiers

So far we have been dealing exclusively with the root URL (/) of the app. Let's find out how to define routes for other parts of the app. Routes are defined only for the request path; GET query parameters are not, and cannot be, included in route definitions. Route identifiers can be strings or regular expression objects. String-based routes are created by passing a string pattern as the first argument of the routing method. They support a limited pattern-matching capability. The following example demonstrates how to create string-based routes:

    // Will match /abcd
    app.get('/abcd', function(req, res) {
      res.send('abcd');
    });

    // Will match /abcd and /acd
    app.get('/ab?cd', function(req, res) {
      res.send('ab?cd');
    });

    // Will match /abcd, /abbcd, and so on
    app.get('/ab+cd', function(req, res) {
      res.send('ab+cd');
    });

    // Will match /abcd, /abxyzcd, and so on
    app.get('/ab*cd', function(req, res) {
      res.send('ab*cd');
    });

    // Will match /abe and /abcde
    app.get('/ab(cd)?e', function(req, res) {
      res.send('ab(cd)?e');
    });

The characters ?, +, *, and () are subsets of their regular expression counterparts. The hyphen (-) and the dot (.) are interpreted literally by string-based route identifiers.
There is another set of string-based route identifiers, which is used to specify named placeholders in the request path. Take a look at the following example:

    app.get('/user/:id', function(req, res) {
      res.send('user id: ' + req.params.id);
    });

    app.get('/country/:country/state/:state', function(req, res) {
      res.send(req.params.country + ', ' + req.params.state);
    });

The value of a named placeholder is available in the req.params object, in a property with the same name. Named placeholders can also be used with special characters for interesting and useful effects, as shown here:

    app.get('/route/:from-:to', function(req, res) {
      res.send(req.params.from + ' to ' + req.params.to);
    });

    app.get('/file/:name.:ext', function(req, res) {
      res.send(req.params.name + '.' + req.params.ext.toLowerCase());
    });

The pattern-matching capability of routes can also be used with named placeholders. In the following example, we define a route that makes the format parameter optional:

    app.get('/feed/:format?', function(req, res) {
      if (req.params.format) {
        res.send('format: ' + req.params.format);
      } else {
        res.send('default format');
      }
    });

Routes can be defined as regular expressions too. While not the most straightforward approach, regular expression routes help you create very flexible and powerful route patterns. Regular expression routes are defined by passing a regular expression object as the first parameter to the routing method. Do not quote the regular expression object, or you will get unexpected results.

Using regular expressions to create routes is best understood by looking at some examples. The following route will match pineapple, redapple, redaple, and aaple, but not apple or apples:

    app.get(/.+app?le$/, function(req, res) {
      res.send('/.+app?le$/');
    });

The following route will match anything with an a in the route name:

    app.get(/a/, function(req, res) {
      res.send('/a/');
    });

You will mostly be using string-based routes in a general web app.
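Under the hood, Express compiles string patterns like these into regular expressions. The following is a rough, hypothetical sketch of that idea — compileRoute is our own illustrative helper, not Express's actual implementation, and it handles only simple named placeholders:

```javascript
// Turn a pattern with named placeholders into a regular expression, then
// pull the matched path segments out into a params object.
function compileRoute(pattern) {
  var names = [];
  // Replace each :name placeholder with a capture group that matches
  // one path segment, remembering the placeholder names in order.
  var source = pattern.replace(/:([A-Za-z_]+)/g, function (_, name) {
    names.push(name);
    return '([^/]+)';
  });
  var regex = new RegExp('^' + source + '$');
  // The returned matcher yields a params object on a hit, or null on a miss.
  return function match(path) {
    var m = regex.exec(path);
    if (!m) { return null; }
    var params = {};
    names.forEach(function (name, i) { params[name] = m[i + 1]; });
    return params;
  };
}
```

For instance, compileRoute('/user/:id')('/user/42') would yield { id: '42' }, mirroring what Express exposes as req.params.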
Use regular expression-based routes only when absolutely necessary; while powerful, they can often be hard to debug and maintain.

Order of route precedence

As in any middleware system, the route that is defined first takes precedence over other matching routes. So the ordering of routes is crucial to the behavior of an app. Let's review this fact via some examples. In the following case, http://localhost:3000/abcd will always print "abcd", even though the second route also matches the path:

    app.get('/abcd', function(req, res) {
      res.send('abcd');
    });

    app.get('/abc*', function(req, res) {
      res.send('abc*');
    });

Reversing the order will make it print "abc*":

    app.get('/abc*', function(req, res) {
      res.send('abc*');
    });

    app.get('/abcd', function(req, res) {
      res.send('abcd');
    });

The earlier matching route need not always gobble up the request; we can make it pass on the request to the next handler if we want to. In the following example, even though the order remains the same as in the reversed case, it will print "abcd" this time, with a little modification to the code. Route handler functions accept a third parameter, commonly named next, which refers to the next middleware in line. We will learn more about it in the next section:

    app.get('/abc*', function(req, res, next) {
      // If the request path is /abcd, don't handle it
      if (req.path == '/abcd') {
        next();
      } else {
        res.send('abc*');
      }
    });

    app.get('/abcd', function(req, res) {
      res.send('abcd');
    });

So bear in mind that the order of route definition is very important in Express. Forgetting this will cause your app to behave unpredictably. We will learn more about this behavior in the examples in the next section.
Improving the Snake Game

Packt
08 Jul 2013
41 min read
The game

Two new features were added to this second version of the game. First, we now keep track of the highest score achieved by a player, saving it through local storage. Even if the player closes the browser application, or turns off the computer, that value will still be safely stored on the player's hard drive, and will be loaded when the game starts again. Second, we use session storage to save the game state every time the player eats a fruit in the game, and whenever the player kills the snake. This is used as an extra touch of awesomeness: after the player loses, we display a snapshot of all the individual level-ups the player achieved in that game, as well as a snapshot of the moment the player hit a wall or ran the snake into itself.

At the end of each game, an image is shown of each moment when the player acquired a level-up, as well as a snapshot of when the player eventually died. These images are created through the canvas API (by calling the toDataURL function), and the data that composes each image is saved throughout the game and stored using the web storage API. With a feature such as this in place, we make the game much more fun, and potentially much more social. Imagine how powerful it would be if the player could post not only his or her high score to their favorite social network website, but also pictures of their game at key moments. Of course, only the foundation of this feature is implemented in this article (in other words, we only take the snapshots of these critical moments in the game). Adding the actual functionality to send that data to a real social network application is left as an exercise for the reader. A general description and demonstration of each of the APIs used in the game are given in the following sections. For an explanation of how each piece of functionality was incorporated into the final game, look at the code section.
For the complete source code for this game, check out the book's page on Packt Publishing's website.

Web messaging

Web messaging allows us to communicate with other HTML document instances, even if they're not in the same domain. For example, suppose our snake game, hosted at http://snake.fun-html5-games.com, is embedded into a social website through an iframe (let's say this social website is hosted at http://www.awesome-html5-games.net). When the player achieves a new high score, we want to post that data from the snake game directly into the host page (the page with the iframe from which the game is loaded). With the web messaging API, this can be done natively, without the need for any server-side scripting whatsoever.

Before web messaging, documents were not allowed to communicate with documents in other domains, mostly because of security. Of course, web applications can still be vulnerable to malicious external applications if we just blindly accept messages from any application. However, the web messaging API provides some solid security measures to protect the page receiving the message. For example, we can specify the domains the message is allowed to go to, so that other domains cannot intercept the message. On the receiving end, we can also check the origin from whence the message came, thus ignoring messages from any untrusted domains. Finally, the DOM is never directly exposed through this API, providing yet another layer of security.

How to use it

Similar to web workers, the way in which two or more HTML contexts communicate through the web messaging API is by registering an event handler for the message event, and sending messages out by using the postMessage function.

The first step to using the web messaging API is to get a reference to some document with whom we wish to communicate. This can be done by getting the contentWindow property of an iframe reference, or by opening a new window and holding on to that reference.
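The receive-and-check pattern just described can be sketched as follows. The trusted origin list and the message shape ({ score: ... }) are our own assumptions for illustration; only postMessage, the message event, and the event.origin/event.data attributes come from the API itself:

```javascript
// Origins we are willing to accept messages from. Anything else is ignored.
var TRUSTED_ORIGINS = ['http://www.awesome-html5-games.net'];

// Handler for incoming messages. Returns null for untrusted senders, or a
// short status string describing what was received (for illustration).
function handleMessage(event) {
  // Reject messages from any domain we do not explicitly trust.
  if (TRUSTED_ORIGINS.indexOf(event.origin) === -1) {
    return null;
  }
  // event.data carries whatever value the sender passed to postMessage.
  return 'high score received: ' + event.data.score;
}

// In the browser, this would be wired up as:
//   window.addEventListener('message', handleMessage, false);
// and the game inside the iframe would send:
//   parent.postMessage({ score: 1200 }, 'http://www.awesome-html5-games.net');
```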
The document that holds this reference is called the parent document, since this is where the communication is initiated. Although a child window can communicate with its parent, this can only happen when and for as long as this relationship holds true. In other words, a window cannot communicate with just any window; it needs a reference to it, either through a parent-child relationship, or through a child-parent relationship.

Once the child window has been referenced, the parent can fire messages to its children through the postMessage function. Of course, if the child window hasn't defined a callback function to capture and process the incoming messages, there is little purpose in sending those messages in the first place. Still, the parent has no way of knowing whether a child window has defined a callback to process incoming messages, so the best we can do is assume (and hope) that the child window is ready to receive our messages.

The parameters used in the postMessage function are fairly similar to the version used in web workers. That is, any JavaScript value can be sent (numbers, strings, Boolean values, object literals, and arrays, including typed arrays). If a function is sent as the first parameter of postMessage (either directly, or as part of an object), the browser will raise a DATA_CLONE_ERR: DOM Exception 25 error. The second parameter is a string, and represents the domain that we allow our message to be received by. This can be an absolute domain, a forward slash (representing the same origin domain as the document sending the message), or a wildcard character (*), representing any domain. If the message is received by a domain that doesn't match the second parameter in postMessage, the entire message fails.

When receiving the message, the child window first registers a callback on the message event.
This function is passed a MessageEvent object, which contains the following attributes:

event.data: Returns the data of the message.
event.origin: Returns the origin of the message, for server-sent events and cross-document messaging.
event.lastEventId: Returns the last event ID string, for server-sent events.
event.source: Returns the WindowProxy of the source window, for cross-document messaging.
event.ports: Returns the MessagePort array sent with the message, for cross-document messaging and channel messaging.

Source: http://www.w3.org/TR/webmessaging/#messageevent

As an example of the sort of things we could use this feature for in the real world, in terms of game development, imagine being able to play our snake game where the snake moves through a couple of windows. How creative is that?! Of course, in terms of practicality, this may not be the best way to play a game, but I find it hard to argue with the fact that this would indeed be a very unique and engaging presentation of an otherwise common game. With the help of the web messaging API, we can set up a snake where the snake is not constrained to a single window.

Imagine the possibilities when we combine this clever API with another very powerful HTML5 feature, which happens to lend itself incredibly well to games: web sockets. By combining web messaging with web sockets, we could play a game of snake not only across multiple windows, but also with multiple players at the same time. Perhaps each player would control the snake when it got inside a given window, and all players could see all windows at the same time, even though they are each using a separate computer. The possibilities are endless, really.

Surprisingly, the code used to set up a multi-window port of snake is incredibly simple. The basic setup is the same: we have a snake that only moves in one direction at a time. We also have one or more windows where the snake can go.
If we store each window in an array, we can calculate which screen the snake needs to be rendered in, given its current position. Finding out which screen the snake is supposed to be in, given its world position, is the trickiest part. For example, imagine that each window is 200 pixels wide. Now, suppose there are three windows open. Each window's canvas is only 200 pixels wide as well, so when the snake is at position 350, it would be printed too far to the right in all of the canvases. So what we need to do is first determine the total world width (the canvas width multiplied by the total number of canvases), calculate which window the snake is in (position divided by canvas width), then convert the position from world space down to canvas space for the canvas the snake is in.

First, we define our structures in the parent document. When this script loads, we'll need a way to create new windows where the snake will be able to move about. This can easily be done with a button that spawns a new window when clicked, then adds that window to our array of frames, so that we can iterate through that array and tell every window where the snake is.

Now, the real magic happens in the update step. All that we'll do is update the snake's position, then tell each window where the snake is. This is done by converting the snake's position from world coordinates to canvas coordinates (since every canvas has the same width, this is easy to do for every canvas), then telling every window where the snake should be rendered within a canvas. Since that position is valid for every window, we also tell each window individually whether or not it should render the information we're sending. Only the window that we calculate the snake is in is told to go ahead and render.

That's really all there is to it. The code that makes up all the other windows is the same for all of them.
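The world-to-canvas conversion described above can be captured as a small pure function. The function name and the wrap-around behavior at the world's edge are our own illustrative choices:

```javascript
// Given the snake's x position in world coordinates, the width of each
// canvas, and the number of open windows, work out which window should draw
// the snake and where inside that window's canvas it belongs.
function worldToCanvas(worldX, canvasWidth, windowCount) {
  var worldWidth = canvasWidth * windowCount;
  // Wrap around so the snake re-enters the first window after the last one.
  var x = worldX % worldWidth;
  var windowIndex = Math.floor(x / canvasWidth);
  return {
    windowIndex: windowIndex,                // which window should render
    canvasX: x - windowIndex * canvasWidth   // position inside that canvas
  };
}
```

With the numbers from the example above (200-pixel canvases, three windows), world position 350 falls in the second window (index 1), 150 pixels from its left edge.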
In fact, we only open a bunch of windows pointing to the exact same script. As far as each window is concerned, it is the only window open. All it does is take a bunch of data through the messaging API, then render that data if the shouldDraw flag is set. Otherwise, it just clears its canvas and sits tight, waiting for further instructions from its parent window.

Web storage

Before HTML5 came along, the only way web developers had to store data on the client was through cookies. While limited in scope, cookies did what they were meant to do, although they had several limitations. For one thing, whenever a cookie was saved to the client, every HTTP request after that included the data for that cookie. This meant that the data was always explicitly exposed, and each of those HTTP requests was heavily laden with extra data that didn't belong there. This is especially inefficient when considering web applications that may need to store relatively large amounts of data.

With the new web storage API, these issues have been addressed and solved. There are now three different options for client storage, each of which solves a different problem. Keep in mind, however, that any and all data stored on the client is still exposed to the client in plain text, and is therefore not meant to be a secure storage solution. These three storage solutions are session storage, local storage, and the IndexedDB NoSQL data store.

Session storage allows us to store key-value data pairs that persist until the browser is closed (in other words, until the session finishes). Local storage is similar to session storage in every way, except that the duration for which the data persists is longer. Even when a session is closed, data stored in local storage still persists. Data in local storage is only cleared when the user specifically tells the browser to do so, or when the application itself deletes data from the storage.
Finally, IndexedDB is a robust data store that allows us to store custom objects (not including objects that contain functions), then query the database for those objects. Of course, with much robustness comes great complexity. Although having a dedicated NoSQL database built right into the browser may sound exciting, don't be fooled: while IndexedDB is a fascinating addition to HTML5, it is by no means a trivial tool for beginners. Compared to local storage and session storage, IndexedDB has somewhat of a steep learning curve, since it involves mastering some complex database concepts.

As mentioned earlier, the only real difference between local storage and session storage is the fact that session storage clears itself whenever the browser closes. Besides that, the two are exactly the same. Thus, learning how to use one also means learning the other. However, knowing when to use one over the other might take a bit more thinking on your part. For best results, try to focus on the unique characteristics and needs of your own application before deciding which one to use. More importantly, realize that it is perfectly legal to use both storage systems in the same application. The key is to focus on a unique feature, and decide which storage API best suits those specific needs.

Both the local storage and session storage objects are instances of the Storage class. The interface defined by the Storage class, through which we can interact with these storage objects, is defined as follows (source: Web Storage W3C Candidate Recommendation, December 08, 2011, http://www.w3.org/TR/webstorage/):

getItem(key): Returns the current value associated with the given key. If the given key does not exist in the list associated with the object, this method returns null.

setItem(key, value): First checks whether a key/value pair with the given key already exists in the list associated with the object. If it does not, a new key/value pair is added to the list, with the given key and its value set to value. If the given key does exist in the list, its value is updated to value. If the new value could not be set, the method throws a QuotaExceededError exception. (Setting could fail if, for example, the user has disabled storage for the site, or if the quota has been exceeded.)

removeItem(key): Causes the key/value pair with the given key to be removed from the list associated with the object, if it exists. If no item with that key exists, the method does nothing.

clear(): Causes the list associated with the object to be emptied of all key/value pairs, if there are any. If there are none, the method does nothing.

key(n): Returns the name of the nth key in the list. The order of keys is user-agent defined, but must be consistent within an object so long as the number of keys doesn't change. (Thus, adding or removing a key may change the order of the keys, but merely changing the value of an existing key must not.) If n is greater than or equal to the number of key/value pairs in the object, this method returns null.

The supported property names on a Storage object are the keys of each key/value pair currently present in the list associated with the object.

length: Returns the number of key/value pairs currently present in the list associated with the object.

Local storage

The local storage mechanism is accessed through a property of the global object, which in browsers is the window object. Thus, we can access the storage property explicitly through window.localStorage, or implicitly as simply localStorage.
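To make the interface concrete, here is a minimal in-memory stand-in implementing the Storage methods just listed (getItem, setItem, removeItem, clear, key, and length). This is our own sketch for experimentation outside the browser, not the browser's implementation, and it ignores quota handling entirely:

```javascript
// A tiny in-memory object implementing the Storage interface.
function MemoryStorage() {
  this._keys = [];   // insertion order of keys
  this._data = {};   // key -> string value
}
MemoryStorage.prototype.getItem = function (key) {
  // Per the spec, missing keys yield null, not undefined.
  return Object.prototype.hasOwnProperty.call(this._data, key)
    ? this._data[key] : null;
};
MemoryStorage.prototype.setItem = function (key, value) {
  if (!Object.prototype.hasOwnProperty.call(this._data, key)) {
    this._keys.push(key);
  }
  this._data[key] = String(value); // only DOMString values are stored
};
MemoryStorage.prototype.removeItem = function (key) {
  if (Object.prototype.hasOwnProperty.call(this._data, key)) {
    delete this._data[key];
    this._keys.splice(this._keys.indexOf(key), 1);
  }
};
MemoryStorage.prototype.clear = function () {
  this._keys = [];
  this._data = {};
};
MemoryStorage.prototype.key = function (n) {
  return n < this._keys.length ? this._keys[n] : null;
};
Object.defineProperty(MemoryStorage.prototype, 'length', {
  get: function () { return this._keys.length; }
});
```

A drop-in like this is handy for unit-testing storage-dependent game code, since it behaves the same way whether or not a browser is present.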
Since only DOMString values are allowed to be stored in localStorage, any values other than strings are converted into strings before being stored. That is, we can't store arrays, objects, functions, and so on in localStorage. Only plain JavaScript strings are allowed.

Now, while this might seem like a limitation of the storage API, it is in fact by design. If your goal is to store complex data types for later use, localStorage wasn't necessarily designed to solve that problem. In those situations, we have a much more powerful and convenient storage solution, which we'll look at soon (that is, IndexedDB). However, there is a way to store complex data (including arrays, typed arrays, objects, and so on) in localStorage. The key lies in the wonderful JSON data format. Modern browsers have the very handy JSON object available in the global scope, where we can access two important functions, namely JSON.stringify and JSON.parse. With these two methods, we can serialize complex data, store it in localStorage, then deserialize the data retrieved from the storage and continue using it in the application.

While this is a nice little trick, you will notice what can be a major limitation: JSON.stringify does not serialize functions. Also, if you pay close attention to the way JSON.stringify works, you will realize that when we serialize a custom object, such as an instance of a constructor function named Person, the result is a simple object literal with no constructor or prototype information. Still, given that localStorage was never intended to fill the role of object persistence (but rather simple key-value string pairs), this should be seen as nothing more than a limited, yet very neat, trick.

Session storage

Since the sessionStorage interface is identical to that of localStorage, there is no reason to repeat all of the information just described. For a more in-depth discussion about sessionStorage, look at the two previous sections and replace the word "local" with "session".
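Because the two interfaces are identical, the JSON.stringify/JSON.parse trick from the local storage discussion works with either storage object. Here is a minimal sketch of what survives the round trip; the Person constructor is our own illustrative example:

```javascript
// A custom "class" with data and behavior attached to each instance.
function Person(name) {
  this.name = name;
  this.greet = function () { return 'hi, ' + this.name; };
}

var original = new Person('Ada');

// Serialize before storing; in the browser this string would go into
// localStorage.setItem('person', serialized) or its sessionStorage twin.
var serialized = JSON.stringify(original); // functions are silently dropped

// Deserialize after retrieving. Note what is lost on the way back: the
// revived value is a plain object literal with no constructor or prototype
// information, and the greet function is gone entirely.
var revived = JSON.parse(serialized);
```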
Everything mentioned above that applies to local storage is also true for session storage. Again, the only difference between the two is that any data saved in sessionStorage is erased when the session with the client ends (that is, whenever the browser is shut down).

Remember that when we set a key-value pair in the storage, if the key already exists, whatever value was associated with it is overwritten. If the key doesn't exist, it is created automatically. Note that we can also query the sessionStorage object for a specific key using the in operator, which returns a Boolean value. Finally, although we can check the total number of keys in the storage through sessionStorage.length, that by itself may not be very useful if we don't know what all the different keys are. Thankfully, the sessionStorage.key function allows us to get a specific key, through which we can then get hold of the value stored with that key.

Thus, we can query sessionStorage for the key at a given position, and receive the string representing that key. Then, with the key, we can get hold of the value stored with it. Note, however, that the order in which items are stored within the sessionStorage object is totally arbitrary. While some browsers may keep the list of stored items sorted alphabetically by key value, this is clearly specified in the HTML5 spec as a decision left up to browser makers.

As exciting as the web storage API might seem so far, there are cases where serializing and deserializing data, as we do with local or session storage, might not be quite sufficient.
For example, imagine we have a few hundred (or perhaps, several thousand) similar records stored in local storage (say we're storing enemy description cards that are part of an RPG game). Think about how you would do the following using local storage:

Retrieve, in alphabetical order, the first five records stored
Delete all records stored that contain a particular characteristic (such as an enemy that doesn't survive in water, for example)
Retrieve up to three records stored that contain a particular characteristic (for example, the enemy has a Hit Point score of 42,000 or more)

The point is this: any querying that we may want to make against the data stored in local storage or session storage must be handled by our own code. In other words, we'd be spending a lot of time and effort writing code just to help us get to some data. That is before we even consider the fact that any complex data stored in local or session storage is converted to literal objects, and any and all functions that were once part of those objects are now gone, unless we write even more code to handle some sort of custom unserializing. In case you have not guessed it by now, IndexedDB solves these and other problems very beautifully. At its heart, IndexedDB is a NoSQL database engine that allows us to store whole objects and index them for fast insertions, deletions, and retrievals. The database system also provides us with a powerful querying engine, so that we can perform very advanced computations on the data that we have persisted. The following figure shows some of the similarities between IndexedDB and a traditional relational database. In relational databases, data is stored as a group of rows within a specific table structure. In IndexedDB, on the other hand, data is grouped in broadly-defined buckets known as data stores. The architecture of IndexedDB is somewhat similar to that of the popular relational database systems used in most web development projects today.
One core difference is that, whereas relational databases store data in a database, which is a collection of related tables, an IndexedDB system groups data in databases, which are collections of data stores. While conceptually similar, in practice these two architectures are actually quite different.

Note

If you come from a relational database background, and the concept of databases, tables, columns, and rows makes sense to you, then you're well on your way to becoming an IndexedDB expert. As you'll see, there are some significant distinctions between the two systems and methodologies. While you might be tempted to simply replace the words data store with tables, know that the difference between the two concepts extends beyond a name difference.

One key feature of data stores is that they don't have any specific schema associated with them. In relational databases, a table is defined by its very particular structure. Each column is specified ahead of time, when the table is first created. Then, every record saved in such a table follows the exact same format. In NoSQL databases (of which IndexedDB is one type), a data store can hold any object, with whatever format it may have. Essentially, this concept would be the same as having a relational database table that has a different schema for each record in it.

IDBFactory

To get started with IndexedDB, we first need to create a database. This is done through an implementation of IDBFactory, which in the browser is the window.indexedDB object. Deleting a database is also done through the indexedDB object, as we'll see soon. In order to open a database (or create one if it doesn't exist yet), we simply call the indexedDB.open method, passing in a database name, along with a version number.
If no version number is supplied, the default version number of one will be used, as shown in the following code snippet: code11 As you'll soon notice, every method for asynchronous requests in IndexedDB (such as indexedDB.open, for example) will return a request object of type IDBRequest, or an implementation of it. Once we have that request object, we can set up callback functions on its properties, which get executed as the various events related to them are fired, as shown in the following code snippet: code12

IDBOpenDBRequest

As mentioned in the previous section, once we make an asynchronous request to the IndexedDB API, the immediately returned object will be of type IDBRequest. In the particular case of an open request, the object that is returned to us is of type IDBOpenDBRequest. Two events that we might want to listen to on this object were shown in the preceding code snippet (onerror and onsuccess). There is also a very important event, wherein we can create an object store, which is the foundation of this storage system. This event is the onupgradeneeded (that is, on upgrade needed) event. This will be fired when the database is first created and, as you might expect, whenever the version number used to open the database is higher than the last value used when the database was opened, as shown in the following code: code13 The call to createObjectStore made on the database object takes two parameters. The first is a string representing the name of the object store. This store can be thought of as a table in the world of relational databases. Of course, instead of inserting records into columns from a table, we insert whole objects into the data store. The second parameter is an object defining properties of the data store. One important attribute that this object must define is the keyPath property, which is what makes each object we store unique. The value assigned to this property can be anything we choose.
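Putting the open request and the onupgradeneeded event together, the whole flow might look like the following sketch. This is browser-only code, and the database name, store name, and function name here are invented for illustration; only the keyPath name myKey follows the surrounding example:

```javascript
// A sketch of opening (or creating) a database. Runs only in a browser,
// where window.indexedDB is available.
function openGameDb(onReady) {
  var request = indexedDB.open('gameDb', 1); // name and version

  request.onupgradeneeded = function (event) {
    // Fired on first creation, or when the version number increases
    var db = event.target.result;
    db.createObjectStore('tasks', { keyPath: 'myKey' });
  };

  request.onsuccess = function (event) {
    onReady(event.target.result); // the open IDBDatabase object
  };

  request.onerror = function (event) {
    console.error('Could not open database', event);
  };
}
```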
Now, any objects that we persist in this data store must have an attribute with the same name as the one assigned to keyPath. In this example, our objects will need to have an attribute of myKey. If a new object is persisted, it will be indexed by the value of this property. Any additional objects stored that have the same value for myKey will replace any old objects with that same key. Thus, we must provide a unique value for this property every time we want a unique object persisted. Alternatively, we can let the browser provide a unique value for this key for us. Again, comparing this concept to a relational database, we can think of the keyPath property as being the same thing as a unique ID for a particular element. Just as most relational database systems support some sort of auto-increment, so does IndexedDB. To specify that we want auto-incremented values, we simply add the autoIncrement flag to the object store properties object when the data store is first created (or upgraded), as shown in the following code snippet: code14 Now we can persist an object without having to provide a unique value for the property myKey. As a matter of fact, we don't even need to provide this attribute at all as part of any objects we store here. IndexedDB will handle that for us. Take a look at the following diagram: Using Google Chrome's developer tools, we can see all of the databases and data stores we have created for our domain. Note that the primary object key, which has whatever name we give it during the creation of our data store, has IndexedDB-generated values, which, as we have specified, are incremented over the last value. With this simple, yet verbose boilerplate code in place, we can now start using our databases and data stores. From this point on, the actions we take on the database will be done on the individual data store objects, which are accessed through the database objects that created them.
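The auto-increment variation just described might look like the following sketch. It is browser-only code that belongs inside the onupgradeneeded callback, and the store name is invented; only the keyPath name myKey comes from the surrounding example:

```javascript
// Inside onupgradeneeded: let the browser generate the key values.
// `db` is the IDBDatabase object from event.target.result.
function createAutoKeyedStore(db) {
  return db.createObjectStore('tasks', {
    keyPath: 'myKey',
    autoIncrement: true // IndexedDB fills in myKey for us, incrementing it
  });
}
```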
IDBTransaction

The last general thing we need to remember when dealing with IndexedDB is that every interaction we have with the data store is done inside transactions. If something goes wrong during a transaction, the entire transaction is rolled back, and nothing takes effect. Similarly, if the transaction is successful, IndexedDB will automatically commit the transaction for us, which is a pretty handy bonus. To use transactions, we need to get a reference to our database, then request a transaction for a particular data store. Once we have a reference to a data store, we can perform the various functions related to the data store, such as putting data into it, reading data from it, updating data, and finally, deleting data from a data store. code15 To store an item in our data store we need to follow a couple of steps. Note that if anything goes wrong during this transaction, we simply catch whatever error is thrown by the browser, and execution continues uninterrupted because of the try/catch block. The first step to persisting objects in IndexedDB is to start a transaction. This is done by requesting a transaction object from the database we opened earlier. A transaction is always related to a particular data store. Also, when requesting a transaction, we can specify what type of transaction we'd like to start. The possible types of transactions in IndexedDB are as follows:

readwrite: This transaction mode allows for objects to be stored into the data store, retrieved from it, updated, and deleted. In other words, readwrite mode allows for full CRUD functionality.

readonly: This transaction mode is similar to readwrite, but clearly restricts the interactions with the data store to only reading.
Anything that would modify the data store is not allowed, so any attempt to create a new record (in other words, persisting a new object into the data store), update an existing object (in other words, trying to save an object that was already in the data store), or delete an object from the data store will result in the transaction failing, and an exception being raised.

versionchange: This transaction mode allows us to create or modify an object store or indexes used in the data store. Within a transaction of this mode, we can perform any action or operation, including modifying the structure of the database.

Getting elements

Simply storing data into a black box is not at all useful if we're not able to retrieve that data at a later point in time. With IndexedDB, this can be done in several different ways. More commonly, the data store where we persist the data is set up with one or more indexes, which keep the objects organized by a particular field. Again, for those accustomed to relational databases, this would be similar to indexing/applying a key to a particular table column. If we want to get to an object, we can query it by its unique ID, or we can search the data store for objects that fit particular characteristics, which we can do through indexed values of that object. To create an index on a data store, we must specify our intentions during the creation of the data store (inside the onupgradeneeded callback when the store is first created, or inside a transaction of mode versionchange). The code for this is as follows: code16 In the preceding example, we create an index for the task attribute of our objects. The name of this index can be anything we want, and commonly is the same name as the object property to which it applies. In our case, we simply named it taskIndex.
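A sketch of what such index creation might look like follows. This is browser-only code that belongs inside the onupgradeneeded callback; the index name taskIndex and the indexed attribute task follow the surrounding example, while the store name and function name are invented:

```javascript
// Creating a store and indexing its task attribute during an upgrade.
// `db` is the IDBDatabase object from event.target.result.
function createIndexedTaskStore(db) {
  var store = db.createObjectStore('tasks', {
    keyPath: 'myKey',
    autoIncrement: true
  });
  // Keep objects searchable by their task attribute; duplicates allowed
  store.createIndex('taskIndex', 'task', { unique: false });
  return store;
}
```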
The possible settings we can configure are as follows:

unique: If true, an object being stored with a duplicate value for the same attribute is rejected
multiEntry: If true, and the indexed attribute is an array, each element will be indexed

Note that zero or more indexes can be created for a data store. Just like any other database system, indexing your database/data store can really boost the performance of the storage container. However, just adding indexes for the fun of it is not a good idea, as the size of your data store will grow accordingly. A good data store design is one where the specific context of the data store with respect to the application is taken into account, and each indexed field is carefully considered. The phrase to keep in mind when designing your data stores is the following: measure it twice, cut it once. Although any object can be saved in a data store (as opposed to a relational database, where the data stored must carefully follow the table structure, as defined by the table's schema), in order to optimize the performance of your application, try to build your data stores with the data they will store in mind. It is true that any data can be smacked into any data store, but a wise developer considers the data being stored very carefully before committing it to a database. Once the data store is set up, and we have at least one meaningful index, we can start to pull data out of the data store. The easiest way to retrieve objects from a data store is to use an index, and query for a specific object, as shown in the following code: code17 The preceding function attempts to retrieve a single saved object from our data store. The search is made for an object whose task property matches the task name supplied to the function. If one is found, it will be retrieved from the data store, and passed to the store object's request through the event object passed in to the callback function.
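A sketch of this kind of lookup might look as follows. It is browser-only code; the index name taskIndex matches the example above, while the store name, function name, and callback are invented for illustration:

```javascript
// Looking up a single object through an index.
// `db` is an open IDBDatabase object.
function findTaskByName(db, taskName, onFound) {
  var store = db.transaction(['tasks'], 'readonly').objectStore('tasks');
  var index = store.index('taskIndex');
  var request = index.get(taskName);

  request.onsuccess = function (event) {
    // event.target.result is null when nothing matches the search
    onFound(event.target.result);
  };

  request.onerror = function (event) {
    console.error('Lookup failed', event);
  };
}
```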
If an error occurs in the process (for example, if the index supplied doesn't exist), the onerror event is triggered. Finally, if no objects in the data store match the search criteria, the resulting property passed in through the request parameter object will be null. Now, to search for multiple items, we can take a similar approach, but instead we request an IDBCursor object. A cursor is basically a pointer to a particular result from a result set of zero or more objects. We can use the cursor to iterate through every object in the result set, until the current cursor points at no object (null), indicating that there are no more objects in the result set. code18 You will note a few things in the above code snippet. First, any object that goes into our IndexedDB data store is stripped of its DNA, and only a simple hash is stored in its stead. Thus, if the prototype information of each object we retrieve from the data store is important to the application, we will need to manually reconstruct each object from the data that we get back from the data store. Second, observe that we can filter the subset of the data store that we would like to take out of it. This is done with an IDBKeyRange object, which specifies the offset from which to start fetching data. In our case, we specified a lower bound of zero, meaning that the lowest primary key value we want is zero. In other words, this particular query requests all of the records in the data store. Finally, remember that the result from the request is not a single result or an array of results. Instead, all of the results are returned one at a time in the form of a cursor. We can check for the presence of a cursor, then use the cursor if one is indeed present. Then, the way we request the next cursor is by calling the continue() function on the cursor itself. Another way to think of cursors is by imagining a spreadsheet application.
Pretend that the 10 objects returned from our request each represent a row in this spreadsheet. So IndexedDB will fetch all 10 of those objects into memory, and send a pointer to the first result through the event.target.result property in the onsuccess callback. By calling cursor.continue(), we simply tell IndexedDB to now give us a reference to the next object in the result set (or, in other words, we ask for the next row in the spreadsheet). This goes on until the tenth object, after which no more objects exist in the result set (again, to go along with the spreadsheet metaphor, after we fetch the last row, the next row after that is null – it doesn't exist). As a result, the data store will call the onsuccess callback, and pass in a null object. If we attempt to read properties of this null reference, as though we were working with a real object returned from the cursor, the browser will throw a null pointer exception. Instead of trying to reconstruct an object from a cursor one property at a time, we could abstract this functionality away in a generic form. Since objects being persisted into the object store can't have any functions, we're not allowed to keep such functionality inside the object itself. However, thanks to JavaScript's ability to build an object from a reference to a constructor function, we can create a very generic object builder function as follows: code19

Deleting elements

To remove specific elements from a data store, the same principles involved in retrieving data apply. In fact, the entire process looks fairly identical to retrieving data, only we call the delete function on the object store object. Needless to say, the transaction used in this action must be readwrite, since readonly limits the object so that no changes can be done to it (including deletion). The first way to delete an object is by passing the object's primary key to the delete function.
This is shown as follows: code20 The difficulty with this first approach is that we need to know the ID of the object. In some cases, this would involve a prior transaction request where we'd retrieve the object based on some easier-to-get data. For example, if we want to delete all tasks with the attribute of complete set to true, we'd need to query the data store for those objects first, then use the IDs associated with each result, and use those values in the transaction where the objects are deleted. A second way to remove data from the data store is to simply call clear() on the object store object. Again, the transaction must be set to readwrite. Doing this will obliterate every last object in the data store, even if they're all of different types, as shown in the following code snippet: code21 Finally, we can delete multiple records using a cursor. This is similar to the way we retrieve objects. As we iterate through the result set using the cursor, we can simply delete the object at whatever position the cursor is currently on. Upon deletion, the reference from the cursor object is set to null, as shown in the following code snippet: code22 This is pretty much the same routine as fetching data. The only detail is that we absolutely need to supply an object's key. The key is the value stored in the object's keyPath attribute, which can be user-provided, or auto-generated. Fortunately for us, the cursor object returns at least two references to this key through the cursor.primaryKey property, as well as through the object's own property that references that value (in our case, we chose the keyPath attribute to be named myKey). The two upgrades we added to this second version of the game are simple, yet they add a lot of value to the game. We added a persistent high score engine, so users can actually keep track of their latest record, and have a sticky record of past successes.
We also added a pretty nifty feature that takes a snapshot of the game board each time the player scores, as well as whenever the player ultimately dies out. Once the player dies, we display all of the snapshots we had collected throughout the game, allowing the player to save those images, and possibly share them with his or her friends.

Saving the high score

The first thing you probably noticed about the previous version of this game was that we had a placeholder for a high score, but that number never changed. Now that we know how to persist data, we can very easily take advantage of this, and persist a player's high score through various games. In a more realistic scenario, we'd probably send the high score data to a backend server, where every time the game is served, we can keep track of the overall high score, and every user playing the game would know about this global score. However, in our situation, the high score is local to a browser only, since none of the persistence APIs (local and session storage, as well as IndexedDB) share data across browsers, or natively with a remote server. Since we want the high score to still exist in a player's browser even a month from now, after the computer has been powered off (along with the browser, of course) multiple times, storing this high score data on sessionStorage would be silly. We could store this single number either in IndexedDB or in localStorage. Since we don't care about any other information associated with this score (such as the date when the score was achieved, and so on), all we're storing really is just the one number. For this reason, I think localStorage is a much better choice, because it can all be done in as few as 5 lines of code. Using IndexedDB would work, but would be like using a cannon to kill a mosquito: code23 This function is pretty straightforward.
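Reconstructed from the description that follows, code23 might look something like the sketch below. The key name high-score comes from the text; the function name and the exact display mechanism are guesses:

```javascript
// A sketch of the high-score logic (browser-only: uses localStorage).
// `element` is the HTML element where the high score is displayed.
function setHighScore(newScore, element) {
  // Multiplying by one converts the stored string into a number;
  // a missing or malformed value produces something falsy (0 or NaN)
  var score = localStorage.getItem('high-score') * 1;

  if (!score || newScore > score) {
    // No valid saved score, or the new score beats it: persist and show it
    localStorage.setItem('high-score', newScore);
    element.textContent = newScore;
  } else {
    // The persisted value is still the record, so display that instead
    element.textContent = score;
  }
}
```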
The two values we pass it are the actual score to set as the new high score (this value will be both saved to localStorage and displayed to the user), and the HTML element where the value will be shown. First, we retrieve the existing value saved under the key high-score, and convert it to a number. We could have used the function parseInt(), but multiplying a string by a number does the same thing with slightly faster execution. Next, we check if that value evaluated to something real. In other words, if there was no high-score value saved in local storage, then the variable score would have been evaluated to undefined multiplied by one, which is not a number. If there is a value saved with the key high-score, but that value is not something that can be converted into a number (such as a string of letters and such), we know that it is not a valid value. In this case, we set the incoming score as the new high score. This would work out in the case where the current persisted value is invalid, or not there (which would be the case the very first time the game loads). Next, once we have a valid score retrieved from local storage, we check if the new value is higher than the old, persisted value. If we have a higher score, we persist that value, and display it to the screen. If the new value is not higher than the existing value, we don't persist anything, but display the saved value, since that is the real high score at the time.

Taking screenshots of the game

This feature is not as trivial as saving the user's high score, but is nonetheless very straightforward to implement. Since we don't care about snapshots that we captured more than one game ago, we'll use sessionStorage to save data from the game, in real time as the player progresses.
Behind the scenes, all we do to take these snapshots is save the game state into sessionStorage, then at the end of the game we retrieve all of the pieces that we'd been saving, and reconstruct the game at those points in time into an invisible canvas. We then use the canvas.toDataURL() function to extract that data as an image: code24 Each time the player eats a fruit, we call this function, passing it a reference to the snake (our hero in this game), and the fruit (the goal of this game) objects. What we do is really quite simple: we create an array representing the state of the snake and of the fruit at each event that we capture. Each element in this array is a string representing the serialized array that keeps track of where the fruit was, and where each body part of the snake was located as well. First, we check if this object currently exists in sessionStorage. The first time we start the game, this object will not yet exist. Thus, we create an object that references those two objects, namely the snake and the fruit object. Next, we stringify the buffers keeping track of the locations of the elements we want to track. Each time we add a new event, we simply append to those two buffers. Of course, if the user closes down the browser, that data will be erased by the browser itself, since that's how sessionStorage works. However, we probably don't want to hold on to data from a previous game, so we also need a way to clear out our own data after each game. code25 Easy enough. All we need to know is the name of the key that we use to hold each element. For our purposes, we simply call the snapshots of the snake eating "eat", and the buffer with the snapshot of the snake dying "die". So before each game starts, we can simply call clearEvent() with those two global key values, and the cache will be cleared anew each time.
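The save-and-clear bookkeeping just described can be sketched in a storage-agnostic way. The name clearEvent follows the text; recordEvent and the storage parameter are invented for illustration (in the game, storage would be window.sessionStorage, and the key would be "eat" or "die"):

```javascript
// Append one serialized game state to a buffer kept in storage.
function recordEvent(storage, key, snakeBuffer, fruitBuffer) {
  var raw = storage.getItem(key);
  var events = raw === null ? [] : JSON.parse(raw);
  // Store a string representing where the snake and the fruit were
  events.push(JSON.stringify({ snake: snakeBuffer, fruit: fruitBuffer }));
  storage.setItem(key, JSON.stringify(events));
}

// Wipe a buffer before each new game starts.
function clearEvent(storage, key) {
  storage.removeItem(key);
}
```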
Next, as each event takes place, we simply call the first function we defined, sending it the appropriate data, as shown in the following code snippet: code26 Finally, whenever we wish to display all of these snapshots, we just need to create a separate canvas with the same dimensions as the one used in the game (so that the buffers we saved don't go out of bounds), and draw the buffers to that canvas. The reason we need a separate canvas element is that we don't want to draw on the same canvas that the player can see. This way, the process of producing these snapshots is more seamless and natural. Once each state is drawn, we can extract each image, resize it, and display it back to the user, as shown in the following code: code27 Observe that we simply draw the points representing the snake and the fruit into that canvas. All of the other points in the canvas are ignored, meaning that we generate a transparent image. If we want the image to have an actual background color (even if it is just white), we can either call fillRect() over the entire canvas surface before drawing the snake and the fruit, or we can traverse each pixel in the pixelData array from the rendering context, and set the alpha channel to 100 percent opaque. Even if we set a color to each pixel by hand but leave off the alpha channel, we'd have colorful pixels that are still 100 percent transparent.

Summary

In this article we took a few extra steps into the fascinating world of 2D rendering using the long-awaited canvas API. We took advantage of the canvas' ability to export images to make our game more engaging, and potentially more social. We also made the game more engaging and social by adding a persistence layer on top of the game, whereby we were able to save a player's high score. Two other new powerful features of HTML5, web messaging and IndexedDB, were explored in this article, although there were no uses for these features in this version of the game.
The web messaging API provides a mechanism for two or more windows to communicate directly through message passing. The exciting bit is that these windows (or HTML contexts) do not need to be in the same domain. Although this could sound like a security issue, there are several systems in place to ensure that cross-document and cross-domain messaging is secure and efficient. The web storage interface brings with it three distinct solutions for long term data persistence on the client. These are session storage, local storage, and IndexedDB. While IndexedDB is a full-blown, built-in, fully transactional and asynchronous NoSQL object store, local and session storage provide a very simple key-value pair storage for simpler needs. All three of these systems introduce great benefits and gains over the traditional cookie-based data storage, including the fact that the total amount of data that can be persisted in the browser is much greater, and none of the data saved in the user's browser ever travels back and forth between the server and the client through HTTP requests.
Installing Drupal

Packt
04 Jul 2013
14 min read
(For more resources related to this topic, see here.)

Assumptions

To get Drupal up and running, you will need all of the following:

A domain
A web host
Access to the web host's filesystem

or

You need a local testing environment, which takes care of the first three things

For building sites, either a web host or a local testing environment will meet your needs. A site built on a web-accessible domain can be shared via the Internet, whereas sites built on local test machines will need to be moved to a web host before they can be used for your course. In these instructions, we are assuming the use of phpMyAdmin, an open source, browser-based tool, for administering your database. A broad range of similar tools exist, and these general instructions can be used with most of these other tools. Information on phpMyAdmin is available at http://www.phpmyadmin.net; information on other browser-based database administration tools can be found at http://en.wikipedia.org/wiki/PhpMyAdmin#Similar_products.

The domain

The domain is the address on the Web from where people can access your site. If you are building this site as part of your work, you will probably be using the domain associated with your school or organization. If you are hosting this on your own server, you can buy a domain for under US $10.00 a year. Enter purchase domain name in Google, and you will have a plethora of options.

The web host

Your web host provides you with the server space on which to run your site. Within many schools, your website will be hosted by your school. In other environments, you might need to arrange for your own web host by using a hosting company. In selecting a web host, you need to be sure that they run software that meets or exceeds the recommended software versions.

Web server

Drupal is developed and tested extensively in an Apache environment. Drupal also runs on other web servers, including Microsoft IIS and Nginx.
PHP version

Drupal 7 will run on PHP 5.2.5 or higher; however, PHP 5.3 is recommended. The Drupal 8 release will require PHP 5.3.10.

MySQL version

Drupal 7 will run on MySQL 5.0.15 or higher, and requires the PHP Data Objects (PDO) extension for PHP. Drupal 7 has also been tested with MariaDB as a drop-in replacement, and Version 5.1.44 or greater is recommended. PDO is a consistent way for programmers to write code that interacts with the database. You can find out more about PDO and how to install it at http://drupal.org/requirements/pdo. Drupal can technically use any database that PDO supports, but MySQL is by far the most tested and best supported. Third-party modules are required to use Drupal with other database systems. You can find these modules listed at http://drupal.org/project/modules/?f[0]=im_vid_3%3A13158&f[1]=drupal_core%3A103&f[2]=bs_project_sandbox%3A0.

FTP and shell access to your web host

Your web host should also offer FTP access to your web server. You will need FTP (or SFTP) access in order to upload the Drupal codebase to your web space. Shell access, or SSH access, is not essential for basic site maintenance. However, SSH access can simplify maintaining your site, so contracting with a web host that provides SSH access is recommended.

A local testing environment

Alternatively, you can set up a local testing environment for your site. This allows you to set up Drupal and other applications on your computer. A local testing environment can be a great tool for learning a piece of software. Fortunately, open source tools can automate the process of setting up your testing environment. PC users can use XAMPP (http://www.apachefriends.org) to set up a local testing environment; Mac users can use MAMP (http://www.mamp.info).
If you are working in a local testing environment set up via XAMPP or MAMP, you have all the pieces you need to start working with Drupal: your domain, your web host, the ability to move files into your web directory, and phpMyAdmin.

Setting up a local environment using MAMP (Mac only)

While Apple's operating system includes most of the programs required to run Drupal, setting up a testing environment can be tricky for inexperienced users. Installing MAMP allows you to create a preconfigured local environment quickly and easily using the following steps:

1. Download the latest version of MAMP from http://www.mamp.info/en/index.html. Note that the paid version of the program will download as well. Feel free to pay for the software if you wish, but the free version will be sufficient for our needs.
2. Navigate to where you downloaded the .zip file, and double-click to unzip it. Once it is unzipped, double-click on the .pkg file that was contained in the .zip file.
3. Follow the directions in the wizard until you reach the Installation Type screen. If you want to use only the free version of the program, click on the Customize button.
4. In the Custom Install on "Macintosh HD" window, uncheck the MAMP PRO option and click on the Install button to install the application.
5. Navigate to /Applications/MAMP and open the MAMP application. The Apache and MySQL servers will start, and the start page will open in your default web browser. If the start page opens, MAMP is installed correctly.

Setting up a local environment using XAMPP (Windows only)

1. Download the latest version of XAMPP from http://www.apachefriends.org/en/xampp-windows.html#641. Download the .zip version.
2. Navigate to where you downloaded the file, right-click, and select Extract All.... Enter C:\ as the destination and click on Extract.
3. Navigate to C:\xampp and double-click the xampp-control application to start the XAMPP Control Panel Application.
4. Click on the Start buttons next to Apache and MySql.
5. Open a web browser, enter http://localhost or http://127.0.0.1 in the address bar, and you should see the XAMPP start page.
6. Navigate to http://localhost/security/index.php, and enter a password for MySQL's root user. Make sure to remember this password or write it down in your notebook because we will need it later.

Configuring your local environment for Drupal

Now that we have the programs required to run Drupal (Apache, MySQL, and PHP), we need to modify some of their settings to match Drupal's system requirements.

PHP configuration

As mentioned before, Drupal 7 requires PHP Version 5.2.5 or higher, and as of the writing of this book MAMP includes Version 5.4.4 (or you can switch to Version 5.2.17) and XAMPP includes Version 5.4.7. PHP configuration settings are found in the program's php.ini file. For MAMP, the php.ini file is located in /Applications/MAMP/bin/php/[php version number]/conf, where the PHP version number is either 5.4.4 or 5.2.17. For XAMPP, the php.ini file is located in C:\xampp\php.

Open the file in a text editor (not a word processor), find the Resource Limits section of the file, and edit the values to match the following:

max_execution_time = 60
max_input_time = 120
memory_limit = 128M
error_reporting = E_ALL & ~E_NOTICE

The last line is optional and is used if you want to display error messages in the browser, instead of only in the logs.

MySQL configuration

As mentioned before, Drupal 7 requires MySQL Version 5.0.15 or higher. MAMP includes Version 5.5.25 and XAMPP includes Version 5.5.27. MySQL's configuration settings are contained in a my.cnf or my.ini file. MAMP does not use a my.cnf file by default, so we need to copy the my-medium.cnf file from the /Applications/MAMP/Library/support-files directory to the /Applications/MAMP/conf folder. After copying the file, rename it to my.cnf. For XAMPP, the my.ini file is located in the C:\xampp\mysql\bin directory.
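After saving your php.ini changes, it is worth confirming that the file really contains the values you expect. A minimal sketch using grep follows; the sample file it creates stands in for your real php.ini so the example is self-contained — on your own machine, point grep at the MAMP or XAMPP path given above instead:

```shell
# Create a small stand-in for php.ini so this example is self-contained;
# on your machine, grep your real php.ini (paths given above) instead.
cat > /tmp/php_ini_sample.ini <<'EOF'
max_execution_time = 60
max_input_time = 120
memory_limit = 128M
EOF

# Print the memory_limit line to confirm the edit took effect.
memory_line=$(grep '^memory_limit' /tmp/php_ini_sample.ini)
echo "$memory_line"
```

If the command prints nothing, the directive is missing or commented out, and Drupal will fall back to whatever default your PHP build ships with.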
Open the my.cnf or my.ini file in a text editor, find the following settings, and edit them to match these values:

# * Fine Tuning
#
key_buffer = 16M
key_buffer_size = 32M
max_allowed_packet = 16M
thread_stack = 512K
thread_cache_size = 8
max_connections = 300
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 15M
query_cache_size = 46M
join_buffer_size = 5M
# Sort buffer size for ORDER BY and GROUP BY queries, data
# gets spun out to disc if it does not fit
sort_buffer_size = 10M
innodb_flush_method = O_DIRECT
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 4M
innodb_additional_mem_pool_size = 20M
# num cpu's/cores *2 is a good base line for innodb_thread_concurrency
innodb_thread_concurrency = 4

After you have made the edits, you have to stop and restart the servers for the changes to take effect. Once you have restarted the servers, we are ready to install Drupal!

The most effective way versus the easy way

There are many different ways to install Drupal. People familiar with working via the command line can install Drupal very quickly, without an FTP client or any web-based tools to create and administer databases. The instructions in this book are geared towards people who would rather not use the command line. These instructions attempt to get you through the technical pieces as painlessly as possible, to speed up the process of building a site that supports teaching and learning.

Installing Drupal - the quick version

The following steps will get you up and running with your Drupal site. This quick-start version gives an overview of the steps required for most setups. A more detailed version follows immediately after this section. Once you are familiar with the setup process, installing a Drupal site takes between five and ten minutes.

1. Download the core Drupal codebase from http://drupal.org/project/drupal.
2. Extract the codebase on your local machine.
3. Using phpMyAdmin, create a database on your server.
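The comment in the configuration above suggests setting innodb_thread_concurrency to twice the number of CPU cores. If you are unsure how many cores your machine has, a small sketch like the following can compute the suggested value (it assumes `getconf` is available, as it is on OS X and most Linux systems; the fallback of 2 cores is an arbitrary placeholder):

```shell
# Detect the number of CPU cores; fall back to 2 if getconf cannot tell us.
cores=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 2)

# Guideline from the config comment above: cores * 2.
concurrency=$((cores * 2))
echo "innodb_thread_concurrency = $concurrency"
```

Copy the printed value into your my.cnf or my.ini in place of the example value of 4.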
Write down the name of the database.
4. Using phpMyAdmin, create a user on the database using the following SQL statement:

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER
ON databasename.*
TO 'username'@'localhost' IDENTIFIED BY 'password';

You will have created the databasename in step 3; write down the username and password values, as you will need them to complete the install.
5. Upload the Drupal codebase to your web folder.
6. Navigate to the URL of your site.
7. Follow the instructions of the install wizard. You will need your databasename (created in step 3), as well as the username and password for your database user (created in step 4).

Installing Drupal - the detailed version

This version goes over each step in more detail and includes screenshots.

1. Download the core Drupal codebase from http://drupal.org/project/drupal.
2. Extract the codebase on your local machine. The Drupal codebase (and all modules and themes) is compressed into a tarball, or a file that is first tarred and then gzipped. Such compressed files end in .tar.gz. On Macs and Linux machines, tar.gz files can be extracted automatically using tools that come preinstalled with the operating system. On PCs, you can use 7-Zip, an open source compression utility available at http://www.7-zip.org.
3. In your web browser, navigate to your system's URL for phpMyAdmin. If you are using a different tool for creating and managing your database, use that tool to create your database and database user.
4. As shown in the following screenshot, create the database on your server. Click on the Create button to create your database. Store your database name in a safe place. You will need to know your database name to complete your installation.
5. To create your database user, click on the SQL tab as shown in the following screenshot.
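If you are comfortable in a terminal, the same database user can be created from the command line instead of phpMyAdmin. The sketch below only builds and prints the GRANT statement shown above — the database name, username, and password are placeholders, and the commented-out final line assumes the MySQL command-line client is installed on your server:

```shell
# Placeholder values -- substitute your own before running this for real.
DB="databasename"
DBUSER="username"
DBPASS="password"

# Compose the same GRANT statement shown above.
SQL="GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER ON ${DB}.* TO '${DBUSER}'@'localhost' IDENTIFIED BY '${DBPASS}';"
echo "$SQL"

# When you are ready, pipe it into the MySQL client as the root user:
# echo "$SQL" | mysql -u root -p
```

Keeping the statement in a variable first lets you read it over before anything touches the database.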
6. In the text area, enter the following SQL statement:

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER
ON databasename.*
TO 'username'@'localhost' IDENTIFIED BY 'password';

For databasename, use the name of the database you created in step 4. Replace the username and password with a username and password of your choice. Once you have entered the correct values, click on the Go button to create the user with rights on your database. Store the username and the password of your database user in a safe place. You will need them to complete the installation.
7. Create and/or locate the directory from where you want Drupal to run. In this example, we are running Drupal from within a folder named drupal7; this means that our site will be available at http://ourdomain.org/drupal7. Running Drupal in a subfolder can make things a little trickier. If at all possible, copy the Drupal files directly into your web root.
8. Using your FTP client, upload the Drupal codebase to your web folder.
9. Navigate to the URL of your site. The automatic install wizard will appear on your screen.
10. Click the Save and continue button with the Standard option selected.
11. Click the Save and continue button with the English (built-in) option selected.
12. To complete the Set up database screen, you will need the database name (created in step 4) and the database username and password (created in step 6). Select MySQL, MariaDB, or equivalent as the Database type, and then enter these values in their respective text boxes, as seen in the following screenshot. Most installs will not need to use any of the settings under ADVANCED OPTIONS. However, if your database is located on a server other than localhost, you will need to adjust the settings as shown in the next screenshot. In most basic hosting setups, your database is accessible at localhost.
To verify the name or location of your database host, you can use phpMyAdmin (as shown in the screenshot under step 4) or contact an administrator for your web server. For the vast majority of installs, none of the advanced options will need to be adjusted.
13. Click on the Save and continue button. You will see a progress meter as Drupal installs itself on your web server.
14. On the Configure site screen, you can enter some general information about your site, and create the first user account. The first user account has full rights over every aspect of your site. When you have finished with the settings on this page, click on the Save and continue button.
15. When the install is finished, you will see the following splash screen:

Additional details on installing Drupal are available in the handbook at http://drupal.org/documentation/install.

Enabling core modules

For a full description of the modules included in Drupal core, see http://drupal.org/node/1283408. To see the modules included in Drupal core, navigate to Modules or admin/modules. As shown in the following screenshot, the Standard installation profile enables the most commonly used core modules. (For clarity, we have divided the screenshot of the single screen in two parts.)

Assigning rights to the authenticated user role

Within your Drupal site, you can use roles to assign specific permissions to groups of users. Anonymous users are all people visiting the site who are not site members; all site members (that is, all people with a username and password) belong to the authenticated user role. To assign rights to specific roles, navigate to People | Permissions | Roles or admin/people/permissions/roles. As shown in the preceding screenshot, click on the edit permissions link for authenticated users.

The Comment module: Authenticated users can see comments and post comments. These rights have the comments going into a moderation queue for approval, as we haven't checked the Skip comment approval box.
The Node module: Authenticated users can see published content.

The Search module: Authenticated users can search the site.

The User module: Authenticated users can change their own username.

Once these options have been selected, click on the Save permissions button at the bottom of the page.

Summary

In this article, we installed the core Drupal codebase, enabled some core modules, and assigned rights to the authenticated user role. We are now ready to start building a feature-rich site that will help support teaching and learning. In the next article, we will take a look around your new site and begin to get familiar with how to make your site do what you want.

Resources for Article:

Further resources on this subject:
Creating Content in Drupal 7 [Article]
Drupal and Ubercart 2.x: Install a Ready-made Drupal Theme [Article]
Introduction to Drupal Web Services [Article]