
How-To Tutorials - Web Development

1797 Articles
Packt
22 Jan 2013
10 min read

Getting Your Course Ready for a New Semester

(For more resources related to this topic, see here.)

Introduction

Getting your course ready for students at the beginning of each semester can be a daunting task. You'll need to verify links to external content, make sure that previous materials have been copied successfully to your new course, and modify the existing assignment dates, among other tasks. You get the point: there are quite a few things you need to take care of before students ever see your course. This article offers recipes for streamlining this process to make setting up your course as stress-free as possible.

The first two recipes deal with getting materials into your course, whether you're copying an entire course from a previous semester or importing a compatible course cartridge provided by a textbook publisher. You may be surprised to know that course cartridges created for other Learning Management Systems (LMSs), such as Blackboard and Moodle, can often be imported without any trouble! Other recipes in the article focus on making quick work of date changes and external link validation. We'll wrap up the article by previewing everything from the student's view.

Please note that the recipes in this article, as well as the rest of the book, are written for Version 10.0 of the Desire2Learn Learning Environment. While many of the recipes are also applicable to earlier versions of the system, you may need to modify the steps to follow along.

Copying course materials from a previous semester

Copying materials, activities, and settings from one course to another can save you a considerable amount of time when preparing for the start of a new semester. The learning environment's Import/Export/Copy Components tool allows you to easily clone an entire course or select just the parts of the original course that you want to use in a new course. In this recipe, we will discuss copying materials from an existing course within the system.
We will use the same tool to import a course cartridge from a publisher in the next recipe.

Getting ready

The Desire2Learn (D2L) Learning Environment is highly customizable, and each organization that uses it can customize many aspects of the user experience. This recipe assumes that your school has allowed the use of the Import/Export/Copy Components tool for your specific role within the system.

In order to complete this recipe, you'll also need access to two courses: an empty course that we will be copying materials to and another one that contains the materials we will be copying. To copy materials from one course to another, your role in both courses needs to allow the use of the Copy Components function. For example, you wouldn't be able to copy quizzes from a class in which you are enrolled as a student into one that you are teaching.

How to do it...

We will be working with two courses in this recipe: a new, empty course and an existing course that contains the materials to be copied. Remember to start by accessing the destination course, that is, the course that you want to copy materials to.

1. Start by accessing the destination course from My Homepage.
2. Click on the Edit Course link in the course navigation bar.
3. Click the Import/Export/Copy Components link under the Site Resources heading.
4. Select the option Copy Components from Another Org Unit and then click on Start.
5. Locate the course from which we will be copying materials by clicking on the Search for offering link. If needed, use the search tool at the top of the list of courses to help locate the course. You can also click on any of the column headers to sort the list of courses based on that field (clicking twice reverses the order).
6. Check the radio button to the left of the course, and click on the Add Selected button. Within a few seconds, the page updates to display all of the available components from the course we just selected.
7. To clone an entire course, check the Select All Components box, and click on the Continue button.
8. Since we chose to clone an entire course, we can continue on our way by clicking on the Finish button.
9. Depending on the amount of materials being copied and the server load, the copy process may take a few seconds to several minutes. When the Done button becomes active, it means that the process has completed. As each tool finishes copying, you'll see its progress indicator change into a green checkmark. Anything that didn't copy successfully will be noted in the summary.

How it works...

We start off by accessing the destination course. The Search for Offering screen displays a list of all of the courses you currently have access to copy from. If you've been teaching for a while, this list may be quite large. The search and filtering tools at the top of the course offering list may be helpful if you are having difficulty finding the correct course from the list.

In this recipe, we copied all the available components from the source offering by choosing the Select All Components option. However, you can copy individual tools or even individual items within those tools by choosing the Select individual items to copy option. If you decide to copy specific components, then you need to select those items on the Choose Components to Copy screen.

There's more...

If you're copying large course files or complex question libraries, there's a chance that your browser will time out before the copy process is complete. If this happens, there are a few things you can do to complete the task:

- Break up the copy process into several smaller jobs. If, for example, you're getting error messages while copying Course Files, try only copying half of the files, then return to the tool and try the second half later.
- The current server load can greatly impact the time it takes to copy components.
You may want to try copying the components during an off-peak time.
- If you experience a browser time-out while copying Course Files, you might want to visit File Manager and look for duplicate or large files in the source course. Deleting unnecessary files can speed up the process significantly.
- Your Desire2Learn administrator has access to other ways of cloning a course or copying files. If you continue to experience difficulty with the tool, talking with your friendly admin would be a great idea!

Importing a publisher's course cartridge

Publishers frequently offer complimentary course cartridges to instructors who adopt their textbooks. The content of these cartridges varies greatly, but can include content and files, assessments, web links, and more. In this recipe, we will walk through the process of importing a course cartridge into an existing Desire2Learn Learning Environment course.

Getting ready

In order to complete this recipe, you'll need either a publisher's cartridge or an export from another Desire2Learn Learning Environment course. These files come in the form of .zip archives. Publishers typically offer different versions of cartridges for several of the major learning management systems. While you may not always find a version of a particular cartridge formatted for the Desire2Learn Learning Environment, you may be surprised to know that versions made for other systems, such as Blackboard 6 and WebCT, will typically work just fine. Check with your system administrator if you have any difficulties importing a cartridge.

You will also need access to the Import/Export/Copy Components tool. You will need to talk with your Desire2Learn system administrator if your role in the current course does not include access to the tool.

How to do it...

1. Start off by accessing the destination course from the My Home page.
2. Click on the Edit Course link in the course's navigation bar.
3. Access the Import/Export/Copy Components tool by clicking on the link under the Site Resources heading.
4. Select the option to Import Components. Then, select the from a File option and choose the cartridge to import by clicking on the Choose File button.
5. Click on the Start button after locating and selecting the file.
6. Click on the Continue button on the Preprocessing screen when it becomes available.
7. Import the entire cartridge's contents by choosing the Select All Components checkbox and then clicking on the Continue button.
8. Click on the Continue button on the Confirm Import Selections screen.
9. The process is complete when all of the progress indicators have changed to green checkmarks. Click on Finish, then Done when the components are finished copying.

How it works...

We start off by accessing the Import/Export/Copy Components tool in the destination course. After selecting the .zip archive to import, the system uploads and pre-processes the archive's manifest file. Depending on the complexity of the cartridge and the size of the archive, this can happen very quickly or it may take quite some time.

After the pre-processing is complete, we choose to import the entire cartridge into the course, just as we did in the previous recipe. While this is often the easiest approach, it is possible to pick and choose individual components (such as Quizzes or Grades) or even individual items (such as specific quizzes or grade items), as we will discuss in the following section.

Once you verify the components to be imported, it's just a matter of waiting for the progress indicators to become green checkmarks. Any item that could not be imported will be displayed on screen at the end of the process. You probably won't run into too many problems unless you are importing extremely large or complex cartridges, but it is always a good idea to verify that everything was successful before clicking on the Done button.

There's more...
In the last two scenarios, we have seen examples of copying and importing entire courses. While this is common at the beginning of the semester, there may be times when you will need only certain parts of another course. Suppose, for example, you only want the question library portion of a publisher's course cartridge. Luckily, this is easily accomplished by selecting individual components on the Choose Components to Copy screen instead of the Select All Components option.

In this example, I have chosen to copy all the available Content items, but only selected Discussions and Dropbox folders. After selecting the components to copy and clicking on the Continue button, I'm prompted to select the individual items I want to copy into my course. Clicking on the Expand All link shows a list of all items, and selecting individual items to be imported is as easy as checking the option next to the item titles. Since I've chosen to also import selected Dropbox folders, I would complete a similar process for selecting those items on the next screen.

I should point out one "gotcha" that frequently causes trouble for new users of the Desire2Learn Learning Environment. Items under the Content heading are frequently linked to uploaded documents or system-generated HTML files, which are stored in the File Manager. Unfortunately, selecting the items under Content doesn't copy these associated files, so you need to manually select these files under Course Files. Since this can be a somewhat tricky task depending on how you've organized your files, you may find it easier to copy everything and delete what you do not need.

See also

The Copying course materials from a previous semester recipe
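Since cartridges and course exports are ordinary .zip archives, you can peek inside one before importing it. The following is a minimal sketch using Python's zipfile module; the file name sample_cartridge.zip and the manifest name imsmanifest.xml (the IMS Content Packaging convention such cartridges typically follow) are illustrative assumptions, not details confirmed by this recipe:

```python
import zipfile

# Build a tiny stand-in cartridge so the sketch is self-contained.
# A real cartridge would come from your publisher or a course export.
with zipfile.ZipFile("sample_cartridge.zip", "w") as z:
    z.writestr("imsmanifest.xml", "<manifest><resources/></manifest>")
    z.writestr("quizzes/quiz1.xml", "<quiz/>")

# Peek inside before importing: list the contents and read the manifest,
# which is what the Preprocessing step parses on the server.
with zipfile.ZipFile("sample_cartridge.zip") as z:
    names = z.namelist()
    manifest = z.read("imsmanifest.xml").decode()

print(names)  # -> ['imsmanifest.xml', 'quizzes/quiz1.xml']
```

A quick look like this can reveal whether the archive contains the question library or content files you expect before you commit to a long import.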

Packt
17 Jan 2013
3 min read

Creating and configuring a basic mobile application

(For more resources related to this topic, see here.)

How to do it...

Follow these steps:

1. Inside your Magento Admin Panel, navigate to Mobile | Manage Apps on the main menu.
2. Click on the Add App button in the top-right corner. The New App screen will be shown.
3. Since we have to create a separate application for each mobile device type, let's choose our first targeted platform. Under the Device Type list, we can choose iPad, iPhone, or Android. For the purpose of this recipe, since the procedure is almost the same for all device types, I will choose Android.
4. After choosing the desired Device Type, click on the Continue button, and click on the General tab under Manage Mobile App.
5. First we have to fill in the box named App Name. Choose an appropriate name for your mobile application and insert it there.
6. Under the Store View list, make sure to choose our earlier defined Store View with updated mobile theme exceptions, our mobile copyright information, and category thumbnail images.
7. Set the Catalog Only App option to No.
8. Click on the Save and Continue Edit button in the top-right corner of the screen. Now you will notice a warning message from Magento that says something like the following:

   Please upload an image for "Logo in Header" field from Design Tab.
   Please upload an image for "Banner on Home Screen" field from Design Tab.

   Don't worry, Magento expects us to add some basic images that we prepared for our mobile app. So let's add them.
9. Click on the Design tab on the left-hand side of the screen.
10. Locate the Logo in Header label and click on the Browse... button on the right to upload the prepared small header logo image. Make sure to upload the image with proper dimensions for the selected device type (iPhone, iPad, or Android).
11. In the same way, click on the Browse... button on the right of the Banner on Home Screen label and choose the appropriate prepared and resized banner image.
12. Now, let's click on the Save and Continue Edit button in order to save our settings.
How it works...

For each device type, we have to create a new Magento Mobile application in our Magento Mobile Admin Panel. Once we select a Device Type and click on the Save button, we cannot change the Device Type for that application later. If we have chosen the wrong Device Type, the only solution is to delete the app and create a new one with the proper settings. The same applies to the Store View chosen when configuring a new app.

There's more...

When our configuration is saved for the first time, an auto-generated App Code will appear on the screen. This is the code that uniquely identifies our application, so that the assigned Device Type is properly recognized by Magento Mobile. For example, defand1 means that this application is the first defined application for the default Store View targeted at Android (def = default store view, and = android).

How to use the mobile application as catalog only

In step 7 we set Catalog Only App to No. Sometimes, however, if we don't need checkout and payment in our mobile app, but want to use it just as a catalog to show products to our mobile customers, we simply set the Catalog Only App option to Yes.

Summary

This is how we create the basic configuration for our mobile app.

Further resources on this subject:

- Integrating Twitter with Magento [Article]
- Integrating Facebook with Magento [Article]
- Getting Started with Magento Development [Article]
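The App Code convention described in the There's more... section (defand1 = default store view + android + sequence number) can be sketched in a few lines. This is a hypothetical reimplementation for illustration only; build_app_code, the three-letter truncation, and the sequence argument are assumptions, not Magento internals:

```python
# Hypothetical sketch of the App Code naming convention: the first three
# letters of the store view, the first three letters of the platform, and
# a per-combination sequence number (e.g. "defand1").
def build_app_code(store_view: str, platform: str, sequence: int) -> str:
    """Compose a code like 'defand1' (illustrative, not Magento source)."""
    return store_view[:3].lower() + platform[:3].lower() + str(sequence)

print(build_app_code("default", "android", 1))  # -> defand1
```

Reading a code backwards the same way (def + and + 1) is how you can tell at a glance which store view and platform an application belongs to.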

Packt
10 Jan 2013
12 min read

Building Your First Application

(For more resources related to this topic, see here.)

Improving the scaffolding application

In this recipe, we discuss how to create your own scaffolding application and add your own configuration file. The scaffolding application is the collection of files that come with any new web2py application.

How to do it...

The scaffolding app includes several files. One of them is models/db.py, which imports four classes from gluon.tools (Mail, Auth, Crud, and Service), and defines the following global objects: db, mail, auth, crud, and service. The scaffolding application also defines tables required by the auth object, such as db.auth_user.

The default scaffolding application is designed to minimize the number of files, not to be modular. In particular, the model file, db.py, contains the configuration, which in a production environment is best kept in separate files. Here, we suggest creating a configuration file, models/0.py, that contains something like the following:

    from gluon.storage import Storage
    settings = Storage()

    settings.production = False
    if settings.production:
        settings.db_uri = 'sqlite://production.sqlite'
        settings.migrate = False
    else:
        settings.db_uri = 'sqlite://development.sqlite'
        settings.migrate = True

    settings.title = request.application
    settings.subtitle = 'write something here'
    settings.author = 'you'
    settings.author_email = '[email protected]'
    settings.keywords = ''
    settings.description = ''
    settings.layout_theme = 'Default'
    settings.security_key = 'a098c897-724b-4e05-b2d8-8ee993385ae6'
    settings.email_server = 'localhost'
    settings.email_sender = '[email protected]'
    settings.email_login = ''
    settings.login_method = 'local'
    settings.login_config = ''

We also modify models/db.py, so that it uses the information from the configuration file, and it defines the auth_user table explicitly (this makes it easier to add custom fields):

    from gluon.tools import *

    db = DAL(settings.db_uri)
    if settings.db_uri.startswith('gae'):
        session.connect(request, response, db=db)

    mail = Mail()        # mailer
    auth = Auth(db)      # authentication/authorization
    crud = Crud(db)      # for CRUD helpers using auth
    service = Service()  # for json, xml, jsonrpc, xmlrpc, amfrpc
    plugins = PluginManager()

    # enable generic views for all actions for testing purposes
    response.generic_patterns = ['*']

    mail.settings.server = settings.email_server
    mail.settings.sender = settings.email_sender
    mail.settings.login = settings.email_login

    auth.settings.hmac_key = settings.security_key
    # add any extra fields you may want to add to auth_user
    auth.settings.extra_fields['auth_user'] = []
    # use username as well as email
    auth.define_tables(migrate=settings.migrate, username=True)
    auth.settings.mailer = mail
    auth.settings.registration_requires_verification = False
    auth.settings.registration_requires_approval = False
    auth.messages.verify_email = 'Click on the link http://' + \
        request.env.http_host + \
        URL('default', 'user', args=['verify_email']) + \
        '/%(key)s to verify your email'
    auth.settings.reset_password_requires_verification = True
    auth.messages.reset_password = 'Click on the link http://' + \
        request.env.http_host + \
        URL('default', 'user', args=['reset_password']) + \
        '/%(key)s to reset your password'

    if settings.login_method == 'janrain':
        from gluon.contrib.login_methods.rpx_account import RPXAccount
        auth.settings.actions_disabled = ['register',
            'change_password', 'request_reset_password']
        auth.settings.login_form = RPXAccount(request,
            api_key=settings.login_config.split(':')[-1],
            domain=settings.login_config.split(':')[0],
            url="http://%s/%s/default/user/login" % (request.env.http_host,
                                                     request.application))

Normally, after a web2py installation or upgrade, the welcome application is tar-gzipped into welcome.w2p and used as the scaffolding application. You can create your own scaffolding application from an existing application using the following commands from a bash shell:

    cd applications/app
    tar zcvf ../../welcome.w2p *

There's more...
The web2py wizard uses a similar approach, and creates a similar 0.py configuration file. You can add more settings to the 0.py file as needed.

The 0.py file may contain sensitive information, such as the security_key used to encrypt passwords, the email_login containing the password of your SMTP account, and the login_config with your Janrain password (http://www.janrain.com/). You may want to write this sensitive information in a read-only file outside the web2py tree, and read it from your 0.py instead of hardcoding it. In this way, if you choose to commit your application to a version-control system, you will not be committing the sensitive information.

The scaffolding application includes other files that you may want to customize, including views/layout.html and views/default/users.html. Some of them are the subject of upcoming recipes.

Building a simple contacts application

When you start designing a new web2py application, you go through three phases that are characterized by looking for the answer to the following three questions:

- What data should the application store?
- Which pages should be presented to the visitors?
- How should the page content, for each page, be presented?

The answers to these three questions are implemented in the models, the controllers, and the views respectively. It is important for a good application design to try answering these questions exactly in this order, and as accurately as possible. The answers can later be revised, and more tables, more pages, and more bells and whistles can be added in an iterative fashion. A good web2py application is designed in such a way that you can change the table definitions (add and remove fields), add pages, and change page views, without breaking the application.

A distinctive feature of web2py is that everything has a default. This means you can work on the first of those three steps without the need to write code for the second and third step.
Similarly, you can work on the second step without the need to code for the third. At each step, you will be able to immediately see the result of your work, thanks to appadmin (the default database administrative interface) and generic views (every action has a view by default, until you write a custom one).

Here we consider, as a first example, an application to manage our business contacts, a CRM. We will call it Contacts. The application needs to maintain a list of companies, and a list of people who work at those companies.

How to do it...

First of all we create the model. In this step we identify which tables are needed and their fields. For each field, we determine whether they:

- Must contain unique values (unique=True)
- Must not contain empty values (notnull=True)
- Are references (contain the ID of a record in another table)
- Are used to represent a record (format attribute)

From now on, we will assume we are working with a copy of the default scaffolding application, and we only describe the code that needs to be added or replaced. In particular, we will assume the default views/layout.html and models/db.py. Here is a possible model representing the data we need to store, in models/db_contacts.py:

    # in file: models/db_contacts.py
    db.define_table('company',
        Field('name', notnull=True, unique=True),
        format='%(name)s')

    db.define_table('contact',
        Field('name', notnull=True),
        Field('company', 'reference company'),
        Field('picture', 'upload'),
        Field('email', requires=IS_EMAIL()),
        Field('phone_number', requires=IS_MATCH('[\d\-\(\) ]+')),
        Field('address'),
        format='%(name)s')

    db.define_table('log',
        Field('body', 'text', notnull=True),
        Field('posted_on', 'datetime'),
        Field('contact', 'reference contact'))

Of course, a more complex data representation is possible. You may want to allow, for example, multiple users for the system, allow the same person to work for multiple companies, and keep track of changes in time. Here, we will keep it simple. The name of this file is important.
In particular, models are executed in alphabetical order, and this one must follow db.py. After this file has been created, you can try it by visiting http://127.0.0.1:8000/contacts/appadmin to access the web2py database administrative interface, appadmin. Without any controller or view, it provides a way to insert, select, update, and delete records.

Now we are ready to build the controller. We need to identify which pages are required by the application. This depends on the required workflow. At a minimum, we need the following pages:

- An index page (the home page)
- A page to list all companies
- A page that lists all contacts for one selected company
- A page to create a company
- A page to edit/delete a company
- A page to create a contact
- A page to edit/delete a contact
- A page that allows us to read the information about one contact and the communication logs, as well as add a new communication log

Such pages can be implemented as follows:

    # in file: controllers/default.py
    def index():
        return locals()

    def companies():
        companies = db(db.company).select(orderby=db.company.name)
        return locals()

    def contacts():
        company = db.company(request.args(0)) or redirect(URL('companies'))
        contacts = db(db.contact.company==company.id).select(
            orderby=db.contact.name)
        return locals()

    @auth.requires_login()
    def company_create():
        form = crud.create(db.company, next='companies')
        return locals()

    @auth.requires_login()
    def company_edit():
        company = db.company(request.args(0)) or redirect(URL('companies'))
        form = crud.update(db.company, company, next='companies')
        return locals()

    @auth.requires_login()
    def contact_create():
        db.contact.company.default = request.args(0)
        form = crud.create(db.contact, next='companies')
        return locals()

    @auth.requires_login()
    def contact_edit():
        contact = db.contact(request.args(0)) or redirect(URL('companies'))
        form = crud.update(db.contact, contact, next='companies')
        return locals()

    @auth.requires_login()
    def contact_logs():
        contact = db.contact(request.args(0)) or redirect(URL('companies'))
        db.log.contact.default = contact.id
        db.log.contact.readable = False
        db.log.contact.writable = False
        db.log.posted_on.default = request.now
        db.log.posted_on.readable = False
        db.log.posted_on.writable = False
        form = crud.create(db.log)
        logs = db(db.log.contact==contact.id).select(
            orderby=db.log.posted_on)
        return locals()

    def download():
        return response.download(request, db)

    def user():
        return dict(form=auth())

Make sure that you do not delete the existing user, download, and service functions in the scaffolding default.py.

Notice how all pages are built using the same ingredients: select queries and crud forms. You rarely need anything else. Also notice the following:

- Some pages require a request.args(0) argument (a company ID for contacts and company_edit, and a contact ID for contact_edit and contact_logs).
- All selects have an orderby argument.
- All crud forms have a next argument that determines the redirection after form submission.
- All actions return locals(), which is a Python dictionary containing the local variables defined in the function. This is a shortcut. It is of course possible to return a dictionary with any subset of locals().
- contact_create sets a default value for the new contact's company to the value passed as args(0).
- contact_logs retrieves past logs after processing crud.create for a new log entry. This avoids unnecessarily reloading the page when a new log is inserted.
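The return locals() shortcut used by all of the actions above can be seen in plain Python, outside web2py (the page function and its variables here are illustrative, not part of the recipe's code):

```python
# Minimal illustration of the `return locals()` shortcut: the caller
# receives every local variable of the function, keyed by name.
def page():
    title = 'Companies'
    rows = ['Acme', 'Initech']
    return locals()  # same as {'title': title, 'rows': rows}

context = page()
print(context['title'])  # -> Companies
print(sorted(context))   # -> ['rows', 'title']
```

Returning an explicit dictionary instead, such as return {'title': title}, exposes only a chosen subset, which is the alternative the recipe mentions.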
At this point our application is fully functional, although the look-and-feel and navigation can be improved:

- You can create a new company at http://127.0.0.1:8000/contacts/default/company_create
- You can list all companies at http://127.0.0.1:8000/contacts/default/companies
- You can edit company #1 at http://127.0.0.1:8000/contacts/default/company_edit/1
- You can create a new contact at http://127.0.0.1:8000/contacts/default/contact_create
- You can list all contacts for company #1 at http://127.0.0.1:8000/contacts/default/contacts/1
- You can edit contact #1 at http://127.0.0.1:8000/contacts/default/contact_edit/1
- And you can access the communication log for contact #1 at http://127.0.0.1:8000/contacts/default/contact_logs/1

You should also edit the models/menu.py file, and replace the content with the following:

    response.menu = [['Companies', False, URL('default', 'companies')]]

The application now works, but we can improve it by designing a better look and feel for the actions. That's done in the views.
Create and edit the file views/default/companies.html:

    {{extend 'layout.html'}}
    <h2>Companies</h2>
    <table>
    {{for company in companies:}}
        <tr>
            <td>{{=A(company.name, _href=URL('contacts', args=company.id))}}</td>
            <td>{{=A('edit', _href=URL('company_edit', args=company.id))}}</td>
        </tr>
    {{pass}}
        <tr>
            <td>{{=A('add company', _href=URL('company_create'))}}</td>
        </tr>
    </table>

Create and edit the file views/default/contacts.html:

    {{extend 'layout.html'}}
    <h2>Contacts at {{=company.name}}</h2>
    <table>
    {{for contact in contacts:}}
        <tr>
            <td>{{=A(contact.name, _href=URL('contact_logs', args=contact.id))}}</td>
            <td>{{=A('edit', _href=URL('contact_edit', args=contact.id))}}</td>
        </tr>
    {{pass}}
        <tr>
            <td>{{=A('add contact', _href=URL('contact_create', args=company.id))}}</td>
        </tr>
    </table>

Create and edit the file views/default/company_create.html:

    {{extend 'layout.html'}}
    <h2>New company</h2>
    {{=form}}

Create and edit the file views/default/contact_create.html:

    {{extend 'layout.html'}}
    <h2>New contact</h2>
    {{=form}}

Create and edit the file views/default/company_edit.html:

    {{extend 'layout.html'}}
    <h2>Edit company</h2>
    {{=form}}

Create and edit the file views/default/contact_edit.html:

    {{extend 'layout.html'}}
    <h2>Edit contact</h2>
    {{=form}}

Create and edit the file views/default/contact_logs.html:

    {{extend 'layout.html'}}
    <h2>Logs for contact {{=contact.name}}</h2>
    <table>
    {{for log in logs:}}
        <tr>
            <td>{{=log.posted_on}}</td>
            <td>{{=MARKMIN(log.body)}}</td>
        </tr>
    {{pass}}
        <tr>
            <td></td>
            <td>{{=form}}</td>
        </tr>
    </table>

Notice that in the last view, we used the function MARKMIN to render the content of db.log.body, using the MARKMIN markup. This allows embedding links, images, anchors, font formatting information, and tables in the logs. For details about the MARKMIN syntax, see http://web2py.com/examples/static/markmin.html.
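As an illustration, a log body written in MARKMIN markup might look like the following. This is a hypothetical sample (the text and URL are invented); consult the MARKMIN page linked above for the full syntax:

```
# Call with Acme Corp
**Spoke** with the ''purchasing'' manager about renewals.
Follow-up material: [[proposal http://example.com/proposal.pdf]]
```

Rendered through MARKMIN in the contact_logs view, the first line becomes a heading, the bold and italic markers become formatting, and the double-bracket entry becomes a link.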

Packt
03 Jan 2013
6 min read

Adding Geographic Capabilities via the GeoPlaces Theme

(For more resources related to this topic, see here.)

Introducing the GeoPlaces theme

The GeoPlaces theme (http://templatic.com/app-themes/geo-places-city-directory-WordPress-theme/), by Templatic (http://templatic.com), is a cool theme that allows you to create and manage a city directory website. For a live demo of the site, visit http://templatic.com/demos/?theme=geoplaces4.

An overview of the GeoPlaces theme

The GeoPlaces theme is created as an out-of-the-box solution for city directory websites. It allows end users to submit places and events to your site. Best of all, you can even monetize the site by charging a listing fee. Some of the powerful features include the following:

- Widgetized homepage
- Menu widgets
- Featured events and listings
- Custom fields
- Payment options
- Price packages page view

Let's now move on to setting up the theme.

Setting up the GeoPlaces theme

We'll start with the installation of the GeoPlaces theme.

Installation

The steps for installing the GeoPlaces theme are as follows:

1. Purchase and download your theme (in a zip folder) from Templatic.
2. Unzip the zipped file and place the GeoPlaces folder in your wp-content/themes folder.
3. Log in to your WordPress site and activate the theme. Alternatively, you can upload the theme's zip folder via the admin interface, by going to Appearance | Install Themes | Upload.

If everything goes well, you should see a new entry on the navigation bar of your admin page. If you do, you are ready to move on to the next step.

Populating the site with sample data

After a successful installation of the theme, you can play around with the site by creating sample data. The GeoPlaces theme comes with a nifty function that allows you to populate your site with sample data.
Navigate to wp-admin/themes.php and you should see the following: Notice the message box asking if you want to install and populate your site with sample data. Click on the large green button and sample data will automatically be populated. Once done, you should see the following:

You can choose to delete the sample data should you want to. But for now, let's leave the sample data in place for browsing purposes.

Playing with sample data

Now that we have populated the site with sample data, it's time to explore it.

Checking out cities

With our site populated with sample data, let's take our WordPress site for a spin:

1. First, navigate to your homepage; you should be greeted by a splash page that looks as follows:
2. Now select New York and you will be taken to a page with a Google Map that looks like the following screenshot:

GeoPlaces leverages the Google Maps API to provide geographic capabilities to the theme. Feel free to click on the map and other places, such as Madison Square Park. If you click on Madison Square Park you will see a page that describes it. More importantly, on the right-hand side of the page, you should see something like the following:

Notice the Address row? The address is derived from the Google Maps API. How does it work? Let's try adding a place to find out.

Adding a place from the frontend

Here's how we can add a "place" from the frontend of the site:

1. To add a place, you must first sign in. Sign in from the current page by clicking on the Sign In link found at the top right-hand side of the page.
2. Sign in with your credentials. Notice that you remain on the frontend of the site as opposed to the administration side.
3. Now click on the Add place link found on the upper right-hand side of the webpage. You should see the following:

You will be greeted by a long webpage that requires you to fill up the various fields that are required for the listing page.
You should take note of this, as shown in the following screenshot: Try typing Little Italy in the Address field and click on the Set address on map button. You should notice that the map is now marked, and the Address Latitude and Address Longitude fields are now filled up for you. Your screen for this part of the webpage should now look as follows: The geographically related fields are now filled up. Continue to fill up the other fields, such as the description of this listing, the type of Google map view, special offers, e-mail address, website, and other social media related fields. With these steps, you should have a new place listing in no time.

Adding a place from the admin side

What you have just done is added a place listing from the frontend, as an end user (although you are logged in as admin). So, how do you add a place listing from the admin side of your WordPress site? Firstly, you need to log in to your site if you have not yet done so. Next, navigate to your admin homepage, and go to Places | Add a Place. You will see a page that resembles the Create a New Post page. Scroll down further and you should notice that the forms filled here are exactly the same as those you see on the frontend of the site. For example, fields for the geographic information are also found on this page:

Adding a city from the admin side

To add a city, all you have to do is log in to the admin side of the site via /wp-admin. Once logged in, go to GeoPlaces | Manage City and click on Add City. From there you'll be able to fill up the details of the city.

Summary

We saw how to manage our WordPress site, covering topics such as populating the site with sample data, adding place listings, and adding a city. You should have a general idea of the geographic capabilities of the theme and how to add a new place listing. Notice how the theme takes the heavy lifting away by providing built-in geographic functionalities through the Google Maps API.
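That heavy lifting — turning a typed address into coordinates before filling the latitude/longitude fields — boils down to parsing a geocoder reply. The sketch below is a hedged illustration only: the response shown is a hypothetical, trimmed stand-in for what a service such as the Google Geocoding API returns, and `extract_coordinates` is not part of the theme.

```python
import json

# Hypothetical, trimmed geocoder reply for "Little Italy" (stand-in data,
# shaped like a Google Geocoding API JSON response).
sample_response = json.dumps({
    "status": "OK",
    "results": [{
        "formatted_address": "Little Italy, New York, NY, USA",
        "geometry": {"location": {"lat": 40.7191, "lng": -73.9973}},
    }],
})

def extract_coordinates(raw):
    """Pull (lat, lng) out of a geocoder reply, as the theme's
    'Set address on map' button does before filling the hidden fields."""
    data = json.loads(raw)
    if data["status"] != "OK" or not data["results"]:
        return None
    loc = data["results"][0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

print(extract_coordinates(sample_response))  # (40.7191, -73.9973)
```

Once the coordinates come back, the theme simply writes them into the Address Latitude and Address Longitude fields and drops a marker on the map.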
We also understood how themes and plugins can be used to extend WordPress.

Resources for Article: WordPress Mobile Applications with PhoneGap:

Increasing Traffic to Your Blog with WordPress MU 2.8: Part 2 [Article]
WordPress 3: Designing your Blog [Article]
Adapting to User Devices Using Mobile Web Technology [Article]


Components - Reusing Rules, Conditions, and Actions

Packt
03 Jan 2013
4 min read
(For more resources related to this topic, see here.)

Getting ready

Enable the Rules and Rules UI modules on your site.

How to do it...

1. Go to Configuration | Workflow | Rules | Components.
2. Add a new component and set the plugin to Condition set (AND).
3. Enter a name for the component and add a parameter Entity | Node.
4. Add a Condition, Data comparison; set the value to the author of the node, set OPERATOR to equals, enter 1 in the Data value field, and tick Negate.
5. Add an OR group by clicking on Add or, as shown in the following screenshot:
6. Add a Condition, Node | Content is of type, and set it to Article.
7. Add a Condition, Entity | Entity has field; set Entity to node, and select the field, field_image, as shown in the following screenshot:
8. Organize the Conditions so that the last two Conditions are in the OR group we created before.
9. Create a new rule configuration and set the Event to Comment | After saving a new comment.
10. Add a new Condition and select the component that we created. An example is shown in the following screenshot:
11. Select comment:node as the parameter.
12. Add a new Action, System | Show a message on the site, and configure the message.

How it works...

Components require parameters to be specified, which are used as placeholders for the objects we want to execute a rule configuration on. Depending on what our goal is, we can select from the core Rules data types, entities, or lists. In this example, we've added a Node parameter to the component, because we wanted to see who the node's author is, whether it's an article, and whether it has an image field. Then in our Condition, we've provided the actual object on which we've evaluated the Condition. If you're familiar with programming, then you'll see that components are just like functions; they expect parameters and can be re-used in other scenarios.

There's more...

The main benefit of using Rules components is that we can re-use complex Conditions, Actions, and other rule configurations.
That means that we don't have to configure the same settings over and over again. Instead we can create components and use them in our rule configurations. Other benefits also include exportability: components can be exported individually, which is a very useful addition when using configuration management, such as Features. Components can also be executed on the UI, which is very useful for debugging and can also save a lot of development time.

Other component types

Apart from Condition sets, there are a few other component types we can use. They are as follows:

Action set

As the name suggests, this is a set of Actions, executed one after the other. It can be useful when we have a certain chain of Actions that we want to execute in various scenarios.

Rule

We can also create a rule configuration as a component to be used in other rule configurations. Think about a scenario when you want to perform an action on a list of node references (which would require a looped Action) but only if those nodes were created before 2012. While it is not possible to create a Condition within an Action, we can create a Rule component so we can add a Condition and an Action within the component itself and then use it as the Action of the other rule configuration.

Rule set

Rule sets are a set of Rules, executed one after the other. They can be useful when we want to execute a chain of Rules when an event occurs.

Parameters and provided variables

Condition sets require parameters, which are input data for the component. These are the variables that need to be specified so that the Condition can evaluate to FALSE or TRUE. Action sets, Rules, and Rule sets can provide variables. That means they can return data after the action is executed.

Summary

This article explained the benefits of using Rules components by creating a Condition that can be re-used in other rule configurations.
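The "components are just like functions" analogy can be made concrete in ordinary code. The sketch below is a hypothetical Python analogy, not Drupal code: each Condition becomes a function taking the node parameter, and the component combines them exactly as configured in the recipe — author is not user 1, AND (content type is Article OR the node has an image field).

```python
# Hypothetical stand-ins for the recipe's Conditions; a "node" is just a dict here.
def author_is_not_user_one(node):
    # "Data comparison" on the author with Negate ticked
    return node["author_id"] != 1

def content_is_article(node):
    # "Content is of type" set to Article
    return node["type"] == "article"

def has_image_field(node):
    # "Entity has field" checking for field_image
    return "field_image" in node["fields"]

def condition_set(node):
    # Condition set (AND) containing an OR group, as in the recipe
    return author_is_not_user_one(node) and (
        content_is_article(node) or has_image_field(node)
    )

# Re-used like a component: pass in whatever "node" parameter the caller provides
print(condition_set({"author_id": 2, "type": "article", "fields": []}))  # True
print(condition_set({"author_id": 1, "type": "article", "fields": []}))  # False
```

Just as `condition_set` can be called from anywhere with any node, a Rules component can be attached to any rule configuration that supplies a matching parameter.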
Resources for Article: Further resources on this subject:

Drupal 7 Preview [Article]
Creating Content in Drupal 7 [Article]
Drupal FAQs [Article]


augmentedTi: The application architecture

Packt
31 Dec 2012
5 min read
(For more resources related to this topic, see here.)

An overview

The augmentedTi application has been developed to demonstrate Augmented Reality in action; it has been coded using the Appcelerator Titanium Framework. This framework enables a "code once, adapt everywhere" approach to mobile application development. It uses the commonJS architecture at its core and has a set of best practices, which can be read at https://wiki.appcelerator.org/display/guides/Best+Practices. The application follows these guidelines and also implements an MVC-style architecture, using a controller, and an event-driven flow control methodology incorporating localization.

At the current time, trying to implement a CSS-applied look and feel using the framework's JSS method is not viable. The application gets around the issue of hard coding fonts, colors, and images into the application by using two files—ui/layout.js and ui/images.js. These files contain the look, feel, and images applied throughout the application, and are standalone modules, enabling them to be included in any other modules.

The application

As you start to explore the application you will see that the main bootstrap file app.js only contains the require of the controller file and the call to the initial function startApp():

    var ctl = require('/control/controller');
    ctl.startApp();

To implement the methodology of separating the code into distinct commonJS modules, the following file structure is applied:

    i18n/en/strings.xml
    resources/app.js
    resources/control/controller.js
    resources/images
    resources/services/googleFeed.js
    resources/services/location.js
    resources/tools/augmentedReality.js
    resources/tools/common.js
    resources/tools/iosBackgroundService.js
    resources/tools/persHandler.js
    ui/images.js
    ui/layout.js
    common/activity.js
    common/titleBar.js
    screens/ARScreen.js
    screens/homeScreen.js

The main file which controls the application is controller.js. When an activity is completed, the control is returned here and the next activity is processed.
This has an implication with enabling the program flow—application-level event listeners have to be added, using up resources. The application gets around this by creating a single custom event listener, which then calls a function to handle the flow. The fire event is handled within the tools/common.js file by providing a single function to be called, passing the required type and any other parameters:

    Ti.App.addEventListener('GLOBALLISTENER', function(inParam) {
        var gblParams = {};
        for (var paramKeyIn in inParam) {
            if (inParam[paramKeyIn]) {
                gblParams[paramKeyIn] = inParam[paramKeyIn];
            }
        }
        processGlobalListener(gblParams);
    });

    function launchEvent(inParam) {
        var evtParams = {};
        for (var paramKeyIn in inParam) {
            if (inParam[paramKeyIn]) {
                evtParams[paramKeyIn] = inParam[paramKeyIn];
            }
        }
        Ti.App.fireEvent('GLOBALLISTENER', evtParams);
    }

    common.launchEvent({
        TYPE : 'ERROR',
        MESS : 'E0004'
    });

Throughout the application's commonJS modules, a standard approach is taken, defining all functions and variables as local and exporting only those required at the end of the file:

    exports.startApp = startApp;

In keeping with the commonJS model, the modules are only required when and where they are needed. No application-level global variables are used and each part of the application is split into its own module or set of modules. Within the application where data has to be stored, persistent data is used. It could have been passed around, but the amount of data is small and required across the whole application. The persistent data is controlled through the tools/persHandler.js module, which contains two functions—one for setting and one for getting the data. These functions accept the parameter of the record to update or return.
    var persNames = {
        lon     : 'longitude',
        lat     : 'latitude',
        width   : 'screenWidth',
        height  : 'screenHeight',
        bearing : 'bearing'
    };

    function putPersData(inParam) {
        Ti.App.Properties.setString(persNames[inParam.type], inParam.data);
        return;
    }

    persHandler.putPersData({
        type : 'width',
        data : Ti.Platform.displayCaps.platformWidth
    });

The application does not use the in-built tab navigation; instead it defines a custom title bar and onscreen buttons. This enables it to work across all platforms with the same look and feel. It also uses a custom activity indicator.

Augmented Reality

This section explains what Augmented Reality is and the solution provided within the augmentedTi application. With all technology, something and somebody has to be first. Mobile computing, and especially smart phones, are still in their infancy. This results in new technologies, applications, and solutions being devised and applied almost daily. Augmented Reality is only now becoming viable, as the devices, technology, and coding solutions are more advanced. In this section a coding solution is given, which shows how to implement location-based Augmented Reality. It should work on most smart phones, and can be coded in most frameworks and native code. The code examples given use the Appcelerator Titanium Framework only. No additional modules or plugins are required.

Summary

This article dived into the open source code base of the augmentedTi example application, explaining how it has been implemented.

Resources for Article: Further resources on this subject:

iPhone: Customizing our Icon, Navigation Bar, and Tab Bar [Article]
Animating Properties and Tweening Pages in Android 3.0 [Article]
Flash Development for Android: Visual Input via Camera [Article]

Advanced Indexing and Array Concepts

Packt
26 Dec 2012
6 min read
(For more resources related to this topic, see here.)

Installing SciPy

SciPy is the scientific Python library and is closely related to NumPy. In fact, SciPy and NumPy used to be one and the same project many years ago. In this recipe, we will install SciPy.

How to do it...

In this recipe, we will go through the steps for installing SciPy.

Installing from source: If you have Git installed, you can clone the SciPy repository using the following command:

    git clone https://github.com/scipy/scipy.git
    python setup.py build
    python setup.py install --user

This installs to your home directory and requires Python 2.6 or higher. Before building, you will also need to install the following packages on which SciPy depends:

BLAS and LAPACK libraries
C and Fortran compilers

There is a chance that you have already installed this software as a part of the NumPy installation.

Installing SciPy on Linux: Most Linux distributions have SciPy packages. We will go through the necessary steps for some of the popular Linux distributions:

In order to install SciPy on Red Hat, Fedora, and CentOS, run the following instruction from the command line: yum install python-scipy
In order to install SciPy on Mandriva, run the following command line instruction: urpmi python-scipy
In order to install SciPy on Gentoo, run the following command line instruction: sudo emerge scipy
On Debian or Ubuntu, we need to type the following: sudo apt-get install python-scipy

Installing SciPy on Mac OS X: Apple Developer Tools (XCode) is required, because it contains the BLAS and LAPACK libraries. It can be found either in the App Store, or in the installation DVD that came with your Mac, or you can get the latest version from Apple Developer's connection at https://developer.apple.com/technologies/tools/. Make sure that everything, including all the optional packages, is installed. You probably already have a Fortran compiler installed for NumPy.
The binaries for gfortran can be found at http://r.research.att.com/tools/.

Installing SciPy using easy_install or pip: Install with either of the following two commands:

    sudo pip install scipy
    easy_install scipy

Installing on Windows: If you have Python installed already, the preferred method is to download and use the binary distribution. Alternatively, you may want to install the Enthought Python distribution, which comes with other scientific Python software packages.

Check your installation: Check the SciPy installation with the following code:

    import scipy
    print scipy.__version__
    print scipy.__file__

This should print the correct SciPy version.

How it works...

Most package managers will take care of any dependencies for you. However, in some cases, you will need to install them manually. Unfortunately, this is beyond the scope of this book. If you run into problems, you can ask for help at:

The #scipy IRC channel of freenode, or
The SciPy mailing lists at http://www.scipy.org/Mailing_Lists

Installing PIL

PIL, the Python Imaging Library, is a prerequisite for the image processing recipes in this article.

How to do it...

Let's see how to install PIL.

Installing PIL on Windows: Install using the Windows executable from the PIL website http://www.pythonware.com/products/pil/.

Installing on Debian or Ubuntu: On Debian or Ubuntu, install PIL using the following command:

    sudo apt-get install python-imaging

Installing with easy_install or pip: At the time of writing this book, it appeared that the package managers of Red Hat, Fedora, and CentOS did not have direct support for PIL. Therefore, please follow this step if you are using one of these Linux distributions. Install with either of the following commands:

    easy_install PIL
    sudo pip install PIL

Resizing images

In this recipe, we will load a sample image of Lena, which is available in the SciPy distribution, into an array.
This article is not about image manipulation, by the way; we will just use the image data as an input. Lena Soderberg appeared in a 1972 Playboy magazine. For historical reasons, one of those images is often used in the field of image processing. Don't worry; the picture in question is completely safe for work. We will resize the image using the repeat function. This function repeats an array, which in practice means resizing the image by a certain factor.

Getting ready

A prerequisite for this recipe is to have SciPy, Matplotlib, and PIL installed.

How to do it...

1. Load the Lena image into an array. SciPy has a lena function, which can load the image into a NumPy array:

    lena = scipy.misc.lena()

Some refactoring has occurred since version 0.10, so if you are using an older version, the correct code is:

    lena = scipy.lena()

2. Check the shape. Check the shape of the Lena array using the assert_equal function from the numpy.testing package—this is an optional sanity check test:

    numpy.testing.assert_equal((LENA_X, LENA_Y), lena.shape)

3. Resize the Lena array. Resize the Lena array with the repeat function. We give this function a resize factor in the x and y direction:

    resized = lena.repeat(yfactor, axis=0).repeat(xfactor, axis=1)

4. Plot the arrays. We will plot the Lena image and the resized image in two subplots that are a part of the same grid. Plot the Lena array in a subplot:

    matplotlib.pyplot.subplot(211)
    matplotlib.pyplot.imshow(lena)

The Matplotlib subplot function creates a subplot. This function accepts a 3-digit integer as the parameter, where the first digit is the number of rows, the second digit is the number of columns, and the last digit is the index of the subplot starting with 1. The imshow function shows images. Finally, the show function displays the end result.

5. Plot the resized array in another subplot and display it.
The index is now 2:

    matplotlib.pyplot.subplot(212)
    matplotlib.pyplot.imshow(resized)
    matplotlib.pyplot.show()

The following screenshot is the result with the original image (first) and the resized image (second):

The following is the complete code for this recipe:

    import scipy.misc
    import sys
    import matplotlib.pyplot
    import numpy.testing

    # This script resizes the Lena image from Scipy.
    if(len(sys.argv) != 3):
        print "Usage python %s yfactor xfactor" % (sys.argv[0])
        sys.exit()

    # Loads the Lena image into an array
    lena = scipy.misc.lena()

    # Lena's dimensions
    LENA_X = 512
    LENA_Y = 512

    # Check the shape of the Lena array
    numpy.testing.assert_equal((LENA_X, LENA_Y), lena.shape)

    # Get the resize factors
    yfactor = float(sys.argv[1])
    xfactor = float(sys.argv[2])

    # Resize the Lena array
    resized = lena.repeat(yfactor, axis=0).repeat(xfactor, axis=1)

    # Check the shape of the resized array
    numpy.testing.assert_equal((yfactor * LENA_Y, xfactor * LENA_Y), resized.shape)

    # Plot the Lena array
    matplotlib.pyplot.subplot(211)
    matplotlib.pyplot.imshow(lena)

    # Plot the resized array
    matplotlib.pyplot.subplot(212)
    matplotlib.pyplot.imshow(resized)
    matplotlib.pyplot.show()

How it works...

The repeat function repeats arrays, which, in this case, resulted in changing the size of the original image. The Matplotlib subplot function creates a subplot. The imshow function shows images. Finally, the show function displays the end result.

See also

The Installing SciPy recipe
The Installing PIL recipe
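The repeat-based resize at the heart of this recipe works on any 2-D array, so it can be tried without the Lena image or Matplotlib at all. A minimal sketch (Python 3 syntax; NumPy assumed installed):

```python
import numpy as np

# Any small 2-D array stands in for the image; repeat() along each axis
# scales it by integer factors, which is exactly what the recipe's resize does.
image = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
yfactor, xfactor = 2, 3
resized = image.repeat(yfactor, axis=0).repeat(xfactor, axis=1)

print(image.shape, resized.shape)  # (2, 3) (4, 9)
print(resized[0])                  # [0 0 0 1 1 1 2 2 2]
```

Each pixel value is duplicated yfactor times down and xfactor times across, so the output shape is (rows * yfactor, cols * xfactor) — the same shape check the recipe performs with assert_equal.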


Extending WordPress to the Mobile World

Packt
26 Dec 2012
6 min read
Introducing jQuery Mobile

jQuery Mobile (http://jquerymobile.com/) is a unified HTML5-based user interface for most popular mobile device platforms. It is based on jQuery (http://jquery.com/) and jQuery UI (http://jqueryui.com/). Our focus in this section is on jQuery Mobile, so let's get our hands dirty. We'll start by implementing jQuery Mobile using the example we created in Chapter 3, Extending WordPress Using JSON-API.

Installing jQuery Mobile and theming

Installing jQuery Mobile is straightforward and easy:

1. Open up app_advanced.html and copy and paste the following code directly within the <head> tags:

    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="http://code.jquery.com/mobile/1.1.1/jquery.mobile-1.1.1.min.css" />
    <script src="http://code.jquery.com/jquery-1.7.1.min.js"></script>
    <script src="http://code.jquery.com/mobile/1.1.1/jquery.mobile-1.1.1.min.js"></script>

2. Now save your code and open up app_advanced.html in your favourite browser. You should be seeing the following screen:

Well, it looks like the webpage has gotten some form of theming, but it looks a little weird. This is because we have not implemented the various HTML elements required for jQuery Mobile. Again, as mentioned in the previous chapter, the code sample assumes that your app has Internet access and hence access to jQuery and jQuery Mobile's CDN. This might reduce the app's startup time. To avoid the problems related to having no network or flaky connectivity, one basic thing you can do is to package your app together with a local copy of jQuery and jQuery Mobile. Let us move on to the next section and see how we can fix this.

jQuery Mobile page template

Let's go back to app_advanced.html and do some editing.
Let us focus on the HTML elements found within the <body> tags; change them to look like the following code snippet:

    <div id="main" data-role="page">
        <div data-role="header">
            <div data-role="controlgroup" data-type="horizontal">
                <a href="#" id="previous" data-role="button">Previous</a>
                <a href="#" id="next" data-role="button">Next</a>
                <!-- <button type="button" id="create" data-role="button">Create</button> -->
                <a href="#create_form" data-role="button" data-transition="slide">Create</a>
            </div>
        </div>
        <div id="contents" data-role="content"></div>
    </div>
    <div data-role="page" id="create_form" data-theme="c">
        <div data-role="header" addBackBtn="true">
            <a href="#" data-rel="back">Back</a>
            <h1>Create a new Post</h1>
        </div>
        <div id="form" style="padding:15px;">
            Title: <br /><input type="text" name="post_title" id="post_title" /><br />
            Content: <br />
            <textarea name="post_contents" id="post_contents"></textarea>
            <br />
            <input type="submit" value="Submit" id="create_post"/>
            <div id="message"></div>
        </div>
    </div>

Now save your code and open it in your favourite web browser. You should see the following screen:

The app now looks great! Feel free to click on the Next button and see how the app works. How does this all work? For a start, check out the highlighted lines of code. In the world of HTML5, the additional attributes we wrote, such as data-role="page" or data-theme="c", are known as custom data attributes. jQuery Mobile makes use of these specifications to denote the things we need in our mobile web app. For example, data-role="page" denotes that this particular element (in our case, a div element) is a page component. Similarly, data-theme="c" in our case refers to a particular CSS style. For more information about data themes, feel free to check out http://jquerymobile.com/test/docs/content/content-themes.html.

Animation effects

Now let us try a little bit with animation effects. We can create animation effects by simply leveraging what we know with jQuery.
What about jQuery Mobile? There are several animation effects that are distinct to jQuery Mobile, and in this section we will try out animation effects in terms of page transitions. We will create a page transition effect using the following steps:

1. Click on the Create button, and we will get a page transition effect to a new page, where we see our post creation form.
2. On this Create a new Post form, as usual, type in some appropriate text in the Title and Content fields.
3. Finally, click on the Submit button.

Let's see how we can achieve the page transition effect:

1. We need to make changes to our code. For the sake of simplicity, delete all HTML code found within your <body> tags in app_advanced.html, and then copy the following code into your <body> tags:

    <div id="main" data-role="page">
        <div data-role="header">
            <div data-role="controlgroup" data-type="horizontal">
                <a href="#" id="previous" data-role="button">Previous</a>
                <a href="#" id="next" data-role="button">Next</a>
                <!-- <button type="button" id="create" data-role="button">Create</button> -->
                <a href="#create_form" data-role="button" data-transition="slide">Create</a>
            </div>
        </div>
        <div id="contents" data-role="content"></div>
    </div>
    <div data-role="page" id="create_form" data-theme="c">
        <div data-role="header" addBackBtn="true">
            <a href="#" data-rel="back">Back</a>
            <h1>Create a new Post</h1>
        </div>
        <div id="form" style="padding:15px;">
            Title: <br /><input type="text" name="post_title" id="post_title" /><br />
            Content: <br />
            <textarea name="post_contents" id="post_contents"></textarea>
            <br />
            <input type="submit" value="Submit" id="create_post"/>
            <div id="message"></div>
        </div>
    </div>

Take note that we have used the data-transition="slide" attribute, so we have a "slide" effect. For more details or options, visit http://jquerymobile.com/test/docs/pages/page-transitions.html.

2. Now, save your code and open it in your favorite web browser.
Click on the Create button, and you will first see a slide transition, followed by the post creation form, as follows: Now type in some text, and you will see that jQuery Mobile takes care of the CSS effects in this form as well: Now click on the Submit button, and you will see a Success message below the Submit button, as shown in the following screenshot: If you see the Success message, as shown in the earlier screenshot, congratulations! We can now move on to extending our PhoneGap app, which we built in Chapter 4, Building Mobile Applications Using PhoneGap.


Meet Yii

Packt
07 Dec 2012
7 min read
(For more resources related to this topic, see here.)

Easy

To run a Yii version 1.x-powered web application, all you need are the core framework files and a web server supporting PHP 5.1.0 or higher. To develop with Yii, you only need to know PHP and object-oriented programming. You are not required to learn any new configuration or templating language. Building a Yii application mainly involves writing and maintaining your own custom PHP classes, some of which will extend from the core Yii framework component classes.

Yii incorporates many of the great ideas and work from other well-known web programming frameworks and applications. So if you are coming to Yii from using other web development frameworks, it is likely that you will find it familiar and easy to navigate.

Yii also embraces a convention over configuration philosophy, which contributes to its ease of use. This means that Yii has sensible defaults for almost all the aspects that are used for configuring your application. Following the prescribed conventions, you can write less code and spend less time developing your application. However, Yii does not force your hand. It allows you to customize all of its defaults and makes it easy to override all of these conventions.

Efficient

Yii is a high-performance, component-based framework that can be used for developing web applications on any scale. It encourages maximum code reuse in web programming and can significantly accelerate the development process. As mentioned previously, if you stick with Yii's built-in conventions, you can get your application up and running with little or no manual configuration.

Yii is also designed to help you with DRY development. DRY stands for Don't Repeat Yourself, a key concept of agile application development. All Yii applications are built using the Model-View-Controller (MVC) architecture. Yii enforces this development pattern by providing a place to keep each piece of your MVC code.
This minimizes duplication and helps promote code reuse and ease of maintainability. The less code you need to write, the less time it takes to get your application to market. The easier it is to maintain your application, the longer it will stay on the market.

Of course, the framework is not just efficient to use, it is remarkably fast and performance optimized. Yii has been developed with performance optimization in mind from the very beginning, and the result is one of the most efficient PHP frameworks around. So any additional overhead that Yii adds to applications written on top of it is extremely negligible.

Extensible

Yii has been carefully designed to allow nearly every piece of its code to be extended and customized to meet any project requirement. In fact, it is difficult not to take advantage of Yii's ease of extensibility, since a primary activity when developing a Yii application is extending the core framework classes. And if you want to turn your extended code into useful tools for other developers, Yii provides easy-to-follow steps and guidelines to help you create such third-party extensions. This allows you to contribute to Yii's ever-growing list of features and actively participate in extending Yii itself.

Remarkably, this ease of use, superior performance, and depth of extensibility does not come at the cost of sacrificing its features. Yii is packed with features to help you meet those high demands placed on today's web applications. AJAX-enabled widgets, RESTful and SOAP Web services integration, enforcement of an MVC architecture, DAO and relational ActiveRecord database layer, sophisticated caching, hierarchical role-based access control, theming, internationalization (I18N), and localization (L10N) are just the tip of the Yii iceberg. As of version 1.1, the core framework is now packaged with an official extension library called Zii.
These extensions are developed and maintained by the core framework team members, and continue to extend Yii's core feature set. And with a deep community of users who are also contributing by writing Yiiextensions, the overall feature set available to a Yii-powered application is growing daily. A list of available, user-contributed extensions on the Yii framework website can be found at http://www.yiiframework.com/extensions. There is also an unofficial extension repository of great extensions that can be found at http://yiiext.github.com/, which really demonstrates the strength of the community and the extensibility of this framework. MVC architecture As mentioned earlier, Yii is an MVC framework and provides an explicit directory structure for each piece of model, view, and controller code. Before we get started with building our first Yii application, we need to define a few key terms and look at how Yii implements and enforces this MVC architecture. Model Typically in an MVC architecture, the model is responsible for maintaining the state, and should encapsulate the business rules that apply to the data that defines this state. A model in Yii is any instance of the framework class CModel or its child class. A model class is typically comprised of data attributes that can have separate labels (something user friendly for the purpose of display), and can be validated against a set of rules defined in the model. The data that makes up the attributes in the model class could come from a row of a database table or from the fields in a user input form. Yii implements two kinds of models, namely the form model (a CFormModel class) and active record (a CActiveRecord class). They both extend from the same base class CModel. The class CFormModel represents a data model that collects HTML form inputs. It encapsulates all the logic for form field validation, and any other business logic that may need to be applied to the form field data. 
It can then store this data in memory or, with the help of an active record model, store data in a database. Active Record (AR) is a design pattern used to abstract database access in an objectoriented fashion. Each AR object in Yii is an instance of CActiveRecord or its child class, which wraps a single row in a database table or view, that encapsulates all the logic and details around database access, and houses much of the business logic that is required to be applied to that data. The data field values for each column in the table row are represented as properties of the active record object. View Typically the view is responsible for rendering the user interface, often based on the data in the model. A view in Yii is a PHP script that contains user interface-related elements, often built using HTML, but can also contain PHP statements. Usually, any PHP statements within the view are very simple, conditional or looping statements, or refer to other Yii UI-related elements such as HTML helper class methods or prebuilt widgets. More sophisticated logic should be separated from the view and placed appropriately in either the model, if dealing directly with the data, or the controller, for more general business logic. Controller The controller is our main director of a routed request, and is responsible for taking user input, interacting with the model, and instructing the view to update and display appropriately. A controller in Yii is an instance of CController or a child class thereof. When a controller runs, it performs the requested action, which then interacts with the necessary models, and renders an appropriate view. An action, in its simplest form, is a controller class method whose name starts with the word action
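For illustration only, a sketch of what such a controller might look like. This assumes the Yii 1.x framework classes (such as CController) are available on the include path; the SiteController class name, the greet action, and the view name are made up for this example:

```php
<?php
// Hypothetical Yii 1.x controller; a request for index.php?r=site/greet
// would be routed here by the framework's default conventions.
class SiteController extends CController
{
    // The "action" prefix in the method name is what exposes this
    // method as the "greet" action of the controller.
    public function actionGreet()
    {
        // render() looks for a view script named greet.php under
        // protected/views/site/ and makes $name available to it.
        $this->render('greet', array('name' => 'World'));
    }
}
```

Because the method name follows the actionXyz convention, no extra configuration is needed to wire the route to the code; this is the convention-over-configuration philosophy described above.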
Adding Interactivity and Completing Your Site

Packt
06 Dec 2012
7 min read
Using jQuery

HTML5 Boilerplate provides a handy and safe way to load jQuery. With jQuery, it is vastly simpler to write scripts that access elements. If you are writing custom jQuery script, either to kick off a plugin you are using or to do some small interaction, put it in the main.js file in the js folder.

Using other libraries

If you are more comfortable using other libraries, you can load and use them in a similar way to jQuery. The following is how we load jQuery:

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script>window.jQuery || document.write('<script src="js/vendor/jquery-1.8.2.min.js"></script>')</script>

Let us say you want to use another library (like MooTools); then look up the Google Libraries API at developers.google.com/speed/libraries/ to see if that library is available. If it is available, just replace the reference with the appropriate reference from the site. For example, if we want to replace our jQuery link with a link to MooTools, we would simply replace the following code:

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>

With the following line of code:

<script src="ajax.googleapis.com/ajax/libs/mootools/1.4.5/mootools-yui-compressed.js"></script>

We will also download MooTools' minified file to the js/vendor folder locally and replace the following code:

<script>window.jQuery||document.write('<script src="js/vendor/jquery-1.7.2.min.js"></script>')</script>

With the following line of code:

<script>window.jQuery||document.write('<script src="js/vendor/mootools-core-1.4.5-full-compat-yc.js"></script>')</script>

Adding smooth-scroll plugin and interaction

If you have not noticed it already, the website we are building is a single-page site! All content that is required is found on the same page. The way our site is currently designed, clicking on one of the site navigation links should scroll to the section that the navigation link refers to. We would like this interaction to be smooth, so let us use jQuery's smooth-scroll plugin to provide it.

Let us download the plugin file from the GitHub repository hosted at github.com/kswedberg/jquery-smooth-scroll. In it, we find a minimized version of the plugin (jquery.smooth-scroll.min.js) that we shall open in our text editor. Then copy all the code and paste it within the plugins.js file.

Let us add a class name js-scrollitem to distinguish elements that a script will act on. This way, there will be less chance of accidentally deleting class names that are required for interactions prompted via JavaScript. Now, we shall write the code to invoke this plugin in the main.js file. Open the main.js file in your text editor and type:

$('.js-scrollitem').smoothScroll();

This will make all the clickable links with class js-scrollitem that link to sections on the same page scroll smoothly with the help of the plugin. If we have used our HTML5 Boilerplate defaults correctly, adding this will be more than sufficient to get started with smooth scrolling.

Next, we would like the navigation links in the line-up section to open the right-hand side line-up depending on which day was clicked on. Right now, in the following screenshot, it simply shows the line-up for the first day, and does not do anything else:

Let us continue editing the main.js file and add in the code that would enable this. First, let's add the class names that we will use to control the styling and the hiding/showing behavior within our code. The code for this functionality is as follows:

<nav class="t-tab__nav">
  <a class="t-tab__navitem--active t-tab__navitem js-tabitem" href="#day-1">Day 1</a>
  <a class="t-tab__navitem js-tabitem" href="#day-2">Day 2</a>
</nav>

Now, we shall write the code that will show the element we clicked on. This code is as follows:

var $navlinks = $('#lineup .js-tabitem');
var $tabs = $('.t-tab__body');
var hiddenClass = 'hidden';
var activeClass = 't-tab__navitem--active';
$navlinks.click(function() {
  // our code for showing or hiding the current day's line up
  $(this.hash).removeClass(hiddenClass);
});

By checking how we have done so far, we notice this keeps each day's line-up visible once shown and never hides it again! Let us add that too, as shown in the following code snippet:

var $navlinks = $('#lineup .js-tabitem');
var $tabs = $('.t-tab__body');
var hiddenClass = 'hidden';
var activeClass = 't-tab__navitem--active';
var $lastactivetab = null;
$navlinks.click(function() {
  var $this = $(this);
  // take note of the immediately previous tab that was active
  $lastactivetab = $lastactivetab || $tabs.not('.' + hiddenClass);
  // our code for showing or hiding the current day's line up
  $lastactivetab.addClass(hiddenClass);
  $(this.hash).removeClass(hiddenClass);
  $lastactivetab = $(this.hash);
  return false;
});

You would notice that the active tab navigation item still seems to suggest it is Day 1! Let us fix that by changing our code to do something similar with the tabbed navigation anchors, as shown in the following code snippet:

var $navlinks = $('#lineup .js-tabitem');
var $tabs = $('.t-tab__body');
var hiddenClass = 'hidden';
var activeClass = 't-tab__navitem--active';
var $lastactivetab = null;
var $lastactivenav = null;
$navlinks.click(function() {
  var $this = $(this);
  // take note of the immediately previous tab and tab nav that were active
  $lastactivetab = $lastactivetab || $tabs.not('.' + hiddenClass);
  $lastactivenav = $lastactivenav || $navlinks.filter('.' + activeClass);
  // our code for showing or hiding the current day's line up
  $lastactivetab.addClass(hiddenClass);
  $(this.hash).removeClass(hiddenClass);
  $lastactivetab = $(this.hash);
  // change active navigation item
  $lastactivenav.removeClass(activeClass);
  $this.addClass(activeClass);
  $lastactivenav = $this;
  return false;
});

Bingo! We have our day-by-day line-up ready. We now need to ensure our Google Maps iframe renders when users click on the Locate on a map link. We also want to use the same link to hide the map if the users want to do so. First, we add some identifiable features to the anchor element used to trigger the showing/hiding of the map, and to the iframe for the map, as shown in the following code snippet:

<p>The festival will be held on the beautiful beaches of Ngor Terrou Bi in Dakar. <a href="#" class="js-map-link">Locate it on a map</a></p>
<iframe id="venue-map" class="hidden" width="425" height="350" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" src="http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=ngor+terrou+bi,+dakar,+senegal&aq=&sll=37.0625,-95.677068&sspn=90.404249,95.976562&ie=UTF8&hq=ngor&hnear=Terrou-Bi,+Bd+Martin+Luther+King,+Gueule+Tapee,+Dakar+Region,+Guediawaye,+Dakar+221,+Senegal&t=m&fll=14.751996,-17.513559&fspn=0.014276,0.011716&st=109146043351405611748&rq=1&ev=p&split=1&ll=14.711109,-17.483921&spn=0.014276,0.011716&output=embed"></iframe>

Then we use the following JavaScript to trigger the link:

$maplink = $('.js-map-link');
$maplinkText = $maplink.text();
$maplink.toggle(function() {
  $('#venue-map').removeClass(hiddenClass);
  $maplink.text('Hide Map');
}, function() {
  $('#venue-map').addClass(hiddenClass);
  $maplink.text($maplinkText);
});
Getting Started with CouchDB and Futon

Packt
27 Nov 2012
11 min read
What is CouchDB?

The first sentence of CouchDB's definition (as given by http://couchdb.apache.org/) is as follows:

CouchDB is a document database server, accessible through the RESTful JSON API.

Let's dissect this sentence to fully understand what it means. Let's start with the term database server.

Database server

CouchDB employs a document-oriented database management system that serves a flat collection of documents with no schema, grouping, or hierarchy. This is a concept that NoSQL has introduced, and it is a big departure from relational databases (such as MySQL), where you would expect to see tables, relationships, and foreign keys. Every developer has experienced a project where they have had to force a relational database schema onto a project that really didn't require the rigidity of tables and complex relationships. This is where CouchDB does things differently; it stores all of the data in a self-contained object with no set schema. The following diagram will help to illustrate this:

In order to handle the ability for many users to belong to one-to-many groups in a relational database (such as MySQL), we would create a users table, a groups table, and a link table called users_groups. This practice is common to most web applications.

Now look at the CouchDB documents. There are no tables or link tables, just documents. These documents contain all of the data pertaining to a single object. This diagram is very simplified. If we wanted to create more logic around the groups in CouchDB, we would have had to create group documents, with a simple relationship between the user documents and group documents.

Let's dig into what documents are and how CouchDB uses them.

Documents

To illustrate how you might use documents, first imagine that you are filling out the paper form of a job application. This form has information about you, your address, and past addresses. It also has information about many of your past jobs, education, certifications, and much more. A document would save all of this data exactly in the way you would see it in the physical form - all in one place, without any unnecessary complexity.

In CouchDB, documents are stored as JSON objects that contain key and value pairs. Each document has reserved fields for metadata such as id, revision, and deleted. Besides the reserved fields, documents are 100 percent schema-less, meaning that each document can be formatted and treated independently, with as many different variations as you might need.

Example of a CouchDB document

Let's take a look at an example of what a CouchDB document might look like for a blog post:

{
  "_id": "431f956fa44b3629ba924eab05000553",
  "_rev": "1-c46916a8efe63fb8fec6d097007bd1c6",
  "title": "Why I like Chicken",
  "author": "Tim Juravich",
  "tags": [
    "Chicken",
    "Grilled",
    "Tasty"
  ],
  "body": "I like chicken, especially when it's grilled."
}

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

JSON format

The first thing you might notice is the markup of the document, which is JavaScript Object Notation (JSON). JSON is a lightweight data-interchange format based on JavaScript syntax and is extremely portable. CouchDB uses JSON for all communication with it.

Key-value storage

The next thing that you might notice is that there is a lot of information in this document. There are key-value pairs that are simple to understand, such as "title", "author", and "body", but you'll also notice that "tags" is an array of strings. CouchDB lets you embed as much information as you want directly into a document. This is a concept that might be new to relational database users who are used to normalized and structured databases.

Reserved fields

Let's look at the two reserved fields: _id and _rev.

_id is the unique identifier of the document. This means that _id is mandatory, and no two documents can have the same value. If you don't define an _id on creation of a document, CouchDB will choose a unique one for you.

_rev is the revision version of the document and is the field that drives CouchDB's version control system. Each time you save a document, the revision number is required so that CouchDB knows which version of the document is the newest. This is required because CouchDB does not use a locking mechanism, meaning that if two people are updating a document at the same time, the one to save his/her changes first wins. One of the unique things about CouchDB's revision system is that each time a document is saved, the original document is not overwritten; a new document is created with the new data, while CouchDB stores a backup of the previous document in its original form in an archive. Old revisions remain available until the database is compacted, or some cleanup action occurs.

The last piece of the definition sentence is the RESTful JSON API. So, let's cover that next.

RESTful JSON API

In order to understand REST, let's first define HyperText Transfer Protocol (HTTP). HTTP is the underlying protocol of the Internet that defines how messages are formatted and transmitted, and how services should respond when using a variety of methods. These methods consist of four main verbs: GET, PUT, POST, and DELETE. In order to fully understand how HTTP methods function, let's first define REST. Representational State Transfer (REST) is a stateless protocol that accesses addressable resources through HTTP methods. Stateless means that each request contains all of the information necessary to completely understand and use the data in the request, and addressable resources means that you can access the object via a URL.

That might not mean a lot in itself, but by putting all of these ideas together, it becomes a powerful concept. Let's illustrate the power of REST by looking at two examples:

Resource: http://localhost/collection
  GET    - Read a list of all of the items inside of collection
  PUT    - Update the collection with another collection
  POST   - Create a new collection
  DELETE - Delete the collection

Resource: http://localhost/collection/abc123
  GET    - Read the details of the abc123 item inside of collection
  PUT    - Update the details of abc123 inside of collection
  POST   - Create a new object abc123 inside of collection
  DELETE - Delete abc123 from collection

By looking at this table, you can see that each resource is in the form of a URL. The first resource is collection, and the second resource is abc123, which lives inside of collection. Each of these resources responds differently when you pass different methods to them. This is the beauty of REST and HTTP working together.

Notice the words used in the table: Read, Update, Create, and Delete. These words are, in themselves, another concept with its own term: CRUD. The unflattering term CRUD stands for Create, Read, Update, and Delete, and is a concept that REST uses to define what happens to a defined resource when an HTTP method is combined with a resource in the form of a URL.
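As a rough sketch, the method-to-CRUD mapping in the table above can be modeled as a small lookup. This is a toy illustration, not CouchDB code; the function name is made up:

```python
# Mapping HTTP methods onto CRUD actions for an addressable resource.
CRUD_BY_METHOD = {
    "POST": "create",
    "PUT": "update",   # PUT also creates when the resource does not exist yet
    "GET": "read",
    "DELETE": "delete",
}

def describe(method, url):
    """Describe what a RESTful request does to the resource at url."""
    action = CRUD_BY_METHOD.get(method.upper())
    if action is None:
        raise ValueError("Unsupported HTTP method: %s" % method)
    return "%s %s" % (action, url)

print(describe("GET", "http://localhost/collection/abc123"))
# read http://localhost/collection/abc123
```

The same resource URL paired with a different verb yields a different action, which is exactly the REST-plus-HTTP behavior the table describes.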
So, if you were to boil all of this down, you would come to the following diagram. This diagram means:

In order to CREATE a resource, you can use either the POST or PUT method
In order to READ a resource, you need to use the GET method
In order to UPDATE a resource, you need to use the PUT method
In order to DELETE a resource, you need to use the DELETE method

As you can see, this concept of CRUD makes it really clear to find out which method you need to use when you want to perform a specific action.

Now that we've looked at what REST means, let's move on to the term API, which means Application Programming Interface. While there are a lot of different use cases and concepts of APIs, an API is what we'll use to programmatically interact with CouchDB.

Now that we have defined all of the terms, the RESTful JSON API can be defined as follows: we have the ability to interact with CouchDB by issuing an HTTP request to the CouchDB API with a defined resource, HTTP method, and any additional data. Combining all of these things means that we are using REST. After CouchDB processes our REST request, it will return a JSON-formatted response with the result of the request.

All of this background knowledge will start to make sense as we play with CouchDB's RESTful JSON API, going through each of the HTTP methods, one at a time. We will use curl to explore each of the HTTP methods by issuing raw HTTP requests.

Time for action – getting a list of all databases in CouchDB

Let's issue a GET request to access CouchDB and get a list of all of the databases on the server.

Run the following command in Terminal:

curl -X GET http://localhost:5984/_all_dbs

Terminal will respond with the following:

["_users"]

What just happened?

We used Terminal to trigger a GET request to CouchDB's RESTful JSON API. We used one of curl's options, -X, to define the HTTP method. In this instance, we used GET. GET is the default method, so technically you could omit -X if you wanted to. Once CouchDB processed the request, it sent back a list of the databases that are in the CouchDB server. Currently, there is only the _users database, which is a default database that CouchDB uses to authenticate users.

Time for action – creating new databases in CouchDB

In this exercise, we'll issue a PUT request, which will create a new database in CouchDB.

Create a new database by running the following command in Terminal:

curl -X PUT http://localhost:5984/test-db

Terminal will respond with the following:

{"ok":true}

Try creating another database with the same name by running the following command in Terminal:

curl -X PUT http://localhost:5984/test-db

Terminal will respond with the following:

{"error":"file_exists","reason":"The database could not be created, the file already exists."}

Okay, that didn't work. So let's try to create a database with a different name by running the following command in Terminal:

curl -X PUT http://localhost:5984/another-db

Terminal will respond with the following:

{"ok":true}

Let's check the details of the test-db database quickly and see more detailed information about it. To do that, run the following command in Terminal:

curl -X GET http://localhost:5984/test-db

Terminal will respond with something similar to this (I re-formatted mine for readability):

{
  "committed_update_seq": 1,
  "compact_running": false,
  "db_name": "test-db",
  "disk_format_version": 5,
  "disk_size": 4182,
  "doc_count": 0,
  "doc_del_count": 0,
  "instance_start_time": "1308863484343052",
  "purge_seq": 0,
  "update_seq": 1
}

What just happened?

We just used Terminal to trigger PUT requests that created databases through CouchDB's RESTful JSON API, passing test-db as the name of the database that we wanted to create at the end of the CouchDB root URL. When the database was successfully created, we received a message that everything went okay.

Next, we created a PUT request to create another database with the same name, test-db. Because there can't be more than one database with the same name, we received an error message. We then used a PUT request to create a new database again, named another-db. When the database was successfully created, we received a message that everything went okay.

Finally, we issued a GET request to our test-db database to find out more information about the database. It's not important to know exactly what each of these statistics means, but it's a useful way to get an overview of a database. It's worth noting that the URL called in the final GET request was the same URL we called when we first created the database. The only difference is that we changed the HTTP method from PUT to GET. This is REST in action!
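The revision rule described earlier for _rev can be illustrated with a toy in-memory store. This is a sketch of the rule, not CouchDB's actual implementation; real CouchDB revisions are strings like "1-c469...", which are simplified here to integers:

```python
# Toy model of CouchDB's optimistic concurrency: an update is accepted
# only when the caller supplies the current _rev; otherwise it conflicts.
class TinyDocStore:
    def __init__(self):
        self.docs = {}

    def put(self, doc_id, body, rev=None):
        current = self.docs.get(doc_id)
        if current is not None and rev != current["_rev"]:
            # The rule CouchDB enforces instead of locking: the first
            # writer to save against a revision wins, later writers conflict.
            return {"error": "conflict"}
        new_rev = 1 if current is None else current["_rev"] + 1
        self.docs[doc_id] = dict(body, _rev=new_rev)
        return {"ok": True, "rev": new_rev}

store = TinyDocStore()
print(store.put("a1", {"title": "Why I like Chicken"}))      # accepted, rev 1
print(store.put("a1", {"title": "Grilled"}, rev=1))          # accepted, rev 2
print(store.put("a1", {"title": "Stale write"}, rev=1))      # conflict
```

The third call fails because another writer already advanced the document to revision 2, which is exactly why every save against CouchDB must quote the revision it was based on.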
Web Services Testing and soapUI

Packt
16 Nov 2012
8 min read
(For more resources related to this topic, see here.) SOA and web services SOA is a distinct approach for separating concerns and building business solutions utilizing loosely coupled and reusable components. SOA is no longer a nice-to-have feature for most of the enterprises and it is widely used in organizations to achieve a lot of strategic advantages. By adopting SOA, organizations can enable their business applications to quickly and efficiently respond to business, process, and integration changes which usually occur in any enterprise environment. Service-oriented solutions If a software system is built by following the principles associated with SOA, it can be considered as a service-oriented solution. Organizations generally tend to build service-oriented solutions in order to leverage flexibility in their businesses, merge or acquire new businesses, and achieve competitive advantages. To understand the use and purpose of SOA and service-oriented solutions, let's have a look at a simplified case study. Case study Smith and Co. is a large motor insurance policy provider located in North America. The company uses a software system to perform all their operations which are associated with insurance claim processing. The system consists of various modules including the following: Customer enrollment and registration Insurance policy processing Insurance claim processing Customer management Accounting Service providers management With the enormous success and client satisfaction of the insurance claims processed by the company during the recent past, Smith and Co. has acquired InsurePlus Inc., one of its competing insurance providers, a few months back. InsurePlus has also provided some of the insurance motor claim policies which are similar to those that Smith and Co. provides to their clients. Therefore, the company management has decided to integrate the insurance claim processing systems used by both companies and deliver one solution to their clients. 
Smith and Co. uses a lot of Microsoft(TM) technologies and all of their software applications, including the overall insurance policy management system, are built on .NET framework. On the other hand, InsurePlus uses J2EE heavily, and their insurance processing applications are all based on Java technologies. To worsen the problem of integration, InsurePlus consists of a legacy customer management application component as well, which runs on an AS-400 system. The IT departments of both companies faced numerous difficulties when they tried to integrate the software applications in Smith and Co. and InsurePlus Inc. They had to write a lot of adapter modules so that both applications would communicate with each other and do the protocol conversions as needed. In order to overcome these and future integration issues, the IT management of Smith and Co. decided to adopt SOA into their business application development methodology and convert the insurance processing system into a service-oriented solution. As the first step, a lot of wrapper services (web services which encapsulate the logic of different insurance processing modules) were built, exposing them as web services. Therefore the individual modules were able to communicate with each other with minimum integration concerns. By adopting SOA, their applications used a common language, XML, in message transmission and hence a heterogeneous systems such as the .NET based insurance policy handling system in Smith and Co. was able to communicate with the Java based applications running on InsurePlus Inc. By implementing a service-oriented solution, the system at Smith and Co. was able to merge with a lot of other legacy systems with minimum integration overhead. Building blocks of SOA When studying typical service-oriented solutions, we can identify three major building blocks as follows: Web services Mediation Composition Web services Web services are the individual units of business logic in SOA. 
Web services communicate with each other and other programs or applications by sending messages. Web services consist of a public interface definition which is a central piece of information that assigns the service an identity and enables its invocation. The service container is the SOA middleware component where the web service is hosted for the consuming applications to interact with it. It allows developers to build, deploy, and manage web services and it also represents the server-side processor role in web service frameworks. A list of commonly used web service frameworks can be found at http://en.wikipedia.org/wiki/List_of_web_service_frameworks; here you can find some popular web service middleware such as Windows Communication Foundation (WCF) Apache CXF, Apache Axis2, and so on. Apache Axis2 can be found at http://axis.apache.org/ The service container contains the business logic, which interacts with the service consumer via a service interface. This is shown in the following diagram: Mediation Usually, the message transmission between nodes in a service-oriented solution does not just occur via the typical point-to-point channels. Instead, once a message is received, it can be flowed through multiple intermediaries and subjected to various transformation and conversions as necessary. This behavior is commonly referred to as message mediation and is another important building block in service-oriented solutions. Similar to how the service container is used as the hosting platform for web services, a broker is the corresponding SOA middleware component for message mediation. Usually, enterprise service bus (ESB) acts as a broker in service-oriented solutions Composition In service-oriented solutions, we cannot expect individual web services running alone to provide the desired business functionality. Instead, multiple web services work together and participate in various service compositions. 
Usually, the web services are pulled together dynamically at the runtime based on the rules specified in business process definitions. The management or coordination of these business processes are governed by the process coordinator, which is the SOA middleware component associated with web service compositions. Simple Object Access Protocol Simple Object Access Protocol (SOAP) can be considered as the foremost messaging standard for use with web services. It is defined by the World Wide Web Consortium (W3C) at http://www.w3.org/TR/2000/NOTE-SOAP-20000508/ as follows: SOAP is a lightweight protocol for exchange of information in a decentralized, distributed environment. It is an XML based protocol that consists of three parts: an envelope that defines a framework for describing what is in a message and how to process it, a set of encoding rules for expressing instances of application-defined datatypes, and a convention for representing remote procedure calls and responses. The SOAP specification has been universally accepted as the standard transport protocol for messages processed by web services. There are two different versions of SOAP specification and both of them are widely used in service-oriented solutions. These two versions are SOAP v1.1 and SOAP v1.2. Regardless of the SOAP specification version, the message format of a SOAP message still remains intact. A SOAP message is an XML document that consists of a mandatory SOAP envelope, an optional SOAP header, and a mandatory SOAP body. The structure of a SOAP message is shown in the following diagram: The SOAP Envelope is the wrapper element which holds all child nodes inside a SOAP message. The SOAP Header element is an optional block where the meta information is stored. Using the headers, SOAP messages are capable of containing different types of supplemental information related to the delivery and processing of messages. 
This indirectly provides the statelessness for web services as by maintaining SOAP headers, services do not necessarily need to store message-specific logic. Typically, SOAP headers can include the following: Message processing instructions Security policy metadata Addressing information Message correlation data Reliable messaging metadata The SOAP body is the element where the actual message contents are hosted. These contents of the body are usually referred to as the message payload. Let's have a look at a sample SOAP message and relate the preceding concepts through the following diagram: In this example SOAP message, we can clearly identify the three elements; envelope, body, and header. The header element includes a set of child elements such as <wsa:To>, <wsa:ReplyTo>, <wsa:Address>, <wsa:MessageID>, and <wsa:Action>. These header blocks are part of the WS-Addressing specification. Similarly, any header element associated with WS-* specifications can be included inside the SOAP header element. The <s:Body> element carries the actual message payload. In this example, it is the <p:echoString> element with a one child element. When working with SOAP messages, identification of the version of SOAP message is one of the important requirements. At first glance, you can determine the version of the specification used in the SOAP message through the namespace identifier of the <Envelope> element. If the message conforms to SOAP 1.1 specification, it would be http://schemas.xmlsoap.org/soap/envelope/, otherwise http://www.w3.org/2003/05/soap-envelope is the name space identifier of SOAP 1.2 messages. Alternatives to SOAP Though SOAP is considered as the standard protocol for web services communication, it is not the only possible transport protocol which is used. SOAP was designed to be extensible so that the other standards could be integrated into it. 
WS-* extensions such as WS-Security, WS-Addressing, and WS-ReliableMessaging are associated with SOAP messaging thanks to this extensible nature. In addition to platform and language agnosticism, SOAP messages can be transmitted over various transports such as HTTP, HTTPS, JMS, and SMTP, among others. However, there are a few drawbacks associated with SOAP messaging. Performance degradation due to heavy XML processing, and the complexity associated with the use of the various WS-* specifications, are two of the most common disadvantages of the SOAP messaging model. Because of these concerns, we can identify some alternative approaches to SOAP.
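The namespace-based version check described earlier can be sketched in a few lines of Python. The sample envelope below is a hypothetical reconstruction of the echo message discussed above; the element names such as p:echoString follow the text, but the endpoint and namespace URIs of the payload are invented for illustration.

```python
# Sketch: detecting the SOAP version of a message from the namespace of
# its Envelope element, as described in the text. The sample message is a
# hypothetical SOAP 1.2 echo request with WS-Addressing-style headers.
import xml.etree.ElementTree as ET

SOAP_11_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SOAP_12_NS = "http://www.w3.org/2003/05/soap-envelope"

SAMPLE_MESSAGE = """<?xml version="1.0"?>
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <s:Header>
    <wsa:To>http://example.org/EchoService</wsa:To>
    <wsa:Action>http://example.org/Echo</wsa:Action>
    <wsa:MessageID>urn:uuid:1234</wsa:MessageID>
  </s:Header>
  <s:Body>
    <p:echoString xmlns:p="http://example.org/echo">
      <p:text>Hello, SOAP</p:text>
    </p:echoString>
  </s:Body>
</s:Envelope>"""

def detect_soap_version(message):
    """Return '1.1' or '1.2' based on the Envelope's namespace identifier."""
    root = ET.fromstring(message)
    # ElementTree renders qualified tag names as '{namespace}localname'
    namespace = root.tag.split("}")[0].lstrip("{")
    if namespace == SOAP_11_NS:
        return "1.1"
    if namespace == SOAP_12_NS:
        return "1.2"
    raise ValueError("Not a SOAP envelope: " + namespace)

print(detect_soap_version(SAMPLE_MESSAGE))  # prints 1.2
```

The same check works for SOAP 1.1 documents, whose Envelope element carries the http://schemas.xmlsoap.org/soap/envelope/ identifier instead.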
Graphic Design - Working with Clip Art and Making Your Own

Packt
09 Nov 2012
9 min read
(For more resources related to this topic, see here.)

Making symbols from Character Palette into clip art: where to find clip art for iWork

Clip art is the collective name for predrawn images, pictures, and symbols that can be quickly added to documents. In standalone products, a separate clip art folder is often added to the package. iWork doesn't have one, which has been the subject of numerous complaints on Internet forums. However, even though there is no clip art folder as such in iWork, there are hundreds of clip-art images that come as part of our Mac computers. Unlike MS Office or OpenOffice, which are separate universes on your machine, iWork (even though we buy it separately) is an integral part of the Mac. It complements and works with applications that are already there, such as iLife (iPhoto), Mail, Preview, Address Book, Dictionaries, and Spotlight.

Getting ready

So, where is the clip art for iWork? First, elements of the Pages templates can be used as clip art: just copy and paste them. Look at this wrought iron fence post from the Collector Newsletter template. It is used there as a column divider. Select and copy-paste it into your project, set the image placement to Floating, and move it in between the columns or text boxes. The Collector Newsletter template also has a paper clip, a price tag, and several images of slightly rumpled and yellowed sheets of paper that can be used as backgrounds. Images with little grey houses and house keys from the Real Estate Newsletter template are good to use with any project related to property. The index card image from the Back Page of the Green Grocery Newsletter template can be used for designing a cooking recipe, and the background image of a yellowing piece of paper from the Musical Concert poster would make a good background for an article on history. Clip art in many templates is editable and easy to resize or modify. Some of the images are locked or grouped.
Under the Arrange menu, select the Unlock and Ungroup options to use those images as separate graphic elements. Many of the clip-art images are easy to recreate with iWork tools. Bear in mind, however, that some of the images have low resolution and should only be used at small dimensions. You will find various clip-art images in the following locations:

A dozen or so attractive clip-art images are in Image Bullets, under the Bullets drop-down menu: Open the Text Inspector, click on the List tab, and choose Image Bullets from the Bullets & Numbering drop-down menu. There, you will find checkboxes and other images. Silver and gold pearls look very attractive, but any of your own original images can also be made into bullets. In Bullets, choose Custom Image | Choose and import your own image. Note that images with shadows may distort the surrounding text. Use them with care or avoid applying shadows.

Navigate to Macintosh HD | Library | Desktop Pictures: Double-click on the hard disk icon on your desktop and go to Library | Desktop Pictures. There are several dozen images, including the dew drop and the lady bug. These are large files, good enough for use as background images. They are not, strictly speaking, clip art, but they are worth keeping in mind.

Navigate to Home | Pictures | iChat Icons (or HD | Library | Application Support | Apple | iChat Icons): The Home folder icon (a little house) is available in the side panel of any folder on your Mac. This is where documents associated with your account are stored on your computer. It has a Pictures folder with a dozen very small images sitting in the folder called iChat Icons. National flags are stored here as button-like images. The apple image can be found in the Fruit folder. The gem icons, such as the ruby heart, from this folder look attractive as bullets.

Navigate to HD | Library | User Pictures: You can find animals, flowers, nature, sports, and other clip-art images in this folder.
These are small TIFF files that can be used as icons when a personal account is set up on a Mac, but of course they can also be used as clip art. The Sports folder has a selection of balls, but not a cricket ball, even though cricket may have one of the biggest followings in the world (Britain, South Africa, Australia, India, Pakistan, Bangladesh, Sri Lanka, and many Caribbean countries). But a free image of a cricket ball from Wikipedia/Wikimedia can easily be made into clip art.

There may be several Libraries on your Mac. The main Library is on your hard drive; don't move or rename any folders here. Duplicate images from this folder and use the copies. Your personal library (one is created for each account on your machine) is in the Home folder. This may sound a bit confusing, but you don't have to wade through endless folders to find what you want; just use Spotlight to find relevant images on your computer, in the same way that you would use Google to search on the Internet.

Character Palette has hundreds of very useful clip-art-like characters and symbols. You can find the Character Palette via Edit | Special Characters. Alternatively, open System Preferences | International | Input Menu and check the Character Palette and Show input menu in menu bar boxes. Now you will be able to open the Character Palette from the screen-top menu. The Character Palette can also be accessed through the Font Panel: open it with the Command + T keyboard shortcut, click on the action wheel at the bottom of the panel, and choose Characters... to open the Character Palette. Browse the Character Palette to find what you need. Images here range from the familiar Command symbol on the Mac keyboard to zodiac symbols, chess pieces and card icons, mathematical and musical signs, and various daily life shapes, including icons of telephones, pens and pencils, scissors, airplanes, and so on. And there are Greek letters that can be used in scientific papers (for instance, the letter ∏).
To import the Character Palette symbols into an iWork document, just click-and-drag them into your project. The beauty of the Character Palette characters is that they behave like letters. You can change the color and font size in the Format bar and add shadows and other effects in the Graphics Inspector or via the Font Panel. To use the Character Palette characters as clip art, we need to turn them into images in PDF, JPEG, or some other format.

How to do it...

Let's see how a character can be turned into a piece of clip art. This applies to both letters and symbols from the Character Palette.

Open Character Palette | Symbols | Miscellaneous Symbols. In this folder, we have a selection of scissors that can be used to show, with a dotted line, where to cut out coupons or forms from brochures, flyers, posters, and other marketing material.

Click on the scissors symbol with a snapped-off blade and drag it into an iWork document.

Select the symbol in the same way as you would select a letter, and enlarge it substantially. To enlarge, click on the Font Size drop-down menu in the Format bar and select a bigger size, or use the keyboard shortcut Command + plus sign (hit the plus key several times).

Next, turn the scissors into an image. Make a screenshot (Command + Shift + 4) or use the Print dialog to make a PDF or a JPEG. You can crop the image in iPhoto or Preview before using it in iWork, or you can import it straight into your iWork project and remove the white background with the Alpha tool. If Alpha is not in your toolbar, you can find it under Format | Instant Alpha.

Move the scissors onto the dotted line of your coupon. Now the blade that is snapped in half appears to be cutting through the dotted line. Remember that you can rotate the clip-art image to put scissors on either the horizontal or the vertical sides of the coupon. Use other scissors symbols from the Character Palette if they are more suitable for your project.
Store the "scissors" clip art in iPhoto or another folder for future use if you are likely to need it again.

There's more...

There are other easily accessible sources of clip art.

MS Office clip art is compatible

If you have kept your old copy of MS Office, nothing is simpler than copy-pasting or dragging-and-dropping clip art from the Office folder right into your iWork project. When using clip art, it's worth remembering that some predrawn images quickly become dated. For example, if you put a clip-art image of an incandescent lamp in your marketing documents for electrical works, it may give the impression that you are not familiar with more modern and economical lighting technologies. Likewise, a clip-art image of an old-fashioned computer with a CRT display on your promotional literature for computer services can send the wrong message, because modern machines use flat-screen displays.

Wikipedia/Wikimedia

Look on Wikipedia for free generic images. Search for articles about tools, domestic appliances, furniture, houses, and various other objects. Most articles have downloadable images with no copyright restrictions on re-use. They can easily be made into clip art. This image of a hammer from Wikipedia can be used for any article about DIY (do-it-yourself) projects.

Create your own clip art

Above all, it is fun to create your own clip art in iWork. For example, take a few snapshots with your digital camera or cell phone, put them in one of iWork's shapes, and get an original piece of clip art. It could be a nice way to involve children in your project.
Organizing your Balsamiq files

Packt
09 Oct 2012
3 min read
There are two important things to note about organizing your files in Balsamiq:

Keep all of your .bmml files together.
The assets folder houses everything else, that is, artwork, logos, PDFs, PSDs, symbols, and so on, as shown in the following screenshot:

Naming your files

Naming your files in Balsamiq is very important. This is because Balsamiq does not automatically remember the order in which you organized your files after you close them; it will reopen them in the order in which they sit in a folder. There are, however, two ways you can gain greater control.

Alphabetically

You could alphabetize your files, although this can pose a problem as you add and delete files, requiring you to carefully name new files so that they open in the same order as before. While it is a fine solution, the time it takes to ensure proper alphabetization does not seem worth the effort.

Numbering

The second, and more productive, way to name your files is not to name them at all, but instead to number them. For example, after naming a new .bmml file, add a number to the end of it in sequential order, for example, filename_1, filename_2, filename_3, and so on. Subpages, in turn, become filename_1a, filename_1b, filename_1c, and so on. Keep in mind, however, that if you add, delete, or modify numbered files, you may still have to renumber the remaining pages accordingly. Nevertheless, I suspect you will find it easier than alphabetizing.

Another way to number your files can be found on Balsamiq's website. The link to the exact page is a bit long, so go to http://www.balsamiq.com/ and do a search for Managing Projects in Mockups for Desktop. In that article, they recommend an alternate method of numbering your files by 10s, for example, filename_10, filename_20, filename_30, and so on. The idea is that as you add or remove pages, you can do so incrementally, rather than having to do a complete renumbering each time.
In other words, you could add numbers between 11 and 19 and still be fine. Keep in mind that if you choose to use single digits, be sure to add a zero before the number for consistency and to ensure proper file folder organization, for example, filename_05, filename_06, filename_07, and so on. How you name or number your files is completely up to you. These tips are simply recommendations to consider. The bottom line is to find a system for naming your files that works for you and to stick with it. You will be glad you did.
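The zero-padding advice matters because folders (and Balsamiq, which follows folder order) sort names lexicographically, not numerically. This small sketch, using hypothetical file names, shows the difference:

```python
# Sketch: why zero-padded numbering keeps files in the intended order.
# Plain string sorting puts "filename_10" before "filename_2", because it
# compares character by character; padding with a leading zero fixes this.
unpadded = ["filename_10.bmml", "filename_2.bmml", "filename_1.bmml"]
padded = ["filename_10.bmml", "filename_02.bmml", "filename_01.bmml"]

print(sorted(unpadded))
# ['filename_1.bmml', 'filename_10.bmml', 'filename_2.bmml'] -- wrong order
print(sorted(padded))
# ['filename_01.bmml', 'filename_02.bmml', 'filename_10.bmml'] -- correct
```

The same reasoning applies to the numbering-by-10s scheme: filename_10, filename_20, and so on already share a width, so they sort correctly without extra padding until you pass filename_90.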
Importing videos and basic editing mechanics

Packt
01 Oct 2012
8 min read
Importing from a tapeless video camera

Chances are, if you've bought a video camera in the last few years, it doesn't record to tape; it records to some form of tapeless media. In most consumer and prosumer cameras, this is typically an SD card, but it could also be an internal drive, other various solid-state memory cards, or the thankfully short-lived trend of recordable mini DVDs. In the professional world, examples include Compact Flash, P2 cards (usually found in Panasonic models), SxS cards (many Sony and JVC models, Arri Alexa), or some other form of internal flash storage.

How to do it...

Plug your camera into your Mac's USB port, or, if you're using a higher-end setup with a capture box, plug the box into your Mac's FireWire or Thunderbolt port. If your camera uses an SD card as its storage medium, you can also simply stick the SD card into your Mac's card reader or an external reader.

If you are plugging the camera directly in, turn it on and set it to the device's playback mode. If FCPX is running, it should automatically launch the Import from Camera window. If it does not, click on the Import from Camera icon on the left of the toolbar.

You will see thumbnails of all of your camera's clips. You can easily scrub through them simply by passing your mouse over each one. You can import clips one at a time by selecting a range and then clicking on Import Selected…, or you can simply highlight them all and click on Import All…. To select a range, move your mouse over a clip until you find the point where you want to start and hit I on your keyboard, then scrub ahead until you reach where you want the clip to end and hit O.

Whether you chose to select one, a few, or all of your clips, once you click on the Import button you will arrive at the Import options screen. Choose which event you want your clips to live in, choose whether you want to transcode the clips, and select any analyses you want FCPX to perform on the clips as it imports them. Click on Import.
FCPX begins the import process. You can close the window and begin editing immediately!

How it works...

The reason you can edit so quickly, even if you're importing a massive amount of footage, is thanks to some clever programming on Apple's part. While it might take a few minutes or even longer to import all the media off your camera or memory card, FCPX accesses the media directly on the original storage device until it has finished its import process, and then switches over to the newly imported versions.

There's more...

Creating a camera archive

Creating a camera archive is the simplest and best way to make a backup of your raw footage. Tapeless cameras often store their media in odd-looking ways, with complex folder structures. In many cases, FCPX needs that exact folder structure in order to import the media easily. A camera archive essentially takes a snapshot or image of your camera's currently stored media and saves it to one simple file that you can access in FCPX over and over again. This, of course, also frees you to delete the contents of the memory card or media drive and reuse it for another shoot.

In the Camera Import window, make sure your camera is selected in the left column and click on the Create Archive button in the bottom-left corner. The resulting window will let you name the archive and pick a destination drive. Obviously, store your archive on an external drive if it's for backup purposes. If you were to keep it on the same drive as your FCPX system and the drive failed, you'd lose your backup as well! The process creates a proprietary disk image with the original file structure of the memory card. FCPX needs the original file structure (not just the video files) in order to properly capture from the card. By default, it stores the archive in a folder called Final Cut Camera Archives on whatever drive you selected.
Later, when you need to reimport from a camera archive, simply open the Camera Import window again, and if you don't see the archive you need under Camera Archives on the left, click on Open Archive… and find it in the resulting window.

To import all or not to import all

If you've got the time, there's nothing to stop you from looking at each and every clip one at a time in the Import from Camera window, selecting a range, and then importing that one clip. However, that's going to take you a while, as you'll have to deal with the settings window every time you click on the Import button. If you've got the storage space (and most of us do today), just import everything and worry about weeding out the trash later.

But what about XYZ format?

There are two web pages you should bookmark to keep up to date. One is www.apple.com/finalcutpro/specs/. This page lists most of the formats FCPX can work with; expect the list to grow with future versions. The second is help.apple.com/finalcutpro/cameras/en/index.html. This site lets you search camera models for compatibility with FCPX. Just because a format isn't listed on Apple's specs page doesn't mean it's impossible to work with. Many camera manufacturers release plugins that enhance a program's capabilities. One great example is Canon (www.canon.com), who released a plugin for FCPX allowing users to import MXF files from a wide variety of their cameras.

Importing MTS, M2TS, and M2T files

If you've ever browsed the file structure of a memory card pulled from an AVCHD camera, you'll have seen a somewhat complex system of files and folders and almost nothing resembling a normal video file. Deep inside, you're likely to find files with the extension .mts, .m2ts, or .m2t (on some HDV cameras). By themselves, these files are sitting ducks, unable to be read by most basic video playback software or imported directly by FCPX.
But somehow, once you open the Import from Camera window, FCPX is able to translate all that apparent gobbledygook from the memory card into movie files. FCPX needs that gobbledygook to import the footage. But what if someone has given you a hard drive full of nothing but these standalone files? You'll need to convert or rewrap (explained in the following section) the clips before heading into FCPX.

Getting ready

There are a number of programs out there that can tackle this task, but a highly recommended one is ClipWrap (http://www.divergentmedia.com/clipwrap). There is a trial, but you'll probably want to go ahead and buy the full version.

How to do it...

Open ClipWrap.

Drag-and-drop your video files (ending in .mts, .m2ts, or .m2t) into the main interface.

Set a destination for your new files under Movie Destination.

Click on the drop-down menu titled Output Format. You can choose to convert the files to a number of formats, including ProRes 422 (the same format that is created when you select the Create optimized media option in FCPX). A faster, space-saving option, however, is to leave the default setting, Rewrap (don't alter video samples).

Click on Convert. When the process is done, you will have new video files that end in .mov and can be directly imported into FCPX via File | Import | Files.

How it works...

In the previous exercise, we chose not to transcode/convert the video files into another format. What we did was take the video and audio streams out of one container (.mts, .m2ts, or .m2t) and put them into another (QuickTime, seen as .mov). It may sound crazy at first, but we basically took the birthday present (the video and audio) out of an ugly gift box that FCPX won't even open and put it into a prettier one that FCPX likes.

There's more...

Other alternatives

ClipWrap is far from the only solution out there, but it is definitely one of the best.
The appendix of this book covers the basics of Compressor, Apple's compression software, which can't convert raw AVCHD files in most cases but can convert just about any file that QuickTime can play. The software company iSkySoft (www.iskysoft.com) makes a large number of video conversion tools for a reasonable price. If you're looking for a fully featured video encoding software package, look no further than Telestream Episode (www.telestream.net) or Sorenson Squeeze (www.sorensonmedia.com). These two applications are expensive, but they can take just about any video file format out there and transcode it to almost anything else, with a wide variety of customizable settings.

Rewrapping or transcoding

As mentioned in step 3 in the previous section, we could have chosen to transcode to ProRes 422 instead of rewrapping. This is a totally fine option; just know the differences: transcoding takes much longer and takes up much more file space, but on the plus side, ProRes is Final Cut Pro X's favorite format (because it's native to FCPX, made by Apple for Apple), and you may save time in the actual editing process by working with a faster, more efficient codec once inside FCPX. If you choose to rewrap, you still have the option to transcode when you import into FCPX.